## Using the Active Directory PowerShell module with Windows Server 2003

To do this you will first need to install PowerShell 2.0 on the Server 2003 instance:

Unfortunately we have to issue our PowerShell commands from a Windows 7 (or Server 2008 R2+) machine if we wish to use the 'ActiveDirectory' module.

Ensure that the following hotfix is installed on the Server 2003 instance:

*** Firstly, ensure that the LATEST version of the .NET Framework is installed before proceeding! ***

You should also install the hotfix for .NET Framework 3.5.1 (KB969166) which can be downloaded below:

and install the following on the Server 2003 instance:

On the Windows 7 machine, first download and install the Remote Server Administration Tools (RSAT) for Windows 7 from:

Proceed by activating the feature from: Control Panel >> Programs and Features >> 'Turn Windows features on or off' >> Remote Server Administration Tools >> Role Administration Tools >> AD DS and AD LDS Tools >> and ensure that the 'Active Directory Module for Windows PowerShell' node is ticked.

We proceed by launching PowerShell from cmd on our Windows 7 box:
powershell.exe
Import-Module ActiveDirectory
You should then be able to run your AD-related commands - e.g. a command that searches an OU for locked accounts on a specific server, displays them and then automatically unlocks them for you.
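
A minimal sketch of such a command (the OU path and domain controller name below are placeholders - substitute your own):

```powershell
# Find locked-out accounts under an OU, querying a specific DC
$locked = Search-ADAccount -LockedOut -SearchBase "OU=Users,DC=my,DC=domain" -Server "dc01.my.domain"

# Display them
$locked | Select-Object Name, SamAccountName

# Unlock them
$locked | Unlock-ADAccount
```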

## Remoting into Windows Server 2003/2008/2012 using Powershell / PSSession

On the server you wish to remote into, ensure that it is set up for PowerShell remote sessions (firewall rules etc.) by running the following from PowerShell:
Enable-PSRemoting
and authorize the connecting machine (the Windows 7 box) to connect to the server instance:
Set-Item wsman:\localhost\Client\TrustedHosts client.my.domain -Concatenate -Force
We then initiate the session with:
$securePassword = ConvertTo-SecureString "Password" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("domain\username", $securePassword)
$session = New-PSSession -ComputerName Remote2003Server -Authentication Default -Credential $credential
Enter-PSSession $session

## Setting up two-factor authentication with LinOTP and FreeRADIUS (Thursday, 27 August 2015)

We shall firstly install and configure LinOTP from their repositories (I will be using Debian for this tutorial):

deb http://www.linotp.org/apt/debian jessie linotp
and then install the linotp packages:
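
The exact package set varies between releases; a minimal sketch, assuming the main package is simply named 'linotp':

```shell
apt-get update
apt-get install linotp
```
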
Install mysql server and client:
apt-get install mysql-server mysql-client
Set up a user account called 'linotp2' and a database named 'LinOTP2' with a password.
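
A minimal sketch of that step (the password and the localhost-only grant are assumptions - adjust for your environment):

```shell
mysql -u root -p <<'EOF'
CREATE DATABASE LinOTP2;
CREATE USER 'linotp2'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON LinOTP2.* TO 'linotp2'@'localhost';
FLUSH PRIVILEGES;
EOF
```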

Go to LinOTP management panel: https://10.0.3.128/manage/

Add LDAP user directory: LinOTP Config >> Useridresolvers >> New >> LDAP and fill in as below:
Resolver Name: MyDomain
Server-URI: ldap://<domaincontroller-hostname>
BaseDN: OU=Users,DC=my,DC=domain

#arbitrary name of the client requesting authentication (i.e. the VPN server)
client vpn {
ipaddr  = 10.0.0.0 #IP of the client
secret  = 'mysecret' #shared secret, the client has to provide
}
Set the default auth type in /etc/freeradius/users:

DEFAULT Auth-Type := perl
Insert:

into /etc/freeradius/modules/perl (inside the perl { } block)
Configure the linotp module:
nano /etc/linotp2/rlm_perl.ini

#IP of the linotp server
URL=https://10.1.2.3:443/validate/simplecheck
#optional: limits search for user to this realm
REALM=my-realm
#optional: only use this UserIdResolver
#RESCONF=flat_file
#optional: comment out if everything seems to work fine
Debug=True
#optional: use this, if you have selfsigned certificates, otherwise comment out
SSL_CHECK=False
Create the virtual server for linotp:

authorize {

#normalizes malformed client requests before they are handed on to other modules (see '/etc/freeradius/modules/preprocess')
preprocess

#  If you are using multiple kinds of realms, you probably
#  want to set "ignore_null = yes" for all of them.
#  Otherwise, when the first style of realm doesn't match,
#  the other styles won't be checked.

#allows a list of realms (see '/etc/freeradius/modules/realm')
IPASS

#understands something like USER@REALM and can tell the components apart (see '/etc/freeradius/modules/realm')
suffix

#understands REALM\USER and can tell the components apart (see '/etc/freeradius/modules/realm')
ntdomain

#  Read the 'users' file to learn about special configuration which should be applied for certain users
files

#allows authentication to expire (see '/etc/freeradius/modules/expiration')
expiration

pap
}

#here the linotp perl module is called for further processing
authenticate {
perl
}

Activate the virtual server:
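
Assuming the virtual server definition above was saved as /etc/freeradius/sites-available/linotp (the file name here is an assumption), activate it by symlinking it into sites-enabled and restarting FreeRADIUS:

```shell
ln -s /etc/freeradius/sites-available/linotp /etc/freeradius/sites-enabled/linotp
service freeradius restart
```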

You should now ensure you DELETE the inner-tunnel and default configuration within the sites-enabled folder to get this working properly.
** Note: If you get an error like the following when starting FreeRADIUS:

freeradius  Unknown value perl for attribute Auth-Type

try commenting out the default auth type in /etc/freeradius/users **

You can also test with https://<linotp-server>/validate/check?user=myuser&pass=<pin><access-code>
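
For example, with curl (the hostname, PIN and access code below are placeholders; -k skips certificate verification, which is useful with self-signed certificates):

```shell
curl -k "https://linotp.example.com/validate/check?user=myuser&pass=1234567890"
```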

## Checking VMFS for file system errors with VOMA

The need to check VMFS for errors might arise when you are unable to modify or erase files on a VMFS datastore, or when you have problems accessing specific files.

Typically VOMA should be used when one of the following occurs:

- SAN outage
- Rebuilt RAID
- Disk replacement

** Important ** Before running VOMA, ENSURE that all VMs on the datastore are powered off, or ideally migrated onto a completely different datastore.

You should also ensure that the datastore is unmounted on ** ALL ** ESXi hosts (you can do this through vSphere)!

SSH into the ESXi host and run the following:

voma -m vmfs -d /vmfs/devices/disks/naa.00000000000000000000000000:1 -s /tmp/analysis.txt

(Replacing 'naa.00000000000000000000000000:1' with the LUN NAA ID and partition to be checked.)
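
If you are unsure of the device name, you can list the NAA ID, partition number and datastore name for each mounted VMFS volume with:

```shell
esxcli storage vmfs extent list
```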

## Installing a self-signed certificate as a trusted root CA in Debian

Firstly copy your certificate to /usr/share/ca-certificates:

cp /path/to/certificate.pem /usr/share/ca-certificates

and then ensure the ca-certificates package is installed with:

apt-get install ca-certificates

and finally install the certificate with:

dpkg-reconfigure ca-certificates

Select 'Yes' at the dialog prompt and ensure that your certificate is checked.
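
Alternatively, this can be done non-interactively: Debian also picks up local certificates dropped into /usr/local/share/ca-certificates (they must have a .crt extension) when update-ca-certificates is run:

```shell
cp /path/to/certificate.pem /usr/local/share/ca-certificates/certificate.crt
update-ca-certificates
```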

## Windows Server Enterprise 2003 x86 on VMWare

Windows Server Enterprise 2003 x86 can support up to 64GB of RAM (with PAE enabled). Be aware, however, that when running under the ESXi hypervisor / vSphere, having 'Memory Hot Plug' enabled (under the Memory section of the VM settings) can cause issues if you are using over 4GB of RAM; you should disable it, as it is not compatible with ESXi 6.0 (at least in my testing).

If you have it enabled you get all sorts of memory errors, services failing to start at boot and a partially working OS!

This scenario can arise when importing Server 2003 VMs from VMware Player with the use of an OVA package.

## Enabling isakmp and ipsec debugging on Cisco ASA and IOS Router

On the ASA / router run:

config t
logging monitor 7 // sets the monitor (VTY) logging level so output is available on telnet / SSH sessions
end
terminal monitor // displays the log output on your current telnet / SSH session

debug crypto isakmp 127
debug crypto ipsec 127

We can also filter the logging to a specific VPN peer e.g.:

debug crypto condition peer 1.1.1.1

If you are not seeing any expected output verify whether syslog is turned on with:

show logging

If it is, you can use ASDM under Monitoring >> Logging to view / filter the logs.

To help debug any VPN issues you can also use the following commands to inspect the ISAKMP and IPsec security associations:

show isakmp sa

show ipsec sa

and

show isakmp sa detail

## Setting up log shipping with MSSQL

Log shipping is a process that allows you to create a secondary copy of a database for failover / backup purposes by transporting logs from one database to the other by means of backing up and restoring logs between the primary and secondary (AKA standby) database.

Typically log shipping should be performed between two instances of the same MSSQL version; although it is possible to perform it between different versions, not all combinations are supported (check first!)

The SQL Server Agent handles and processes the log shipping and is typically setup on the primary source.

You should ensure the database you wish to mirror has it's recovery plan to either 'Full' or 'Bulk Loggged'

To set up log shipping you should first go to the database's properties >> Transaction Log Shipping >> and tick 'Enable this as a primary database in a transaction log shipping configuration'. Proceed by hitting the 'Backup Settings' button to specify when you wish to back up the database logs e.g. every 15 minutes.

Specify the backup location, job name, compression settings and so on and hit OK.

Specify the secondary database in the 'Secondary Databases' section by clicking on 'Add'. Specify the secondary database and server. Within the 'Initialize Secondary Database' tab we will select "Yes, generate a full backup of the database...", within the "Copy Job" tab specify the destination folder where the copied logs from the primary server will be stored on the secondary server, and give the copy job a name. Within the 'Restore Transaction Log' tab select whether the (secondary) database can be read during a restore operation ('No Recovery' or 'Standby' mode).

Hit OK, go to the database properties >> 'Transaction Log Shipping' tab again, and ensure 'Use a monitor server instance' is ticked (under the Monitor Server Instance section) - this will provide you with details of transaction log history and help you keep track of operations.

## Full vs Simple Recovery Model in MSSQL

MSSQL offers a number of recovery models that can be employed - of which 'Full' and 'Simple' are the most common types.

The simple recovery model provides you with a complete backup of the database that you can restore - although it does not provide point-in-time recovery.

Typically it should be used for transient databases, or databases where data loss is not critical.

Whereas a 'full' recovery model keeps all of the logs for the database until a backup occurs or the logs are truncated. This provides you with the ability to perform point-in-time recovery and is typically used for databases where data loss can't be afforded - although disk space for the retained logs comes at a premium with this model.

## Resolving the CID mismatch error: The parent virtual disk has been modified since the child was created

When performing snapshotting operations within VMware a delta disk is created for all compatible disks. The delta vmdk file holds all of the changes made from the point-in-time that the snapshot was performed.

Typically you would have a base disk e.g. CDRIVE.VMDK - the delta disk would look something like CDRIVE-0000000001.vmdk, incrementing the integer with each snapshot taken.

I came across a situation the other day where VMware was complaining that the parent disk had a different CID (a unique identifier for a disk / VMDK that changes whenever the disk is powered on):

"Resolving the CID mismatch error: The parent virtual disk has been modified since the child was created"

I was unable to consolidate the snapshots with the VMware client - in fact they weren't even visible within the client - I had to browse the datastore to find them.

This can be caused by problems with vSphere Replication, adding snapshotted disks to an existing VM, expanding disks with existing snapshots and so on.

In order to remedy this we should first identify the CID of each disk - the CID is located in the disk's descriptor file. Let's say we have the following parent disk:

CDRIVE.VMDK

and the following delta (snapshot) disks:

CDRIVE-00000001-DELTA.VMDK
CDRIVE-00000002-DELTA.VMDK

* Warning: Do not run 'cat' on any files with *flat* or *delta* in their title - these are NOT descriptor files *

*** BEFORE PERFORMING ANY OF THE BELOW ENSURE YOU BACKUP ALL OF THE FILES AFFECTED AS THIS OPERATION HAS THE POTENTIAL OF CORRUPTING THE DISKS! ***

We will firstly identify the CID of all of the VMDK's:

SSH into the ESXI host and cd to the vm's directory e.g.:

cd /vmfs/datastore1/myvm

There will be an assortment of files - but we are looking for CDRIVE-00000001.VMDK in our case.

To view the descriptor file we should run something like:

cat CDRIVE.VMDK

We might get something like:

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=2776c5f6
parentCID=ffffffff

The parentCID for this disk is 'ffffffff', informing us that this is the highest disk in the hierarchy.

The CID '2776c5f6' is what the delta disk CDRIVE-00000001.VMDK should have for its 'parentCID' attribute, so we will now check the descriptor file for 'CDRIVE-00000001.VMDK':

cat CDRIVE-00000001.VMDK

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=45784545j
parentCID=2776c5f6

As expected the parentCID is correct - now CDRIVE-00000002.VMDK's parentCID attribute should be that of CDRIVE-00000001.VMDK's CID (45784545j)

So we proceed by checking CDRIVE-00000002.VMDK:

cat CDRIVE-00000002.VMDK

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=KKF88944
parentCID=3HHF343F

However, we find that CDRIVE-00000002.VMDK's parentCID does not match CDRIVE-00000001.VMDK's CID - this is where the break in the chain is. We should therefore change CDRIVE-00000002.VMDK's parentCID to '45784545j'.
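
This check can be scripted. The sketch below creates two sample descriptor files (reusing the CIDs from this example) and verifies that the child's parentCID matches the parent's CID - if you point it at real descriptor files, do so only after backing them up:

```shell
# Build two sample descriptor files to demonstrate the check
dir=$(mktemp -d)
printf 'CID=2776c5f6\nparentCID=ffffffff\n' > "$dir/CDRIVE.VMDK"
printf 'CID=45784545\nparentCID=2776c5f6\n' > "$dir/CDRIVE-00000001.VMDK"

# Extract the parent's CID and the child's parentCID
parent_cid=$(sed -n 's/^CID=//p' "$dir/CDRIVE.VMDK")
child_parent_cid=$(sed -n 's/^parentCID=//p' "$dir/CDRIVE-00000001.VMDK")

# The chain is intact only if the two values match
if [ "$parent_cid" = "$child_parent_cid" ]; then
    echo "chain OK"
else
    echo "chain BROKEN: parent CID=$parent_cid, child parentCID=$child_parent_cid"
fi
```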

We can now verify that the snapshot chain is valid by running the following command against the most recent snapshot in the chain, e.g.:

vmkfstools -e CDRIVE-00000002.VMDK

## SBC (Session Border Controller) vs Traditional Firewall

To summarise: a firewall typically works on layers 3 and 4, and sometimes partially on layer 7 (e.g. FTP / limited SIP awareness).

An SBC works on layer 7 and can fully understand VoIP traffic, meaning that it can:

- Block denial of service attacks
- Detect spoofed / malicious SIP packets
- Close and open RTP ports as necessary
- Transcode between different media protocols

The major selling point of an SBC is providing interoperability between different SIP solutions, since SIP implementations quite often deviate from each other.

SIP utilizes RTP (Real-time Transport Protocol), which unfortunately typically requires a large range of UDP ports (e.g. 6000 - 40000) to be available (much like the downfalls of FTP) - a traditional firewall will not be able to protect the SIP traffic.

They can be placed behind a firewall (one that works properly with NAT and SIP) or simply presented directly to the router.

## Throttling the I/O of Windows Server Backup

I noticed that by default Windows Backup / Windows Server Backup does not have any built-in way to throttle disk I/O during backups. When backups overrun, this can cause a pretty severe impact on services / users relying on the machine.

Fortunately, Windows provides a native API to control I/O priority on processes - and luckily for us, Process Hacker can drive it...

Firstly select the process, right-click it, select I/O Priority and choose a specific level e.g. 'Low' - there is also a 'save for myprocess.exe' option in the save dialog window, allowing you to make the change permanent for future instances of the process:

The specific process you should throttle is called wbengine.exe. I usually set the I/O priority to 'Low', which seems to do the job for me at least - but you might need to experiment a little first :)

## Restore a Windows Server Backup job

There was an occasion where, for some reason, a Windows Server Backup job had mysteriously disappeared from the Windows Server Backup GUI and was not visible from wbadmin.

We need not worry though, as we can simply use the wbadmin utility to import the job's catalog file and voilà - we are back on track:

wbadmin restore catalog -backupTarget:C:\path\to\backupdestination

## Fiber Optics and Switches 101

Typically when dealing with fiber connections you will have either an LC or SC cable that runs over a patch panel and terminates on both sides at a switch with a transceiver module, such as the MGBSX1 transceiver commonly used on Cisco switches.

There are two main types, single mode and multimode:

Single Mode - Can transmit over much greater distances (typically 10 km or more, depending on the optics)

Multimode - Cheaper for short runs, but with a reduced maximum distance (approximately 300 - 600 meters)

Two common connector types:

LC - 1.25mm ferrule diameter (the newer, smaller connector)

SC - 2.5mm ferrule diameter (the older, larger connector)

Typically, as good practice, the transit cables (the cables connecting the two patch panels together) should be straight through (e.g. Fiber 1 on Patch Panel A should be Fiber 1 on Patch Panel B, and so on).

One of the patch cables should be Fiber 1 = A and Fiber 2 = B. The patch cable at the other site should be crossed, e.g. Fiber 1 = B and Fiber 2 = A.

** The fibers must be crossed at one end for a connection to be established on the switches. **

## Importing your physical and virtual machines into AWS

AWS provides you with the ability to import on-premises machines into their cloud. Firstly, convert your physical machine into a virtual machine, e.g. with VMware vCenter Converter:

https://www.vmware.com/products/converter

Once you have converted your physical machine into a virtualized format you should download and install the AWS Command Line Interface from:

http://aws.amazon.com/cli/

There are also some pre-requisites on importing / exporting machines from AWS - including the operating system support:

- Microsoft Windows Server 2003 (with at least SP1)
- Microsoft Windows Server 2003 R2
- Microsoft Windows Server 2008
- Microsoft Windows Server 2008 R2
- Microsoft Windows Server 2012
- Microsoft Windows Server 2012 R2
- Windows 7
- Windows 8
- Windows 8.1
- Various Linux Versions

Disk images must be in either VHD, VMDK or OVA containers.

For a more detailed list please see:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html

We will proceed by uploading our VMDK image to AWS via the AWS Command Line Interface. Open a command prompt and verify the installation:

cd C:\Program Files\Amazon\AWSCLI
aws --version

Run the following to configure your AWS command line client:

aws configure

We will also need to create a specific role that will allow us to perform the import process - so we should create a file named "role.json" containing the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "vmie.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "vmimport"
                }
            }
        }
    ]
}

and create the role, specifying the file we just created:

aws iam create-role --role-name vmimport --assume-role-policy-document file://role.json

We will also have to create a role policy - so create another file called "policy.json" and insert the following (replacing <disk-image-file-bucket> with the bucket where your VMDK file is stored):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::<disk-image-file-bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::<disk-image-file-bucket>/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
            ],
            "Resource": "*"
        }
    ]
}
And then run the following command to apply the policy:

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://policy.json

Proceed by creating a new S3 bucket (if needed) for the VMDK file:

aws s3 mb s3://bucket-name

And copy the VMDK on your local file system to the newly created bucket:

aws s3 cp C:\path-to-vmdk\vm.vmdk s3://bucket-name/vm.vmdk

Finally, verify it has uploaded successfully:

aws s3 ls s3://bucket-name

We can now use the import-image command to import the image into AWS.

For importing multiple VMDK's we can use:

aws ec2 import-image --cli-input-json "{ \"Description\": \"Windows 2008 VMDKs\", \"DiskContainers\": [ { \"Description\": \"Second CLI task\", \"UserBucket\": { \"S3Bucket\": \"my-import-bucket\", \"S3Key\" : \"my-windows-2008-vm-disk1.vmdk\" } }, { \"Description\": \"First CLI task\", \"UserBucket\": { \"S3Bucket\": \"my-import-bucket\", \"S3Key\" : \"my-windows-2008-vm-disk2.vmdk\" } } ] }"

or for importing a single OVA file we can use:

aws ec2 import-image --cli-input-json "{ \"Description\": \"Windows 2008 OVA\", \"DiskContainers\": [ { \"Description\": \"First CLI task\", \"UserBucket\": { \"S3Bucket\": \"my-import-bucket\", \"S3Key\" : \"my-windows-2008-vm.ova\" } } ] }"

For more detailed information please refer to:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ImportVMImportImage.html

We can monitor the import process with:
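
The import task can be polled with the describe-import-image-tasks command (the task ID below is a placeholder - use the one returned by import-image):

```shell
aws ec2 describe-import-image-tasks --import-task-ids import-ami-abcd1234
```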

We are then able to launch an instance of the VM we imported as an AMI:

ec2-run-instances ami-1111111 -k my-key-pair --availability-zone us-east-1a

## Setting up Microsoft SQL Server backups with Veeam Backup and Replication 8

Veeam allows you to perform point-in-time recovery of applications such as SQL Server and Exchange. However, in order to set up this built-in capability you need to meet the following pre-requisites:

You should firstly ensure that application-aware image processing is setup correctly on the VM's backup job.

Application-aware image processing allows you to create a transactionally consistent backup of a VM running VSS-aware applications such as SQL Server or Microsoft Exchange.

Veeam does not actually require an agent to be installed on the machine to perform the backup (unlike other solutions) and instead does this using a runtime coordination process.

Windows VSS (Volume Shadow Copy Service) ensures that all I/O associated with VSS-aware applications has been quiesced before the backup is performed (avoiding any incomplete transactions).

To enable application-aware image processing simply go to the Veeam backup job properties >> Guest Processing >> ensure 'Enable application-aware processing' is ticked and the guest OS credentials have been filled in (the account needs the relevant permissions to access the database - it should have the sysadmin role! - and the filesystem).

We should now configure the relevant VM guest OS processing settings - click the 'Applications' button under the 'Enable application-aware processing' section, select the relevant VM object and click the 'Edit' button. Click on the SQL tab and select 'Backup logs periodically' so you are able to perform more granular point-in-time recovery.