Wednesday, 27 May 2015

Lab: WAN Acceleration with WANOS and WANEM

WAN links can be expensive, and unless you have a lot of money to shell out it is likely that you don't have dedicated gigabit bandwidth available to you - so it makes sense to optimize their performance in any way we can.

While there are dozens of different solutions out there all offering similar results, I chose WANOS (http://wanos.co) since it seems to be a fairly well-established product and provides a reasonable pricing model - either free (with limited support) or paid (with support and updates).

WANOS can also be downloaded as a virtual appliance enabling me to lab it up nice and easily!

I have outlined a few of the main / interesting features of WANOS below:

- Packet Loss Recovery: Ensures that data is delivered reliably over the WAN.
- Universal Deduplication: Provides cross-flow deduplication on byte patterns (hence is protocol independent)
- Compression: Compresses data traversing the WAN link on the fly.
- Quality of Service: Provides features such as traffic shaping, tagging and classification.

The lab will consist of two sites (A and B) connected over a WAN link capped at 10 Mbps.

To be continued...

http://openmaniak.com/wanem_network.php

Microsoft Exchange Autodiscover Process Flow

I think the mechanics of the Autodiscover process are often overlooked - simply because "it just works": we add an A or CNAME record such as autodiscover.domain.com pointing at our mailbox server and the job is done.

In reality a client - whether it be an ActiveSync device or an Outlook client - looks for the Autodiscover configuration in the following order:

- Firstly tries a POST request to: https://domain.com/Autodiscover/Autodiscover.xml
- If that fails, tries a POST request to: https://autodiscover.domain.com/Autodiscover/Autodiscover.xml
- If that fails, tries a GET request (to check for a redirect) to: http://autodiscover.domain.com/Autodiscover/Autodiscover.xml
- If that fails, tries a DNS SRV lookup on: _autodiscover._tcp.domain.com which returns mail.domain.com
- Proceeds by sending a POST to: https://mail.domain.com/autodiscover/autodiscover.xml
- POST request is successful.

The benefit of using an SRV record instead of an A or CNAME record is that you do not have to have a dedicated IP / SSL certificate for the autodiscover subdomain - although on the other hand using a SAN certificate should mitigate this problem in the first place.
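For reference, the SRV record can be created on a Windows DNS server with something along the lines of the following (a minimal sketch - the zone name and mail host are placeholders for your own values):

Add-DnsServerResourceRecord -Srv -ZoneName "domain.com" -Name "_autodiscover._tcp" -DomainName "mail.domain.com" -Priority 0 -Weight 0 -Port 443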

Understanding AWS Storage Solutions: Ephemeral, EBS and S3 Storage

Amazon Web Services provides three main storage solutions - Ephemeral Storage, S3 and EBS - which I have outlined below:

Ephemeral Storage: This is a non-persistent block-level storage solution that Amazon offers free of charge. If the instance it is allocated to is stopped or terminated (as opposed to rebooted), all data on the storage is lost.

Ephemeral Storage is bundled with EC2 instances and can be used in some of the following scenarios:

- Buffers and caches for applications
- Replicated data for load-balanced virtual machines

The storage does not have any access speed or availability guarantees and should therefore not be used for anything mission critical - as many different AWS customers typically share the underlying physical server.

Amazon Simple Storage Service (S3): This is a persistent object storage solution which is accessible via a web API over HTTP and HTTPS.

Data is held within "buckets" (similar to Azure's "containers"), which can be of unlimited size - although individual objects are limited to 5TB.

You are priced on the amount of storage used and the outgoing bandwidth - although not the incoming bandwidth.

There are two storage classes:

- Standard storage: Provides very high data redundancy by replicating your data between availability zones - quoted at "99.999999999%" durability.

- Reduced Redundancy Storage: Provides reduced redundancy / durability at 99.99% - although it is cheaper than standard storage.

Elastic Block Store (EBS): Part of EC2 and not accessible via any web APIs - it is used as persistent block storage for EC2 instances.

For example the system drive within a newly provisioned virtual machine will utilize EBS - although additional EBS volumes can be added, removed and moved between different virtual machines and availability zones.

You are able to take snapshots of EBS volumes - these are stored within S3, although not directly accessible to us.
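As a quick illustration, a snapshot can be taken with the AWS Tools for PowerShell - a minimal sketch, assuming the module is installed, credentials are configured and the volume ID below is replaced with your own:

New-EC2Snapshot -VolumeId vol-0123456789abcdef0 -Description "Pre-maintenance snapshot"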

Tuesday, 26 May 2015

Manually forcing the removal of a cluster node in Windows Failover Clustering

The following should be used as a last resort when a cluster has been destroyed but a rogue cluster node has failed to disassociate itself from the cluster.

Firstly ensure that the Cluster Service (ClusSvc) is running before attempting the following.

From PowerShell (running with administrative privileges) we import the Failover Clustering module:

Import-Module FailoverClusters

Confirm you can see your cluster:

Get-Cluster

and finally remove the cluster node forcefully:

Remove-ClusterNode -Cluster "Name-Of-Cluster" "Node-Name" -Force

If the cluster is unavailable you will get an RPC error - this is OK.

Finally we run the Clear-ClusterNode cmdlet that is used to remove cluster configuration on an evicted node:

Clear-ClusterNode -Cluster "Name-Of-Cluster" "Node-Name" -Force

Thursday, 21 May 2015

Enabling and testing DRAC with VNC using DracTools

For this tutorial we will need the DRAC Tools package that can be downloaded below:

http://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=K7F2N

Once installed, open an elevated command prompt and cd to the following directory:
cd C:\Program Files\Dell\SysMgt\rac5
and firstly query our DRAC controller:
racadm -r <ip-address-of-drac-card> -u <username> -p <password> getsysinfo
You can access the console view by going to the web interface e.g.:
https://<drac-card-ip-address>
and going to System >> Console / Media >> Launch Virtual Console.

Wednesday, 20 May 2015

Disabling SSL 3.0 support and weak ciphers in IIS 7.0 for Exchange 2013

Firstly launch regedit and go to the following key:

HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server

There should be a REG_DWORD entitled "Enabled" with a value of 0x0 / 0 (if not create one)

We should then proceed to the following key:

HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server

Ensure the value of "Enabled" is 0! (This will turn off SSL 3.0 support.)
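As a quick sanity check, the value can also be read from PowerShell - a minimal check, assuming the key exists:

Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server' -Name Enabled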

We will also want to disable insecure ciphers - to do this we should save the following into a .reg file and import it into our registry (obviously be sure to make a backup of your registry before doing anything like this!):

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\NULL]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client]
"DisabledByDefault"=dword:00000001

Finally restart the computer and verify your setup with an online tool such as ServerSniff.net.

Setting up SQL Server 2012 Clustering with vSphere 5.5: Part 2

In the first part of this tutorial we configured both of our VMs, identified the storage requirements and installed our operating system. In this part we will focus on setting up the Windows Failover Cluster and the SQL Cluster.

We shall now install the Failover Clustering feature on both servers:

** Tips to remember before running the cluster validation wizard **
- Ensure hardware such as shared NICs and shared drives have the same name.
- Ensure networks on shared NICs are classified with the "Domain" profile.

Server Manager >> Add New Roles and Features >> Failover Clustering >> Validate a Failover Cluster (and include the two nodes)
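If you prefer PowerShell, the feature can be installed and the validation run from the shell - a rough equivalent, assuming node names NODE01 and NODE02:

Install-WindowsFeature Failover-Clustering -IncludeManagementTools
Test-Cluster -Node "NODE01","NODE02"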

If all goes well we can proceed by creating the cluster. Upon completion we should go to: Failover Cluster Manager >> Storage >> and ensure that the quorum is assigned to the correct shared drive.

We will now ensure that one of our NICs is dedicated to cluster communication (e.g. heartbeats) by going to: Failover Cluster Manager >> Expand the Networks tree >> Right-click on the relevant cluster network and select "Properties" >> Untick "Allow clients to connect through this network" to ensure it will only be used by the nodes in the cluster.

We will now pre-create the SQL server computer object and assign the relevant rights to it:

ADUC >> New Computer Object e.g. "SQLCLUSTER01" >> Ensure the WFC Computer Object has full permissions on it.

We will now create the relevant service accounts for our SQL cluster (one set for each instance of SQL Server):

ADUC >> New User >> Create a service account for the database engine, one for SQL Server Analysis Services and another for the SQL Agent. (Ensure the password is set to "Never Expire" - unless you have any regulatory requirements that prohibit this!)

Now on the first node we will install SQL Server 2012 - insert / mount the disc, run the installer and select "Installation" >> "New SQL Server failover cluster installation" from the menu.

Go through the wizards selecting the features, installation paths and so on until we come to the "Cluster Resource Group" - do not worry if there are no qualified cluster resource groups - this is normal with our setup - click next >> Select your cluster disk >> Setup the cluster TCP/IP information and finish the installation.

We shall now install SQL Server on the other nodes that will be installed within the cluster as follows:

SQL Server Installation Center >> "Add node to a SQL Server failover cluster" >> Complete the wizard, specifying the service accounts, network settings etc.

Upon completion of the installation we should configure the Windows Firewall accordingly, so we add the following rules on both nodes:

Allow TCP Inbound on Port 1433 (Domain Only)
Allow UDP Inbound on Port 1434 (Domain Only)
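These rules can also be created from PowerShell - a minimal sketch, assuming the default SQL and SQL Browser ports are in use:

New-NetFirewallRule -DisplayName "SQL Server (TCP 1433)" -Direction Inbound -Protocol TCP -LocalPort 1433 -Profile Domain -Action Allow
New-NetFirewallRule -DisplayName "SQL Browser (UDP 1434)" -Direction Inbound -Protocol UDP -LocalPort 1434 -Profile Domain -Action Allow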

** Now ensure that TCP/IP is turned on and properly bound to the relevant interface within SQL Server Configuration Manager **

Then restart the server and go to Failover Cluster Manager >> Services and Applications >> SQL Server and verify your cluster configuration.

Tuesday, 19 May 2015

Error: Rule "Network binding order" generated a warning. The domain network is not the first bound network.

I came across the following error message during the cluster validation stage of setting up SQL Server 2012:

The domain network is not the first bound network. This will cause domain operations to run slowly and can cause timeouts that result in failures. Use the Windows network advanced configuration to change the binding order.

To ensure the domain network is the first bound network we should firstly identify what order they are in by going to Control Panel >> Network Connections >> Press Alt (to display the menu bar) and select "Advanced" >> "Advanced Settings" >> Use the up arrow (in the Connections pane) to ensure that the connection that services the domain network is at the top.

We should also ensure that the "Microsoft Failover Cluster Virtual Adapter" is near the top - for this though you will have to do some registry editing (as it is a hidden adapter)! So we shall firstly identify its device ID either from Device Manager (Device Manager >> View >> Show Hidden Devices >> Network Adapters) or using wmic (as this will also work on 2008 R2):

wmic nicconfig get description, SettingID

Then we go to the following registry key and ensure that the device ID of the Microsoft Failover Cluster Virtual Adapter is right at the top:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Linkage\Bind
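The current binding order can be reviewed from PowerShell before editing - a quick read-only check:

Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage" -Name Bind | Select-Object -ExpandProperty Bind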

The preceding MUST be performed on BOTH (or all) servers in the cluster!

Finally restart the servers and re-run the validation checks!

Monday, 18 May 2015

127.0.53.53 DNS Name Collision

When attempting to ping a hostname I received an ICMP reply from 127.0.53.53 - which, as I am sure you are aware, falls within the range reserved for the local loopback adapter (127.0.0.0/8).


This occurs due to the fact that there is a name collision - for example, if you decided a long time ago to name your domain something like "my-domain.int" and the TLD .int (meant for international organizations) was then made publicly available by ICANN, you would get a reply from 127.0.53.53 as the name now resolves both internally and externally!

ICANN has been known to assign this IP address to TLDs that are in the process of becoming publicly available - to warn people who already use an internal domain name under the same TLD and give them time to change it (hopefully).

The Recycle Bin on drive E: is corrupted (VMware)

If you get a message stating that your Recycle Bin has been corrupted within a Windows VM in vSphere and you are using shared drives - ensure that you disable the Recycle Bin on those drives for each VM sharing the shared disks!

Right-click the Recycle Bin >> Properties >> Select the shared drive >> Select "Don't move files to the Recycle Bin..."

Re-creating arbitration mailboxes in Exchange Server 2013

Sometimes, due to reasons such as mailbox corruption, general oddities and so forth, you might be in the situation where you are missing a system mailbox and need to re-create it.

The following are the default arbitration mailboxes created during the installation of Exchange 2013:
SystemMailbox{1f05a927-eac1-46e7-9a47-611e1a81bb50}
SystemMailbox{e0dc1c29-89c3-4034-b678-e6c29d823ed9}
SystemMailbox{bb558c35-97f1-4cb9-8ff7-d53741dc928c}
Migration.8f3e7716-2011-43e4-96b1-aba62d229136
 We should re-run a portion of the Exchange 2013 setup as follows:

setup.exe /prepareAd
And finally ensure all of the newly created arbitration mailboxes have been enabled - so for each mailbox we run:

Enable-Mailbox -Arbitration "SystemMailbox<Guid-Value>"

Exchange 2013: Cannot open mailbox cn=Microsoft System Attendant.

Cannot open mailbox /o=mydomain/ou=Exchange Administrative Group (H8I0BOHFHJ736)/cn=Configuration/cn=Servers/cn=MBX01/cn=Microsoft System Attendant.
This message appeared while attempting to create a room mailbox within ECP during a migration. To resolve this error we run the following to re-enable the migration arbitration mailbox:

Enable-Mailbox -Arbitration -Identity "Migration.8f3e7716-2011-43e4-96b1-aba62d229136"
Set-Mailbox "Migration.8f3e7716-2011-43e4-96b1-aba62d229136" -Arbitration –Management:$true

Friday, 15 May 2015

System Mailbox / Arbitration Placement in Exchange 2013

Typically, when an Exchange organization is initially set up, the relevant arbitration mailboxes (e.g. your system mailboxes) are created.

By default the system mailboxes are created on your first database - this is fine - however if you need to perform maintenance on this database it effectively takes the system mailboxes out of action. So as a recommendation I would be inclined to create a dedicated mailbox database (or at least one which is unlikely to need regular maintenance) to host the system mailboxes.

To view the current arbitration mailboxes and the databases they are situated on we can issue:

Get-Mailbox -Arbitration | fl name, alias, database

You can also simply move your current arbitration mailboxes to another database as follows:

Get-Mailbox -Arbitration -Identity "SystemMailbox{e0dc1c29-89c3-4034-b678-e6c29d823ed9}" | New-MoveRequest -TargetDatabase MailboxDatabase01
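To keep an eye on the move, the standard move request cmdlets can be used - for example:

Get-MoveRequest | Get-MoveRequestStatistics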

Exchange 2013: A valid Migration mailbox could not be found for this organization.

Although the migration arbitration mailbox is created upon the initial installation of your Exchange organization, it can end up corrupted or simply go missing - fortunately you can easily re-create it as follows:

Firstly verify the account (Migration.8f3e7716-2011-43e4-96b1-aba62d229136) does not exist:

Get-Mailbox -Arbitration | fl name, alias

Re-run the following Exchange command (don't worry it shouldn't harm anything in your existing environment!):
setup /PrepareAD /IAcceptExchangeServerLicenseTerms

Proceed by enabling the arbitration account used for migration (not enabled by default):
Enable-Mailbox -Arbitration -Identity "Migration.8f3e7716-2011-43e4-96b1-aba62d229136"

and configure it as follows:
Set-Mailbox "Migration.8f3e7716-2011-43e4-96b1-aba62d229136" -Arbitration –Management:$true

Finally we can verify the account with:
Get-Mailbox -Arbitration | fl name, alias

and

Get-MigrationUser

Speeding up intersite DNS replication within an Active Directory environment

The following will demonstrate how you can manually invoke the replication of DNS zones within an intersite Active Directory environment. For this tutorial we will have an AD domain with two domain controllers: DC01 and DC02.

We shall firstly use the repadmin utility to invoke a manual sync of the DC we want to update with all other DCs in the domain (where 'DC01' is the DC we wish to update, followed by the naming context we wish to replicate):

repadmin /syncall DC01 dc=DomainDnsZones,dc=your-domain,dc=com /d /e

Finally on the target domain controller we wish to update - we manually poll for any updated zones from the naming context:

dnscmd /zoneupdatefromds your-domain.com

Creating an alias of an SMB server in Windows Server 2012

During the process of a file server migration I wanted to temporarily create an alias for a file server I was decommissioning - although after adding a DNS entry in AD and pointing it at the old server I was unable to access its UNC path via Windows Explorer (the error message simply stated the network resource was not found) - so I decided to mount it via the 'net' command and got a little more information:
net use \\serveralias
You were not connected because a duplicate name exists on the network. If joining a domain, go to System in Control Panel to change the computer name and try again. If joining a workgroup, choose another workgroup name.

This error occurs because the server you are attempting to access is addressable by two different names - in my case "fileserver01" and "tempserver01". By default the Windows SMB implementation checks the hostname presented by the connecting (client) SMB session - if it does not match the server's own hostname it will simply decline the connection.

Now in order to get around this we can disable strict name checking on the LanmanServer service via the server's registry. To do this we should open regedit and go to the following location:
HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Services > lanmanserver > Parameters
Create a new 32-bit DWORD named DisableStrictNameChecking and set its value to 1 (Decimal).
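The same value can be set from PowerShell if you prefer - a minimal sketch:

New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" -Name DisableStrictNameChecking -PropertyType DWord -Value 1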

We should now restart the LanmanServer service as follows within an elevated command prompt:
sc stop LanmanServer && sc start LanmanServer
You should now be able to connect to the alias - although I would verify using the 'net' command firstly.

Additionally, the users on this particular network accessed the file server by its NetBIOS name rather than its FQDN - so in order to ensure users could still access the file server I utilized single-label DNS entries.

Changing from a Paravirtual to LSI SAS Controller (and vice versa)

If you are in the situation where you have decided to use the paravirtual SCSI driver and then for some reason found out that you no longer wish to use it (for example if you are using Microsoft Clustering) - there is a way of getting Windows to boot correctly without throwing a BSOD.

This is a bit of a hack-and-slash method but (with my limited testing) appears to work OK for Windows Server 2012 R2. To get this to work, firstly create an additional hard drive and attach it to SCSI port 1:0:0 (this will create an additional LSI SAS SCSI controller), then boot into Windows using the paravirtual SCSI controller. Once you have booted into Windows you should see "New Hardware Detected" - this should be Windows installing the driver for the new (SAS) SCSI controller. Now if everything goes according to plan you should be able to shut down your VM, change the paravirtual SCSI controller to SAS and successfully boot into Windows!

Thursday, 14 May 2015

Setting up SQL Server 2012 Clustering with vSphere 5.5: Part 1

This two-part series will explain and guide you through the process of creating a SQL Server 2012 cluster with vSphere 5.5.

Storage Considerations:
Typically for an intensive database you want to use RAID10 (strongly recommended) - although if you are on a tight budget and are not overly concerned about redundancy you could try RAID1.

Recommended Base VM Specs:
Hardware Version: 11
RAM: 8GB+ (Ensure all of it is reserved!)
vCPU: x2 (2 Cores Each)
Hot plugging: Configure memory and CPU hot plugging.
NIC: x2 VMXNET3 (One for cluster heartbeat and one for cluster access)
Disk Controllers: One for each drive - LSI SAS.
SCSI Bus Sharing Mode: Physical
Disk: Thickly provisioned - eager zeroed.
Misc: Remove unnecessary hardware components (e.g. floppy disk, serial ports, etc.)

Disk type considerations for the cluster:
How you design your cluster will have a big bearing on the kind of disk access you choose - for example:

- If you plan to put all of the nodes of the cluster on the same ESXi host (Cluster in a Box) you can utilize VMDKs or virtual-mode RDMs

- If you plan on placing nodes of the cluster on more than one ESXi host (Cluster Across Boxes) you are limited to in-guest iSCSI or physical-mode RDMs

Because in this lab we have two ESXi hosts we will be using a physical-mode RDM.
Please refer to the following table to identify which disk type you should utilize:

Storage              | Cluster in a Box (CIB)     | Cluster Across Boxes (CAB)        | Physical and Virtual Machine
VMDKs                | Yes (Recommended)          | No                                | No
RDM (Physical Mode)  | No                         | Yes (Recommended)                 | Yes
RDM (Virtual Mode)   | Yes                        | Yes (Windows 2003 Servers only!)  | No
In-guest iSCSI       | Yes                        | Yes                               | Yes
In-guest SMB 3.0     | Yes (Server 2012 R2 Only)  | Yes (Server 2012 R2 Only)         | Yes (Server 2012 R2 Only)

We also need to consider whether we want the ability to live-migrate (vMotion) nodes of the Windows cluster to other hosts in the ESXi cluster. Historically, live-migrating VMs that are part of a Windows cluster between different ESXi hosts caused serious issues with the cluster. In ESXi 6.0+, however, you now have the ability to do exactly this - providing the following criteria are met:

- Hosts in the ESXi cluster are running 6.0 or above.
- The VMs must be at hardware version 11.
- The disks must be connected to virtual SCSI controllers that have been configured for "Physical" SCSI bus sharing mode.
- The shared disk type MUST be RDM (Raw Device Mapping).

The major downside to this setup though is that it will not be possible to perform storage vMotion.

Because we will be using Raw Device Mappings in physical compatibility mode we will not be able to take any snapshots / backups of the drives that use this configuration.

We should also take our drive design into consideration. In this scenario our SAN has two storage groups available to us - one group with fast disks (15k) and another with slower storage (7.2k). The fast storage group will host the databases and database log files, while the slower storage will host the OS, backups, the SQL installation media and the cluster quorum disk (which will be presented as an RDM). The layout used here is:

Disk 1: Cluster Quorum (Physical RDM: LUN03)
Disk 2: Databases (Physical RDM: on LUN01)
Disk 3: Database Log Files (Physical RDM: on LUN02)
Disk 4: SQL Backups (Physical RDM:  on LUN03)
Disk 5: OS Disk (VMDK on LUN: LUN04)
Disk 6: SQL Installation Disk (VMDK on LUN04)

The quorum LUN should be presented as an RDM to the VMs in order for Microsoft Clustering to work correctly! Other drives can be VMDK files...

We will now map the LUNs on the ESXi hosts for this lab and configure multi-pathing:

To do this we must firstly set up a VMkernel port. *** Note: The storage and the VMkernel port must have IP addresses in the same network ***. To do this go to:

Configuration >> Networking >> Add Networking >> VMKernel >> Create a vSphere standard switch >> Select the relevant NICs you want to use for the multi-pathing >> Assign network label e.g. "Storage Uplink" >> Finish.

Proceed by going to the newly created vSwitch's properties - now create an additional VMkernel port group for each other adapter assigned to the vSwitch (assigning a network label and defining the IP address settings). Now go to each port group and select "Edit", go to the "NIC Teaming" tab, tick "Override switch failover order" and ensure ONLY one NIC is in Active mode and the rest are in Unused mode. (So effectively each port group is assigned a single unique NIC.)

Finally we should configure the software based iSCSI adapter:

ESXI Host >> Configuration >> Storage Adapters >> iSCSI Software Adapter >> Properties >> Network Configuration >> Add >> Select the VMKernel network adapters we created previously.

For more detailed information on setting up multi-pathing in vSphere please consult the following whitepaper: http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf

We should now create the relevant RDM for our shared disk and also create our VMDK disk for the base operating system.

Now we will install Windows Server 2012 R2 - you should also ensure VMware Tools is installed straight after installation as otherwise Windows will not recognize your NIC.

We should now configure the following areas within the OS:

Network: Enable the virtual NIC for RSS (Receive Side Scaling)
Swap File: Limit to 2GB (since SQL Server will use a large amount of RAM, an automatically sized page file would waste a lot of disk)
Performance Counters: Enable them by going to Server Manager >> Local Server >> Performance >> Start Performance Counters.

We can now clone the VM we have created and generalize the cloned copy - although take into account that our RDM will be converted to a VMDK during the cloning process and shared disks will also be copied - so ideally we should detach these disks before the cloning process.

The next part of the tutorial will focus on the setup of the cluster and installation of SQL server - stay tuned for part two!

Using Paravirtual SCSI Driver with Windows Server 2012 R2

By default Windows Server 2012 R2 does not have any support for VMware's specialized paravirtual SCSI driver - and hence will not detect any disks during installation!

In order to install the drivers we should ensure that we have a virtual floppy drive attached to our VM and attach the "pvscsi-Windows2008.flp" image to it (found in the root datastore under the "vmimages" >> "floppies" folder). During Windows setup you can then specify third-party drivers - pointing it to the drivers on the floppy drive.

If you're anything like me you're probably wondering why you are using a driver intended for Server 2008 - well, I am not sure to be perfectly honest - but vSphere 5.5 does not appear to have a dedicated driver image for 2012, although the one mentioned above does work with 2012 R2.

Tuesday, 12 May 2015

Battling poor network performance and/or high network latency with Receive Side Scaling

Receive Side Scaling (RSS) is a mechanism that allows the network driver to spread TCP traffic across multiple CPUs - improving multi-core efficiency and processor cache utilization.

Your operating system and driver must support it, however - if it is not supported, all network traffic is handled by a single CPU.

In order to utilize RSS you must firstly meet the following conditions:

- Virtual Machine hardware must be version 7 or higher
- The virtual network card you want to utilize it on MUST be VMXNET3
- Run a supported operating system: Server 2003 SP2 - Server 2012 R2 and Linux 2.6.37+.

Considerations: Although it is supported and enabled by default in the above Windows operating systems, it is more often than not disabled by default on the virtual NIC itself! So we must enable it on the virtual NIC by going to NIC Properties >> Advanced >> Receive Side Scaling >> Enabled.
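On Server 2012 and above the same change can also be made from PowerShell - a minimal sketch, where the adapter name is a placeholder for your own:

Get-NetAdapterRss -Name "Ethernet0"
Enable-NetAdapterRss -Name "Ethernet0"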

Monday, 11 May 2015

Adding send-as permissions for a user (or distribution group) in Exchange 2013

In order to allow a mailbox user to send as another user (as opposed to sending on behalf of a user) we should change the extended rights of the user or distribution group's AD object as follows:

Add-ADPermission -Identity <Distribution Group> -User <User\Security Group> -ExtendedRights Send-As

In order to apply the changes immediately you should restart the Information Store service:

sc stop MSExchangeIS && sc start MSExchangeIS

Friday, 8 May 2015

Federation between two Exchange organizations

A federated trust in Exchange allows two organizations to share free/busy (i.e. calendar) information, among other things. With Exchange 2013 you can also create a federated trust with Exchange Online.

Creating a federated trust can be broken down into the following three steps:

1. When creating a federated trust, both parties use the Microsoft Federation Gateway (MFG) as a secure trust broker between the two organizations.

2. An organizational relationship is a one-to-one relationship that allows the two organizations to share calendar information with one another.

3. Sharing Policies (optional): By configuring a sharing policy you can control what calendar and contact information individual users are allowed to share with external recipients.

For more information about the setup of federated trust please refer to the link below:

http://blogs.technet.com/b/exchange/archive/2012/10/30/managing-federated-sharing-with-the-eac.aspx

Creating and assigning retention policies in Exchange 2013

Retention of data from a legal point of view can have some serious implications - using retention tags can help users / the administrator easily manage this problem.

By default retention policies are available to end users within the Outlook client by right-clicking on a mail item or folder, selecting "Assign Policy" and choosing one of the pre-defined policies e.g. 1 Month Delete.

Administrators also have the ability to create custom retention policies to meet business needs - although you must firstly be a member of the Organization Management, Recipient Management and Records Management role groups to perform the following operation.

We will firstly create / define a policy tag:

New-RetentionPolicyTag "JBloggs-DeletedItems" -Type DeletedItems -Comment "Deleted Items are purged after 30 days" -RetentionEnabled $true -AgeLimitForRetention 30 -RetentionAction PermanentlyDelete

This creates a policy tag called "JBloggs-DeletedItems" that is scoped to the DeletedItems folder which will permanently delete emails in this folder after 30 days.

We should proceed by then creating a retention policy (which allows us to combine all of the relevant policy tags together) - and define our newly created policy tag:

New-RetentionPolicy "RetentionPolicy01" -RetentionPolicyTagLinks "JBloggs-DeletedItems"

Finally we can assign the retention policy to the user's mailbox as follows:

Set-Mailbox "Joe Bloggs" -RetentionPolicy "RetentionPolicy01"

Setting up and searching mailbox audit logs in Exchange 2013

Mailbox auditing is used for monitoring actions taken within a mailbox, including users who have delegated access.

Auditing is set up on a per-mailbox basis and has varying scope levels - for example, you can audit only those users who have been delegated access to the mailbox.

The following information is recorded as part of an audit log:

Client IP Address
Hostname
User Agent / Client
...

The audit logs are stored within the recoverable items folder of the audited user's mailbox for (by default) a period of 90 days.

In order to enable auditing on a specific mailbox we can use the Set-Mailbox cmdlet:

Set-Mailbox -Identity "Joe Bloggs" -AuditEnabled $true

or for delegate access:

Set-Mailbox -Identity "Joe Bloggs" -AuditDelegate SendAs,SendOnBehalf -AuditEnabled $true

Once we have enabled auditing we will likely want to export those logs at some point:

From ECP go to: Compliance Management > Auditing. Click Export mailbox audit logs.

Finally we can search through the mailbox audit logs with the New-MailboxAuditLogSearch cmdlet:

New-MailboxAuditLogSearch "Admin and Delegate Access" -Mailboxes "Joe Bloggs" -LogonTypes Admin,Delegate -StartDate 05/20/2015 -EndDate 05/31/2015 -StatusMailRecipients [email protected]

The above command looks for any logins from Admins / Delegates to the Joe Bloggs mailbox and sends the results to [email protected]

Thursday, 7 May 2015

In-place hold vs Litigation hold (Exchange 2013)

A litigation hold is used for eDiscovery and puts a hold on the mailbox as a whole, as opposed to an in-place hold (introduced in Exchange 2013) which is more granular - putting items on hold based on a search query, specific dates and so on.

A litigation hold can be created by enabling the LitigationHoldEnabled parameter on the user's mailbox:

Set-Mailbox [email protected] -LitigationHoldEnabled $true

While an in-place hold can be enabled as follows:

New-MailboxSearch "Hold-CaseId002" -SourceMailboxes "[email protected]" -InPlaceHoldEnabled $true

A parameter cannot be found that matches parameter name 'StartDate'.

Setting up In-Place eDiscovery and an In-Place hold with Exchange 2013

Due to regulatory requirements you might sometimes be faced with the scenario where someone within or outside of your organization needs to perform a search for specific content within users' mailboxes. eDiscovery was designed for exactly this!

In order to set up eDiscovery we must firstly add the relevant user to the "Discovery Management" role group:

Add-RoleGroupMember -Identity "Discovery Management" -Member jbloggs

we can verify this with:

Get-RoleGroupMember -Identity "Discovery Management"

We should proceed by creating a discovery mailbox:

New-Mailbox -Name "Discovery Search Mailbox" -Discovery

and assign the permissions:

Add-MailboxPermission "Discovery Search Mailbox" -User jbloggs -AccessRights FullAccess -InheritanceType all

We can then create an eDiscovery search with the New-MailboxSearch cmdlet:

New-MailboxSearch "Discovery-CaseId001" -StartDate "05/20/2015" -EndDate "05/27/2015" -SourceMailboxes "Joe Bloggs" -TargetMailbox "Discovery Search Mailbox" -SearchQuery '"Games" AND "Downloads"' -MessageTypes Email -IncludeUnsearchableItems -LogLevel Full

Finally we can invoke the search - hence copying the results to the discovery mailbox we created:

Start-MailboxSearch "Discovery-CaseId001"
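The search can take a while on larger mailboxes - its progress can be checked with something like:

Get-MailboxSearch "Discovery-CaseId001" | Format-List Name,Status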

We also have the ability to create an in-place hold which will allow us to retain any emails that might be deleted, moved etc. by the users - to ensure everything is accessible / searchable. We can create an in-place hold by using the InPlaceHoldEnabled parameter with the New-MailboxSearch cmdlet:

New-MailboxSearch "Hold-CaseId002" -SourceMailboxes "[email protected]" -InPlaceHoldEnabled $true

Setting up Data Loss Prevention policies / transport rules to protect sensitive data

Preventing sensitive information from leaving your organization can be accomplished with the use of Data Loss Prevention policies - in this example we will be ensuring that credit card data does not get sent anywhere inside or outside the organization.

We should firstly create a new data loss prevention policy:

New-DlpPolicy -Name "PCI-CreditCard" -Mode Enforce

We then want to decide which data classification property we wish to use - you can either use one of the built-in properties (from the "Microsoft Rule Pack"):

Get-DataClassification

Or we can create and import our own:

https://technet.microsoft.com/en-GB/library/jj674703%28v=exchg.150%29.aspx

and to import the custom rule:

Import-DlpPolicyCollection -FileData ([Byte[]]$(Get-Content -Path "C:\My Documents\DLP Backup.xml" -Encoding Byte -ReadCount 0))

We will now create the transport rule that will perform the DLP check:

New-TransportRule -Name "Notify in Outlook:External Recipient Credit Cards" -NotifySender RejectMessage -RuleSubType DLP -DlpPolicy "PCI-CreditCard" -Mode Enforce -SentToScope NotInOrganization -MessageContainsDataClassification @{Name="Credit Card Number"}

And if we wish to remove the DLP and transport rule we can use the following cmdlets:

Remove-DlpPolicy "PCI-CreditCard"
Remove-TransportRule "Notify in Outlook:External Recipient Credit Cards"

Wednesday, 6 May 2015

Performing a dial tone recovery in Exchange 2013

Dial tone portability allows you to quickly get users up and running by providing them with a temporary mailbox for sending and receiving email.

Follow the instructions on TechNet here:

https://technet.microsoft.com/en-us/library/dd979810%28v=exchg.150%29.aspx

Checking and setting up autoresponders / out-of-office on Exchange 2013

You can easily check any autoresponders / out-of-office configurations for a user by issuing:

Get-MailboxAutoReplyConfiguration -Identity "[email protected]"

In order to setup an out-of-office for a user from Exchange we can issue:

Set-MailboxAutoReplyConfiguration -Identity "[email protected]" -AutoReplyState Scheduled -InternalMessage "<b>Internal out-of-office message</b>" -ExternalMessage "<b>External out-of-office message</b>" -StartTime "07/05/2015 15:00:00" -EndTime "10/05/2015 23:00:00"

Recovery Databases (RDB) in Exchange 2013

A recovery database is a special type of mailbox database that allows you to mount and extract data from a restored mailbox database.

The main purpose of a recovery database is to allow data to be recovered without the need to dismount / disrupt access to the original database. Another way to look at it: suppose a user on Database B deletes some emails they still need, so you have to extract them from an old backup. You could simply mount the backup - although to do so you would need to dismount the current version of the database (mounting a second copy of the same database alongside the original simply will not work). Creating a recovery database allows us to mount the older version of the database safely while the current version keeps running - hence not disrupting users.

You have the ability to recover single items (i.e. an email message), whole mailboxes, or perform a dial tone recovery.

In order to recover data from our mailbox database backup we will firstly make sure our database is in a clean shutdown state / is up to date with logs:

Eseutil /R E01 /l C:\Recovery\RecoveryDB01\Logs /d C:\Recovery\RecoveryDB01
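To confirm the restored database is now in a clean shutdown state, you can inspect its header and look for "State: Clean Shutdown" - a quick check, assuming the .edb path used below:

Eseutil /mh C:\Recovery\RecoveryDB01\RecoveryDB01.edb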

We will proceed by creating a recovery database - to do this we must use the Exchange Management Shell:

New-MailboxDatabase -Recovery -Name RecoveryDB01 -Server MBX1 -EdbFilePath "C:\Recovery\RecoveryDB01\RecoveryDB01.edb" -LogFolderPath "C:\Recovery\RecoveryDB01\Logs"

We should then restart the Microsoft Exchange Information Store:
Restart-Service MSExchangeIS

We should then mount the recovery database:
Mount-Database RecoveryDB01

Ensure the mailbox we are after is present in the recovery database:
Get-MailboxStatistics -Database "RecoveryDB01" | Format-Table DisplayName

And then perform the restore operation:

New-MailboxRestoreRequest -SourceDatabase RecoveryDB01 -SourceStoreMailbox "Joe Bloggs" -TargetMailbox "[email protected]"

or for restoring to an archive mailbox we append the -TargetIsArchive switch:

New-MailboxRestoreRequest -SourceDatabase RecoveryDB01 -SourceStoreMailbox "Joe Bloggs" -TargetMailbox "[email protected]" -TargetIsArchive

We can check on restoration requests with the Get-MailboxRestoreRequest cmdlet:

Get-MailboxRestoreRequest
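and view detailed progress (percentage complete, bad items and so on) with:

Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics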