Thursday 14 May 2015

Setting up SQL Server 2012 Clustering with vSphere 5.5: Part 1

This two-part series will explain and guide you through the process of creating a SQL Server 2012 cluster with vSphere 5.5.

Storage Considerations:
Typically for an I/O-intensive database you want to use RAID 10 (strongly recommended), although if you are on a tight budget you could use RAID 1 instead at the cost of performance.

Recommended Base VM Specs:
Hardware Version: 11
RAM: 8GB+ (Ensure all of it is reserved!)
vCPU: x2 (2 Cores Each)
Hot plugging: Configure memory and CPU hot plugging.
NIC: x2 VMXNET3 (One for cluster heartbeat and one for cluster access)
Disk Controllers: One for each drive - LSI SAS.
SCSI Bus Sharing Mode: Physical
Disk: Thickly provisioned - eager zeroed.
Misc: Remove unnecessary hardware components (e.g. floppy disk, serial ports, etc.)
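If you prefer to script these settings rather than set them in the vSphere Client, the sketch below shows one way to do it with pyVmomi (the vSphere API Python bindings). The vCenter address, credentials and the VM name "SQL-NODE1" are placeholders, and only the vCPU, memory, full memory reservation and hot-plug settings are covered - treat it as a rough sketch, not a definitive recipe.

# Rough pyVmomi sketch: apply the base specs above to an existing, powered-off VM.
# Hostname, credentials and the VM name are placeholders for your own lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "SQL-NODE1")

spec = vim.vm.ConfigSpec()
spec.numCPUs = 4                          # 2 virtual sockets x 2 cores each
spec.numCoresPerSocket = 2
spec.memoryMB = 8 * 1024                  # 8GB+
spec.memoryReservationLockedToMax = True  # reserve all of the guest memory
spec.cpuHotAddEnabled = True              # CPU hot plugging
spec.memoryHotAddEnabled = True           # memory hot plugging

vm.ReconfigVM_Task(spec=spec)
Disconnect(si)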

Disk type considerations for the cluster:
How you wish to design your cluster will have a big bearing on what kind of disk access you choose - for example:

- If you plan to put all of the nodes of the cluster on a single ESXi host (Cluster in a Box) you can utilize VMDKs or RDM (Virtual)

- If you plan on placing nodes of the cluster on more than one ESXi host (Cluster Across Boxes) you are limited to in-guest iSCSI or RDM (Physical)

Because in this lab we have two ESXi hosts, we will be using a physical mode RDM.
Please refer to the following table to identify which disk type you should utilize:

Storage                 Cluster in a Box (CIB)       Cluster Across Boxes (CAB)        Physical and Virtual Machine
VMDKs                   Yes (Recommended)            No                                No
RDM (Physical Mode)     No                           Yes (Recommended)                 Yes
RDM (Virtual Mode)      Yes                          Yes (Windows Server 2003 only!)   No
In-guest iSCSI          Yes                          Yes                               Yes
In-guest SMB 3.0        Yes (Server 2012 R2 only)    Yes (Server 2012 R2 only)         Yes (Server 2012 R2 only)
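As a quick sanity check against the table above, a short pyVmomi sketch like this one can report which disk type each virtual disk on a VM is actually using by inspecting its backing object. The connection details and the VM name "SQL-NODE1" are placeholders.

# Rough pyVmomi sketch: classify each disk on a VM as VMDK, virtual RDM or physical RDM.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "SQL-NODE1")

for dev in vm.config.hardware.device:
    if not isinstance(dev, vim.vm.device.VirtualDisk):
        continue
    backing = dev.backing
    if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
        kind = "RDM ({})".format(backing.compatibilityMode)   # physicalMode / virtualMode
    elif isinstance(backing, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
        kind = "VMDK ({} provisioned)".format("thin" if backing.thinProvisioned else "thick")
    else:
        kind = type(backing).__name__
    print("{}: {}".format(dev.deviceInfo.label, kind))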

We also need to consider whether we want the ability to live-migrate (vMotion) nodes of the Windows cluster between hosts in the ESXi cluster. In vSphere 5.5, live-migrating VMs that are part of a Windows cluster between hosts causes serious issues with the cluster. In ESXi 6.0 and above, however, this is supported providing the following criteria are met:

- Hosts in the ESXi cluster are running 6.0 or above.
- The VMs must be at hardware version (compatibility level) 11.
- The shared disks must be attached to virtual SCSI controllers configured for "Physical" SCSI bus sharing.
- The shared disk type MUST be RDM (Raw Device Mapping).
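The first three points can be confirmed per VM with a few lines of pyVmomi, as in the hedged sketch below - it simply prints the VM's hardware version string (it should read "vmx-11") and the bus-sharing mode of each virtual SCSI controller (it should read "physicalSharing" for the controllers holding the shared disks). Connection details and the VM name are placeholders again.

# Rough pyVmomi sketch: print the hardware version and SCSI bus-sharing modes of a VM.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "SQL-NODE1")

print("Hardware version:", vm.config.version)          # expect "vmx-11"
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualSCSIController):
        print("SCSI{} bus sharing: {}".format(dev.busNumber, dev.sharedBus))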

The major downside to this setup, though, is that it will not be possible to perform Storage vMotion.

Because we will be using Raw Device Mapping in physical compatibility mode, we will not be able to take snapshots (or snapshot-based backups) of the drives that use this configuration.

We should also take our drive design into consideration. In this scenario our SAN has two storage groups available to us - one group with fast disks (15k) and another with slower disks (7.2k). We will create four LUNs: two on the fast storage group, for the databases (LUN01) and the database log files (LUN02), one on the slower storage for the cluster quorum disk (LUN03), and one more on the slower storage formatted as a VMFS datastore to hold the OS, SQL backup and SQL installation disks (LUN04). The first three will be presented as physical-mode RDMs.

Disk 1: Cluster Quorum (Physical RDM: LUN03)
Disk 2: Databases (Physical RDM: LUN01)
Disk 3: Database Log Files (Physical RDM: LUN02)
Disk 4: SQL Backups (VMDK on LUN04)
Disk 5: OS Disk (VMDK on LUN04)
Disk 6: SQL Installation Disk (VMDK on LUN04)

The shared cluster disks (quorum, databases and log files) must be presented as physical-mode RDMs to both VMs in order for Microsoft clustering across hosts to work correctly - the non-shared drives (OS, SQL installation media and backups) can be ordinary VMDK files.

We will now map the LUNs to the ESXi hosts for this lab and configure multi-pathing:

To do this we must first set up a VMkernel port. *** Note: The storage array and the VMkernel port must have IP addresses on the same network. *** To do this go to:

Configuration >> Networking >> Add Networking >> VMkernel >> Create a vSphere standard switch >> Select the relevant NICs you want to use for multi-pathing >> Assign a network label e.g. "Storage Uplink" >> Finish.

Proceed by going to the newly created vSwitch's properties and create an additional VMkernel port group for each remaining adapter assigned to the vSwitch (assigning a network label and defining the IP address settings). Then select each port group, click "Edit", go to the "NIC Teaming" tab, tick "Override switch failover order" and ensure ONLY one NIC is in Active mode with the rest set to Unused - so that each port group is effectively bound to a single, unique NIC.
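If you would rather script this part, the pyVmomi sketch below performs roughly the same steps: it creates a standard vSwitch bound to two physical NICs, adds one VMkernel port group per uplink and overrides the teaming order so each port group has a single active NIC. The host name, vmnic names, port group labels and IP addresses are all assumptions for this lab - adjust them to your environment.

# Rough pyVmomi sketch: vSwitch + two port-bound VMkernel adapters for iSCSI multi-pathing.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = next(h for h in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view if h.name == "esxi01.lab.local")
netsys = host.configManager.networkSystem

# Standard vSwitch using the two physical NICs reserved for storage traffic
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"]))
netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

# One VMkernel port group per uplink, each pinned to a single active NIC
for label, nic, ip in [("Storage Uplink 1", "vmnic2", "10.0.10.11"),
                       ("Storage Uplink 2", "vmnic3", "10.0.10.12")]:
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=[nic]))
    pg_spec = vim.host.PortGroup.Specification(
        name=label, vlanId=0, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))
    netsys.AddPortGroup(portgrp=pg_spec)
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0"))
    netsys.AddVirtualNic(portgroup=label, nic=nic_spec)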

Finally we should configure the software based iSCSI adapter:

ESXi Host >> Configuration >> Storage Adapters >> iSCSI Software Adapter >> Properties >> Network Configuration >> Add >> Select the VMkernel network adapters we created previously.
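The equivalent port binding can also be scripted; the pyVmomi sketch below enables the software iSCSI adapter on the host and binds the two VMkernel adapters to it. The host name and the vmk device names ("vmk1", "vmk2") are assumptions - check which vmk numbers your storage port groups actually received.

# Rough pyVmomi sketch: enable software iSCSI and bind the storage VMkernel ports to it.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = next(h for h in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view if h.name == "esxi01.lab.local")

host.configManager.storageSystem.UpdateSoftwareInternetScsiEnabled(True)

# Locate the software iSCSI HBA (e.g. vmhba33) - it may take a moment to appear
hba = next(a for a in host.config.storageDevice.hostBusAdapter
           if isinstance(a, vim.host.InternetScsiHba) and a.isSoftwareBased)

for vmk in ("vmk1", "vmk2"):   # the port-bound VMkernel adapters created earlier
    host.configManager.iscsiManager.BindVnic(iScsiHbaName=hba.device, vnicDevice=vmk)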

For more detailed information on setting up multi-pathing in vSphere please consult the following whitepaper: http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf

We should now create the relevant RDMs for our shared disks and also create the VMDK disks for the base operating system.
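Attaching the physical-mode RDMs can be done through the VM's settings in the vSphere Client or scripted; the pyVmomi sketch below adds one physical-mode RDM to the SCSI controller that was configured for physical bus sharing. The naa device path, VM name and vCenter details are placeholders for your own LUNs and lab.

# Rough pyVmomi sketch: attach a physical-mode RDM to the shared SCSI controller.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "SQL-NODE1")

# SCSI controller previously set to "Physical" bus sharing
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController)
                  and d.sharedBus == "physicalSharing")

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName="/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx",   # placeholder LUN path
    compatibilityMode="physicalMode",
    diskMode="independent_persistent")

disk_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation="create",                                  # creates the RDM mapping file
    device=vim.vm.device.VirtualDisk(
        backing=backing, controllerKey=controller.key, unitNumber=0))

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))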

Now we will install Windows Server 2012 R2. You should also ensure VMware Tools is installed straight after the OS installation, as otherwise Windows will not recognize the VMXNET3 NICs.

We should now configure the following areas within the OS:

Network: Enable RSS (Receive Side Scaling) on the virtual NICs.
Swap File: Limit it to 2GB (SQL Server will use a large amount of RAM, so a system-managed page file would waste a lot of disk space).
Performance Counters: Enable them by going to Server Manager >> Local Server >> Performance >> Start Performance Counters.
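The first two tweaks can also be scripted from inside the guest; the sketch below simply shells out to the standard netsh and wmic tools from Python (run as Administrator). The 2GB value comes from this post, the page file path C:\pagefile.sys is an assumption, and the performance counters step is still done through Server Manager as described.

# Rough sketch: enable RSS and cap the page file at 2GB on Windows Server 2012 R2.
# Run inside the guest with administrative rights.
import subprocess

def run(cmd):
    print(">", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Enable Receive Side Scaling in the TCP stack
run("netsh int tcp set global rss=enabled")

# Stop Windows managing the page file automatically, then fix it at 2048 MB
run('wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False')
run(r'wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=2048,MaximumSize=2048')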

We can now clone the VM we have created and generalize the cloned copy - but take into account that physical RDMs are converted to VMDKs during the cloning process and the shared disks would be copied as well, so we should detach these disks before cloning.
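If you want to script that detach step, the pyVmomi sketch below removes any RDM-backed disks from the source VM's configuration before you run the clone (VM name and vCenter details are placeholders). Only the virtual disk devices are detached - the underlying LUNs and their data are untouched - and the disks can be re-added afterwards.

# Rough pyVmomi sketch: detach all RDM-backed disks from the VM before cloning it.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "SQL-NODE1")

changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and isinstance(
            dev.backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.remove, device=dev))

if changes:
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes))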

The next part of the tutorial will focus on the setup of the cluster and installation of SQL server - stay tuned for part two!
