I’ve been working on a SQL Server cluster build for a few weeks now and thought I’d share the build process for anyone who wants to do the same.
The configuration differs depending on whether you are building on a single host or across multiple hosts.
Single Host – Multinode
Creating the First Cluster node virtual machine.
- Create the virtual machine with the required specs.
- Install Windows Server Datacenter edition
- Add 2nd Network Card (used for Cluster-to-Cluster communications)
- Set IP address for the Production Network
- Set IP address for the Cluster Heartbeat Network
- Join server to the domain
- Move to Correct AD OU
- Install Windows Updates
- Enable Remote Desktop
- Activate Windows License
- Shutdown Server
- Add additional drive for SQL Installs/Files etc
- Add the shared disks (DB, Quorum etc)
- Edit the Virtual Machine settings.
- Add a “Hard Disk” – Select “Create a new virtual disk”
- Set the required size.
- Select “Thick Provision Eager Zeroed”
- Leave location option on “Store with the virtual machine”
- Change the Virtual Device Node – from the drop-down list select any address starting with (1:x)
- Adding this disk will automatically create a new SCSI controller – finish the wizard
- Now select the newly added SCSI Card and check the following
- SCSI Controller Type = LSI Logic SAS
- Set SCSI Bus Sharing to “Virtual”
- The Quorum disk should be sized at 1 GB
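If you prefer to script the shared-disk configuration rather than click through the VM settings, the sketch below does roughly the same thing with VMware PowerCLI driven from Python. It assumes PowerCLI is installed and you can connect to vCenter; the vCenter address, VM name and disk size are placeholders to swap for your own values.

```python
import subprocess

# All names below (vCenter address, VM name, sizes) are placeholders.
POWERCLI = r"""
Connect-VIServer -Server vcenter.example.local
$vm   = Get-VM -Name 'SQLNODE1'
# 1 GB eager-zeroed disk for the Quorum; repeat with larger sizes for the DB disks
$disk = New-HardDisk -VM $vm -CapacityGB 1 -StorageFormat EagerZeroedThick
# Attach it to a new LSI Logic SAS controller with Virtual bus sharing
# (the multi-host build later in this post uses Physical bus sharing instead)
New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Virtual
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", POWERCLI], check=True)
```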
Creating the Second cluster node virtual machine
The settings that allow the sharing of the VMDK files from the shared storage are all configured on the second VM.
- Create the virtual machine with the required specs.
- Install Windows Server 2012 Datacenter
- Add 2nd Network Card (used for Cluster-to-Cluster communications)
- Set IP address for the Production Network
- Set IP address for the Cluster Heartbeat Network
- Join Server to Domain
- Move to Correct AD OU
- Install Windows Updates
- Enable Remote Desktop
- Activate Windows License
- Shutdown Server
- Add additional drive for SQL Installs/Files etc (non-shared cluster resources)
- Add the shared disks (DB, Quorum etc)
- Edit the Virtual Machine settings.
- Add a “Hard Disk” – Select “Use an existing disk”
- Browse to the Datastore and Folder of the first cluster node and select the shared disks (Quorum, DB etc)
- Change the Virtual Device Node – from the drop-down list select any address starting with (1:x)
- Adding this disk will automatically create a new SCSI controller – finish the wizard
- Now select the newly added SCSI Card and check the following
- SCSI Controller Type = LSI Logic SAS
- Set SCSI Bus Sharing to “Virtual”
- Repeat the disk-adding steps above for all shared disks
Network Card Binding.
As the servers have two network cards, it’s important to adjust the binding order so the Cluster Heartbeat network isn’t used for any other communications.
- Click Start, click Network, click Network and Sharing Centre, and then click Change Adapter Settings.
- Press the ALT key, click Advanced, and then click Advanced Settings. If you are prompted for an administrator password or confirmation, type the password or provide confirmation.
- Click the Adapters and Bindings tab, and then, under Connections, click the connection you want to modify.
- Move the Production Adapter to the top of the list.
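On Windows Server 2012 and later the old binding-order dialog largely maps onto interface metrics, so as an alternative check you can enforce the same preference from a script by giving the Production adapter a lower metric than the heartbeat adapter. This is a sketch of that alternative approach, not the GUI method above, and the adapter names “Production” and “Heartbeat” are assumptions – rename them to match your connections.

```python
import subprocess

PS = r"""
# Prefer the Production adapter by giving it the lowest interface metric
Set-NetIPInterface -InterfaceAlias 'Production' -InterfaceMetric 1
Set-NetIPInterface -InterfaceAlias 'Heartbeat'  -InterfaceMetric 100
# Stop the heartbeat NIC from registering itself in DNS
Set-DnsClient -InterfaceAlias 'Heartbeat' -RegisterThisConnection:$false
# Show the result
Get-NetIPInterface | Sort-Object InterfaceMetric |
    Format-Table InterfaceAlias, AddressFamily, InterfaceMetric
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", PS], check=True)
```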
Initialise Shared Disks
Now the shared disks must be brought online.
- Open Disk Management on Cluster Node 1
- Select the Quorum Disk
- Bring it online
- Initialise the Disk (MBR)
- Create New Simple Volume
- Set Drive Letter to Q:\
- Change Volume Label to Quorum.
- Repeat these steps for all the shared disks
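The same disk preparation can be scripted – a minimal sketch, assuming the quorum disk shows up as disk number 1 in Get-Disk (check first, the disk number is an assumption):

```python
import subprocess

PS = r"""
# Bring the quorum disk online, initialise it as MBR and format it as Q: (Quorum)
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false
Initialize-Disk -Number 1 -PartitionStyle MBR
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter Q
Format-Volume -DriveLetter Q -FileSystem NTFS -NewFileSystemLabel 'Quorum' -Confirm:$false
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", PS], check=True)
```

Repeat with the appropriate disk number, drive letter and label for each of the other shared disks.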
AD Accounts and Permissions
Before installing the cluster services, you will need to create the AD accounts noted below.
Once created, add the Cluster admin AD account to the local Administrators group on both cluster nodes.
Component | Permissions Required
Cluster Admin AD User (Domain User) | Local Admin on cluster nodes; Create Computer Objects – OU container; Read All Properties – OU container
SQL Accounts
Component | Account to use (Windows Server 2008 R2 or higher) |
Database Engine | Provide a domain user account. |
SQL Server Agent | Provide a domain user account. |
SSAS | Provide a domain user account. |
SSIS | Virtual Account |
SSRS | Virtual Account |
FD Launcher (Full-text Search) | Virtual Account |
SQL Server Browser | LOCAL SERVICE |
SQL Server VSS Writer | LOCAL SYSTEM |
Install Cluster Services – Node 1
On the first cluster node run the “Add Roles and Features Wizard”
- Log in with the Cluster admin AD account.
- Select “Role-based or feature-based installation”
- Select the First cluster node server
- Skip the “Server Roles” and proceed to Features
- In the Features menu select “Failover Clustering” and all required sub-features.
- Select “Install”
- Complete a restart when required
- Once restarted, select “Tools” from Server Manager and open “Failover Cluster Manager”
- Validate Cluster – Select Validate Cluster from the Actions menu
- Enter the server name
- Select All tests
- Wait for tests to finish
- Resolve any errors that are displayed.
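If you’d rather do the feature install and validation from PowerShell, this sketch is roughly equivalent to the wizard steps above (the node name is a placeholder):

```python
import subprocess

NODE1 = "SQLNODE1"  # placeholder -- your first cluster node's name

PS = f"""
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
# Runs the same checks as the Validate Cluster wizard and writes an HTML report
Test-Cluster -Node {NODE1}
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", PS], check=True)
```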
Begin cluster Install
Select “Create Cluster” from Actions menu
- Enter Cluster name
- Enter Cluster IP Address – This is the cluster management IP address.
- Select Next and follow the process through
You will receive the following warning (we’ll address this in a later step):
Warnings
An appropriate disk was not found for configuring disk witness. The cluster is not configured with a quorum witness. As a best practice, configure a quorum witness to help achieve the highest availability of the cluster.
This will be corrected later on in the build process.
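The equivalent PowerShell, if you want the cluster creation scripted – cluster name, node name and management IP are placeholders, and -NoStorage is used here because the disks are added in a later step:

```python
import subprocess

PS = r"""
# Create the cluster on node 1 only; node 2 is added later
New-Cluster -Name 'SQLCLUSTER01' -Node 'SQLNODE1' -StaticAddress '10.0.0.50' -NoStorage
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", PS], check=True)
```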
Now Set Permission for Computer Object
Now set the OU permissions so the cluster computer object can create computer objects.
Cluster Computer Account | Create Computer Objects – OU container; Read All Properties – OU container; Full Control – Cluster Admin account
Quorum Disk
To set the quorum disk you must bring the disk online and then add it to cluster storage.
First make sure the Quorum disk is still online, if not bring it online.
- Expand the cluster from the “Failover Cluster Manager” and open Storage, then Disks
- Select Add Disk from the “Actions” menu
- This will launch the “Add Disks to a Cluster” wizard
- Select the Disk you’ve previously configured for the Quorum
- This disk will now be added and come online.
- Right Click the newly formed cluster and select “More Actions -> Configure Cluster Quorum Settings”
- Select “Advanced quorum configuration”
- Select “All Nodes”
- Select “Configure a disk witness”
- Select the disk you brought online earlier (the Quorum disk)
- Finish the setup.
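A scripted alternative for adding the disk and setting the witness – a sketch only; “Cluster Disk 1” is an example resource name, so check Get-ClusterResource for the real one:

```python
import subprocess

PS = r"""
# Add every disk the cluster can see (the quorum disk must already be online)
Get-ClusterAvailableDisk | Add-ClusterDisk
# Point the quorum witness at the small quorum disk
Set-ClusterQuorum -DiskWitness 'Cluster Disk 1'
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", PS], check=True)
```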
Install Cluster Services – Node 2
On the second cluster node run the “Add Roles and Features Wizard”
- Login with the Cluster admin user AD account.
- Select “Role-based or feature-based installation”
- Select the second cluster node server
- Skip the “Server Roles” and proceed to Features
- In the Features menu select “Failover Clustering” and all required sub-features.
- Select “Install”
- Complete a restart when required
- Once restarted, select “Tools” from Server Manager and open “Failover Cluster Manager”
- Validate Cluster – Select Validate Cluster from the Actions menu
- Enter the server name
- Select All tests
- Wait for tests to finish
- Resolve any errors that are displayed.
Joining the nodes together.
- Connect to Node 1
- Open Failover Cluster manager
- Select Nodes
- Select Add Node from Actions window
- Add the name of Node 2
- Select “Yes..” on the Validation warning
- Select to include Quorum Disk
- Correct any warnings and then complete the install process.
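Or from PowerShell on node 1 (cluster and node names are placeholders):

```python
import subprocess

PS = r"""
# Join the second node to the existing cluster; validation runs as part of this
Add-ClusterNode -Cluster 'SQLCLUSTER01' -Name 'SQLNODE2'
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", PS], check=True)
```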
Test Failover
Both nodes should now be showing online, with the Quorum disk owned by one of them. You should now test that the Quorum disk moves and fails over correctly.
- Open Failover Cluster manager
- Open nodes
- Right click Node 1 and select “Pause -> Drain Roles”
- This should now move the Quorum disk to the second node.
- Click Node 2 and check the “Disks” tab in the bottom pane – it should now show the Quorum drive
- Right click Node 1 and select “Resume -> Do Not Fail Roles Back”
- Now repeat the same steps with Node 2
You should see the disk fail over between the two nodes.
Now repeat this process for all other shared drives.
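The same drain test can be driven from PowerShell – a sketch with placeholder node names:

```python
import subprocess

PS = r"""
# Drain node 1 -- the quorum disk (and any other shared disks) should move to node 2
Suspend-ClusterNode -Name 'SQLNODE1' -Drain
Get-ClusterGroup | Format-Table Name, OwnerNode, State
# Bring node 1 back without pulling the roles straight back
Resume-ClusterNode -Name 'SQLNODE1' -Failback NoFailback
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", PS], check=True)
```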
SQL Server Install Node 1
- Log in with an admin account and add the Cluster admin AD account to the local Administrators group if you haven’t already, then log back in as the Cluster admin account and run the SQL Server Setup.exe
- Select “New SQL Server failover cluster installation”
- Enter Product key
- Accept the license.
- Select “Use Microsoft Update to check for updates”
- Select the product updates if there are any
- Resolve any errors that occur (Windows Firewall and Microsoft Cluster Service (MSCS) can be ignored)
- Select the required Instance features.
- Set the Instance root directory, Shared feature directory and Shared feature directory (x86) to the Program Files drive you set up previously
- Set the SQL Server Network Name
- Set an instance name if required; otherwise keep the default
- Ignore the Cluster Resource Group Name errors if any show
- Next select the shared cluster disk – this should be the drive you planned to hold the SQL databases
- Next configure the SQL Server IP Address
- Enter the usernames and passwords for the configured service accounts
- Select your authentication method and the SQL Server administrators
- Check the Data directories are configured to use the shared drive
- Add any other settings you require for TempDB and FILESTREAM
- Finish the install
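For reference, the node 1 install can also be run unattended with setup.exe command-line switches. This is only a sketch – every value (media path, network name, IP, disk name, accounts, passwords, data drive) is a placeholder you would swap for the choices made in the wizard steps above.

```python
import subprocess

setup_args = [
    r"D:\setup.exe",                       # path to the SQL Server media
    "/QS",                                  # quiet install with progress UI
    "/ACTION=InstallFailoverCluster",
    "/IACCEPTSQLSERVERLICENSETERMS",
    # add /PID=<key> here if your media needs a product key
    "/FEATURES=SQLENGINE",
    "/INSTANCENAME=MSSQLSERVER",
    "/FAILOVERCLUSTERNETWORKNAME=SQLCLU01",
    "/FAILOVERCLUSTERIPADDRESSES=IPv4;10.0.0.60;Cluster Network 1;255.255.255.0",
    "/FAILOVERCLUSTERDISKS=Cluster Disk 2",
    r"/INSTALLSQLDATADIR=S:\MSSQL",
    r"/SQLSVCACCOUNT=DOMAIN\svc_sqlengine",
    "/SQLSVCPASSWORD=PlaceholderPassword1!",
    r"/AGTSVCACCOUNT=DOMAIN\svc_sqlagent",
    "/AGTSVCPASSWORD=PlaceholderPassword2!",
    r"/SQLSYSADMINACCOUNTS=DOMAIN\SQL Admins",
]

subprocess.run(setup_args, check=True)
```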
SQL Server Install Node 2
- Log in with an admin account and add the Cluster admin AD account to the local Administrators group if you haven’t already, then log back in as the Cluster admin account and run the SQL Server Setup.exe
- Select “Add node to a SQL Server failover cluster”
- Enter Product key
- Accept the license.
- Select “Use Microsoft Update to check for updates”
- Select the product updates if there are any
- Resolve any errors that occur (Windows Firewall and Microsoft Cluster Service (MSCS) can be ignored)
- Most of the settings now will be automatically detected from the first node.
- Wait for the installer to finish.
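The node 2 step also has an unattended equivalent – again a sketch with placeholder passwords; the instance name and service accounts must match node 1.

```python
import subprocess

addnode_args = [
    r"D:\setup.exe",
    "/QS",
    "/ACTION=AddNode",
    "/IACCEPTSQLSERVERLICENSETERMS",
    "/INSTANCENAME=MSSQLSERVER",
    "/SQLSVCPASSWORD=PlaceholderPassword1!",
    "/AGTSVCPASSWORD=PlaceholderPassword2!",
    "/CONFIRMIPDEPENDENCYCHANGE=0",
]

subprocess.run(addnode_args, check=True)
```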
Testing Failover
Once the install has finished, try failing the SQL Server role over between the two nodes.
- Open Failover Cluster Manager
- Expand Roles.
- Right Click on the SQL Server cluster role
- Select Move -> Select Node
- Select the other cluster node and wait for the role to move
- Now complete the action in reverse.
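This failover test can also be scripted – a sketch, assuming the default instance (so the role is named “SQL Server (MSSQLSERVER)”; adjust for a named instance):

```python
import subprocess

PS = r"""
# Move the clustered SQL Server role to the other node and show the new owner
Move-ClusterGroup -Name 'SQL Server (MSSQLSERVER)' -Node 'SQLNODE2'
Get-ClusterGroup -Name 'SQL Server (MSSQLSERVER)' | Format-Table Name, OwnerNode, State
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", PS], check=True)
```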
Multi-Host – Multinode
Creating the First Cluster node virtual machine.
- Create the virtual machine with the required specs.
- Install Windows Server Datacenter edition
- Add 2nd Network Card (used for Cluster-to-Cluster communications)
- Set IP address for the Production Network
- Set IP address for the Cluster Heartbeat Network
- Join server to the domain
- Move to Correct AD OU
- Install Windows Updates
- Enable Remote Desktop
- Activate Windows License
- Shutdown Server
- Add additional drive for SQL Install/Files etc (non-shared cluster resources)
- Add the shared disks (DB, Quorum etc)
- Edit the Virtual Machine settings.
- Add a “Hard Disk” – Select “Create a new virtual disk”
- Set the required size.
- Select “Thick Provision Eager Zeroed”
- Leave location option on “Store with the virtual machine”
- Change the Virtual Device Node – from the drop-down list select any address starting with (1:x)
- Adding this disk will automatically create a new SCSI controller – finish the wizard
- Now select the newly added SCSI Card and check the following
- SCSI Controller Type = LSI Logic SAS
- Set SCSI Bus Sharing to “Physical”
- The Quorum disk should be sized at 1 GB
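The scripted version of this disk setup is the same PowerCLI sketch shown in the single-host section, except the new controller uses Physical bus sharing so that VMs on different ESXi hosts can share the VMDK. Names are placeholders as before.

```python
import subprocess

POWERCLI = r"""
Connect-VIServer -Server vcenter.example.local
$vm   = Get-VM -Name 'SQLNODE1'
$disk = New-HardDisk -VM $vm -CapacityGB 1 -StorageFormat EagerZeroedThick
# Physical bus sharing is the multi-host difference
New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Physical
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", POWERCLI], check=True)
```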
Creating the Second cluster node virtual machine
The settings that allow the sharing of the VMDK files from the shared storage are all configured on the second VM.
- Create the virtual machine with the required specs.
- Install Windows Server 2012 Datacenter
- Add 2nd Network Card (used for Cluster-to-Cluster communications)
- Set IP address for the Production Network
- Set IP address for the Cluster Heartbeat Network
- Join Server to Domain
- Move to Correct AD OU
- Install Windows Updates
- Enable Remote Desktop
- Activate Windows License
- Shutdown Server
- Add additional drive for SQL Installs/Files etc (non-shared cluster resources)
- Add the shared disks (DB, Quorum etc)
- Edit the Virtual Machine settings.
- Add a “Hard Disk” – Select “Use an existing disk”
- Browse to the Datastore and Folder of the first cluster node and select the shared disks (Quorum, DB etc)
- Change the Virtual Device Node – from the drop-down list select any address starting with (1:x)
- Adding this disk will automatically create a new SCSI controller – finish the wizard
- Now select the newly added SCSI Card and check the following
- SCSI Controller Type = LSI Logic SAS
- Set SCSI Bus Sharing to “Physical” (this must match the setting on the first node)
- Repeat the disk-adding steps above for all shared disks
Network Card Binding.
As the servers have two network cards, it’s important to adjust the binding order so the Cluster Heartbeat network isn’t used for any other communications.
- Click Start, click Network, click Network and Sharing Centre, and then click Change Adapter Settings.
- Press the ALT key, click Advanced, and then click Advanced Settings. If you are prompted for an administrator password or confirmation, type the password or provide confirmation.
- Click the Adapters and Bindings tab, and then, under Connections, click the connection you want to modify.
- Move the Production Adapter to the top of the list.
Initialise Shared Disks
Now the shared disks must be brought online.
- Open Disk Management on Cluster Node 1
- Select the Quorum Disk
- Bring it online
- Initialise the Disk (MBR)
- Create New Simple Volume
- Set Drive Letter to Q:\
- Change Volume Label to Quorum.
- Repeat these steps for all the shared disks
AD Accounts and Permissions
Before installing the cluster services, you will need to create the AD accounts noted below.
Once created, add the Cluster admin AD account to the local Administrators group on both cluster nodes.
Component | Permissions Required
Cluster Admin AD User (Domain User) | Local Admin on cluster nodes; Create Computer Objects – OU container; Read All Properties – OU container
SQL Accounts
Component | Account to use (Windows Server 2008 R2 or higher) |
Database Engine | Provide a domain user account. |
SQL Server Agent | Provide a domain user account. |
SSAS | Provide a domain user account. |
SSIS | Virtual Account |
SSRS | Virtual Account |
FD Launcher (Full-text Search) | Virtual Account |
SQL Server Browser | LOCAL SERVICE |
SQL Server VSS Writer | LOCAL SYSTEM |
Install Cluster Services – Node 1
On the first cluster node run the “Add Roles and Features Wizard”
- Log in with an admin account, add the Cluster admin AD account to the local Administrators group, then log out
- Log in with the Cluster admin AD account.
- Select “Role-based or feature-based installation”
- Select the First cluster node server
- Skip the “Server Roles” and proceed to Features
- In the Features menu select “Failover Clustering” and all required sub-features.
- Select “Install”
- Complete a restart when required
- Once restarted, select “Tools” from Server Manager and open “Failover Cluster Manager”
- Validate Cluster – Select Validate Cluster from the Actions menu
- Enter the server name
- Select All tests
- Wait for tests to finish
- Resolve any errors that are displayed.
Validate Cluster
Now validate the cluster and correct any errors.
- In Failover Cluster Manager, select “Validate Cluster” from the Actions menu
- Enter the Server name
- Select “Run all tests”
Begin cluster Install
Select “Create Cluster” from Actions menu
- Enter Cluster name
- Enter Cluster IP Address – This is the cluster management IP address.
- Select Next and follow the process through
- You may get the following warning:
Warnings
An appropriate disk was not found for configuring disk witness. The cluster is not configured with a quorum witness. As a best practice, configure a quorum witness to help achieve the highest availability of the cluster.
This will be corrected later on in the build process.
Now Set Permission for Computer Object
Now set the OU permissions so the cluster computer object can create computer objects.
Cluster Computer Account | Create Computer Objects – OU container; Read All Properties – OU container; Full Control – Cluster Admin account
Quorum Disk
To set the quorum disk you must bring the disk online and then add it to cluster storage.
First make sure the Quorum disk is still online, if not bring it online.
- Expand the cluster from the “Failover Cluster Manager” and open Storage, then Disks
- Select Add Disk from the “Actions” menu
- This will launch the “Add Disks to a Cluster” wizard
- Select the Disk you’ve previously configured for the Quorum
- This disk will now be added and come online.
- Right Click the newly formed cluster and select “More Actions -> Configure Cluster Quorum Settings”
- Select “Advanced quorum configuration”
- Select “All Nodes”
- Select “Configure a disk witness”
- Select the disk you brought online earlier (the Quorum disk)
- Finish the setup.
Install Cluster Services – Node 2
On the second cluster node run the “Add Roles and Features Wizard”
- Log in with an admin account and add the Cluster admin AD account to the local Administrators group, then log out
- Log in with the Cluster admin AD account.
- Select “Role-based or feature-based installation”
- Select the second cluster node server
- Skip the “Server Roles” and proceed to Features
- In the Features menu select “Failover Clustering” and all required sub-features.
- Select “Install”
- Complete a restart when required
- Once restarted, select “Tools” from Server Manager and open “Failover Cluster Manager”
- Validate Cluster – Select Validate Cluster from the Actions menu
- Enter the server name
- Select All tests
- Wait for tests to finish
- Resolve any errors that are displayed.
Joining the nodes together.
- Connect to Node 1
- Open Failover Cluster manager
- Select Nodes
- Select Add Node from Actions window
- Add the name of Node 2
- Select “Yes..” on the Validation warning
- Select to include Quorum Disk
- Correct any warnings and then complete the install process.
Test Failover
Both nodes should now be showing online, with the Quorum disk owned by one of them. You should now test that the Quorum disk moves and fails over correctly.
- Open Failover Cluster manager
- Open nodes
- Right click Node 1 and select “Pause -> Drain Roles”
- This should now move the Quorum disk to the second node.
- Click Node 2 and check the “Disks” tab in the bottom pane – it should now show the Quorum drive
- Right click Node 1 and select “Resume -> Do Not Fail Roles Back”
- Now repeat the same steps with Node 2
You should see the disk fail over between the two nodes.
Now repeat this process for all other shared drives.
SQL Server Install Node 1
Log in to the first cluster node with the Cluster admin account and run the SQL Server Setup.exe.
- Select “New SQL Server failover cluster installation”
- Enter Product key
- Accept the license.
- Select “Use Microsoft Update to check for updates”
- Select the product updates if there are any
- Resolve any errors that occur (Windows Firewall and Microsoft Cluster Service (MSCS) can be ignored)
- Select the required Instance features.
- Set the Instance root directory, Shared feature directory and Shared feature directory (x86) to the Program Files drive you set up previously
- Set the SQL Server Network Name
- Set an instance name if required; otherwise keep the default
- Ignore the Cluster Resource Group Name errors if any show
- Next select the shared cluster disk – this should be the drive you planned to hold the SQL databases
- Next configure the SQL Server IP Address
- Enter the usernames and passwords for the configured service accounts
- Select your authentication method and the SQL Server administrators
- Check the Data directories are configured to use the shared drive
- Add any other settings you require for TempDB and FILESTREAM
- Finish the install
SQL Server Install Node 2
Log in to the second cluster node with the Cluster admin account and run the SQL Server Setup.exe.
- Select “Add node to a SQL Server failover cluster”
- Enter Product key
- Accept the license.
- Select “Use Microsoft Update to check for updates”
- Select the product updates if there are any
- Resolve any errors that occur (Windows Firewall and Microsoft Cluster Service (MSCS) can be ignored)
- Most of the settings now will be automatically detected from the first node.
- Enter the service account passwords when prompted.
- Wait for the installer to finish.
Testing Failover
Once the install has finished, try failing the SQL Server role over between the two nodes.
- Open Failover Cluster Manager
- Expand Roles.
- Right Click on the SQL Server cluster role
- Select Move -> Select Node
- Select the other cluster node and wait for the role to move
- Now complete the action in reverse.