Tuesday 25 February 2020


Adding disks to an existing Storage Pool in a SQL Failover Cluster with Storage Spaces Direct

      
For a Storage Spaces Direct storage pool, the new disks must be unformatted on each node.

VM1

VM2
The newly added disks are shown as available disks:




Run the following command to add the new disks to the cluster storage pool:
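The original screenshot of the command is missing. As a sketch, you can first confirm that the clustered storage subsystem sees the new disks as poolable (the `Cluster*` wildcard is an assumption about the default clustered subsystem name; verify with `Get-StorageSubSystem` on your cluster):

```powershell
# List disks visible to the clustered storage subsystem that are
# still unallocated and eligible for pooling.
Get-StorageSubSystem Cluster* | Get-PhysicalDisk -CanPool $true
```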



The disks are now merged into the Windows clustered storage:


Add the new disks to the existing storage pool:
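A minimal PowerShell sketch of this step, assuming the pool is the "ClusterPool" used elsewhere in this setup (check the actual name with `Get-StoragePool`):

```powershell
# Grab all disks that are still eligible for pooling and add them
# to the existing clustered storage pool.
$newDisks = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -StoragePoolFriendlyName "ClusterPool" -PhysicalDisks $newDisks
```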



Extend the storage pool virtual disks:
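A hedged example of this step; the virtual disk name and the new size are placeholders (the article's data volume is created as SQL17Data further below, and 200GB is an illustrative target):

```powershell
# Grow the virtual disk to consume some of the newly pooled capacity.
Resize-VirtualDisk -FriendlyName "SQL17Data" -Size 200GB
```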


After expanding the virtual disks, the new space is available to extend the volumes:
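The volume extension itself can be scripted as well; a sketch assuming the SQL17Data virtual disk from this setup (the partition filter may need adjusting for your layout):

```powershell
# Find the data partition on the expanded virtual disk and grow it
# to the maximum size the disk now supports.
$part = Get-VirtualDisk -FriendlyName "SQL17Data" |
    Get-Disk | Get-Partition | Where-Object Type -eq Basic
$supported = $part | Get-PartitionSupportedSize
$part | Resize-Partition -Size $supported.SizeMax
```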





Configuring Windows Server 2016 Storage Spaces Direct for SQL Server Failover Cluster


         Storage Spaces Direct is a new feature in Windows Server 2016 Datacenter edition that lets you build a highly available storage solution from the local storage of each node.

This article describes the step-by-step procedure for configuring Storage Spaces Direct for a Windows failover cluster.


Requirements for S2D cluster nodes:
  • Windows Server 2016 Datacenter edition;
  • At least two servers in a cluster;
  • In addition to the system drive, there must be at least three physical disks. All disks that you are going to add to the Storage Spaces Direct must be unformatted.

Preparing disks for Storage Spaces Direct:

Before we configure Storage Spaces Direct, we need to ensure the disks we are using are free of all partitions; otherwise, they will not be included in the Storage Spaces Direct pool.


Servers: VM1 & VM2

Add empty (unformatted) disks to each node:

VM1 

VM2
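You can verify on each node that the new disks are clean and poolable; a quick sketch:

```powershell
# Run on each node: list disks with no partitions that are
# eligible to join a storage pool.
Get-PhysicalDisk -CanPool $true | Select-Object FriendlyName, Size, CanPool
```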



The new volumes now appear as available disks in the Storage Pool:


Note: You can find the Primordial pool with PowerShell. (The Primordial pool is simply a holding pool for all unallocated disks connected to the server you are currently managing. When you create a storage pool, you pull disks from this pool into the new storage pool.)
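For example:

```powershell
# List the unallocated disks sitting in the Primordial pool.
Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk
```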



Configure Storage Spaces Direct on the cluster:

We have a functioning Windows cluster setup already, so now it’s time to set up Storage Spaces Direct.


Please note, this cmdlet will fail if you have fewer than 3 disks on the cluster nodes. In this scenario, there are two disks per node, four in total.
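The cmdlet in question is presumably Enable-ClusterStorageSpacesDirect (alias Enable-ClusterS2D); a sketch that matches this article's flow, where the pool is created manually afterwards (the -Autoconfig:$false switch, which skips automatic pool creation, is an assumption based on that flow):

```powershell
# Enable Storage Spaces Direct on the running cluster without
# auto-creating the storage pool (we create it via the wizard next).
Enable-ClusterStorageSpacesDirect -Autoconfig:$false
```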


Creating the Storage Pool in the cluster via the wizard:






Select the number of disks needed for pool creation. If you want to designate a physical disk as a hot spare, then select the “Hot Spare” allocation type instead of Automatic.


From the screenshots above, we can see that we cannot proceed with the configuration when only 2 disks are selected; at least 3 disks are required to create a pool.


Note: Storage pool creation using PowerShell cmdlets

Below are a couple of PowerShell cmdlet examples that perform the same provisioning action described above in Server Manager: create a storage pool from all the available physical disks. More information on pool creation with PowerShell cmdlets is available here.

$PhysicalDisks = Get-StorageSubSystem -FriendlyName "Storage Spaces*" | Get-PhysicalDisk -CanPool $True

New-StoragePool -FriendlyName CompanyData -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks $PhysicalDisks

From Failover Cluster Manager, we can now see all the added physical disks in the Storage Pool:


Create virtual disk (storage space):

Here, for a test environment, two volumes (one for data and one for logs) have been created with a 64 KB allocation unit size, fixed provisioning, and NTFS. SQL Server 2016 also supports ReFS; however, there is still no definitive documentation that recommends ReFS over NTFS for SQL Server workloads.

New-Volume -StoragePoolFriendlyName "ClusterPool" -FriendlyName SQL17Data -FileSystem CSVFS_NTFS -AllocationUnitSize 65536 -Size 100GB -ProvisioningType Fixed -ResiliencySettingName Mirror
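The matching log volume would be created the same way; the friendly name SQL17Logs and the 50GB size are assumptions, since the article does not show this command:

```powershell
# Second volume for SQL Server transaction logs, same settings as the data volume.
New-Volume -StoragePoolFriendlyName "ClusterPool" -FriendlyName SQL17Logs -FileSystem CSVFS_NTFS -AllocationUnitSize 65536 -Size 50GB -ProvisioningType Fixed -ResiliencySettingName Mirror
```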

A small note on S2D storage options: it offers both two-way and three-way mirroring, and a two-way mirror is created by default unless specified otherwise via the PhysicalDiskRedundancy or NumberOfDataCopies parameters. A two-way mirror leaves more usable space but can only sustain the loss of one disk, whereas a three-way mirror can sustain the loss of two drives at the cost of more raw storage used.
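A sketch of requesting a three-way mirror explicitly (note this needs at least three fault domains, e.g. three nodes, so it would not work on the two-node VM1/VM2 setup shown here; the volume name is illustrative):

```powershell
# PhysicalDiskRedundancy 2 = three data copies, tolerates two drive failures.
New-Volume -StoragePoolFriendlyName "ClusterPool" -FriendlyName SQL17Data3 -FileSystem CSVFS_NTFS -Size 100GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2
```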


You can also verify with PowerShell:

Get-ClusterSharedVolume

The CSV disks will show in Windows as two mount points under C:\ClusterStorage, called Volume1 and Volume2. We can rename them to something friendlier:
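For example (run on the node that owns the CSV; the SQL17Data/SQL17Logs names are an assumption matching the volumes in this setup):

```powershell
# Rename the CSV mount point folders under C:\ClusterStorage.
Rename-Item -Path C:\ClusterStorage\Volume1 -NewName SQL17Data
Rename-Item -Path C:\ClusterStorage\Volume2 -NewName SQL17Logs
```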


We can verify that the cluster shared volumes were created in the Failover Cluster Manager GUI:


At this point, we’re good to go on the cluster and storage side.