Storage Spaces Direct with 3 VMs using Windows Server 2016 Technical Preview 5

In this blog post let’s look at creating a Storage Spaces Direct hyper-converged solution using three virtual machines. For production deployments, it is recommended to use physical servers instead of virtual machines. I will be using Windows Server 2016 Technical Preview 5, which was released just a few days ago, for this blog post.

Before I move any further, I would like to highlight some of the key features introduced as part of Windows Server 2016 Technical Preview 5:

– Automatic Configuration

– Storage Spaces Direct Management Using Virtual Machine Manager

– Chassis and Rack Fault Tolerance

– Deployment with 3 Servers

– Deployments with NVMe, SSD and HDD

Overview of Storage Spaces Direct

Storage Spaces Direct enables building highly available and scalable storage systems using local storage. We can utilize storage locally attached to individual nodes such as HDD, SSD and NVMe drives for creating Storage Spaces Direct volumes.

There are two deployment scenarios for Storage Spaces Direct: hyper-converged and disaggregated. In this post, I will be demonstrating the hyper-converged scenario.

In this blog post let’s look at how we can create Storage Spaces Direct with 3 virtual machines using mirrored resiliency. This deployment is resilient to a single node failure.

Step 01 – Create 3 virtual machines, each with 2 networks and 3 hard drives (1 for the OS and the other two for Storage Spaces Direct). Add all 3 virtual machines to a domain.
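
Step 01 can be scripted on the Hyper-V host. The sketch below is one way to do it; the VM names match this post, but the virtual switch names, paths, memory and disk sizes are assumptions — adjust them for your lab.

```powershell
# Sketch of Step 01, run on the Hyper-V host.
# 'LabSwitch1', 'LabSwitch2' and the D:\VMs paths are assumed names.
foreach ($name in 'ws164cls1','ws164cls2','ws164cls3') {

    # Generation 2 VM with an OS disk and a first network adapter
    New-VM -Name $name -Generation 2 -MemoryStartupBytes 4GB `
        -NewVHDPath "D:\VMs\$name\OS.vhdx" -NewVHDSizeBytes 60GB `
        -SwitchName 'LabSwitch1'

    # Second network adapter on a separate virtual switch
    Add-VMNetworkAdapter -VMName $name -SwitchName 'LabSwitch2'

    # Two additional data disks for Storage Spaces Direct
    1..2 | ForEach-Object {
        $vhd = "D:\VMs\$name\Data$_.vhdx"
        New-VHD -Path $vhd -SizeBytes 100GB -Dynamic | Out-Null
        Add-VMHardDiskDrive -VMName $name -Path $vhd
    }
}
```

Joining the VMs to the domain is then done inside each guest as usual.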

Step 02 – Install the File Services and Failover Clustering features. You can do so by using the PowerShell command below.

Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName $VMname
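
Because the command accepts -ComputerName, the features can be installed on all three nodes from a single management machine; a short sketch using the node names from this post:

```powershell
# Install the required features on all three nodes remotely.
# Assumes the nodes are reachable and you have admin rights on each.
foreach ($VMname in 'ws164cls1','ws164cls2','ws164cls3') {
    Install-WindowsFeature -Name File-Services, Failover-Clustering `
        -IncludeManagementTools -ComputerName $VMname
}
```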

Step 03 – Before we go ahead and create the cluster, let’s validate our cluster configuration.

Test-Cluster -Node 'ws164cls1','ws164cls2','ws164cls3' -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"

The validation test comes back with a failure for Disk Configuration. This is due to Technical Preview 5 not recognizing the virtual hard drive storage media type. This will be fixed in the next release, but for now we need to skip some of the validation for this to work with Technical Preview 5.

Step 04 – Let’s go ahead and create a new cluster without any storage.

New-Cluster -Name 'ws164cluster1' -Node 'ws164cls1','ws164cls2','ws164cls3' -NoStorage

Step 05 – Configure a Cloud Witness for this cluster.

Set-ClusterQuorum -CloudWitness -AccountName <AccountName> -AccessKey <AccessKey>

Step 06 – Now that we have created a cluster, the next step is to enable Storage Spaces Direct. Please note that we cannot use the commands we used as part of Technical Preview 4, since the enable operation will fail because it cannot detect the required storage disks. This is due to Technical Preview 5 not recognizing virtual hard drives, and for this reason we need to skip the eligibility checks.

Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks -Confirm

Step 07 – Once we have enabled Storage Spaces Direct, we need to go ahead and create the storage pool manually. If we were using physical servers, we could use automatic configuration, but this will not work at the moment for virtual machines.

New-StoragePool -StorageSubSystemFriendlyName *Cluster* -FriendlyName S2D -ProvisioningTypeDefault Fixed -PhysicalDisk (Get-PhysicalDisk | ? CanPool -eq $true)
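
Before moving on to tiering, it is worth sanity-checking which disks the pool actually claimed; a quick check using the pool name from this post:

```powershell
# List the physical disks that were pulled into the S2D pool.
Get-StoragePool -FriendlyName S2D |
    Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType, Size, HealthStatus
```

All six data disks (two per node) should be listed and healthy.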

Step 08 – Create Storage Tiers

$pool = Get-StoragePool S2D

New-StorageTier -StoragePoolUniqueID ($pool).UniqueID -FriendlyName Performance -MediaType HDD -ResiliencySettingName Mirror

New-StorageTier -StoragePoolUniqueID ($pool).UniqueID -FriendlyName Capacity -MediaType HDD -ResiliencySettingName Parity
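
To confirm both tiers were created with the intended settings, they can be listed before creating a volume:

```powershell
# Verify the Performance (mirror) and Capacity (parity) tiers exist.
Get-StorageTier |
    Select-Object FriendlyName, MediaType, ResiliencySettingName
```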

Step 09 – Create a Volume

New-Volume -StoragePool $pool -FriendlyName Mirror -FileSystem CSVFS_ReFS -StorageTierFriendlyNames 'Performance','Capacity' -StorageTierSizes 50GB, 200GB

Within Cluster Manager we can now see that we have a new CSV disk available, which can be used by Hyper-V for hosting Virtual Machines.
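
The same information is available from PowerShell; a quick check that the volume surfaced as a Cluster Shared Volume:

```powershell
# The new volume should appear as an Online CSV, owned by one of the nodes.
Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode
```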

As mentioned before, if a single node fails, we still have access to storage.

Turn off WS164CLS3.

The CSV disk is still online, and we can see that it has moved to WS164CLS2. We can still perform reads and writes.
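
This failover test can also be driven from PowerShell; a sketch, where the hard power-off runs on the Hyper-V host and the CSV query runs on a surviving node:

```powershell
# On the Hyper-V host: hard power-off one node to simulate a failure.
Stop-VM -Name ws164cls3 -TurnOff

# On a surviving node: the CSV should still be Online,
# now owned by one of the remaining nodes.
Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode
```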

However, if we have two node failures, then we will lose access to storage.

References for more information

TechNet – Storage Spaces Direct in Windows Server 2016 Technical Preview

TechNet – Storage Spaces Direct Hardware Requirements

Hyper-converged solution using Storage Spaces Direct in Windows Server 2016
