How to Configure a Clustered Storage Space in Windows Server 2012


This blog outlines the sequence of steps to configure a Clustered Storage Space in Windows Server 2012 using the Failover Cluster Manager or Windows PowerShell®. You can learn more about Storage Spaces here:


Prerequisites:

  • A minimum of three physical disks, each with at least 4 gigabytes (GB) of capacity, is required to create a storage pool in a Failover Cluster.
  • The clustered storage pool MUST be comprised of Serial Attached SCSI (SAS) connected physical disks. Layering any form of storage subsystem, whether an internal RAID card or an external RAID box, regardless of being directly connected or connected via a storage fabric, is not supported.
  • All physical disks used to create a clustered pool must pass the Failover Cluster validation tests. To run the validation tests, open the Failover Cluster Manager interface (cluadmin.msc) and select the Validate Cluster option.
  • Clustered storage spaces must use fixed provisioning.
  • Simple and mirror storage spaces are supported for use in a Failover Cluster. Parity spaces are not supported.
  • The physical disks used for a clustered pool must be dedicated to the pool. Boot disks should not be added to a clustered pool, nor should a physical disk be shared among multiple clustered pools.
  • Storage spaces formatted with ReFS cannot be added to a Cluster Shared Volume (CSV).
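As a quick check before you begin, you can list the disks that satisfy these prerequisites from an elevated Windows PowerShell® prompt. This is a sketch using the Storage module cmdlets; the output depends entirely on your hardware:

```powershell
# List disks eligible for a clustered pool:
# poolable, SAS-attached, and at least 4 GB in size.
Get-PhysicalDisk -CanPool $true |
    Where-Object { $_.BusType -eq "SAS" -and $_.Size -ge 4GB } |
    Select-Object FriendlyName, BusType,
        @{ Name = "SizeGB"; Expression = { [math]::Round($_.Size / 1GB, 1) } }
```

If fewer than three disks are listed, the New Storage Pool Wizard below will not be able to create a clustered pool.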

Steps to configure using the Failover Cluster Manager

1. Add the File Services role and the File Services Role Administration Tools to all nodes in the Failover Cluster.

2. Open the Failover Cluster Manager interface (cluadmin.msc).

3. In the left-hand pane, expand Storage. Right-click Pools and select New Storage Pool. This starts the New Storage Pool Wizard.

4. Specify a name for the storage pool, choose the storage subsystem that is available to the cluster, and click Next.

5. Select the physical disks (a minimum of three, each with at least 4 GB capacity and bus type SAS) for the storage pool and confirm the creation of the pool. Once created, the pool is added to the cluster and brought Online.

6. The next step is to create a virtual disk (storage space) associated with the storage pool. In the Failover Cluster Manager, right-click the storage pool that will back the virtual disk and choose New Virtual Disk.

7. This starts the New Virtual Disk Wizard. Select the server and storage pool for the virtual disk and click Next. Note that the cluster node hosting the storage pool will be listed.

8. Provide a name and description for the virtual disk and click Next.

9. Specify the desired storage layout (Simple or Mirror; Parity is not supported in a Failover Cluster) and click Next.

Note: I/O operations to a CSV mirror space are redirected at the block level through the CSV coordinator node. This may result in different performance characteristics for I/O to the storage, compared to a simple space.

10. Specify the size of the virtual disk and click Next. After you confirm your selections, the virtual disk is created. The New Volume Wizard is launched unless you uncheck that option on the confirmation page.

11. The correct disk and the server to provision it to should already be selected. Verify this selection and click Next.

12. Specify the size of the volume and click Next.

13. Optionally assign a drive letter to the volume and click Next.

14. Select the file system settings, click Next, and confirm the volume settings. The new volume is created on the virtual disk and added to the Failover Cluster.

Note: Select the NTFS file system if the volume is to be added to Cluster Shared Volumes.

15. Your clustered storage space can now be used to host clustered workloads. You can also view the properties of the clustered storage space, and of the clustered pool that contains it, from the Failover Cluster Manager.


Steps to configure using Windows PowerShell®


Open a Windows PowerShell® console and complete the following steps:

1. Create a new pool

a.  Select physical disks to add to the pool

$phydisk = Get-PhysicalDisk -CanPool $true | Where-Object BusType -eq "SAS"

b.  Obtain the storage subsystem for the pool

$stsubsys = Get-StorageSubsystem

c.       Create the new storage pool

$pool = New-StoragePool -FriendlyName TestPool -StorageSubsystemFriendlyName $stsubsys.FriendlyName -PhysicalDisks $phydisk -ProvisioningTypeDefault Fixed

d.  Optionally add an additional disk as a hot spare

$hotSpareDisk = Get-PhysicalDisk -CanPool $true | Out-GridView -PassThru

Add-PhysicalDisk -StoragePoolFriendlyName TestPool -PhysicalDisks $hotSpareDisk -Usage HotSpare
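To confirm pool membership and the hot-spare designation, you can query the pool afterwards. This sketch assumes the pool name TestPool used in the steps above:

```powershell
# Show each disk in the pool and how it is used
# (AutoSelect for data disks, HotSpare for the spare).
Get-StoragePool -FriendlyName TestPool |
    Get-PhysicalDisk |
    Select-Object FriendlyName, Usage, HealthStatus
```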


2. Now create a storage space in the pool created in the previous step

a.  $newSpace = New-VirtualDisk -StoragePoolFriendlyName TestPool -FriendlyName space1 -Size 1GB -ResiliencySettingName Mirror


3. Initialize, partition, and format the storage space created

a.  $spaceDisk = $newSpace | Get-Disk

b.  Initialize-Disk -Number $spaceDisk.Number -PartitionStyle GPT

c.  $driveletter = "Z"    # substitute an available drive letter

$partition = New-Partition -DiskNumber $spaceDisk.Number -DriveLetter $driveletter -Size $spaceDisk.LargestFreeExtent

d.  Format-Volume -Partition $partition -FileSystem NTFS


4. Add the storage space created to the cluster

a.  $space = Get-VirtualDisk -FriendlyName space1              

b.   Add-ClusterDisk $space
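If the volume was formatted with NTFS, the clustered disk can optionally be converted to a Cluster Shared Volume in the same session. This is a sketch: it captures the Physical Disk cluster resource that Add-ClusterDisk returns (step 4.b) and passes its name to Add-ClusterSharedVolume. Recall from the prerequisites that ReFS-formatted spaces cannot be added to CSV.

```powershell
# Capture the Physical Disk cluster resource returned by Add-ClusterDisk,
# then convert it to a Cluster Shared Volume.
$space = Get-VirtualDisk -FriendlyName space1
$clusterDisk = Add-ClusterDisk $space
Add-ClusterSharedVolume -Name $clusterDisk.Name
```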



  • Clustered Spaces can also be created using Server Manager.

  • You can find a full end to end Windows PowerShell® sample on setting up a file server cluster with Storage Spaces here.

Troubleshooting tips:

If you come across either of the following errors while attempting to add a storage pool to the cluster, please review the Prerequisites section at the beginning of this blog to determine which requirement was not met:

Failed to add storage pool to cluster – {XXXXXXX-XXXX-XXXX-XXXX-XXXXXXX}

No storage pool suitable for cluster was found. 




Subhasish Bhattacharya
Program Manager
Clustering & High Availability

  • @Layering

    In a nutshell I mean that you cannot combine h/w or s/w RAID with storage spaces.

    "I want to connect the required SAS drives using a SAS controller I can install into an available PCI or PCIe slot. Will that work?" Absolutely!

  • @eripoll

    Actually, I did not mention using DAS. Shared storage is still required for clustering in Windows Server 2012.

  • In a nutshell I mean that you cannot combine h/w or s/w RAID with storage spaces.

    Ok, wait a minute... if I put disks in a number of servers and use a RAID on those disks... say with a SAS controller...

    You're telling me I can't create Storage Spaces?

    That doesn't make any sense to me... I should at least be able to create a RAID array internal to the server, present that volume as a disk to its server, repeat on more than one server... then continue the setup.

  • With regard to building a Hyper-V 2-node cluster, can I use a dual-expander SAS JBOD enclosure with three SAS disks?  Each server node would be connected to one of the expanders on the JBOD enclosure with a SAS HBA card.  From there, I'm assuming that they can both "see" the disks and I can create a cluster shared volume.

  • Hi Sean,

    Spaces is a software solution to achieve resiliency with low cost JBOD storage, it is not another layer to be put on top of a software or hardware RAID solution.  You are correct, you cannot create a Space on top of SAS RAID.

    Spaces is a new solution that unlocks a new set of scenarios and hardware deployments, so think of it as something new...   it is not just another type of ketchup to throw on top of your hamburger.  



  • Hi Thomas,

    You're correct - this configuration should work fine.



  • Can anyone suggest a low cost dual initiator SAS JBOD enclosure to set this up with, I can't find one for less than about $5,000 (from eonstor) which is not much cheaper than a dual controller iSCSI box. There are ones that would work such as HP D2700 but for some reason they only support dual-domain, not dual-initiator.

  • Hi Andy,

    A start would be to look through the certified hardware list for WS 2012:

    A caveat is that we expect to have more partners certifying their hardware in the coming months.

    - Subhasish

  • Hi, I know this is a very late response, but in my previous post I asked about DAS and you responded with "I didn't mention DAS". I'm not sure if you are aware, but DAS means Direct Attached Storage, i.e. a SAS JBOD. I was just trying to be general since I have also seen SATA JBODs. With that in mind, are there any walkthroughs or documentation on connecting 2 nodes to the same DAS/JBOD/direct storage?

  • Hi e ripoll , The bottom line as you've gathered is that the SAS JBOD must be physically connected to all cluster nodes which will use the storage pool.  Direct attached storage, which is not connected to all cluster nodes is not usable for clustered pools with Storage Spaces. That said you should be able to consult your hardware vendor for shared SAS configurations - we are not providing guidelines on this. We list certified configurations here: . Thanks and happy new year!

  • Hi Subhash,

    I have a SAS storage box which is not a JBOD. The SAS storage box has a RAID controller which cannot be bypassed. If I create RAID 1s of all the disks individually, it means I don't have an abstraction layer on the storage.

    Will such a configuration be supported by MS? The Cluster Validation Utility passes all the tests the moment it sees SAS storage. It doesn't look at RAID metadata on the disks.

    So basically in my configuration there is no failure in the Cluster Validation Utility but the statement on this blog

    "         The clustered storage pool MUST be comprised of Serial Attached SCSI (SAS) connected physical disks. Layering any form of storage subsystem, whether an internal RAID card or an external RAID box, regardless of being directly connected or connected via a storage fabric, is not supported.


    means my config is not supported.

    Thanks...Any input appreciated.

  • Spaces is not supported on top of any form of hardware RAID.  For future questions please post them to the cluster forum, it can be found at this link:

  • Hi, I have a 2-node Windows Server 2012 Hyper-V cluster with EMC VNX FC storage - a 1 GB quorum and 3x 2.5 TB disks are already mapped. I tried creating a clustered storage pool and it was created, but I am not able to see it in Failover Cluster Manager. It shows under storage spaces, but the volume created is not part of the failover cluster. Any idea how I can resolve this?

  • Again... Spaces is not supported on top of any form of hardware RAID, that would include your Fibre Channel EMC SAN.  Blogs are not the best channel for troubleshooting conversations, please post future questions to the cluster forum, it can be found at this link:

  • Rob, thanks for this info.  I am confused on one topic.  I am getting ready to configure a pair of Dell C6100s with 4 nodes each w/2 Xeon 4/8Procs.  Each node is limited to 4 internal disks each.  I'm planning to install Hyper V Server 2012 only on 3 nodes, each node 1-Pair RAID1 Boot SAS6 Hyper V OS Boot Only Disks and 1-Pair RAID1 SAS6 for VM Boots Drives Only.  2 of the nodes will also have 4 external SAS6 Drives on a rack based backplain JBOD.  So I can plan to use the external SAS6 Disks, 2 sets of 4 (Node 1/2) to implement this.  Correct?  I will have 8 physical External Disks as one failover group for all other storage needs, basically data disks for all the VMs.  I'm basically using the SAS6 Internals to create very fast boot for both the Hyper V Root and All Boot VMs and using this shared storage cluster for everything else.  Am I correct?
