I recently came across a puzzling SQL install issue. Actually, it was more of a “SQL won’t let me install” issue with SQL Server 2008 R2. When running the SQL installer for a failover cluster, one of the prerequisite checks is that cluster validation has been run successfully on the Windows cluster. If that test does not pass, you will get the error described in KB953748. The KB article does show you a workaround to ignore the error, but do you really want to continue installing SQL when your Windows cluster is not 100% healthy?

When this issue was brought to me, at first I panicked… Is Windows becoming self-aware? Will it prevent me from harming it? Will I soon be a slave in a world run by machines? Oh wait, that is already the case. Might as well succumb to my fate and get to work on the issue.

This type of failure is a bit outside the realm of a DBA, but I wasn’t going to settle for passing the buck; I wanted to know what was going on. Luckily, it is pretty easy to run the validation report from Failover Cluster Manager, since this particular system was not yet in production. In the Management pane you just select “Validate a Configuration” and go through the wizard (it can also be run from PowerShell, as sketched below). This does require local administrative rights on all cluster nodes, so a DBA may not have access to run this report in many environments. Your OS administrators can often be your best friends here, especially since they would be required to help resolve the issue anyway.
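If you prefer a scriptable route, the FailoverClusters PowerShell module that ships with Windows Server 2008 R2 can kick off the same validation. A minimal sketch, with hypothetical node names:

    # Load the failover clustering cmdlets (built into Windows Server 2008 R2 and later)
    Import-Module FailoverClusters

    # Run the full validation suite against both nodes of the cluster.
    # This is the same set of tests the "Validate a Configuration" wizard runs,
    # and it likewise requires local admin rights on every node listed.
    Test-Cluster -Node "ServerX", "ServerY"

The cmdlet produces the same .mht validation report the wizard does, so the storage failures described below show up either way.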

The report showed that the storage test failed. The specific error message was “Multi-path I/O (MPIO) does not work for disk with identifier ***** on node ServerX.domain.com”. When looking at the report in detail, I noticed a number of volumes listed as “EnabledForValidation = ‘True’” that did not seem to be any of the volumes the SQL instance would use. Note that the storage in this case is EMC, so most of the information going forward is vendor specific to EMC, though it may still apply to other storage vendors.

The report showed a number of disks with identifier ***** (number hidden to protect the innocent). I looked at them in Disk Management and they were all little 5 MB SAN disks presented to the OS, obviously pointing to the same place since they shared the same identifier. In EMC terms, these are known as Gatekeeper volumes. They are used by the server to communicate with the VCMDB and ACLX of the storage frame so it knows how to talk to the volumes allocated to it. I will not pretend to be a storage expert, but I did find a fairly straightforward article that briefly describes the VCM and ACLX.
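For what it’s worth, these little disks are easy to spot from the OS side without clicking through Disk Management. A quick WMI query (a sketch that works on Windows Server 2008 R2 with no extra modules) lists every physical disk with its size and serial number, so a handful of ~5 MB devices reporting the same identifier stand out immediately:

    # List all physical disks with index, model, serial number, and size in MB.
    # Gatekeeper volumes show up as a cluster of tiny (~5 MB) disks,
    # often reporting the same identifier/serial number.
    Get-WmiObject -Class Win32_DiskDrive |
        Sort-Object Size |
        Format-Table Index, Model, SerialNumber,
            @{Name = "SizeMB"; Expression = { [math]::Round($_.Size / 1MB, 1) }} -AutoSize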

These disks should not be in consideration to be cluster resources! So why does the Cluster Validation report show EnabledForValidation = TRUE for them? Because the disks are visible to all cluster nodes. One way we validated that these gatekeeper volumes did not belong was with one of my favorite troubleshooting techniques: look at an environment that works and play the old “one of these things is not like the other” game. Looking at the disk properties, there were noticeable differences between the correct (left) and incorrect (right) configurations:

[Screenshots: gatekeeper disk properties, correct configuration (left) vs. incorrect configuration (right)]

The size difference does not really matter here (that may differ from frame to frame; the two servers in this comparison are connected to different frames). We then got in touch with the storage team to compare the two. What it came down to was that the ACLX was set to Write Enabled on the frame when it should be Write Disabled. Windows should in no way have the ability to write to these disks, which includes creating a volume on them! One workaround from the Windows OS side would be to disable these disks in Device Manager so that the OS doesn’t really see them anymore (you can re-enable them there later); a scripted version is sketched below. That is just a workaround, though, and the true fix should be performed at the frame level in the SAN.
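If you ever need to script that workaround, newer versions of Windows (Server 2012 R2 and later, not the 2008 R2 boxes in this story) ship PnP cmdlets that can disable a disk device the same way Device Manager does. A rough sketch, with a hypothetical instance ID:

    # CAUTION: sketch only. Requires the PnpDevice module (newer Windows versions),
    # and disabling the wrong disk device will take real storage offline.
    # First, list the disk devices and note the tiny gatekeepers:
    Get-PnpDevice -Class DiskDrive | Format-Table FriendlyName, InstanceId -AutoSize

    # Then disable a specific device by its instance ID (placeholder value shown):
    Disable-PnpDevice -InstanceId "SCSI\DISK&VEN_EMC&PROD_SYMMETRIX\5&1234ABCD&0&000100" -Confirm:$false

On 2008 R2 itself, the old devcon.exe utility from the Windows Driver Kit can do the same disable/enable dance.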

Just for fun, we decided to run cluster validation on another cluster connected to the same frame, one that had recently passed validation and already had a working SQL instance (not yet in production). Cluster validation failed with the same error! That proved to us that there had obviously been a configuration change on the frame that needed to be reverted.
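As a side note, you can also see these imposters from the clustering side of the house. Get-ClusterAvailableDisk (FailoverClusters module again; the cluster name is a placeholder) lists the disks the cluster considers candidate resources, i.e. disks visible to every node but not yet clustered, which is roughly the population the validation report flags as EnabledForValidation:

    # Show every disk the cluster considers a candidate resource
    # (visible to all nodes, not yet clustered). Inspect all properties,
    # since tiny gatekeeper disks have no business appearing here.
    Get-ClusterAvailableDisk -Cluster "SQLCLUSTER01" | Format-List *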

Once this was complete, we passed this portion of validation, but then failed another piece of storage validation… UGH!

[Screenshot: failed storage validation test (SCSI-3 Persistent Reservation)]

The storage team showed us that the volumes did in fact have SCSI-3 Persistent Reservations enabled. Unfortunately, the only way to get around this was for the storage team to rezone all the disks mapped to the clusters. There are two parts to it: zoning and masking. Zoning allows the servers to talk on the SAN fabric (to the FA ports explicitly listed for each HBA/WWN), and that is the first step. After the disk devices have been created on the frame, they are then masked so that the servers are allowed to see them. Since each server has multiple paths to the frame (via multiple HBAs), this can be done one path at a time without losing the drives in the process, which is really nice for the SQL instances that were already installed. Luckily, this was a brand new frame, so none of the servers connected to it were in production yet. After rezoning was complete, everything was happy.
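One small time-saver after a change like the rezone: you do not have to sit through the whole wizard again just to re-check storage. Test-Cluster can run a subset of the tests, so a quick storage-only pass (node names are placeholders) looks something like this:

    # See the exact test names/categories available on this OS version:
    Test-Cluster -List

    # Re-run just the storage tests, which include the
    # SCSI-3 Persistent Reservation check, against both nodes:
    Test-Cluster -Node "ServerX", "ServerY" -Include "Storage"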

This is really far more information than a database professional needs to know about SAN storage, but since SQL is nothing without the drives that hold the data, it is fairly important to at least understand some of the architecture and nomenclature behind it. If you want to understand your particular environment better, just hit up one of your storage engineers. Like most of us techies, they love sharing knowledge about their area of expertise, and they will probably want to understand more about how SQL interacts with their storage as well. I am FAR from a storage expert, but over the years I have gained a lot of insight from the storage engineers I have worked with, and it has really helped me in my job as a SQL professional.