What the heck is PlumbAllCrossSubnetRoutes?


Hi cluster fans,

 

If you've ever browsed your cluster properties, you may have noticed an entry for “PlumbAllCrossSubnetRoutes” and asked yourself, “what the heck is that?”

 

The only thing you'll find on the web is the MSDN documentation (http://msdn.microsoft.com/en-us/library/aa371422(VS.85).aspx), and there's not a whole lot of info there:

 

PlumbAllCrossSubnetRoutes

Data type: uint32
Access type: Read/write

Plumbs all possible cross subnet routes to all nodes.

Windows Server 2003:  This property is not supported.

 

So let’s take a look at this in a little more detail and explain exactly what this does… 

 

First of all, this property is only exposed through the command-line interface.  If you run CMD as an administrator, you can view the cluster properties with cluster /prop.

 

Here you will find the PlumbAllCrossSubnetRoutes property:

 

[Screenshot: cluster /prop output showing the PlumbAllCrossSubnetRoutes property]

 

Starting in Windows Server 2008, Failover Clustering nodes can communicate with each other across a router (often referred to as multi-subnet or cross-subnet support), which is very important in supporting our multi-site clustering scenarios.  To do this, the cluster service builds a list of communication routes to every other node in the cluster and provides those routes to the NetFT driver, which is clustering’s fault-tolerant network driver.

 

Let’s assume you have a 2-node cluster.  Both nodes are in the same site, and both are connected to three networks A, B, and C.  Each node should end up with three routes in the NetFT driver – from Node 1’s perspective we will see: A1->A2, B1->B2, and C1->C2.  But now let’s assume that networks A, B, and C are all connected to the same routing infrastructure, and thus interfaces on all networks are mutually reachable.  Still, by default, each node will end up with the same three routes: A1->A2, B1->B2, and C1->C2.    
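To make the default behavior concrete, here is an illustrative sketch in Python (not the actual cluster service code; the interface names and the helper function are hypothetical) of how one route is built per network that both nodes share:

```python
# Illustrative sketch of the default NetFT route-building behavior:
# one route per network that both nodes have an interface on.
# Names (default_routes, "A1", etc.) are hypothetical, not cluster APIs.
def default_routes(node1_ifaces, node2_ifaces):
    """Pair up interfaces that sit on the same network.

    node1_ifaces, node2_ifaces: dicts mapping network name -> interface.
    """
    routes = []
    for net, if1 in node1_ifaces.items():
        if net in node2_ifaces:
            routes.append((if1, node2_ifaces[net]))
    return routes

node1 = {"A": "A1", "B": "B1", "C": "C1"}
node2 = {"A": "A2", "B": "B2", "C": "C2"}
print(default_routes(node1, node2))
# [('A1', 'A2'), ('B1', 'B2'), ('C1', 'C2')]
```

Even though the router could carry traffic between, say, A1 and B2, the default logic only pairs matching networks, which is why each node ends up with just three routes.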

 

But if you set the PlumbAllCrossSubnetRoutes property to 1, then nodes will also attempt to find routes that cross subnets.  So Node 1 would end up with all of these routes: A1->A2, B1->B2, C1->C2, A1->B2, A1->C2, B1->A2, B1->C2, C1->A2, C1->B2 – nine routes instead of three.

Why is this bad?  It uses a lot more heartbeats: each pair of nodes now monitors a route for every combination of networks, so the route count grows with the square of the number of networks, multiplied across every pair of nodes – think how quickly that scales up.

Why is this good?  It can enable cluster communication to survive some truly pathological failures.  For instance, let’s say network A completely fails, and B1 fails, and C2 fails.  Your cluster is still up because you have C1->B2.  We didn’t think the heartbeat and fault-isolation complexity cost was worth it for most customers, who respond to the first failure that occurs, so we left PlumbAllCrossSubnetRoutes off by default.  I haven’t yet heard of a customer that experienced that pathological failure and decided to enable PlumbAllCrossSubnetRoutes, but we wanted to give our customers the option if they so choose.
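To see how quickly the route count grows, here is a small illustrative calculation (the function names are hypothetical; this is just counting, not cluster code):

```python
from itertools import product

def cross_subnet_routes(node1_ifaces, node2_ifaces):
    """With PlumbAllCrossSubnetRoutes=1, every interface on one node is
    paired with every interface on the other (illustrative sketch)."""
    return list(product(node1_ifaces, node2_ifaces))

node1 = ["A1", "B1", "C1"]
node2 = ["A2", "B2", "C2"]
print(len(cross_subnet_routes(node1, node2)))  # 9 routes instead of 3

def total_routes(n_nodes, k_networks, plumb_all):
    """Total monitored routes: n*(n-1)/2 node pairs, each with
    k*k routes when the property is set, k otherwise."""
    pairs = n_nodes * (n_nodes - 1) // 2
    per_pair = k_networks ** 2 if plumb_all else k_networks
    return pairs * per_pair

print(total_routes(4, 3, False))  # 18 heartbeat routes
print(total_routes(4, 3, True))   # 54 heartbeat routes
```

A 4-node cluster on three networks jumps from 18 monitored routes to 54, which illustrates the heartbeat cost the default setting avoids.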

 

Also, in a true multi-subnet cluster (where Node 1 and Node 2 have no subnets in common), the cluster service always searches for all cross-subnet routes, so PlumbAllCrossSubnetRoutes has no effect.
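Putting the two behaviors together, the decision can be sketched like this (an illustrative model only – the function and names are hypothetical, not the actual cluster service logic):

```python
def routes_for_pair(n1, n2, plumb_all):
    """Sketch of route selection between two nodes.

    n1, n2: dicts mapping network name -> interface.
    Same-subnet routes are always built; cross-subnet routes are added
    when PlumbAllCrossSubnetRoutes is set OR when the nodes share no
    subnet at all (the true multi-subnet case).
    """
    common = set(n1) & set(n2)
    same = [(n1[net], n2[net]) for net in sorted(common)]
    if plumb_all or not common:
        cross = [(n1[a], n2[b])
                 for a in sorted(n1) for b in sorted(n2) if a != b]
        return same + cross
    return same

# True multi-subnet pair: no networks in common, property left at 0,
# yet the cross-subnet route is still found.
site1 = {"A": "A1"}
site2 = {"D": "D2"}
print(routes_for_pair(site1, site2, plumb_all=False))
# [('A1', 'D2')]
```

This is why the property only matters when nodes share at least one subnet: in the no-common-subnet case, cross-subnet routes are the only routes there are, so the cluster service searches for them regardless.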

 

 

Thanks,

David Dion

Principal Development Lead

Clustering & High Availability
