Doug's Blog... (rarely) daily

Microsoft UK Consultancy Services

Why not stretch CCR nodes across 2 Data Centres..?



I have had a number of conversations with customers about the merits of achieving both data centre resilience and cost savings by stretching CCR rather than deploying both CCR and SCR - i.e. 3 for the price of 2.

Firstly, I wouldn't consider stretching CCR if you are not confident in either the latency of the underlying network or the reliability of the link. However, there are plenty of enterprise customers with multiple gigabit data centre interconnects for whom stretching CCR seems a viable alternative to deploying both CCR and SCR.

So what are the pros and cons of doing so?

| Pro | Con | Explanation |
| --- | --- | --- |
| Less expensive to deploy and manage | | Only two sets of servers and storage are required, as opposed to three |
| Fewer servers to manage | Complex to manage | While there are fewer servers to manage, in order to make the most efficient use of the data centre interconnect, and to ensure backups of the passive node, the solution must have a 'normally' active node. This is often a change to the way the solution is managed |
| | Manual configuration may be required to control message routing within a data centre | Exchange Server 2007 uses AD site-based routing. Each mailbox role server will use any Hub Transport server in the same site, regardless of data centre location |
| | Difficult to control client access within a data centre | Exchange Server 2007 provides client access (OWA, OA, etc.) based on AD site membership. CAS<->MBX MAPI traffic may cross the data centre interconnect. (Outlook connects directly to the mailbox role for most operations) |
| | Querying of Active Directory may take place across the data centre interconnect | Exchange Server 2007 makes no distinction between GCs in the same AD site. AD queries will take place across the data centre interconnect, which can lead to delays in message delivery, for example |
| | Outlook clients will experience a delay following failover (for both managed and unmanaged failover) * | In this configuration there is one Network Name resource and two IP addresses on which the Network Name is dependent. In DNS, the network name is associated with the currently online IP address. During failover, as the Network Name resource comes online, the Cluster service updates the DNS entry for the Network Name with the second IP address, which corresponds to the other subnet. The record update has to propagate throughout DNS. From Outlook's perspective, no new or reconfigured profile is needed, but Outlook does need to wait for its local DNS cache to flush before the Network Name resolves to the other IP address |
| | More stringent requirements of the network ** | CCR was designed to be deployed within a data centre. To avoid database copies becoming out of sync, and potential data loss increasing, there are more specific requirements in terms of network latency and bandwidth |
| | Less resilient | Two copies of the data with CCR->CCR, as opposed to three with CCR->CCR->SCR |
| | Not recommended by Microsoft *** | Although this is a supported solution, it is not recommended by Microsoft |

* I have made the assumption here that the solution will be deployed on Windows Server 2008 and that it is not possible to stretch a subnet between the data centres. If this is the case, the two nodes must be in different subnets. Following cluster failover, the change in IP address means that the client must wait for its DNS record to update before it can connect to the CMS. To quote TechNet (Installing Cluster Continuous Replication on Windows Server 2008):

“…the name of the CMS does not change, but the IP address assigned to the CMS changes. Clients and other servers that communicate with a CMS that has changed IP addresses will not be able to re-establish communications with the CMS until DNS has been updated with the new IP address, and any local DNS caches have been updated. To minimize the amount of time it takes to have the DNS changes known to clients and other servers, we recommend setting a DNS TTL value of five minutes for the CMS Network Name resource.”

If you are able to stretch a subnet then this disadvantage disappears.
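
To get a feel for the scale of that delay (this is my own illustrative sketch, not anything from the post or TechNet; the host name is made up and the third-party dnspython package is assumed to be installed), you can simply look at the TTL currently being served for the CMS host record - that is roughly the longest a client's cached lookup can lag behind a cross-subnet failover:

```python
# Illustrative sketch: report the TTL served for the CMS host (A) record,
# which approximates the worst-case wait before clients re-resolve the
# Network Name to its post-failover IP address.
# Assumes the third-party dnspython package; the host name is hypothetical.
import dns.resolver

CMS_NAME = "cms01.contoso.local"  # hypothetical CMS network name

answer = dns.resolver.resolve(CMS_NAME, "A")
ttl = answer.rrset.ttl
addresses = [rdata.address for rdata in answer]

print(f"{CMS_NAME} resolves to {addresses} with a TTL of {ttl} seconds")
print(f"Clients may wait up to ~{ttl} seconds (plus local cache flush) "
      "after failover before they connect to the new IP address.")
```

With the recommended five-minute TTL, that puts the likely client reconnection window at up to around five minutes, which tallies with the TechNet guidance quoted above.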

** As CCR makes use of asynchronous replication, the requirements of the network are not difficult to meet. However, by stretching CCR you need to be more confident of the reliability and performance of the underlying network.
*** See Site Resilience Configurations
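
To put some rough numbers behind the second note above (the figures here are assumptions of mine, purely for illustration), the sketch below shows why bandwidth matters when CCR is stretched: any transaction logs still queued for copying when the active node is lost are lost data.

```python
# Illustrative sketch with assumed numbers: how the copy queue grows when
# peak log generation outruns the replication throughput of the inter-DC link.
# Exchange Server 2007 transaction logs are 1 MB each; the rates are made up.
LOG_SIZE_MB = 1.0
logs_generated_per_minute = 120   # assumed peak log generation rate
link_mb_per_minute = 90           # assumed effective replication throughput

backlog_mb_per_minute = max(0.0, logs_generated_per_minute * LOG_SIZE_MB
                            - link_mb_per_minute)
print(f"Copy queue grows by ~{backlog_mb_per_minute:.0f} MB "
      f"(~{backlog_mb_per_minute / LOG_SIZE_MB:.0f} logs) per minute at peak; "
      "whatever is still queued when the active node fails is potential data loss.")
```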

And so, just to finish off this discussion, there is also the issue of failover\failback following the loss of a data centre.

It often appears that a stretched cluster makes it easier to fail over and fail back following the loss and subsequent rebuild of a primary data centre. It is important to remember that the two-node cluster is a majority node set cluster and as such uses a File Share Witness (FSW) to maintain quorum. In effect, therefore, it is a three-node cluster with two of the three nodes in the primary data centre. This means that in the event of the loss of the primary data centre there is a series of steps to follow to 'force' the passive node online when it cannot contact the FSW. (Placing the FSW in the secondary data centre in the first place is not recommended, because if you lose the link between the data centres you have lost access to the FSW, and losing the link is deemed more likely than complete data centre failure.)

Failback to the primary data centre must be very carefully managed, since for a time there may be two FSWs and two instances of the same set of databases. The likelihood is that the formerly active node will need to be rebuilt and/or the databases reseeded. This process can be more time consuming and difficult to manage than with CCR->CCR->SCR, where the failback steps are better documented and understood.
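
The quorum arithmetic behind that is worth spelling out (this is my own illustration, with the FSW in the primary data centre as recommended): the two nodes plus the witness give three votes, and the cluster stays up only while a strict majority of those votes is reachable.

```python
# Illustrative sketch of majority node set quorum for a two-node CCR cluster
# plus File Share Witness (three votes in total).
def has_quorum(votes_reachable: int, total_votes: int = 3) -> bool:
    """A strict majority of votes must be reachable for the cluster to run."""
    return votes_reachable > total_votes // 2

# Lose one node: the surviving node plus the FSW still hold 2 of 3 votes.
print(has_quorum(2))   # True

# Lose the primary data centre (active node + FSW): the secondary node holds
# only 1 of 3 votes, cannot form quorum, and must be forced online manually.
print(has_quorum(1))   # False
```

This is why the post-disaster steps involve forcing quorum on the surviving node rather than the cluster simply failing over by itself.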

So, in my opinion, you should deploy a combination of CCR\SCR where possible. However, if you are confident that you understand all the issues related to stretching CCR, and that you can manage the solution successfully, then I believe it is a viable option...

  • Good read!  One thing I would like to add, because I see this confusion a lot with my clients... it is important to understand what you are trying to accomplish by using the 2 data centres scenario – HA or DR.  They seem to get blurred very often.

    High-Availability keeps the application or process alive in the event of an “incident” (box failure for any reason).  DR is implemented when the data centre is a smoking pile of debris.  Keep that picture in your head when you’re planning.

    HA best practice says your fail-over landscape should be in the same data center (I'm talking HP, IBM, SAP, et al).  This is because your raised floor should be a highly secured area where you haven't got people roaming around sticking their hands in production cabinets and potentially "creating" an incident.  And that is what HA is for... an INCIDENT... not a disaster.

    When you span data centers, you open yourself to the world outside your raised floor with your data link of N miles.  Some schmo with a backhoe can dig up your fibre link with one shovel scoop (don't think it hasn't happened - a few years back they were digging a hole beside a rail line in Ontario and they cut through a massive piece of fibre that took them more than a week to fix).  Plus the likelihood of failure increases with every device between the 2 points.  If you span data centres, you have additional routers, telco COs, telco equipment, manholes, blah blah blah.  Needless to say, in the SAP HA infrastructure world it's not done, as far as I have seen, even between 2 Tier 4 data centers with a dark fibre ring.  I have project-managed an implementation with load-balanced hot data centres using content switches (load-balancers), but the cost is crazy expensive and the complexity huge.  Not to mention you need a deep bench to support it.

    Whether you are talking about a box failure, disk sub-system failure, or some other event in the data center... these are incidents and therefore a fail-over in the same data center is the way to go.  You should have your Exchange databases on different power, in different cabinets, and as far apart as practical so that if a sprinkler head goes off or the floor collapses (both incidents – both I have seen happen) it doesn’t take down both cabinets.

    When you talk about the loss of a data centre, you're now talking about a whole other animal.  Not only do you need the mailbox databases at your second data center, you need all the other components of the application (be it Exchange, SAP, whatever) at the 2nd data center as well.  In the case of Exchange you need a link to the Internet as well to get to the outside world for SMTP traffic.  Not only that, the guys that manually fail over the box in this case are likely underneath the smoking rubble of the first data centre, so who exactly is going to fail it over for you?  Now you're talking BC (Business Continuity planning), which is a huge undertaking.

    I am going through this Exchange planning right at this very moment in a 5500 user environment and we are having this exact discussion about the 2 data centres.  The question I ask myself is - if the goal is HA, why span 2 data centres and expose yourself to a WAN link (even though it's redundant dark fibre and we can stretch the subnet over it)?  For any of you thinking about this that can't stretch the subnet, if your DNS TTL is 5 minutes, that means your outage is 5 minutes.  I'm thinking that if you're on the same subnet you would use a VIP (Virtual IP address) and the outage would be measured in seconds.

    The thinking is that by having CCR over the 2 data centres, we would get a pseudo DR environment.  But in the event of a disaster, who is going to manually fail it over?  And if any of the components you need to run the complete Exchange environment don't exist in the 2nd data centre, you're not getting DR at any rate.  It doesn't buy you anything UNLESS you have all the components on both sides.  (I should note that we are not doing SCR, just CCR.)  Typically, the mission is that the CEO doesn't want to lose his email because somebody yanked out a piece of fibre.  The answer is CCR in the same data centre so he doesn't even see the hiccup.

    The bottom-right-hand corner is: be clear on the WHAT... what are you trying to accomplish, HA or DR?  Once you know the what, you can figure out the HOW.  I think it's clear your Enterprise Architecture strategy should say to build HA before DR, because an incident is far more likely than a disaster, and unless you have completely redundant Exchange environments and Internet access and a 5 9s WAN link with buckets of bandwidth... keep it in the same data centre.

    My 2c... 3c Canadian! :)

  • Yeah, I agree - I am working on a project at the moment where we have to provide full DR capability.  The only way we can provide it is because it is a messaging platform, and only a messaging platform, so we know all of the components that are to be deployed.  Having said that, the investment in time and energy in designing for full DR is enormous.  Every component has to be considered.  Generally, any issues we have come up against (that have not been related to the scale of the solution - several hundred thousand mailboxes) have been to do with things like management consoles and software updates - what you would consider peripheral technology, but if you can't recover all of it you don't have full DR.  That's why, like you say, if you understand cost versus risk and determine exactly what you want first, your deployment will be different to what you thought to begin with...

  • Don't forget about database corruption, virus problems, and other issues that wreck the databases.  With CCR, most of the time the replication is too fast to stop the corruption from wrecking the second node in the cluster.  That's where SCR comes in.  You have a longer lag and better control to prevent the replay of the log files when necessary.  You also get per database/SG control - fail over just a single SG instead of the entire server.

  • SCR is unlikely to be any help with a logical type of corruption, since you are unlikely to know what caused the corruption in terms of a transaction within the log stream.  The recovery mechanism would more likely be to move mailboxes to a new database, perhaps in combination with a restore (as well as fixing the root cause of the problem).  CCR does protect you from physical corruption, since if a database is damaged you can fail over to the second node (or restore if you don't want to fail the entire server over [again, once you've fixed the root cause]).  If it is a transaction log that is damaged, the replication service will not replay that log into the second database.  You wouldn't invoke SCR to recover from a database corruption, in my opinion.
