Configuration Manager in Microsoft IT

Real World Experiences from Microsoft IT

System Center 2012 Configuration Manager Hardware used for site roles in Microsoft IT


One of the most common questions I receive concerns site role configuration and the hardware specifications Microsoft IT uses for System Center 2012 Configuration Manager. The purpose of this blog post is to share our hardware specifications. At Microsoft we use virtualization for almost all site roles, with the exception of SQL Server, in our current architecture. The hardware specifications I share below will not apply to all Configuration Manager hierarchies, because business requirements, the features used, and the number of clients managed all differ.

For more details on supported configurations and recommended hardware, please refer to the product documentation:

Planning for Hardware Configurations for Configuration Manager: http://technet.microsoft.com/en-us/library/hh846235.aspx

Supported Configurations for Configuration Manager: http://technet.microsoft.com/en-us/library/gg682077.aspx

The tables below share the hardware configuration for our Configuration Manager environment.

Central Administration Site (CAS)

Our Central Administration Site (CAS) manages ~280,000 clients at Microsoft IT and has the following site roles globally:

  • 5 Primary sites
  • An Application Catalog and a Fallback Status Point role at each Primary site
  • 13 Secondary sites, each with proxy Management Point and Distribution Point roles
  • 11 Software Update Points
  • 220+ Distribution Points
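The hierarchy above can be captured in a quick sanity check. This is a minimal, illustrative Python sketch (not Microsoft IT tooling); the dictionary keys and the average are my own illustration of the figures in this post:

```python
# Illustrative sketch of the hierarchy described above (not Microsoft IT tooling).
hierarchy = {
    "clients": 280_000,          # ~280,000 clients managed by the CAS
    "primary_sites": 5,
    "secondary_sites": 13,       # each with proxy MP and DP roles
    "software_update_points": 11,
    "distribution_points": 220,  # "220+" in the post
}

# Average clients per primary site; consistent with the 40,000-75,000
# per-site range mentioned later in the post.
avg_per_primary = hierarchy["clients"] / hierarchy["primary_sites"]
print(avg_per_primary)  # 56000.0
```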

 

  • Machine Type: Physical
  • Computer Model: HP Proliant SE326M1
  • Processors: 2 x Intel(R) Xeon(R) CPU L5640 @2.26GHz with 12 cores and 24 threads (HT)
  • System Memory: 64 GB
  • Operating System: Windows Server 2008 R2 Enterprise Edition *
  • SQL Server: SQL Server 2008 R2 Datacenter Edition *

* Microsoft IT data center standard versions

CAS – Hard Disk and Array Configuration


Primary Site with Fallback Status Point and Application Catalog

At Microsoft, all Primary site servers have SQL Server on a remote site system computer. We chose a remote SQL Server for all primary sites because each site manages between 40,000 and 75,000 clients, and we want better performance with headroom for future client growth. We virtualized all Primary site roles with the configuration shown below:

  • Machine Type: Virtual
  • Processors: Intel(R) Xeon(R) CPU UE7450 @2.40GHz with 4 cores and 4 threads
  • System Memory: 12 GB
  • Operating System: Windows Server 2008 R2 Enterprise Edition *

* Microsoft IT data center standard versions

Primary Site – Hard Disk Configuration


Remote SQL Server

We have two configurations for remote SQL Server at Microsoft:

1. Remote SQL Server for more than 50,000 clients

2. Remote SQL Server for less than 50,000 clients

Currently, the remote SQL Server for more than 50,000 clients manages ~75,000 clients, and the remote SQL Server for fewer than 50,000 clients manages ~40,000 clients. The server specifications for both remote SQL Servers are below. These SQL Servers also host the WSUS database alongside the Configuration Manager database.
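The two tiers amount to a simple lookup on client count. This is an illustrative Python sketch using the figures from this post, not a sizing tool; the function name and dictionary shape are my own:

```python
# Illustrative sketch (not a sizing tool): pick the remote SQL Server hardware
# tier described in this post based on the number of managed clients.
def sql_server_tier(clients: int) -> dict:
    if clients > 50_000:
        # e.g. our site managing ~75,000 clients
        return {"model": "HP Proliant DL 580 G5", "cpus": 4, "memory_gb": 64}
    # e.g. our site managing ~40,000 clients
    return {"model": "HP Proliant SE326M1", "cpus": 2, "memory_gb": 48}

print(sql_server_tier(75_000)["memory_gb"])  # 64
print(sql_server_tier(40_000)["memory_gb"])  # 48
```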

More than 50,000 clients:

  • Machine Type: Physical
  • Computer Model: HP Proliant DL 580 G5
  • Processors: 4 x Intel(R) Xeon(R) CPU L5640 @2.26GHz with 12 cores and 24 threads
  • System Memory: 64 GB
  • Operating System: Windows Server 2008 R2 Enterprise Edition *
  • SQL Server: SQL Server 2008 R2 Datacenter Edition *

Less than 50,000 clients:

  • Machine Type: Physical
  • Computer Model: HP Proliant SE326M1
  • Processors: 2 x Intel(R) Xeon(R) CPU L5640 @2.26GHz with 12 cores and 24 threads (HT)
  • System Memory: 48 GB
  • Operating System: Windows Server 2008 R2 Enterprise Edition *
  • SQL Server: SQL Server 2008 R2 Datacenter Edition *

* Microsoft IT data center standard versions

Remote SQL Server for more than 50,000 Clients – Hard Disk Configuration


Remote SQL Server for less than 50,000 Clients – Hard Disk Configuration


Management Points

We have multiple management points at each primary site for service continuity.

  • Machine Type: Virtual
  • Processors: Intel(R) Xeon(R) CPU UE7450 @2.40GHz with 4 cores and 4 threads
  • System Memory: 6 GB
  • Operating System: Windows Server 2008 R2 Enterprise Edition *

* Microsoft IT data center standard versions

Management Point – Hard Disk Configuration


Software Update Points

We have all software update points (SUP) configured with Network Load Balancing (NLB) at each primary site for service continuity. These SUPs run on dedicated servers because we also use WSUS for driver management.

  • Machine Type: Virtual
  • Processors: Intel(R) Xeon(R) CPU UE7450 @2.40GHz with 4 cores and 4 threads
  • System Memory: 6 GB
  • Operating System: Windows Server 2008 R2 Enterprise Edition *

* Microsoft IT data center standard versions

Software Update Point – Hard Disk Configuration


Secondary Site + Distribution Points

We have 13 Secondary sites with the proxy Management Point and Distribution Point roles assigned, out of 225 Distribution Points worldwide. The hardware configuration for these combined secondary site / proxy management point / distribution point servers is the same worldwide:

  • Machine Type: Virtual
  • Processors: Intel(R) Xeon(R) CPU UE7450 @2.40GHz with 4 cores and 4 threads
  • System Memory: 4 GB
  • Operating System: Windows Server 2008 R2 Enterprise Edition *

* Microsoft IT data center standard versions

Secondary Site / Distribution Point – Hard Disk Configuration


Please share your comments on these platform standards; I would be glad to answer any queries. I want to acknowledge my team, Arun Ramakrishnan, Blair Wright, Partha Chandran, Benjamin Reynolds, and Naveen Kumar, who contributed to these platform standards.

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of any included script samples is subject to the terms specified in the Terms of Use.

Comments
  • Shitanshu: Thank you (and your teammates) for sharing this information. Can you please share additional details about why the chosen design is not 100% virtualized?

  • Shitanshu, if you're still out there, can you provide any answer to my previous question?

  • Can you tell me more about the rationale for multiple virtual disk configurations?

  • Chase, we didn't virtualize SQL because of the restrictions we had in our VM environment around the number of processors we could have on a VM. You'll notice that everything that is virtualized has a maximum of 4 processors, and 4 processors just isn't enough CPU power for the number of clients we have (based on our own "delay tolerance").

  • @Arun Ramakrishnan, I am monitoring an ongoing migration in my organization. Can you help me with your email address? I want to verify that all the specs are correct in my organization.

    Sourav

  • My email address is sourav_banerjee200685@yahoo.com

  • Hi Shitanshu and Team,

    Is there a reason why you've chosen to use secondaries instead of pull DPs? I'm sure there are some locations that could take advantage of pull DPs now that they are an option. Is that something that has been taken into consideration based on the design changes in R2?

  • If the goal is to minimize the number of secondary sites, would there be a push towards moving the sites that have secondaries towards using pull DPs? Simplifying the infrastructure should be one of the ideal approaches, and it would potentially allow for the elimination of secondaries. Has this been considered? Also, regarding the location of the DPs, has any consideration been made to centralize the location of the MPs (near the site database and the primary site servers) for communication and solid data transfer to the primary site server?
