Erick Smith, Development Manager, Fabric Controller, and Chuck (an architect on the team).
The Fabric Controller
Deploying a service manually means handling resource allocation, provisioning, upgrades, and service health yourself. The Fabric Controller takes care of all this plumbing for you.
The Fabric Controller (FC) works against a VM or a physical machine. In the VM environment, the FC uses a Control VM that complements the WS2008 hypervisor; in the physical environment, it uses a Control Agent.
FC maps declarative service specifications to available resources. FC is model-driven: the service model describes the roles that make up the service and the constraints on how they are deployed.
The fault/update domains allow you to specify what portion of your service can be offline at a time. Fault domains are based on the topology of the data center (e.g., a switch failure). Update domains are determined by what percentage of your service you will take out at a time for an upgrade. Don't put all roles on the same stack!
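To make the percentage rule concrete, here is a small sketch (a hypothetical helper, not an Azure API) of how an update-domain count follows from the fraction of the service you are willing to take offline at once:

```python
import math

def update_domain_count(offline_fraction: float) -> int:
    """Number of update domains needed so that at most
    `offline_fraction` of the instances are down at any one time."""
    return math.ceil(1 / offline_fraction)

# Take out at most 20% of the service at a time => 5 update domains.
assert update_domain_count(0.20) == 5
# At most a third at a time => 4 update domains (ceil of ~3.03).
assert update_domain_count(0.33) == 4
```

The fewer instances you can afford to lose at once, the more update domains you need, and the longer a rolling upgrade takes.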
You can also add dynamic configuration settings to pass values to the service roles (the equivalent of the registry).
In the CTP, the service model is not exposed directly. It is generated from the Web and Worker roles you create.
Nodes are chosen based on the constraints encoded in the service model. We find a home for all role instances. The instances are allocated across “fault domains”.
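A minimal sketch of spreading instances across fault domains (illustrative only; the real FC solves a much richer constraint problem with many more dimensions):

```python
from itertools import cycle

def allocate(instances: list[str], fault_domains: list[str]) -> dict[str, str]:
    """Round-robin role instances across fault domains so that a single
    switch or rack failure takes out as few instances as possible."""
    assignment = {}
    for inst, fd in zip(instances, cycle(fault_domains)):
        assignment[inst] = fd
    return assignment

web = [f"WebRole_{i}" for i in range(4)]
placement = allocate(web, ["FD0", "FD1"])
# WebRole_0 and WebRole_2 land in FD0; WebRole_1 and WebRole_3 in FD1,
# so losing either fault domain leaves half the instances running.
```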
FC maintains a state machine for each node, along with a cache of the state it believes each node to be in. FC receives events from the nodes and advances the state machines accordingly.
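A sketch of this per-node tracking, with hypothetical state and event names (the real FC's states are not documented in the session):

```python
class NodeStateMachine:
    """FC caches the state it believes a node is in and advances it
    on incoming events. States/events here are illustrative only."""
    TRANSITIONS = {
        ("Provisioning", "boot_ok"): "Ready",
        ("Ready", "heartbeat_lost"): "Suspect",
        ("Suspect", "heartbeat"): "Ready",
        ("Suspect", "timeout"): "Dead",
    }

    def __init__(self):
        self.believed_state = "Provisioning"  # FC's cached belief, not ground truth

    def on_event(self, event: str) -> str:
        # Unknown (state, event) pairs leave the believed state unchanged.
        self.believed_state = self.TRANSITIONS.get(
            (self.believed_state, event), self.believed_state)
        return self.believed_state

node = NodeStateMachine()
node.on_event("boot_ok")         # node comes up: believed state "Ready"
node.on_event("heartbeat_lost")  # missed heartbeat: believed state "Suspect"
```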
Virtual IPs (VIPs) are allocated from a pool, then the Load Balancer (LB) is set up. LB probing is set up to communicate with the agent on each node, which has real-time information on role health: traffic is only routed to roles ready to accept it. Redundant network gear is in place for high availability.
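The routing decision reduces to filtering on the agents' health reports. A sketch, assuming a hypothetical health map (instance name → ready flag):

```python
def routable_instances(agent_health: dict[str, bool]) -> list[str]:
    """LB probe sketch: traffic is routed only to instances whose
    node agent currently reports them ready to accept traffic."""
    return sorted(inst for inst, ready in agent_health.items() if ready)

health = {"Web_0": True, "Web_1": False, "Web_2": True}
routable_instances(health)  # only Web_0 and Web_2 receive traffic
```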
Then the FC keeps your service running, looking for role and node defects.
FC can upgrade a running service: it updates one update domain at a time. Update domains are logical and don't need to be tied to a fault domain. There are two modes of operation: manual or automatic. Rollbacks are achieved the same way.
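The rolling walk over update domains can be sketched as follows (hypothetical API; `upgrade_one` and `confirm` stand in for the real FC's internal steps):

```python
def rolling_upgrade(update_domains, upgrade_one, automatic=True,
                    confirm=lambda ud: True):
    """Walk the update domains one at a time. In manual mode, wait for
    operator confirmation before moving to the next domain. A rollback
    is the same walk, deploying the previous version instead."""
    for ud in update_domains:
        upgrade_one(ud)  # take this domain out, upgrade it, bring it back
        if not automatic and not confirm(ud):
            return False  # operator stopped the rollout
    return True

done = []
rolling_upgrade(["UD0", "UD1", "UD2"], done.append)
# automatic mode walks all three domains in order
```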
Windows Azure monitors compute nodes, TOR/L2 switches, LBs, access routers, …
Isolation and security mechanisms: IP filtering, virtual machines, firewall, restriction of privileges, managed code.
FC is highly available: it is a cluster of 5-7 replicas with replicated state and automatic failover.
In a disaster scenario, we have checkpoints of the FC state and can revert to a previous state.
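The checkpoint/revert idea in miniature (an illustration only, not the real FC's persistence mechanism):

```python
import copy

class CheckpointedState:
    """Periodically snapshot a state object so it can be reverted
    to an earlier, known-good checkpoint after a disaster."""
    def __init__(self, state):
        self.state = state
        self.checkpoints = []

    def checkpoint(self):
        self.checkpoints.append(copy.deepcopy(self.state))

    def revert(self, index=-1):
        # Default: roll back to the most recent checkpoint.
        self.state = copy.deepcopy(self.checkpoints[index])

fc = CheckpointedState({"nodes": 10})
fc.checkpoint()
fc.state["nodes"] = 0  # disaster corrupts the live state
fc.revert()            # back to the last good checkpoint: {"nodes": 10}
```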
Virtualization and deployment
Process: we send a "Windows Server 2008 Core" VHD to a machine and have it boot from this VHD (native VHD boot, a feature added to Windows 7) to make it the host. Then we send VHD images for the guests. We use cumulative images for applications: apps can share the same VHD to minimize disk image size.
For the Tech Preview, the VM is 64-bit WS2008: a 1.5 to 1.7 GHz x64-equivalent CPU, 1.7 GB of memory, a 100 Mbps network, and 250 GB of transient local storage; for durable storage, use Windows Azure storage (50 GB).
The hypervisor exploits the latest processor virtualization features (SLAT, large pages, …); it is small (uses few resources) and scalable (NUMA-aware). SLAT stands for Second Level Address Translation.
The Guest OS is WS2008 Enterprise, the Host OS is WS2008 Core. NLB is hardware based.
Q: Does the Service Model conform to the Oslo vision? A: No, but we are starting discussions with the Oslo teams.
[© Nacsa Sándor, November 10, 2008.] The most fundamental part of the entire Azure services platform is Windows