Architecture + Strategy

Musings from David Chou - Architect, Microsoft

March, 2011

  • Architecture + Strategy

    Internet Service Bus and Windows Azure AppFabric


    Microsoft’s AppFabric, part of a set of “application infrastructure” (or middleware) technologies, is (IMO) one of the most interesting areas on the Microsoft platform today, and one where a lot of innovation is occurring. A few of its technologies don’t really have equivalents elsewhere at the moment, such as the Windows Azure AppFabric Service Bus. However, that uniqueness also often makes it confusing for people to understand what it is and how it can be used.

    Internet Service Bus

    Oh no, yet another piece of computing jargon! And just what do we mean by “Internet Service Bus” (ISB)? Is it a hosted Enterprise Service Bus (ESB) in the cloud, B2B SaaS, or Integration-as-a-Service? And haven’t we seen aspects of this before in the forms of integration service providers, EDI value-added networks (VANs), and even electronic bulletin board systems (BBSs)?

    At first look, the term “Internet Service Bus” almost immediately draws an equal sign to Enterprise Service Bus, and to aspects of other current solutions, such as the ones mentioned above, targeted at addressing issues in systems and application integration for enterprises. Indeed, Internet Service Bus does draw on relevant patterns and models applied in distributed communication, service-oriented integration, traditional systems integration, and so on, in its approaches to solving these issues on the Internet. But that is also where the similarities end. We think Internet Service Bus is something technically different, because the Internet presents a unique problem domain, with distinctive issues and concerns that most internally focused enterprise SOA solutions do not face. An ISB architecture should account for some very key tenets, such as:

    • Heterogeneity – This refers to the set of technologies implemented at all levels: networking, infrastructure, security, application, data, and functional and logical context. The Internet is infinitely more diverse than any one organization’s own architecture. Beyond current technologies based on different platforms, there is also the factor of versioning, as varying implementations can remain in the collective environment far longer than typical lifecycles within one organization.
    • Ambiguity – The loosely coupled, cross-organizational, geo-political, and unpredictable nature of the Internet means we have very little control over things beyond our own organization’s boundaries. This is in stark contrast to enterprise environments, which provide higher levels of control.
    • Scale – The Internet’s scale is unmatched, and this doesn’t just mean data size and processing capacity. Scale is also a factor in reachability, context, semantics, roles, and models.
    • Diverse usage models – The Internet is used by everyone: most fundamentally by consumers and businesses, by humans and machines, and by opportunistic and systematic developments. These usage models have enormously different requirements, both functional and non-functional, and they influence how Internet technologies are developed.

    For example, it is easy to think that integrating systems and services on the Internet is a simple matter of applying service-oriented design principles and connecting Web services consumers and providers. While that may be the case for most publicly accessible services, things get complicated very quickly once we start layering on different network topologies (such as through firewalls, NAT, DHCP and so on), security and access control (such as across separate identity domains), and service messaging and interaction patterns (such as multi-cast, peer-to-peer, RPC, tunneling). These are issues that are much more apparent on the Internet than within enterprise and internal SOA environments.
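
    To make the NAT/firewall point concrete: the usual remedy is a relayed rendezvous, where both parties establish outbound connections to a publicly addressable relay that forwards messages between them (the core trick behind the Service Bus relay approach). Below is a toy, in-process sketch of the idea; the `Relay` class and the addresses are illustrative stand-ins, not any real SDK API:

```python
import queue
from collections import defaultdict

class Relay:
    """Toy rendezvous point. Both sender and receiver connect *outbound*
    to the relay, so neither party needs an inbound firewall opening."""

    def __init__(self):
        self._mailboxes = defaultdict(queue.Queue)

    def listen(self, address):
        # A receiver behind NAT "projects" an endpoint onto the relay
        # and gets back a mailbox it can poll over its outbound link.
        return self._mailboxes[address]

    def send(self, address, message):
        # Senders address the relay endpoint, never the receiver's host.
        self._mailboxes[address].put(message)

relay = Relay()
inbox = relay.listen("sb://contoso/orders")   # receiver registers first
relay.send("sb://contoso/orders", {"order": 42})
print(inbox.get_nowait())  # → {'order': 42}
```

    In a real relay, of course, the rendezvous point is a highly available cloud service and the mailbox polling is replaced by long-lived bidirectional connections; the sketch only shows why neither endpoint needs to accept inbound traffic.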

    On the other hand, enterprise and internal SOA environments don’t need to be as concerned with these issues because they benefit from a more tightly controlled and managed infrastructure environment. Plus, integration in enterprise and internal SOA environments tends to occur at a higher level, dealing more with semantics, contexts, and logical representations of information and services, organized around business entities and processes. This doesn’t mean that these higher-level issues don’t exist on the Internet; they’re just comparatively more “vertical” in nature.

    In addition, there’s the factor of scale, in terms of the increasing adoption of service compositions that cross on-premises and public cloud boundaries (such as when participating in a cloud ecosystem). Indeed, today we can already facilitate external communication to and from our enterprise and internal SOA environments, but doing so requires configuring static openings on external firewalls, deploying applications and data appropriate for the perimeter, applying proper change management processes, delegating to middleware solutions such as B2B gateways, etc. As we move towards a more inter-connected model when extending an internal SOA beyond enterprise boundaries, these changes will become progressively more difficult to manage collectively.

    An analogy can be drawn from our mobile phones on cellular networks, and how their proliferation has changed the way we communicate with each other today. Most of us take for granted that a myriad of complex technologies (e.g., cell sites, switches, networks, radio frequencies and channels, movement and handover) are used to facilitate our voice conversations, SMS, MMS, packet-switched Internet access, and so on. We can effortlessly connect to any person, regardless of that person’s location, type of phone, or cellular service and network. The problem domain, considerations, and solution approaches for cellular services are very different from even the current unified communications solutions for enterprises. The point is, as we move forward with cloud computing, organizations will inevitably need to integrate assets and services deployed in multiple locations (on-premises, multiple public clouds, hybrid clouds, etc.). To do so, it will be much more effective to leverage SOA techniques at a higher level and to build on a seamless communication/connectivity “fabric”, than to rely on the current class of transport-based (HTTPS) or network-based (VPN) integration solutions.

    Thus, the problem domain for Internet Service Bus is more “horizontal” in nature, as the need is vastly broader in scope than current enterprise architecture solution approaches. And from this perspective Internet Service Bus fundamentally represents a cloud fabric that facilitates communication between software components using the Internet, and provides an abstraction from complexities in networking, platform, implementation, and security.

    Opportunistic and Systematic Development

    It is also worthwhile to discuss opportunistic and systematic development (as examined in The Internet Service Bus by Don Ferguson, Dennis Pilarinos, and John Shewchuk in the October 2007 edition of The Architecture Journal), and how they influence technology directions and Internet Service Bus.

    Systematic development, in a nutshell, is the world we work in as professional developers and architects. The focus and efforts are centered on structured and methodical development processes, to build requirements-driven, well-designed, and high-quality systems. Opportunistic development, on the other hand, represents casual or ad-hoc projects, and end-user programming, etc.

    This is interesting because the majority of development efforts in our work environments, such as in enterprises and internal SOA environments, are aligned towards systematic development. But the Internet supports both systematic and opportunistic development, and increasingly so under the influence of Web 2.0 trends. As “The Internet Service Bus” article suggests, a lot of what we do manually today across multiple sites and services could be considered a form of opportunistic application, if we were to implement it as a workflow or cloud-based service.

    And that is the basis for service compositions. But to enable that type of opportunistic services composition (which can eventually mean compositing layers of compositions), technologies and tools have to evolve into a considerably simpler and more abstracted form, such as model-driven programming. Most importantly, composite application development should not have to deal with complexities in connectivity and security.

    This is one of the reasons why Internet Service Bus is targeted at a more horizontal, infrastructure-level set of concerns. It is a necessary step in building towards a true service-oriented environment, and in cultivating an ecosystem of composite services and applications that can simplify opportunistic development efforts.

    ESB and ISB

    So far we have discussed the fundamental difference between Internet Service Bus (ISB) and current enterprise integration technologies. But perhaps it’s worthwhile to discuss in more detail how exactly it differs from Enterprise Service Bus (ESB). Typically, an ESB should provide these functions (core plus extended/supplemental):

    • Dynamic routing
    • Dynamic transformation
    • Message validation
    • Message-oriented middleware
    • Protocol and security mediation
    • Service orchestration
    • Rules engine
    • Service level agreement (SLA) support
    • Lifecycle management
    • Policy-driven security
    • Registry
    • Repository
    • Application adapters
    • Fault management
    • Monitoring

    In other words, an ESB helps bridge differences in syntactic and contextual semantics, technical implementations, and platforms, and provides many centralized management capabilities for enterprise SOA environments. However, as we mentioned earlier, an ISB targets concerns at a lower, communications-infrastructure level. Consequently, it should provide a different set of capabilities:

    • Connectivity fabric – helps set up raw links across boundaries and network topologies, such as NAT and firewall traversal, mobile and intermittently connected receivers, etc.
    • Messaging infrastructure – provides comprehensive support for application messaging patterns across connections, such as bi-directional/peer-to-peer communication, message buffers, etc.
    • Naming and discovery – a service registry that provides stable URIs with a structured naming system, and supports publishing and discovering service endpoint references
    • Security and access control – provides a centralized management facility for claims-based access control and identity federation, and mapping to fine-grained permissions system for services
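
    The last bullet describes the claims-based model: the service never authenticates callers directly; it trusts a token of claims issued by a federated identity authority and maps those claims to fine-grained permissions. A minimal sketch of that mapping step (the claim names and the permission table are invented for illustration; a real system would validate a signed token first):

```python
# Illustrative claims-to-permissions table. In a real ISB the token is
# issued and signed by a federated identity provider, not assumed valid.
PERMISSIONS = {
    "action:Send": {"send"},
    "action:Listen": {"listen"},
    "action:Manage": {"send", "listen", "manage"},
}

def authorize(token_claims, required_permission):
    """Union the permissions granted by each claim in the (already
    validated) token, then check the requested action against them."""
    granted = set()
    for claim in token_claims:
        granted |= PERMISSIONS.get(claim, set())
    return required_permission in granted

caller_claims = ["action:Send"]           # what the issuer asserted
print(authorize(caller_claims, "send"))    # True
print(authorize(caller_claims, "listen"))  # False
```

    The key property is that the service endpoint only ever reasons about claims and permissions; identity federation (which directory the caller came from, how they signed in) stays entirely on the issuer’s side.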

    A figure of an Internet Service Bus architecture is shown below.


    And this is not just simply a subset of ESB capabilities; ISB has a couple of fundamental differences:

    • It works through any network topology, whereas ESB communications require, and work on top of, well-defined network topologies
    • It provides contextual transparency, whereas ESB has more to do with enforcing context
    • It uses a services federation model (community-centric), whereas ESB aligns more towards a services centralization model (hub-centric)

    So what about some of the missing capabilities that are part of an ESB, such as transformation, message validation, protocol mediation, complex orchestration, and rules engines? For ISB, these capabilities should not be built into the ISB itself. Rather, leverage the seamless connectivity fabric to add your own implementation, or use one that is already published (either as cloud-based services deployed in the Windows Azure platform, or on-premises from your own internal SOA environment). The point is, in the ISB’s services federation model, application-level capabilities can simply be additional services projected/published onto the ISB, then leveraged via service-oriented compositions. Thus the ISB provides just the infrastructure layer; the application-level capabilities are part of the services ecosystem driven by the collective community.
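
    To illustrate that federation model: transformation (or any other application-level capability) is just another endpoint published onto the bus, and a composition wires endpoints together. A deliberately tiny sketch; the `ServiceBus` class and URIs are hypothetical stand-ins, not the actual AppFabric API:

```python
class ServiceBus:
    """Minimal federation model: any participant can project a service
    onto the bus under a URI, and compositions chain the published
    endpoints. The bus itself stays a dumb connectivity layer."""

    def __init__(self):
        self._services = {}

    def publish(self, uri, handler):
        self._services[uri] = handler

    def call(self, uri, payload):
        return self._services[uri](payload)

bus = ServiceBus()
# A community-published transformation capability (not an ISB built-in).
bus.publish("sb://partner/to-upper", lambda text: text.upper())
# An on-premises service projected onto the same bus.
bus.publish("sb://contoso/wrap", lambda text: "<msg>" + text + "</msg>")

# Composition: chain the transformer into the target service.
result = bus.call("sb://contoso/wrap",
                  bus.call("sb://partner/to-upper", "hello"))
print(result)  # <msg>HELLO</msg>
```

    Notice the bus neither understands nor validates the payload; transformation lives in a partner-published service, which is exactly the division of labor argued for above.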

    On the other hand, for true ESB-level capabilities, ESB SaaS or Integration-as-a-Service providers may be more viable options, while ISB may still be used for the underlying connectivity layer for seamless communication over the Internet. Thus ISB and ESB are actually pretty complementary technologies. ISB provides the seamless Internet-scoped communication foundation that supports cross-organizational and federated cloud (private, public, community, federated ESB, etc.) models, while ESB solutions in various forms provide the higher-level information and process management capabilities.

    Windows Azure AppFabric Service Bus

    The Service Bus provides secure messaging and connectivity capabilities that enable building distributed and disconnected applications in the cloud, as well as hybrid applications spanning both on-premises environments and the cloud. It enables the use of various communication and messaging protocols and patterns, and saves developers from having to worry about delivery assurance, reliable messaging, and scale.

    AppFabric Service Bus

    In a nutshell, the Windows Azure AppFabric Service Bus is intended as an Internet Service Bus (ISB) solution, while BizTalk continues to serve as the ESB solution, though the cloud-based version of BizTalk may be implemented in a different form. And Windows Azure AppFabric Service Bus will play an important role in enabling application-level (as opposed to network-level) integration scenarios, to support building hybrid cloud implementations and federated applications participating in a cloud ecosystem.

  • Architecture + Strategy

    Rise of the Cloud Ecosystems


    I had the opportunity to participate at this year’s CloudConnect conference, with my session on Building Highly Scalable Applications on Windows Azure, which is mostly an update of my earlier presentations at JavaOne and Cloud Computing Expo. I was pleased to learn that the cloud-optimized design approach, leveraging distributed computing best practices, aligned well with similar talks by well-known cloud experts from Amazon, Google, etc. A more detailed discussion of this topic can be found in my earlier post - Designing for Cloud-Optimized Architecture.

    One of the major takeaways I had from the conference was the focus on “cloud platform” (or platform-as-a-service, generally) messaging, further reinforcing the platform view of cloud computing, as opposed to the infrastructure-level perspectives, or mixed views around the popular SaaS, PaaS, and IaaS service delivery models.

    It started with Werner Vogels, Vice President and CTO of Amazon.com, in his keynote presentation. Werner said, “it is all about the cloud ecosystem”, that “everything are cloud services; everything as a cloud service”, and “not constrained by any model”. And “let a thousand platforms bloom”, where “ecosystems grow as big as possible”. This implies that the popular models (e.g., SaaS, PaaS, IaaS) are irrelevant, because everything is simply a service that we can consume, and these services can span the entire spectrum of IT capabilities as we know it, and potentially more.

    It is interesting to see how the platform messaging has evolved over the past few years. For instance, Werner Vogels used to refer to Amazon Web Services as “Infrastructure as a Service”. However, I think he has always advocated the platform view, similar to Gartner’s notion of “application infrastructure as a service”, instead of the overused IaaS (infrastructure-as-a-service) service delivery model (as popularized by NIST’s definition of cloud computing). Perhaps it was also because many people started incorrectly referring to Amazon Web Services as IaaS, without seeing the platform view, that Werner chose to clarify that models are irrelevant and “it is all about the cloud ecosystem”.

    I also belong to the camp that advocates the platform view, and the further ecosystem view of cloud computing. The platform and ecosystem views of cloud computing represent a new paradigm, and promote a new way of computing. Though I think the SaaS, PaaS, and IaaS classifications (or service delivery models) still have some uses too. They are particularly relevant when trying to understand the general differences and trade-offs between the service delivery models (as defined by NIST), from a layers and levels of abstractions perspective.


    Perhaps what we shouldn’t do is try to fit cloud service providers into these categories/models. Often a particular service provider may have offerings in multiple models, or offerings that don’t fit well in these models, and it would be over-simplifying to refer to cloud platforms, like Amazon Web Services, Google App Engine, and the Windows Azure platform, strictly in the platform-as-a-service definition. These platforms have a lot more to offer than simply a higher-level abstraction compute service.

    Cloud Platforms

    As Werner Vogels said, “cloud is actually a very large collection of services”; cloud platforms aren’t just a place to deploy and execute workloads. Cloud platforms provide the necessary capabilities, delivered as distinct services, that applications can leverage to accomplish specific tasks.

    Amazon Web Services has always been a cloud platform; today it is a collection of services that provide capabilities for compute (EC2, EMR), content delivery (CloudFront), database (SimpleDB, RDS), deployment and management (Elastic Beanstalk, CloudFormation), e-commerce (FWS), messaging (SQS, SNS, SES), monitoring (CloudWatch), networking (VPC, ELB), payments and billing (FPS, DevPay), storage (S3, EBS), support, etc. It is not just a hosting environment for virtual machines (which the popular IaaS model is more aligned with). In fact Amazon Web Services released S3 into production (March 2006) before EC2 (limited public beta in August 2006, removed beta label in October 2008).

    Similarly, Microsoft has been using the platform-as-a-service messaging when describing Windows Azure platform, but it is also about the collection of various capabilities (delivered as services). For example, below is a visual representation of the application platform capabilities in Windows Azure platform that I have been using since 2009 (though the list of capabilities grew over that period):


    And below shows how those capabilities are delivered as services and organized in Windows Azure platform.


    This is important because the platform view is one of the major differentiators of cloud platforms when compared to the more conventional outsourced hosting providers. When building applications using hosting providers (or strictly infrastructure-as-a-service offerings), we have to incur the engineering effort to design, implement, and maintain our own solutions for storage, data management, security, caching, etc. In cloud platforms such as Amazon Web Services, Google App Engine, and the Windows Azure platform, these capabilities are baked into the platform and available as readily accessible services. Applications just need to use them, without having to be concerned with any of their underpinnings and operations.

    An analogy can be drawn from the high-level differences between getting food products from Costco to put together a semi-homemade meal, versus getting raw ingredients from a supermarket and preparing, cooking, and finishing a fully homemade meal. :) The semi-homemade model offers higher agility (less time and effort required to put together a meal and to scale it for bigger parties) and economy of scale in Costco’s case, while the fully homemade model offers more control.

    Another distinction between cloud platforms and typical IaaS offerings is that cloud platforms represent more of a way of computing, a new and different paradigm, whereas IaaS offerings are better aligned towards hosting scenarios. This doesn’t mean that there are no overlaps between the two models, or that one is necessarily better than the other. They are just different models, with some overlaps, but ideally suited for different use cases. For cloud platforms, the ideal use cases are net-new, or greenfield, development projects that are cloud-optimized. Again, hosting scenarios also work on cloud platforms, but cloud-optimized applications stand to gain more benefits from cloud platforms.

    Cloud Ecosystems

    The cloud ecosystem view takes the cloud platform view one step further, and includes partners and third parties that enable their services to participate in an ecosystem. The collective set of capabilities from multiple organizations, with services potentially spanning multiple platforms and cloud environments, forms an ecosystem that feeds and builds upon itself (in composite, federated application models), generating best practices, reusable processes, communities, etc. This can also be viewed as a natural evolution of platform paradigms, drawing inference from other models where the iterations typically evolved from technology maturity, to critical mass in adoption, to building ecosystems. The platform with the largest and most diverse ecosystem gets to ride the paradigm shift and enjoy a dominant position for that particular generation.

    The Web platform stack model I discussed back in 2007 is one way of looking at the ecosystem model (apologies for the rich color scheme; I was going through a coloring phase at the time).

    In essence, a cloud ecosystem itself will likely have many layers of abstractions as well; one building on top of another. One future trend in cloud computing may very well be the continued climb into higher levels of abstraction, as differences and complexities at one level often represent development opportunities (e.g., for specializations, consolidations/aggregations, derivations, etc.) at a higher level.

    Ultimately, cloud platforms enable the dynamic environments that support the construction of ecosystems. This is one aspect inherent in cloud platforms, but not as much for lower-level IaaS environments. And as the ecosystems grow in size and diversity, the network effect (as discussed briefly back in 2008) will contribute to increasingly intelligent and interactive environments, and generate, collectively, tremendous value.

  • Architecture + Strategy

    Cloud-optimized architecture and Advanced Telemetry


    One of the projects I had the privilege of working on this past year is the Windows Azure platform implementation at Advanced Telemetry. Advanced Telemetry offers an extensible, remote, energy-monitoring-and-control software framework suitable for a number of use case scenarios. One of their current product offerings is EcoView™, a smart energy and resource management system for both residential and small commercial applications. Cloud-based and entirely Web accessible, EcoView enables customers to view, manage, and reduce their resource consumption (and thus utility bills and carbon footprint), all in real-time via the intelligent on-site control panel and remotely via the Internet.


    Much more than Internet-enabled thermostats and device endpoints, “a tremendous amount of work has gone into the core platform, internally known as the TAF (Telemetry Application Framework) over the past 7 years” (as Tom Naylor, CEO/CTO of Advanced Telemetry, wrote on his blog). The TAF makes up the server-side middleware system implementation, provides the intelligence for the network of control panels (with EcoView being one of the applications), and enables an interesting potential third-party application model.

    The focus of the Windows Azure platform implementation was moving the previously hosted server-based architecture into the cloud. Advanced Telemetry completed the migration in 2010, and the Telemetry Application Framework is now running on the Windows Azure platform. Tom shared some insight from the experience in his blog post “Launching Into the Cloud”. And of course, this effort was also highlighted as a Microsoft case study on multiple occasions:


    The Move to the Cloud

    As pointed out by the first case study, the initial motivation to adopt cloud computing was driven by the need to reduce operational costs of maintaining an IT infrastructure, while being able to scale the business forward.

    “We see the Windows Azure platform as an alternative to both managing and supporting collocated servers and having support personnel on our side dedicated to making sure the system is always up and the application is always running,” says Tom Naylor. “Windows Azure solves all those things for us effectively with the redundancy and fault tolerance we need. Because cost is based on usage, we’ll also be able to much more accurately assess our service fees. For the first time, we’ll be able to tell exactly how much it costs to service a particular site.”

    For instance, in the Channel 9 video, Tom mentioned that replicating the co-located architecture from Rackspace to the Windows Azure platform resulted in approximately 75% cost reduction on a monthly basis, in addition to other benefits. One of the major ‘other’ benefits is agility, which arguably is much more valuable than the cost reduction normally associated with cloud computing. In fact, as the second case study pointed out, in addition to breaking ties to an IT infrastructure, the Windows Azure platform became a change enabler that supported the shift to a completely different business model for Advanced Telemetry (from a direct market approach to that of an original equipment manufacturer (OEM) model). The move to the Windows Azure platform provided the much-needed scalability (of the technical infrastructure), flexibility (to adapt to additional vertical market scenarios), and manageability (maintaining the level of administrative effort while growing the business operations). The general benefits cited in the case study were:

    • Opens New Markets with OEM Business Model
    • Reduces Operational Costs
    • Gains New Revenue Stream
    • Improves Customer Service

    Cloud-Optimized Architecture

    However, this is not just another simple story of migrating software from one data center to another data center. Tom Naylor understood well the principles of cloud computing, and saw the value in optimizing the implementation for the cloud platform instead of just using it as a hosting environment for the same thing from somewhere else. I discussed this in more detail in a previous post Designing for Cloud-Optimized Architecture. Basically, it is about leveraging cloud computing as a way of computing and as a new development paradigm. Sure, conventional hosting scenarios do work in cloud computing, but there is more value and benefits to gain if an application is designed and optimized specifically to operate in the cloud, and built using unique features from the underlying cloud platform.

    In addition to the design principles around the “small pieces, loosely coupled” fundamental concept I discussed previously, another aspect of the cloud-optimized approach is to think about storage first, as opposed to thinking about compute. This is because, in cloud platforms like the Windows Azure platform, we can build applications using cloud-based storage services such as Windows Azure Blob Storage and Windows Azure Table Storage, which are horizontally scalable distributed storage systems that can store petabytes of data and content without requiring us to implement and manage the infrastructure. This is, in fact, one of the significant differences between cloud platforms and traditional outsourced hosting providers.

    In the Channel 9 video interview, Tom Naylor said “what really drove us to it, honestly, was storage”. He mentioned that the Telemetry Application Platform currently handles about 200,000 messages per hour, each containing up to 10 individual point updates (which roughly equates to 500 updates per second). While this level of traffic volume isn’t comparable to the top websites in the world, it still poses significant issues for a startup company to store and access the data effectively. In fact, the data volume required the Advanced Telemetry team to cull the data periodically in order to maintain a relatively workable size for the operational data.

    “We simply broke down the functional components, interfaces and services and began replicating them while taking full advantage of the new technologies available in Azure such as table storage, BLOB storage, queues, service bus and worker roles. This turned out to be a very liberating experience and although we had already identified the basic design and architecture as part of the previous migration plan, we ended up making some key changes once unencumbered from the constraints inherent in the transitional strategy. The net result is that in approximately 6 weeks, with only 2 team members dedicated to it (yours truly included), we ended up fully replicating our existing system as a 100% Azure application. We were still able to reuse a large percentage of our existing code base and ended up keeping many of the database-driven functions encapsulated in stored procedures and triggers by leveraging SQL Azure.” Tom Naylor described the approach on his blog.

    The application architecture employed many cloud-optimized designs, such as:

    • Hybrid relational and NoSQL data storage – SQL Azure for data that is inherently relational, and Windows Azure Table Storage for historical data, events, etc.
    • Event-driven design – Web roles receiving messages act as the event capture layer, but asynchronously off-load processing to Worker roles
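
    The event-driven design above boils down to putting a queue between a fast-returning capture tier and an asynchronous processing tier, so each tier can be scaled independently. A stripped-down, single-process sketch, using Python’s standard-library queue as a stand-in for Windows Azure Queue and a plain list as a stand-in for a Table Storage insert:

```python
import queue

events = queue.Queue()  # stands in for a Windows Azure Queue

def web_role_receive(message):
    """Event capture layer: accept the telemetry message, enqueue it,
    and return immediately instead of processing inline."""
    events.put(message)
    return "accepted"

def worker_role_drain(store):
    """Processing layer: drain the queue asynchronously and persist
    results (`store` stands in for a Table Storage insert)."""
    processed = 0
    while not events.empty():
        store.append(events.get())
        processed += 1
    return processed

web_role_receive({"point": 1, "value": 71.5})
web_role_receive({"point": 2, "value": 68.9})
history = []
print(worker_role_drain(history))  # 2
```

    Because the capture function does nothing but enqueue, a burst of incoming telemetry never blocks the front end; the backlog simply grows in the queue until more worker instances drain it.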

    Lessons Learned

    In the real world, things rarely go completely as anticipated/planned. And it was the case for this real-world implementation as well. :) Tom Naylor was very candid about some of the challenges he encountered:

    • Early adopter challenges and learning new technologies – Windows Azure Table and Blob Storage, and Windows Azure AppFabric Service Bus are new technologies and have very different constructs and interaction methods
    • “The way you insert and access the data is fairly unique compared to traditional relational data access”, said Tom, such as the use of “row keys, combined row keys in table storage and using those in queries”
    • Transactions – the initial design was very asynchronous: store in Windows Azure Blob storage and put a message in Windows Azure Queue, but that resulted in a lot of transactions and significant costs under the per-transaction charge model for Windows Azure Queue. They had to leverage Windows Azure AppFabric Service Bus to reduce that impact
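
    The combined-row-key point deserves a concrete illustration: because table storage sorts entities lexicographically by key, encoding several attributes into one key turns common queries into cheap range scans. A sketch of the pattern (the key layout is invented for illustration, not Advanced Telemetry’s actual schema):

```python
def make_row_key(device_id, timestamp_ticks):
    """Combine a device id and a zero-padded, inverted timestamp into a
    single row key. Zero-padding makes lexicographic order match numeric
    order, and inversion makes the newest entries sort first."""
    MAX_TICKS = 10**19
    return f"{device_id}_{MAX_TICKS - timestamp_ticks:020d}"

keys = sorted(make_row_key("panel42", t) for t in (100, 300, 200))
# After sorting, the first key belongs to the newest reading (t=300),
# so "latest N readings for a device" becomes a prefix range scan.
print(keys[0] == make_row_key("panel42", 300))  # True
```

    This is what makes the access pattern “fairly unique compared to traditional relational data access”: instead of secondary indexes and ORDER BY clauses, the query shape has to be designed into the key itself, up front.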

    The end result is an application that is horizontally scalable, allowing Advanced Telemetry to elastically scale the deployments of individual layers up or down according to capacity needs, as the different application layers are nicely decoupled from each other, and the application is decoupled from horizontally scalable storage. Moreover, the cloud-optimized architecture supports both multi-tenant and single-tenant deployment models, enabling Advanced Telemetry to support customers who have higher data-isolation requirements.

  • Architecture + Strategy

    Unplug from your day job. Be inspired at MIX



    One of my favorite annual conferences is back again this year in Las Vegas. Yes, despite the fabulous location and the plethora of distractions it provides. ;)

    Join the conversation at MIX – see the latest tools and technologies and draw inspiration from a professional community of your peers and experts. More About MIX.

    MIX is where you’ll learn about the future of the web, from the diversity of devices and interaction models on the front end, to the tools and technologies that power the user experience, to the services that make it all possible. There’s no better place to hear about the future of Silverlight, Internet Explorer, Windows Phone, ASP.NET, and technologies like HTML5 and CSS3.

    MIX isn’t just about getting a first look at the latest technologies and trends – it’s an opportunity for you to have your questions answered by industry and Microsoft experts.

    MIX is a gathering of developers, designers, UX experts and business professionals creating the most innovative and profitable consumer sites on the web. The opportunities for networking are limitless…this is Las Vegas, after all.

    MIX is a unique opportunity to engage with Microsoft and industry professionals in a two-way conversation about the future of the web. MIX is for professionals who design and build cutting-edge websites.

  • Architecture + Strategy

    The Windows Azure Handbook, Volume 1: Strategy & Planning


    I was happy to see that a colleague and friend, David Pallman (GM at Neudesic and one of the few Windows Azure MVPs in the world), has successfully published the first volume of his The Windows Azure Handbook series:

    1. Planning & Strategy
    2. Architecture
    3. Development
    4. Management

    I think that is a very nice organization/separation of the different considerations and intended audiences. And the ‘sharding’ of content into four separate volumes is really not a ploy to sell more of what could be squeezed into one book. :) The first volume itself has 300 pages! And David spent only the minimum required time on the requisite “what is” cloud computing and Windows Azure platform topics (just the two initial chapters, as a matter of fact). He then immediately dived into more detailed discussions of the consumption-based billing model and its impacts, and the many aspects that contribute to a pragmatic strategy for cloud computing adoption.

    Having worked with the Windows Azure platform since its inception, and having successfully delivered multiple real-world customer projects, David definitely has a ton of lessons, best practices, and insights to share. Really looking forward to the rest of the series.

    Kudos to David!

