Architecture + Strategy

Musings from David Chou - Architect, Microsoft

April, 2008

    .NET and Multiple Inheritance

    Occasionally I get questions about why .NET does not support multiple inheritance. It is actually a pretty interesting question to contemplate, though I usually start the conversation by asking: "what issue requires multiple inheritance to solve?".

    More often than not, the question surfaces when people are trying to "do the right thing" by refactoring code in an object-oriented manner and facilitating code reuse through inheritance, but encounter challenges when trying to reuse methods and behaviors defined in separate places in the class hierarchy. Thus the most "natural" thought is: if only I could just inherit the code from these classes...
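
    To make the scenario concrete, here is a minimal C# sketch (the types are hypothetical) of what developers typically run into:

        // A class may inherit implementation from only one base class in C#.
        public class Auditable
        {
            public void WriteAuditEntry(string message) { /* write to an audit log */ }
        }

        public class Persistable
        {
            public void Save() { /* persist state */ }
        }

        // The "natural" wish does not compile:
        // error CS1721: Class 'Order' cannot have multiple base classes
        // public class Order : Auditable, Persistable { }

        // What C# does allow: one base class plus any number of interfaces.
        public interface IPersistable { void Save(); }

        public class Order : Auditable, IPersistable
        {
            // The interface supplies only the contract; the implementation
            // still has to be written (or delegated) here.
            public void Save() { /* persist state */ }
        }

    The interface restores polymorphism but not the implementation, which is precisely the reuse gap the question is really about.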

    Many decisions in language design, just like software projects, are balancing acts between various trade-offs. There are many very interesting conversations happening in the community, such as the debate on generics and closures on the Java side (for example: How Does Language Impact Framework Design? and Will Closures Make Java Less Verbose? and James Gosling's Closures). It is interesting to see how much thought goes into each seemingly small decision to add, or not add, a specific language feature.

    There were many factors that influenced the .NET team to favor not implementing multiple inheritance. A few of the more prominent ones include:

    • .NET was designed to support multiple languages, but not all languages can effectively support multiple inheritance. Technically they could, but the added complexity in language semantics would make some of those languages more difficult to use (and less similar to their roots; VB, for example, has backward compatibility to preserve), which is not worth the trade-off of being able to reuse code via multiple inheritance
    • It would also make cross-language library interoperability (via CLS compliance) less of a reality than it is today, which is one of the most compelling features of .NET. There are over 50 languages supported on .NET in over 70 implementations today
    • The most visible factor is the complexity it adds to language semantics. In C++, explicit language features had to be added to resolve ambiguities caused by multiple inheritance (such as the classic diamond problem); for example, the "virtual" keyword supports virtual inheritance to help the compiler resolve inheritance paths (and developers had to use it correctly, too)
    • As we know, code is written 20% of the time but read 80% of the time. Advocates on the simplicity side thus prefer not to add language features, for the sake of keeping semantics simple. In comparison, C# code is significantly simpler to read than C++ code, and arguably easier to write

    Java doesn't support multiple inheritance either, though probably for a different set of reasons. So this is not a case of simple design oversight or lack of maturity; not supporting multiple inheritance was a careful and deliberate decision for both the .NET and Java platforms.

    So what's the solution? Often people are directed to interfaces, but interfaces alone don't lend themselves very well to reusing code and implementing separation of concerns, as they are really intended to support polymorphism and loosely coupled, contractual design. Rather than tying behaviors into object inheritance hierarchies, however, there are many alternative approaches that can be evaluated to meet those requirements: relevant design patterns like Visitor, frameworks like MVC, delegates, mixins (interfaces combined with AOP), and so on.
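
    As one illustration of the composition route (the types and names here are hypothetical, not a prescribed pattern), behaviors can be factored into small reusable classes and surfaced through interfaces, with the consuming class delegating to contained instances rather than deriving from multiple bases:

        public interface IAuditor { void WriteAuditEntry(string message); }
        public interface IPersister { void Save(object entity); }

        // Reusable behaviors live in their own classes...
        public class FileAuditor : IAuditor
        {
            public void WriteAuditEntry(string message)
            {
                System.IO.File.AppendAllText("audit.log", message + System.Environment.NewLine);
            }
        }

        public class DatabasePersister : IPersister
        {
            public void Save(object entity) { /* persistence logic */ }
        }

        // ...and the business class composes them, forwarding calls.
        // The net effect resembles inheriting from two sources, without MI.
        public class Order : IAuditor, IPersister
        {
            private readonly IAuditor auditor = new FileAuditor();
            private readonly IPersister persister = new DatabasePersister();

            public void WriteAuditEntry(string message) { auditor.WriteAuditEntry(message); }
            public void Save(object entity) { persister.Save(entity); }
        }

    Callers can treat an Order polymorphically as an IAuditor or an IPersister, and each behavior remains reusable by any other class - essentially what the mixin approaches mentioned above automate.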

    The bottom line is that there are quite elegant alternatives to inheriting/deriving behavior in class hierarchies when trying to facilitate code reuse through proper refactoring. The trade-off between code reuse and the costs incurred to manage that reuse is another full topic in itself. In some cases it may have been simpler to have multiple inheritance (and access to sealed classes), but the trade-off may have been greater costs in other areas.

    Microsoft Platform Overview

    I also had the privilege of speaking at the South Bay .NET User Group, at their April monthly meeting, held at the Honda Motors U.S. headquarters campus in Torrance, CA.

    The topic of this presentation was an overview of the neat and new things on the broad Microsoft platform, to help distill an understanding of how Microsoft is evolving the platform in response to major trends in the IT environment today.

    We took a quick glance at many interesting platform components from Microsoft.

    The intention is to show that, in addition to building .NET applications on the core .NET platform (ASP.NET, Atlas/AJAX, WinForms, WPF, WCF, WF, etc.), there are many rich frameworks for building different kinds of applications, often available at a higher abstraction level or specialized for specific scenarios. Awareness of these components gives .NET developers additional options for addressing specific problems or implementing specific capabilities. Existing skills and knowledge on the .NET platform, such as programming in C# and familiarity with the Visual Studio development environment, can easily be extended to create solutions using these rich frameworks and platform components.

    All this is brought together under the context of Microsoft's perception of the future of technology, influenced by major trends today including SOA, Web 2.0, Software-as-a-Service, etc. Microsoft uses the term "Software + Services" to describe this vision, where rich and targeted software components (client-side and installed on-premise) connect to and leverage distributed services (server-side and cloud-based).

    The big question is: is this "Software + Services" view of the future relevant? Arguably, Microsoft seems to be the only one advocating this view of the world, where client software and distributed services combine to deliver compelling user experiences, while mainstream mindshare today seems focused on browser-based applications. It is worth noting that most of the major services players, such as SalesForce, Google, Adobe, Yahoo, Mozilla, etc., are all delivering desktop components that live outside of the browser (or at least work in off-line modes), but their approach seems to be client-side software as an augmentation to cloud-based services (e.g., Google Desktop, Adobe AIR, etc.).

    It is still difficult to say whether "Software + Services" will become more relevant, or whether browser platforms will become even more dominant than they already are. We can expect the browser platform to become more sophisticated, whether via continued improvements in HTML and JavaScript or a shift to RIA platforms such as Adobe Flex and Microsoft Silverlight (and JavaFX, OpenLaszlo, etc.), and we can expect smart client applications to become easier to distribute and manage (like how Firefox manages its own updates). But I do think the probability is higher that not everything will be delivered through browsers.

    In particular, we should expect that organizations will continue to invest in additional channels beyond the browser to reach customers. Desktop gadgets, desktop applications, plug-ins or add-ons to existing desktop application platforms (such as Office clients, Windows Live Messenger, Vista Sidebar, SideShow, etc., on the Microsoft side), multiple device platforms (such as Windows Mobile, Xbox, Zune, Media Center, Windows Embedded, etc., again on the Microsoft side), and various services platforms (such as Windows Live, Popfly, SharePoint Online, etc., on the Microsoft side) are all potential channels to add value to browser-based user experiences, and in many cases, very viable options for differentiating from others.

    Microsoft may be the most vocal about the value of client-side software combined with server-side services, and about building a platform that provides a spectrum of choices (which may be criticized as adding complexity, as opposed to simplifying and unifying into a "good enough" approach). But similar approaches can be identified from other leaders in the industry. Google, for one, is delivering more and more platform components - Google Apps, App Engine, Android, Desktop, GrandCentral, iGoogle, Search/Analytics/Ads, YouTube, and many more in the pipeline, such as audio and video advertising. From a high level, the visible trend is that Google is aggressively diversifying its platform and providing value by allowing customers to leverage the capabilities of those platforms.

    Thus, we can expect the technology "platform" to evolve into a much more diversified set of capabilities, and increasingly, those capabilities can be leveraged via a multitude of means beyond traditional API-based or Web services-based integration - beyond writing code.

    Microsoft Implementing Software Plus Services

    Microsoft has been talking about "Software + Services" (S+S) as its vision of the future for a while now (see related posts on S+S: Microsoft Platform Overview & Talking about Software Plus Services). People like Bill Gates and Ray Ozzie often talk about the applicable patterns and trends that exemplify this concept, even though they don't always mention the moniker.

    And Microsoft's execution in this direction is quite visible too, from continued investments in desktop and enterprise software to the latest and still-growing cloud platform that brings many traditional capabilities onto the Web.

    For example, many of the enterprise servers - Exchange, SharePoint, Office Communications Server, and eventually BizTalk and SQL Server as well - are being implemented as services in the cloud that users can consume directly, without investing in their own physical infrastructure to host and manage them. There is also a lot of progress being made in the consumer space in the form of Windows Live services.

    However, a major value proposition in S+S is the ability to integrate traditional software with distributed services, and bring the best of both worlds together. What has Microsoft done so far to implement that S+S vision?

    Basically, many efforts are happening across the board. Some of the more visible ones include:

    Exchange - supports multiple delivery means (hosted on-premise, outsourced hosting/management by a partner, and a cloud-based service from Microsoft), many clients (Outlook, OWA, Outlook Mobile, Outlook Voice Access), and multiple licensing models (traditional perpetual and subscription); plus it can itself consume attached services such as Forefront spam/filtering services

    Office System - Office clients combined with SharePoint Server represent a business productivity platform (client-server interaction that leverages the many valuable enterprise services in SharePoint, such as enterprise search, content management, business data catalog, business intelligence, etc.). Excel spreadsheets can be published into SharePoint and then provisioned as web services; InfoPath forms, stored as part of SharePoint's InfoPath Forms Services, can be rendered in InfoPath clients or directly from SharePoint. Office clients themselves can also be extended with .NET to connect to back-end systems, whether directly or via SharePoint or BizTalk. There are cloud-based delivery models too, such as Office Live Workspaces (a cloud-based SharePoint service for consumers) and SharePoint Online (for businesses)

    SharePoint - SharePoint Server itself can be deployed on-premise, hosted by a partner, or accessed as a subscription service from Microsoft (SharePoint Online). It also has other flavors, such as Office Live and Office Live Workspaces, that live in the cloud as services for consumers

    Windows Live - known as a set of cloud-based services, but Microsoft has also delivered a set of client-side software (Mail, Messenger, PhotoGallery, Toolbar, Writer) to improve the user experience, in addition to the browser-based interfaces. Also, many of the services offer APIs for people to build applications with

    Office Communications Server - similar to Exchange, it now also has a cloud-based service for people to use (Office Communications Online), plus APIs for developers to build custom branding and user experiences

    Duet - a product that integrates Microsoft Office with SAP; basically, the Office clients serve as the UI for SAP services

    Xbox - Xbox Live is one of the first examples of S+S

    Dynamics - similar model to Exchange - multiple deployment/delivery models, licensing models, and client access channels

    Windows - Windows Update is a componentized client interacting with a cloud-based service; OneCare follows a similar model

    These examples all demonstrate the fundamental principles of S+S.

    One recent offering that is particularly interesting is Windows Live Workspaces (http://workspace.officelive.com). This service, in a way, is Microsoft's response to Google Apps. Instead of converting the Office client suite (Outlook, Word, Excel, PowerPoint, Groove, OneNote, Visio, InfoPath, Access, etc.) into browser-based solutions to compete head-on with Google Apps, Windows Live Workspaces was delivered to offer the sharing and collaboration capabilities that have been cited as the biggest shortcoming of the Office clients.

    Microsoft actually has been delivering SharePoint for a number of years to provide file sharing and collaboration for workgroups and enterprises. But there was a gap in consumer and inter-organizational scenarios that traditional SharePoint deployments (inside the firewall) don't address very well.

    Thus Windows Live Workspaces is still built on SharePoint, but it has been designed specifically to support consumer and end-user collaboration. It provides fine-grained document-level access control, ubiquitous access, cloud-based storage, and client-side add-ons that integrate directly into the Office clients. Users can create/open/save documents in Windows Live Workspaces directly from Word or Excel, for example. And of course, users always have the option to save documents locally until they're ready to share them with other people.

    This illustrates the S+S approach of leveraging the best of both worlds: rich client-side software (sometimes criticized as bloatware, though it can also be perceived as having capabilities ready to use regardless of where a user is, with Internet access or not) that fully leverages the power of the client device to maximize individual productivity, combined with cloud-based platforms for sharing and collaborating with others to maximize group productivity.

    Event - Adopting Visual Studio Team System 2008

    Another one of those free events hosted by our teams and partners in Southern California.

    May 5, 2008 | Los Angeles, CA
    http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032374276&Culture=en-US
    Event ID: 1032374276

    June 10, 2008 | Irvine, CA
    http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032374277&Culture=en-US
    Event ID: 1032374277

    ABSTRACT/SUMMARY

    Microsoft Visual Studio Team System 2008 is an integrated Application Life-cycle Management (ALM) solution comprising tools, processes, and guidance to help everyone on the team improve their skills and work more effectively together. VSTS 2008 provides multi-disciplined team members with an integrated set of tools for architecture, design, development, database development, and testing of applications. Team members can continuously collaborate and utilize a complete set of tools and guidance at every step of the application lifecycle.

    This one-day seminar will walk through VSTS 2008, highlighting new features that are available in the most recent release. Presentations will include demonstrations, best practices, and discussions on all four role-specific editions. We will also cover project management with Team Foundation Server (TFS), leveraging TFS source control, new features such as integration with MOSS, and managing the build process with continuous integration. During lunch, we will also have a discussion around the adoption of methodology within the enterprise, including lessons and experiences from customers that have been through that process.

    Please join Microsoft and Neudesic, a Microsoft Gold Certified Partner, for this one-day seminar. Thank you; we look forward to seeing you there!

    COURSE OUTLINE
    Interactive seminar and demonstrations

    VSTS Role-based Editions
    •     Architect
    •     Developer
    •     Test
    •     Database Professional

    Team Foundation Server

    Adopting a Methodology (lessons from other customers)
    •     Best Practices
    •     Version Control
    •     Project Management using VSTS
    •     Working with Continuous Integration

    REGISTRATION
    May 5, 2008 - Los Angeles
    Time: 9:00 AM-5:00 PM
    Event ID: 1032374276
    Location:
    Microsoft Corporation
    333 S. Grand Ave., Ste. 3300
    Los Angeles, CA 90071
    Phone: 213.806.7422
    Registration Link: http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032374276&Culture=en-US
    Or call 1.877.MSEVENT (1.877.673.8366) and use event ID 1032374276

    June 10, 2008 - Irvine
    Time: 9:00 AM-5:00 PM
    Event ID: 1032374277
    Location:
    Microsoft Corporation
    3 Park Plaza, Ste. 1600
    Irvine, CA 92614
    Phone: 949.263.3000
    Registration Link: http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032374277&Culture=en-US
    Or call 1.877.MSEVENT (1.877.673.8366) and use event ID 1032374277

    SOA Change Management Strategies

    By today's standards, it is pretty well understood that governance is a critical success factor for enterprise SOA initiatives. And there is already a considerably saturated/consolidated market providing SOA governance solutions (see Gartner's Magic Quadrant for Integrated SOA Governance Technology Sets, 2007; Forrester's SOA Service Life-Cycle Management, Q1 2008; and SOA Governance Conference 5 for some content from HP, IBM, Progress Software, and SOA Software).

    A quick glance over the product features and discussions finds that the SOA governance tools in the market today focus on a set of key capabilities:

    • Design-time lifecycle management (contract, metadata, and artifact management; change notification, reporting, and automation; QA and test automation; dependency mapping; policy compliance auditing; etc.)
    • Run-time lifecycle management (versioning, decommissioning, monitoring & reporting, change deployment management, usage and exception metrics collection, policy compliance enforcement, etc.)
    • Security and access control (policy-driven fine-grained service authorization)
    • Integration with service infrastructure (ESB, identity management, single sign-on, MDM, service registries, metadata repositories, PKI, etc.)

    This is just one way of categorizing the capabilities; most vendors have their own ways of describing these products, and some provide more built-in features. These capabilities are quite advanced and do address a wide range of governance needs. And then there is another set of products that aims to address SOA testing and automation needs.

    How to Validate Incremental Changes Deployed to a Live, Real-Time, Inter-Connected, and Inter-Dependent Distributed Architecture?

    The SOA governance tools support this from the perspective of making sure services are developed in compliance with policies and defined contracts, then managed at runtime after deployment and release. The SOA testing tools support this by managing and automating test efforts against component-based service deployments. However, there seems to be a considerable gap in terms of validating and managing changes in an enterprise SOA environment.

    A closer look uncovers many tough questions:

    • How do we validate changes in an SOA where a set of (sometimes hundreds of) physically distributed systems and services has been managed and governed into one logical/virtual entity? Specifically, using traditional multi-stage change management strategies, how do we ensure that a particular change released into the live production environment won't break anything else - often other connected, mission-critical systems running concurrently?
    • Do we trust that changes verified in a QA environment will behave exactly the same in production? If so, is the QA environment an exact replica of production, including the operational data that processing logic depends on?
    • Do we just "unit test" the service components associated with a unit of change, or do we work with other teams to conduct a full integration test in QA? And in an integration test, is the whole virtual enterprise involved, or just the directly connected system components?
    • How do we ensure that downstream components connected in multi-part, distributed transactions are not impacted if we don't conduct integration tests on everything?
    • How do we ensure that the QA environment is pristine, and how do we coordinate among multiple teams' project schedules, which often differ more than they align?
    • In a highly inter-connected and inter-dependent environment, how do we manage the worst-case scenario where hundreds of service components are impacted by a change?
    • If change verification/testing in production is allowed, how do we facilitate synthetic transactions so that actual production data is not interfered with by test cases? (One technique is sketched just after this list.)
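
    To illustrate that last question, here is a minimal sketch (the header name, namespace, and plumbing are hypothetical assumptions) of one way to tag synthetic transactions in WCF, so that every service in a call chain can detect test traffic and suppress real side effects:

        using System.ServiceModel;
        using System.ServiceModel.Channels;

        public static class SyntheticContext
        {
            // Hypothetical header used to mark test traffic end-to-end.
            const string HeaderName = "IsSynthetic";
            const string HeaderNs = "urn:example:change-management";

            // Caller side: stamp the outgoing call as synthetic.
            public static void MarkOutgoingCall()
            {
                MessageHeader header = MessageHeader.CreateHeader(HeaderName, HeaderNs, true);
                OperationContext.Current.OutgoingMessageHeaders.Add(header);
            }

            // Service side: check the incoming message for the marker.
            public static bool IsSyntheticCall()
            {
                MessageHeaders headers = OperationContext.Current.IncomingMessageHeaders;
                int index = headers.FindHeader(HeaderName, HeaderNs);
                return index >= 0 && headers.GetHeader<bool>(index);
            }
        }

    A service operation could then branch on SyntheticContext.IsSyntheticCall() to write to shadow tables or skip downstream billing, keeping production data clean while still exercising the live call path.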

    SOA Requires Different Methods of Managing Changes

    Well, that's obvious, right? :) But fundamentally it probably requires a different way of thinking, too. For example, traditional multi-stage change migration strategies (dev, test, QA, staging/regression, prod, etc.) don't lend themselves very well anymore, as they were most effective at managing changes that are autonomous and local in nature. Now that changes are inter-related and inter-dependent, often impacting a high number of systems not under any one team's management, full integration tests may mean coordinating schedules, code/data versions, security, etc., all bundled into one massive enterprise-wide test - which is too difficult and complex to undertake on a regular basis. And then what happens to the agility SOA was intended to deliver?

    The SOA governance tools today address this change management need mainly via service lifecycle management, so that newer versions of services can be deployed with minimal initial dependencies. Over time, consumers can be migrated from the older versions on their own independent schedules, and eventually the older versions can be decommissioned once no one is using them anymore. However, applications cannot always support multiple versions of the same service (and best practice on when a new version is required, as opposed to a hot fix, is still unclear), or the trade-offs in management costs may not justify doing so.
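
    As a sketch of what side-by-side versioning can look like on the .NET side (the contracts are hypothetical, and WCF is shown only as one possible implementation), a versioned contract namespace lets old and new consumers coexist until the old version is decommissioned:

        using System.ServiceModel;

        // Existing consumers keep calling the older contract...
        [ServiceContract(Namespace = "urn:example:orders:v1")]
        public interface IOrderServiceV1
        {
            [OperationContract]
            string GetOrderStatus(string orderId);
        }

        // ...while new consumers migrate to the newer contract on their own schedule.
        [ServiceContract(Namespace = "urn:example:orders:v2")]
        public interface IOrderServiceV2
        {
            [OperationContract]
            string GetOrderStatus(string orderId);

            [OperationContract]
            string GetOrderHistory(string orderId);
        }

        // A single implementation can serve both endpoints until v1 is retired.
        public class OrderService : IOrderServiceV1, IOrderServiceV2
        {
            public string GetOrderStatus(string orderId) { return "Shipped"; }
            public string GetOrderHistory(string orderId) { return "(history)"; }
        }

    Each interface would be exposed at its own endpoint; consumers bind to the contract version they were built against and migrate on their own schedule.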

    And is implementing SOA governance tools the only effective way to manage changes in an SOA environment? Tools are tools, and they do help, but they often bring a layer of complexity as well. Governance tools are best suited to supporting specific processes defined in specific architectures; they don't actually solve the problems in this area, as those problems are due to the collective processes and systems bound together in an SOA. Thus, well-defined processes and architectures are still required; tools can then be used for automation and enforcement.

    Build Layers of Encapsulation and Abstraction Into an SOA

    This concept is markedly different from the initial intention of transforming a disconnected and siloed enterprise into one seamless entity. But one massive logical SOA may actually be more difficult to manage than a set of smaller, localized/partitioned SOAs federating as one. Even though it is more costly from an infrastructure perspective, this approach has many benefits (especially for larger enterprise environments):

    • Layers of abstraction/encapsulation provide boundaries where changes can be localized, instead of having to be verified against the entire end-to-end architecture
    • The scope of components impacted by integration tests shrinks into smaller, more discrete units, which are easier to coordinate and schedule among a smaller number of involved teams
    • This is still not effective for changes that impact a high number of systems, but smaller, localized changes no longer have to wait for a "big test" to complete before being released
    • Over time, the entire architecture is re-validated
    • From a security perspective, this supports defense in depth

    ESB vendors will like this, as their products are among the most effective solutions for building layers of encapsulation/abstraction into an SOA (though there are many different kinds of ESBs in the market). The point is, from an enterprise architecture perspective, we really don't need to migrate to a fully centralized model when implementing an SOA. A model where local SOAs federate into one enterprise SOA may work out better, providing sufficient local autonomy (type of ESB, local governance, etc.) while coherently organizing the enterprise into one logical entity, and likely delivering higher scalability, reliability, and agility.

    Also, data integration/replication, even though often cited as a major anti-pattern in SOA, can, when applied appropriately, be an effective way to add a layer between different systems when encapsulation is preferred. Basically, inter-dependencies are minimized if there are no distributed transactions binding systems together at the process level.

    A Different Process-Oriented Approach

    A resulting SOA validation strategy is to centrally manage the integration testing schedule in the enterprise QA environment, so that at any one point in time only one set of changes is being validated. Most integration tests should then occur in localized groups and at more discrete intervals/schedules, as opposed to trying to get everyone to undergo validation at the same time, or letting people run over each other with conflicting changes.

    Thus there are three testing models: unit test, localized integration test, and full integration test. Full integration tests are usually preferred (perceived to be more accurate and comprehensive) but are often too cost-prohibitive to undertake. The best trade-off is localized integration tests performed on more discrete and distributed schedules, as each validation can assume it is done in a pristine environment, and logically the entire architecture is re-validated over time.

    In addition, from an SLA or security management perspective, systems are often categorized into different criticality tiers. In an ideal SOA where everything is connected to everything, that shouldn't mean everything is molded into the same tier. Consequently, different strategies can be devised to re-validate systems in different tiers; for example, only unit tests may be required for systems in lower tiers.
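
    As a simple illustration (the tier names and the mapping are hypothetical assumptions, not a standard), such a tiering policy can be made explicit as a mapping from criticality tier to required validation level:

        public enum CriticalityTier { MissionCritical, BusinessImportant, Supporting }

        public enum ValidationLevel { UnitTest, LocalizedIntegrationTest, FullIntegrationTest }

        public static class ChangeValidationPolicy
        {
            // Higher tiers warrant broader re-validation; lower tiers
            // can be released with unit tests alone.
            public static ValidationLevel RequiredFor(CriticalityTier tier)
            {
                switch (tier)
                {
                    case CriticalityTier.MissionCritical: return ValidationLevel.FullIntegrationTest;
                    case CriticalityTier.BusinessImportant: return ValidationLevel.LocalizedIntegrationTest;
                    default: return ValidationLevel.UnitTest;
                }
            }
        }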

    From the enterprise perspective, all three types of tests still get done; it's just a matter of management effort. For example, a full integration test can be scheduled on a quarterly basis, while localized integration tests are used to release regular changes.

    SOA Change Management Requires a Multi-Faceted Approach

    There are still some areas that the current set of SOA governance and testing tools doesn't address very well. It's not that these products lack maturity; it's that some issues are inherent in distributed computing and are created by the collection of design decisions, processes and methodologies, and technologies implemented in an SOA (which, obviously, can differ for each organization). The SOA governance solution vendors themselves state that governance is a people-oriented process.

    Thus, when architecting SOA governance, additional thought needs to be given to these areas in a change management context and integrated into many different aspects of an SOA, leveraging an integrated approach across people, processes, and technologies.
