Architecture + Strategy

Musings from David Chou - Architect, Microsoft

  • Architecture + Strategy

    PDC2008 - A Futures Look at the Microsoft Platform


    Microsoft's Professional Developer Conference (October 27-30, 2008) is coming to the Los Angeles Convention Center again!

    2005 was the last time PDC was held, also in LA. That event unveiled the pieces that make up what we know today as .NET Framework 3.0, as well as Windows Vista, Windows Server 2008, etc. Things such as Windows Communication Foundation (WCF; then called "Indigo"), Windows Presentation Foundation (WPF; then called "Avalon"), and Windows Workflow Foundation represented significant advances in the .NET Framework, and they continue to shape application development efforts on the Microsoft platform today.

    Unlike the TechEd events held annually in Orlando, Florida, PDC focuses on leading-edge technologies and platform components and has usually been held once every two years. So Microsoft spent three years in hibernation this time (arguably for a number of reasons), and it certainly is a much different world today.

    Today we live in an environment where cloud computing has attained mainstream status, and developers and organizations have a larger number of platforms to consider and new, viable vendors to partner with. However, cloud computing also brings along a set of new concerns, models, and architectures.

    As a platform company, Microsoft has also been spending time shifting towards cloud computing. Our approach is to provide a platform that spans the cloud, enterprises, desktops, and devices; a full spectrum of choices that, we think, are relevant for the foreseeable future. And the Microsoft platform is designed to bring all of those previously silo'ed areas together in a seamless and consistent manner that fully addresses the wide range of concerns, models, and architectures in this new environment. Thus at PDC we can expect to get an inside look at how Microsoft's platform for the future has evolved, and how we can leverage existing skillsets to build applications for the future.

    PDC2008 features more than 160 sessions covering a wide range of topics for professional developers and architects. These sessions provide an in-depth technical understanding of Microsoft’s future platform and offer practical guidance to help plan the evolution of your own products.

    The topics include:

    • Cloud services - SQL Server Data Services, messaging and identity services, Live platform services, etc.
    • Live Mesh - Mesh services, FeedSync, device P2P, Mesh Operating Environment, etc.
    • Silverlight - mobile, deep dives, business apps, etc.
    • Cloud synchronization - Sync Framework, ADO.NET Data Services (Astoria), SQL Server project Velocity, etc.
    • .NET Framework - F#, C# futures, VB futures, dynamic languages, COM interop advances, WPF futures, WF futures, Workflow Services, etc.
    • Windows 7 - touch computing, native Web services
    • Windows Mobile - location-based services, Web development

    And many, many more as more details emerge.

    Visit the Microsoft PDC website for up-to-date information.

    For registration details, see http://microsoftpdc.com/Registration/. $200 early bird discount before August 15.

    Hope to see you there! Please say hi if you happen to run into me. :)

  • Architecture + Strategy

    New tools enhance SQL Server security


    In collaboration with the SQL Server and IIS teams and with Hewlett Packard, the Microsoft Security Response Center (MSRC) announced a set of tools that customers can use to defend against SQL injection attacks on their ASP websites and to identify and mitigate the root ASP code vulnerabilities. These tools are available through Microsoft Security Advisory 954462, and they provide customers with automated assistance both in defending against these attacks and in correcting the root cause. The following three tools are available for immediate download:

    • Microsoft Source Code Analyzer for SQL Injection
      New static analysis tool that identifies SQL injection vulnerabilities in ASP source code and suggests fixes. Enables customers to address the vulnerability at the source (see the parameterized-query sketch after this list).
    • URLScan 3.0
      Updated version of the IIS tool that acts as a site filter by blocking specific HTTP requests.  Can be used to block malicious requests used in this attack.
    • Scrawlr
      New scanning tool from Hewlett Packard that scans websites looking for SQL injection vulnerabilities in URL parameters.
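
    For context, the root-cause fix that the Source Code Analyzer steers developers toward is replacing string-concatenated SQL with parameterized queries. Below is a minimal sketch of that principle, shown in C#/ADO.NET rather than classic ASP, and the connection string, table, and column names are hypothetical; the analyzer's suggested fixes for ASP code follow the same idea of keeping user input out of the SQL text.

    ```csharp
    using System.Data.SqlClient;

    public static class ProductLookup
    {
        public static string GetProductName(string connectionString, int productId)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Name FROM Products WHERE ProductId = @productId", connection))
            {
                // The user-supplied value travels as a typed parameter, never as part
                // of the SQL text, so it cannot alter the structure of the query.
                command.Parameters.AddWithValue("@productId", productId);
                connection.Open();
                return command.ExecuteScalar() as string;
            }
        }
    }
    ```
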
  • Architecture + Strategy

    Is Open Source Ready for Prime Time?


    Well, that's an obvious question. And you'd be surprised if you were expecting Microsoft to say no to it. :) The fact is, Microsoft has learned and adapted to embrace and support open source communities and models.

    I had the opportunity to speak at the UCLA Anderson School of Management's IS Associates event, titled "Is Open Source Ready for Prime Time?".

     

    My esteemed fellow speakers included Dan Frye (VP Open Systems Development, IBM), Robert Kabat (Senior Counsel, Twentieth Century Fox), and Howard Wright (IT Strategist, Stradling Global Source; former head of IT at DreamWorks). Our discussions were moderated by Maryfran Johnson (Editorial Director of Executive Programs, CXO Media; former Editor in Chief of Computerworld).

    Some people at the event had anticipated that Microsoft would be very vocal against open source. But in general, all of the speakers, including myself, were in agreement that many open source software products have gained sufficient maturity to meet many enterprise needs.

    Microsoft's Perspective on Open Source

    So what exactly is Microsoft's perspective on open source? Basically, the IT environment today is a mix of open-source and private source software (or proprietary commercial software), and Microsoft recognizes that it needs to be an active participant in that environment, in order to continue to provide value to customers.


    Microsoft does this via a number of efforts:

    • Partnerships with open source software vendors - such as SugarCRM, JBoss, XenSource, MySQL, Novell, Zend, Mozilla, etc. For example, Microsoft's engineering teams worked closely with the Firefox team to make sure Firefox runs well on Windows; similar work ensures platform compatibility and interoperability with many other open source software products
    • Engaging communities - participating in community events, open source conferences, forums and alliances such as the Open Source Interoperability Initiative, Open Source ISV Forum, Interoperability Forum, Interoperability Vendor Alliance, etc.
    • Code contributions - including technology open sourced by Microsoft through the Shared Source initiative, or developed through sponsorship or community project partnerships
    • Support for research efforts - relationships and collaborative projects with the academic research community in computer science, sociology, and economics

    Essentially, Microsoft is now leveraging open source development models and approaches, and supporting open source business models. By embracing diverse application development and business models, Microsoft seeks to participate successfully and responsibly in a world of choice in which individuals and organizations can pursue their goals based on what uniquely inspires them, including open source.

    However, as a software company, Microsoft does compete with open source software products (e.g., Linux, OpenOffice, MySQL, Firefox, Java, Apache, PHP, etc.). But this does not represent a conflict of interest: Microsoft focuses on supporting the people working on and with open source.

    And many open source software products today do present significant challenges to Microsoft's competing products. So does that spell the end of Microsoft, now that conceivably free software that is "good enough" is available? Well, not really. But they do force Microsoft to innovate and add value. In fact, I think open source from this perspective is actually a good influence on Microsoft; it requires us to work harder and try to do what's right for customers.

    Take, for example, Microsoft Office. Do OpenOffice or other conceivably free applications like Google Apps mean that Microsoft should open source Microsoft Office as well?


    Microsoft's response is two-fold.

    First, from a product suite perspective, Office today is no longer just a bag of desktop clients used to create documents. Office 2007 is a bona fide user collaboration and business productivity platform that integrates high-fidelity client components on the desktop, free and subscription-based services in the cloud, enterprise servers, and mobile devices. It provides sophisticated capabilities and a range of choices in how users want to leverage those capabilities. Unlike its current open source and freeware counterparts, it is not a one-size-fits-all solution.

    Second, from a development perspective, the entire suite of Office products spans multiple project teams. Microsoft's vision for the user collaboration and business productivity platform requires a closely managed and well-orchestrated effort across these teams. The question, then, is whether it is more effective to make Office open source so that the community can contribute to its future development, or to continue leveraging Microsoft's closely knit full-time development resources. In Microsoft's opinion, keeping Office private source still makes more sense.

    And of course, this gives Microsoft the ability to continue using traditional licensing means to generate revenue from Office. But I think Office does offer great value to customers at commodity-level pricing points.

    Users' Perspective on Open Source

    Granted, Microsoft is not a major user of open source software products, even though it actively participates in open source communities. Thus, for everyday users of software products, the perspective on open source software is different from Microsoft's. Ultimately, the best way to look at open source software products is that they are software products just like proprietary commercial software products.


    In general, neither open source software nor private source software is inherently better than the other. Advantages claimed for some products on one side can also be claimed for some products on the other side. The same holds true for disadvantages.

    For example, concern about product support has been cited as the biggest challenge for open source software products, as it is unclear who owns accountability and support responsibilities when no single entity owns the software (the community owns it). However, that doesn't mean products like Linux have inferior support; customers can get support from established vendors like Red Hat that leverage a support services model.

    For the most part, the trade-offs between the two sides are quite unclear, as benefits claimed by one side are not always true. For example, one side will say "zero cost," and the other will say "lower long-term cost"; or "good enough" vs. "cheap enough," etc. The list goes on, but none of these claims are consistently true.


    Thus, for users evaluating whether to adopt open source software products, the decision should be made based on specific requirements and evaluated against a list of options that may include both open source and private source software products. The evaluation process should focus on value, which can be a combination of short-term acquisition/deployment costs, long-term ongoing support and maintenance costs, and sometimes hard-to-quantify benefits and costs like productivity gains/losses.


    Some Final Thoughts

    Having said the above, there are still some inherent differences between open source and private source software. One major difference is how these software products are developed. In general, open source software products are created mostly by highly technical people, whereas private source commercial software products are often combined, multi-disciplinary efforts that include, for example, marketers, analysts, researchers, usability experts, creative designers, user experience experts, managers, architects, engineers, etc. Consequently, we can see that the most successful open source projects today (true community-driven projects, not vendor products that do open source) are those that address technical and infrastructure issues. This is not necessarily good or bad; just different. And the likelihood is high that open source projects and development processes will continue to mature, and may at some point integrate additional types of communities to collaboratively create products.

    How does one measure the viability of an open source software product, the way one would evaluate the financial viability of a vendor (as a business partner)? There is no good answer for that question today, but the best approach is to look at the size and activity of the community around a project. The larger and more active the community, the more viable the project is from a long-term perspective. For example, one study indicated that over 2,000 developers contributed to one year's worth of changes to the Linux kernel (just the kernel!), which is one indication that this project will not be abandoned anytime soon.

    And not all software projects should be moved to open source models. Some projects are very well-suited to leveraging open-source development, but some would benefit more from remaining private. Again, there are differences between truly community-driven open source projects like the Linux kernel and vendor-driven products like Eclipse and MySQL that do open source. But in general, we do find that technical products have a higher likelihood of building a strong community to maintain the product over the long term. Thus there is still plenty of room for proprietary/commercial software vendors like Microsoft and IBM to innovate and add value; for example, IBM is building on top of open source products, and Microsoft is focusing on user experience and innovative capabilities.

    Open source licensing, such as LGPL and GPL v3, the ongoing debates around them, and the proliferation of different flavors of open source licenses, are still sources of confusion for users. For example, who is responsible for providing indemnity from a legal perspective? Our recommendation is that end users of open source software products don't really have to worry about this, including developers who build applications using open source software. However, if open source software components are embedded in and/or distributed as part of another commercial product, then the organization needs to engage its legal counsel to identify legal risks and map out mitigation requirements.

  • Architecture + Strategy

    Silverlight 2 Now Full Public Beta


    Silverlight 2 Beta 2 was released last week. This is the major milestone that supports a commercial go-live license, which means the platform is considered stable and robust enough to support mission-critical applications.

    Read Scott Guthrie and my teammate Sam Chenaur's blog entries for details on the release.

    While some people may perceive Silverlight as still in its infancy compared to Flash, I continue to be amazed at the sophistication and quality of solutions developers are building on Silverlight. I also think this is becoming another proof point that specialized platforms do add significant value when implemented effectively, and that one-size-fits-all "good enough" platforms have their limits. Consequently, the strategy of pursuing a diversified platform approach is actually not any more imprudent than choosing to focus on one platform. And ultimately, more choice for customers (e.g., Silverlight, ASP.NET AJAX, WPF, WinForms, Office Business Applications, SharePoint, Live Mesh, etc., as choices in customer-facing platform components) may seem a bit complex, but does offer extraordinary value. In the case of user interface platforms, more choices allow delivery of specialized and high-fidelity user experiences to different user segments, as opposed to requiring all users to work with the same "lowest common denominator" user experience (i.e., HTML-based).

    Some of the most compelling Silverlight applications I have seen (many are registered on Silverlight Showcase) are listed below.

     

    General Info:

    Media sites/demos:

    Rich application sites/demos:

    Casual Games:

    Reusable Controls (for enterprise applications):

    Now these are just some of my favorites. But at the pace developers are building cool Silverlight applications, this list may need to be updated very frequently (last updated 7/11/08).

  • Architecture + Strategy

    SOA Change Management Strategies


    By today's standards, it is pretty well understood that governance is a critical success factor for enterprise SOA initiatives. And there is already a considerably saturated/consolidated market providing SOA governance solutions (see Gartner's Magic Quadrant for Integrated SOA Governance Technology Sets, 2007, Forrester's SOA Service Life-Cycle Management, Q1 2008, and SOA Governance Conference 5 for some content from HP, IBM, Progress Software, and SOA Software).

    A quick glance over the product features and discussions finds that the SOA governance tools in the market today focus on a set of key capabilities:

    • Design-time lifecycle management (contracts & metadata & artifacts management, change & reporting notification plus automation, QA and test automation, dependencies mapping, policy compliance auditing, etc.)
    • Run-time lifecycle management (versioning, decommissioning, monitoring & reporting, change deployment management, usage and exceptions metrics collection, policy compliance enforcement, etc.)
    • Security and access control (policy-driven fine-grained service authorization)
    • Integration with service infrastructure (ESB, identity management, single sign-on, MDM, service registries, metadata repositories, PKI, etc.)

    This is just one way of categorizing the capabilities; most vendors have their own ways of categorizing/describing these products, and some provide more built-in features. These capabilities are quite advanced and do address a wide range of governance needs. And then there is another set of products that aims to address SOA testing and automation needs.

    How to Validate Incremental Changes Deployed to a Live, Real-Time, Inter-Connected, and Inter-Dependent Distributed Architecture?

    The SOA governance tools support this from the perspective of making sure services are developed in compliance with policies and defined contracts, then managed at runtime after deployment and release. The SOA testing tools support this by managing and automating test efforts against component-based service deployments. However, there seems to be a considerable gap in terms of validating and managing changes in an enterprise SOA environment.

    A closer look uncovers many tough questions:

    • How do we validate changes in an SOA where a set of (sometimes hundreds of) physically distributed systems and services have been managed and governed into one logical/virtual entity? Specifically, how do we ensure a particular change being released into the live production environment won't break anything else, which often are other connected mission-critical systems that are running concurrently, based on traditional multi-staging change management strategies?
    • Do we trust that changes verified in a QA environment will behave exactly the same in production? If so, is the QA environment an exact replica of production, including the operational data that processing logic depends on?
    • Do we just "unit test" the service components associated with a unit of change, or do we work with other teams to conduct a full integration test in QA? And in an integration test, is the whole virtual enterprise involved, or just the directly connected system components?
    • How do we ensure that downstream components connected in multi-part, distributed transactions are not impacted if we don't conduct integration tests on everything?
    • How do we ensure that the QA environment is pristine, and how do we coordinate among multiple teams' project schedules, which are often more different than similar?
    • In a highly inter-connected and inter-dependent environment, how do we manage the worst-case scenario where hundreds of service components are impacted by a change?
    • If change verification/testing in production is allowed, how do we facilitate synthetic transactions so that actual production data is not interfered with by test cases?

    SOA Requires Different Methods of Managing Changes

    Well, that's obvious, right? :) But fundamentally it probably requires a different way of thinking too. For example, traditional multi-staging change migration strategies (dev, test, QA, staging/regression, prod, etc.) don't lend themselves very well anymore, as they were more effective at managing changes that are autonomous and local in nature. Now that changes are inter-related and inter-dependent, and often impact a large number of systems not under any one team's management, full integration tests may mean coordinating schedules, code/data versions, security, etc., all bundled into one massive enterprise-wide test. That would be too difficult and complex to undertake on a regular basis; and as a result, what happens to the agility SOA was intended to deliver?

    The SOA governance tools today address this change management need mainly via service lifecycle management, so that newer versions of services can be deployed with minimal initial dependencies. Then, over time, consumers can be migrated over from the older versions on their own independent schedules, and eventually the older versions can be decommissioned once no one is using them anymore. However, applications cannot always support multiple versions of the same service (and best practices on when a new version is required, as opposed to a hot fix, are still unclear), or the trade-offs in management costs may not justify doing so.
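
    To make the side-by-side versioning model concrete, here is a minimal WCF-style sketch; the contract names, namespaces, and endpoint paths are hypothetical rather than taken from any particular product. The new contract version is exposed on its own endpoint while the old one remains available until its consumers have migrated and it can be decommissioned.

    ```csharp
    using System.ServiceModel;

    // Version 1 of the contract stays in place for existing consumers.
    [ServiceContract(Namespace = "http://example.org/customer/v1")]
    public interface ICustomerServiceV1
    {
        [OperationContract]
        string GetCustomerName(int customerId);
    }

    // Version 2 adds capability under a new contract namespace.
    [ServiceContract(Namespace = "http://example.org/customer/v2")]
    public interface ICustomerServiceV2
    {
        [OperationContract]
        string GetCustomerName(int customerId);

        [OperationContract]
        string GetCustomerSegment(int customerId);
    }

    // One implementation can expose both contracts on separate endpoints
    // (e.g. .../CustomerService/v1 and .../CustomerService/v2) until the v1
    // endpoint has no remaining consumers and can be retired.
    public class CustomerService : ICustomerServiceV1, ICustomerServiceV2
    {
        public string GetCustomerName(int customerId) { return "(name)"; }
        public string GetCustomerSegment(int customerId) { return "(segment)"; }
    }
    ```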

    And is implementing SOA governance tools the only effective way to manage changes in an SOA environment? Tools are tools, and they do help, but often they also bring a layer of complexity. And governance tools are best suited to supporting specific processes defined in specific architectures; they don't actually solve the problems in this area, as the problems are due to the collective processes and systems bound together in an SOA. Thus, well-defined processes and architectures are still required; then tools can be used for automation and enforcement.

    Build Layers of Encapsulation and Abstraction Into an SOA

    This concept is markedly different from the initial intention of transforming a disconnected and silo'ed enterprise into one seamless entity. But basically, one massive logical SOA may actually be more difficult to manage than a set of smaller, localized/partitioned SOA's federating as one. Even though it is more costly from an infrastructure perspective, this approach has many benefits (especially for larger enterprise environments):

    • Layers of abstraction/encapsulation provide boundaries where changes can be localized instead of being required to be verified against the entire end-to-end architecture
    • Allows for shrinking and localizing the scope of impacted components in integration tests, into smaller and more discrete units which become easier to coordinate and schedule between smaller number of involved teams
    • Still not effective at addressing changes that impact a high number of systems, but smaller and localized changes no longer have to be tied up waiting for the "big test" to complete before they can be released
    • Over time, the entire architecture is re-validated
    • From a security perspective, this supports defense in depth

    ESB vendors will like this, as these products are the most effective solutions for building layers of encapsulation/abstraction into an SOA. But there are many different kinds of ESB's in the market. The point is, from an enterprise architecture perspective, we really don't need to migrate to a fully centralized model when implementing an SOA. A model where local SOA's federate into one enterprise SOA may work out better, providing sufficient local autonomy (type of ESB, local governance, etc.) while coherently organizing the enterprise into one logical entity, and likely delivering higher scalability, reliability, and agility.

    Also, data integration/replication, even though often cited as a major anti-pattern in SOA, is, when applied appropriately, an effective way to add a layer between different systems when an encapsulation layer is preferred. Basically, inter-dependencies are minimized if there are no distributed transactions binding systems together at the process level.
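
    As a rough illustration of what such an encapsulation boundary can look like at the contract level (all names below are hypothetical, and this is only a sketch), a local facade is the only service exposed across the boundary, so the services behind it can change and be re-validated within their own local group without involving the rest of the enterprise:

    ```csharp
    using System.ServiceModel;

    // Internal contracts; consumers outside the boundary never bind to these directly.
    public interface IInventoryService { bool Reserve(string sku, int quantity); }
    public interface IBillingService { bool Charge(string account, decimal amount); }

    // The facade is the only contract exposed across the boundary.
    [ServiceContract(Namespace = "http://example.org/orders/v1")]
    public interface IOrderFacade
    {
        [OperationContract]
        bool SubmitOrder(string sku, int quantity, string account, decimal amount);
    }

    // Changes to the internal services can be validated within this local group,
    // as long as the facade contract itself remains stable.
    public class OrderFacade : IOrderFacade
    {
        private readonly IInventoryService _inventory;
        private readonly IBillingService _billing;

        public OrderFacade(IInventoryService inventory, IBillingService billing)
        {
            _inventory = inventory;
            _billing = billing;
        }

        public bool SubmitOrder(string sku, int quantity, string account, decimal amount)
        {
            return _inventory.Reserve(sku, quantity) && _billing.Charge(account, amount);
        }
    }
    ```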

    A Different Process-Oriented Approach

    A resulting SOA validation strategy is to centrally manage the integration testing schedule in the enterprise QA environment, so that at any one point in time only one set of changes is being validated. As a result, most integration tests should occur in localized groups and at more discrete intervals/schedules, as opposed to trying to get everyone to undergo validation at the same time, or causing people to run over each other with conflicting changes.

    Thus there are three testing models: unit test, localized integration test, and full integration test. Full integration tests are usually preferred (perceived to be more accurate and comprehensive), but also too cost-prohibitive to undertake. The best trade-off is localized integration tests performed at more discrete and distributed schedules, as each validation can assume it’s done in a pristine environment, and logically the entire architecture is re-validated over time.

    In addition, from an SLA or security management perspective, systems are often categorized into different criticality tiers. In an ideal SOA where everything is connected to everything, that shouldn't mean everything is now molded into the same tier. Consequently, different strategies can be devised to re-validate systems in different tiers; for example, only requiring a unit test for systems in lower tiers.

    The enterprise perspective can be that all three types of tests are done; it's just a matter of the management effort required. For example, a full integration test can be scheduled on a quarterly basis, while localized integration tests are used to release regular changes.

    SOA Change Management Requires a Multi-Faceted Approach

    There are still some areas that the current set of SOA governance and testing tools doesn't address very well. It's not that these products lack maturity; it's that some issues are inherent in distributed computing and are created by the collection of design decisions, processes and methodologies, and technologies implemented in an SOA (which, obviously, can be different for each organization). The SOA governance solution vendors themselves state that governance is a people-oriented process.

    Thus, when architecting SOA governance, additional thought needs to be given to these areas in a change management context, and integrated into many different aspects of an SOA, leveraging an integrated approach across people, processes, and technologies.

