Architecture + Strategy

Musings from David Chou - Architect, Microsoft

  • Architecture + Strategy

    Cloud Computing and User Authentication

    • 4 Comments

When we look at the authentication and authorization aspects of cloud computing, most discussions today point towards various forms of identity federation and claims-based authentication to facilitate transactions between service endpoints, as well as intermediaries, in the cloud. Even though these represent another paradigm shift away from self-managed, explicitly implemented user authentication and authorization, they have a much better chance of effectively managing access from the potentially large numbers of online users to an organization's resources.
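To make the claims-based model concrete, here is a minimal sketch in Python, purely illustrative and not any specific product's token format or protocol: a trusted issuer signs a set of claims, and the relying service verifies the issuer's signature instead of authenticating the user itself. The shared key and token layout are assumptions for the example.

import base64, hashlib, hmac, json

ISSUER_KEY = b"secret-shared-with-trusted-issuer"  # assumed exchanged out of band

def issue_token(claims):
    """The identity provider asserts claims about a user it has authenticated."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    signature = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + signature

def validate_token(token):
    """The relying service trusts the issuer's signature, not the user directly."""
    body, signature = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise PermissionError("token not signed by a trusted issuer")
    return json.loads(base64.urlsafe_b64decode(body))

print(validate_token(issue_token({"name": "alice", "role": "purchaser"})))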

So that covers using trust-based, identity-assertion relationships to connect services in the cloud, but what do we do to authenticate end users and establish their identities? Most user-facing services today still use simple username-and-password, knowledge-based authentication, with the exception of some financial institutions that have deployed various forms of secondary authentication (such as site keys, virtual keyboards, shared secret questions, etc.) to make popular phishing attacks a bit more difficult.

But identity theft remains one of the most prevalent issues in the cloud, and signs show that the rate and sophistication of attacks are still on the rise. The much-publicized DNS cache-poisoning flaw disclosed by Dan Kaminsky at the Black Hat conference earlier (and related posts on C|Net News, InformationWeek, Wired, ZDNet, CIO, InfoWorld, PC World, ChannelWeb, etc.) points out how fragile the cloud still is, from a security perspective, even at the network infrastructure level.

    Strong User Authentication

Thus the most effective way to ensure users are adequately authenticated when using browsers to access services in the cloud is to add an authentication factor outside of the browser (in addition to username/password). This is essentially multi-factor authentication, but the options available today are rather limited once the requirements of scalability and usability are considered.
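As one example of such an out-of-band factor, here is a minimal sketch of a time-based one-time password (TOTP, per RFC 6238, with RFC 4226 dynamic truncation), the kind of code a hardware token or phone can generate independently of the browser session. The shared secret and parameters are illustrative assumptions.

import hmac, hashlib, struct, time

def totp(secret, at_time=None, step=30, digits=6):
    """Derive the one-time code for the current 30-second window."""
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret, submitted):
    """Server-side check; real systems also allow for some clock drift."""
    return hmac.compare_digest(submitted, totp(secret))

secret = b"per-user-shared-secret"  # assumed provisioned at enrollment
print(verify(secret, totp(secret)))  # True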

Designing and implementing effective user authentication was the focus of my recently published article, "Strong User Authentication On the Web", part of the 16th edition of The Architecture Journal. The article discusses a few viable options for implementing "strong" user authentication for end users in the cloud (not limited to multi-factor authentication), and offers an architectural perspective on the capabilities that together form a strong authentication system.

That is just one of the many ways to compose these capabilities together. As we move towards cloud computing, the line between internal security infrastructure and public cloud-based services will continue to blur.

  • Architecture + Strategy

    Internet Service Bus and Windows Azure AppFabric

    • 3 Comments

Microsoft’s AppFabric, part of a set of “application infrastructure” (or middleware) technologies, is (IMO) one of the most interesting areas of the Microsoft platform today, and one where a lot of innovation is occurring. A few of these technologies don’t really have equivalents elsewhere at the moment, such as the Windows Azure AppFabric Service Bus. However, its uniqueness is also often a source of confusion about what it is and how it can be used.

    Internet Service Bus

Oh no, yet another piece of jargon in computing! And just what do we mean by “Internet Service Bus” (ISB)? Is it a hosted Enterprise Service Bus (ESB) in the cloud, B2B SaaS, or Integration-as-a-Service? And haven’t we seen aspects of this before, in the forms of integration service providers, EDI value-added networks (VANs), and even electronic bulletin board systems (BBSs)?

At first look, the term “Internet Service Bus” almost immediately draws an equal sign to “Enterprise Service Bus” (ESB), and to aspects of other current solutions, such as the ones mentioned above, targeted at systems and application integration for enterprises. Indeed, an Internet Service Bus does draw on relevant patterns and models from distributed communication, service-oriented integration, and traditional systems integration in its approach to solving these issues on the Internet. But that is also where the similarities end. We think an Internet Service Bus is something technically different, because the Internet presents a unique problem domain, with distinctive issues and concerns that most internally focused enterprise SOA solutions do not face. An ISB architecture should account for some very key tenets, such as:

• Heterogeneity – the set of technologies implemented at all levels: networking, infrastructure, security, application, data, functional and logical context. The Internet is infinitely more diverse than any one organization’s architecture. And it is not just current technologies on different platforms; there is also the factor of versioning, as varying implementations can remain in the collective environment far longer than typical lifecycles within one organization.
• Ambiguity – the loosely coupled, cross-organizational, geo-political, and unpredictable nature of the Internet means we have very little control over anything beyond our own organization’s boundaries. This is in stark contrast to enterprise environments, which provide higher levels of control.
• Scale – the Internet’s scale is unmatched, and this doesn’t just mean data size and processing capacity. Scale is also a factor in reachability, context, semantics, roles, and models.
• Diverse usage models – the Internet is used by everyone: most fundamentally by consumers and businesses, by humans and machines, and by opportunistic and systematic development. These usage models have enormously different requirements, both functional and non-functional, and they influence how Internet technologies are developed.

For example, it is easy to think that integrating systems and services on the Internet is a simple matter of applying service-oriented design principles and connecting Web service consumers and providers. While that may be the case for most publicly accessible services, things get complicated very quickly once we start layering on different network topologies (firewalls, NAT, DHCP, and so on), security and access control (such as across separate identity domains), and service messaging and interaction patterns (such as multicast, peer-to-peer, RPC, and tunneling). These issues are much more apparent on the Internet than within enterprise and internal SOA environments.

On the other hand, enterprise and internal SOA environments don’t need to be as concerned with these issues, because they benefit from a more tightly controlled and managed infrastructure environment. Plus, integration in enterprise and internal SOA environments tends to occur at a higher level, dealing more with semantics, contexts, and logical representations of information and services, organized around business entities and processes. This doesn’t mean these higher-level issues don’t exist on the Internet; they’re just comparatively more “vertical” in nature.

In addition, there’s the factor of scale, in terms of the increasing adoption of service compositions that cross on-premises and public cloud boundaries (such as when participating in a cloud ecosystem). Indeed, today we can already facilitate external communication to and from our enterprise and internal SOA environments, but doing so requires configuring static openings on external firewalls, deploying applications and data appropriate for the perimeter, applying proper change management processes, delegating to middleware solutions such as B2B gateways, and so on. As we move towards a more interconnected model, extending an internal SOA beyond enterprise boundaries, these changes will become progressively more difficult to manage collectively.

An analogy can be drawn from mobile phones on cellular networks, and how their proliferation changed the way we communicate with each other today. Most of us take for granted the myriad of complex technologies (e.g., cell sites, switches, networks, radio frequencies and channels, movement and handover, etc.) used to facilitate our voice conversations, SMS, MMS, and packet-switched Internet access. We can effortlessly connect to any person, regardless of that person’s location, type of phone, cellular service, or network. The problem domain, considerations, and solution approaches for cellular services are very different from even the current unified communications solutions for enterprises. The point is, as we move forward with cloud computing, organizations will inevitably need to integrate assets and services deployed in multiple locations (on-premises, multiple public clouds, hybrid clouds, etc.). To do so, it will be much more effective to apply SOA techniques at a higher level, building on a seamless communication/connectivity “fabric”, than to rely on the current class of transport-level (HTTPS) or network-level (VPN) integration solutions.

Thus, the problem domain for an Internet Service Bus is more “horizontal” in nature, as the need is vastly broader in scope than current enterprise architecture solution approaches. From this perspective, an Internet Service Bus fundamentally represents a cloud fabric that facilitates communication between software components over the Internet, providing an abstraction from complexities in networking, platform, implementation, and security.

    Opportunistic and Systematic Development

    It is also worthwhile to discuss opportunistic and systematic development (as examined in The Internet Service Bus by Don Ferguson, Dennis Pilarinos, and John Shewchuk in the October 2007 edition of The Architecture Journal), and how they influence technology directions and Internet Service Bus.

Systematic development, in a nutshell, is the world we work in as professional developers and architects: focus and effort centered on structured, methodical development processes to build requirements-driven, well-designed, high-quality systems. Opportunistic development, on the other hand, represents casual or ad hoc projects, end-user programming, and the like.

This is interesting because the majority of development efforts in our work environments, such as enterprises and internal SOA environments, are aligned towards systematic development. But the Internet supports both systematic and opportunistic development, and increasingly so under the influence of Web 2.0 trends. As “The Internet Service Bus” article suggests, much of what we do manually across multiple sites and services today could be considered a form of opportunistic application, if we were to implement it as a workflow or cloud-based service.

And that is the basis for service composition. But to enable that type of opportunistic service composition (which can eventually become layers of compositions), technologies and tools have to evolve into considerably simpler and more abstracted forms, such as model-driven programming. Most importantly, composite application development should not have to deal with complexities in connectivity and security.

And this is one of the reasons why the Internet Service Bus is targeted at a more horizontal, infrastructure-level set of concerns. It is a necessary step in building towards a true service-oriented environment, and in cultivating an ecosystem of composite services and applications that can simplify opportunistic development efforts.

    ESB and ISB

So far we have discussed the fundamental difference between an Internet Service Bus (ISB) and current enterprise integration technologies. But perhaps it’s worthwhile to discuss in more detail how exactly it differs from an Enterprise Service Bus (ESB). Typically, an ESB provides these functions (core plus extended/supplemental):

    • Dynamic routing
    • Dynamic transformation
    • Message validation
    • Message-oriented middleware
    • Protocol and security mediation
    • Service orchestration
    • Rules engine
    • Service level agreement (SLA) support
    • Lifecycle management
    • Policy-driven security
    • Registry
    • Repository
    • Application adapters
    • Fault management
    • Monitoring

In other words, an ESB helps bridge differences in syntax and contextual semantics, as well as technical implementations and platforms, while providing many centralized management capabilities for enterprise SOA environments. However, as mentioned earlier, an ISB targets concerns at a lower, communications-infrastructure level. Consequently, it should provide a different set of capabilities:

• Connectivity fabric – helps set up raw links across boundaries and network topologies, such as NAT and firewall traversal, mobile and intermittently connected receivers, etc. (a minimal sketch of this relayed-connectivity pattern follows this list)
• Messaging infrastructure – provides comprehensive support for application messaging patterns across connections, such as bi-directional/peer-to-peer communication, message buffers, etc.
• Naming and discovery – a service registry that provides stable URIs with a structured naming system, and supports publishing and discovering service endpoint references
• Security and access control – provides a centralized management facility for claims-based access control and identity federation, and mapping to fine-grained permission systems for services
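To illustrate the connectivity-fabric idea, here is a hedged, minimal sketch of the relayed rendezvous pattern in Python. Both parties dial out to a public relay, so neither needs an inbound firewall opening; this is only the general pattern, not the AppFabric Service Bus API or protocol, and the host/port are assumptions.

import socket
import threading

RELAY_HOST, RELAY_PORT = "0.0.0.0", 9000  # assumed public rendezvous endpoint

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay_server():
    """Pair up outbound connections two at a time and bridge them."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind((RELAY_HOST, RELAY_PORT))
    listener.listen()
    while True:
        service, _ = listener.accept()  # the listening service registers first...
        client, _ = listener.accept()   # ...then a client joins
        threading.Thread(target=pump, args=(service, client), daemon=True).start()
        threading.Thread(target=pump, args=(client, service), daemon=True).start()

if __name__ == "__main__":
    relay_server()

Because both sides initiate outbound TCP connections, the pattern works through NAT and typical outbound-only firewall policies; a real service bus adds naming, authentication, and buffering on top of this raw bridge.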

[Figure: Internet Service Bus architecture]

And this is not simply a subset of ESB capabilities; an ISB has a few fundamental differences:

• Works through any network topology, whereas ESB communications require well-defined network topologies to work on top of
• Contextual transparency, whereas an ESB has more to do with enforcing context
• A services federation model (community-centric), whereas an ESB aligns more towards a services centralization model (hub-centric)

So what about some of the ESB capabilities missing from this list, such as transformation, message validation, protocol mediation, complex orchestration, and rules engines? For an ISB, these capabilities should not be built into the bus itself. Rather, leverage the seamless connectivity fabric to add your own implementation, or use one that is already published (either cloud-based services deployed on the Windows Azure platform, or on-premises services from your own internal SOA environment). The point is, in the ISB’s services federation model, application-level capabilities can simply be additional services projected/published onto the ISB, then leveraged via service-oriented composition (see the sketch below). The ISB provides just the infrastructure layer; the application-level capabilities are part of the services ecosystem driven by the collective community.
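A toy sketch of that federation idea, in Python: capabilities such as validation and transformation are not bus features, just more services published under names the bus can resolve, which callers then compose. The registry, service names, and message shapes here are all hypothetical.

from typing import Callable, Dict

ServiceFn = Callable[[dict], dict]
registry: Dict[str, ServiceFn] = {}  # stand-in for the ISB's naming/registry service

def publish(name: str, fn: ServiceFn) -> None:
    """Project a capability onto the bus under a stable name."""
    registry[name] = fn

def compose(*names: str) -> ServiceFn:
    """Chain published services into one logical pipeline."""
    def pipeline(message: dict) -> dict:
        for name in names:
            message = registry[name](message)
        return message
    return pipeline

def validate(msg: dict) -> dict:
    if "order_id" not in msg:
        raise ValueError("missing order_id")
    return msg

def transform(msg: dict) -> dict:
    # e.g., map a partner's schema onto our own
    return {"OrderId": msg["order_id"], "Qty": msg.get("qty", 1)}

publish("validate", validate)
publish("transform", transform)

process_order = compose("validate", "transform")
print(process_order({"order_id": "A-17", "qty": 3}))  # {'OrderId': 'A-17', 'Qty': 3}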

On the other hand, for true ESB-level capabilities, ESB SaaS or Integration-as-a-Service providers may be more viable options, while an ISB can still supply the underlying connectivity layer for seamless communication over the Internet. Thus ISB and ESB are actually quite complementary technologies: the ISB provides the seamless Internet-scoped communication foundation that supports cross-organizational and federated cloud models (private, public, community, federated ESB, etc.), while ESB solutions in various forms provide the higher-level information and process management capabilities.

    Windows Azure AppFabric Service Bus

The Service Bus provides secure messaging and connectivity capabilities for building distributed and disconnected applications in the cloud, as well as hybrid applications that span both on-premises and cloud environments. It supports various communication and messaging protocols and patterns, and saves developers from having to worry about delivery assurance, reliable messaging, and scale.

    AppFabric Service Bus

In a nutshell, the Windows Azure AppFabric Service Bus is intended as an Internet Service Bus (ISB) solution, while BizTalk continues to serve as the ESB solution (though the cloud-based version of BizTalk may take a different form). The Windows Azure AppFabric Service Bus will play an important role in enabling application-level (as opposed to network-level) integration scenarios, supporting hybrid cloud implementations and federated applications that participate in a cloud ecosystem.

  • Architecture + Strategy

    Cloud Computing and Microsoft

    • 7 Comments

Here's a popular topic. ;) A quick search of the blogosphere finds countless posts and comments proclaiming the inevitable (or already happening) decline of Microsoft as we near the age of cloud computing.

    A small sampling of some well-known publications finds comparatively less dramatic views, but the theme is quite consistent - cloud computing is heralded as the future, and Google is best positioned to dominate this new era.

While things look quite certain on Google's side in terms of succeeding on the Internet and leading (or riding) the cloud computing tidal wave, at this point the same does not seem true for Microsoft.

    Single-Era Conjecture

    "The invisible law that makes it impossible for a company in the computer business to enjoy pre-eminence that spans two technological eras"; as described in the NYTimes article, "The Computer Industry Comes With Built-In Term Limits". While I think the article did not articulate the precise reasons why Microsoft won't stay on top through a major paradigm shift (other than focusing on where Google is succeeding and where Microsoft isn't), the question is still valid.

The reason is that cloud computing itself represents a migration from distributed client-server and on-premises software models to a more centralized model (logically centralized, though physically distributed). Eric Schmidt said, "What [cloud computing] has come to mean now is a synonym for the return of the mainframe... and the mainframe is a set of computers... They're in a cloud somewhere." And this everything-as-a-service (in the cloud) model directly conflicts with what people most commonly identify Microsoft with: software for people to use (most visibly Windows and Office - client OS and desktop software).

    More specifically, various trends in the past 10+ years have also influenced this paradigm shift. Just to list the obvious ones:

    • Consumerization of the Web, and use of browsers
    • Application development efforts shifting towards thin clients and server-side programming
    • Improvements in network bandwidth, anywhere wireless access, etc.
    • Increased maturity in open source software
    • Proliferation and advancement of mobile devices
    • Service Oriented Architecture
    • Software-as-a-Service
    • Utility/Grid Computing

Effectively, the shift towards browsers as the primary access channel for consumers and deployment target for organizations has reduced the need for locally installed software. And this directly impacts Windows and Office, plus server software traditionally acquired and installed on-premises - perceivably the entire Microsoft product portfolio.

Plus, arguably, Microsoft's investments in online services so far have not won leadership positions or profitability. Recent deliverables such as Windows Live have made significant progress - especially Live Search, where the technology has closed some gaps with Google - but they still trail in terms of adoption. And the gap seems to be widening still.

So it seems the "Single-Era Conjecture" must be true, and Microsoft is walking towards its impending downfall? That would certainly be the case if Microsoft completely ignored the market and did not see the need to change, or were incapable of change.

    However, Microsoft does see the need, and has been working on many significant changes to shift towards the cloud.

But, then, why does it seem so difficult for Microsoft to make significant progress in this area? A few thoughts:

    • Microsoft still has a $60B/year and consistently growing business in software
    • As a publicly traded company, Microsoft cannot simply abandon the current software customer base to fully pursue the new services model
    • Microsoft's own culture has to shift as well, but that won't happen overnight given the size and complex environment
    • As a software platform company, Microsoft's focus has been delivering innovation-enabling capabilities to IT customers, not massive scale and speed and multi-tenancy in Microsoft's own technical infrastructures
    • Software is a complex business, and Microsoft has built extensive processes and organizations to deliver secure and robust software, compared to Google's rapid release and perpetual beta model. It will require significant changes in Microsoft to compete on the same grounds as Google, or achieve a similar level of agility
• If the glass is half-full, Microsoft actually has made very significant progress; but Google has set the bar very high for anyone to be compared against
    • General mindshare shift towards open source software

    What About the Other Guys?

So much attention has been focused on the perceived "war" between Microsoft and Google. But what about the rest of the IT industry? This shift towards cloud computing in many ways also challenges their existing businesses. I'll bet many organizations are thinking that they, too, missed the opportunity to grab market share in online search.

But from a corporate strategy perspective, the seemingly different approaches are remarkable. Microsoft is pursuing the path of organically building towards the cloud. Oracle is starting to build data centers, but in general seems to prefer acquiring its way into the cloud once the market begins to mature, as it has done with major acquisitions over the past decade. IBM is also building data centers, but seems to leverage and build upon pure cloud players, such as its partnership with Google, as it has done with open source software. Similarly, Intel, HP, and Yahoo have teamed up to build towards the cloud as well. Very distinct strategies, but each does seem a sensible approach for its organization.

However, not everyone will be able to afford building massive data centers and establishing clouds. Yahoo Research Chief Prabhakar Raghavan said, "In a sense, there are only five computers (clouds) on earth," listing Google, Yahoo, Microsoft, IBM, and Amazon. Well, maybe not exactly, but that view of the world is also not entirely implausible - and imagine the kinds of changes everyone will have to undergo to reach that state.

That view also underscores a complete commoditization of the entire IT industry. While unfathomable by today's standards, it is conceivable that a paradigm shift such as cloud computing could exact such sweeping changes. Cloud-based services benefit from economies of scale, which allow them to be provisioned at lower cost. Existing IT shops, which increasingly face the need to reduce maintenance costs while delivering innovation, may very well favor switching to cloud-based, utility-like services - often trading some quality for lower cost.

Nicholas Carr said in his book The Big Switch: "In the long run, the IT department is unlikely to survive, at least in its familiar form. It will have little left to do once the bulk of business computing shifts out of private data centers and into the cloud." There is also the very contrarian and sensational blog post, "Your new IT budget: $10".

This means IT shops will need progressively smaller investments in in-house technology and resources to support operations, which also translates to fewer in-house technical specialists. As demand for technology and technical skills decreases, so does their value - marking the trend towards complete commoditization. Scary thought, no?

Of course, Nick Carr's view has generated a lot of controversy, as well as an ample amount of vehement disagreement. But in that light, doesn't Microsoft's scramble towards the cloud seem reasonable - and shouldn't whoever is not doing the same be a bit concerned?

    Microsoft's Perspective

In a nutshell, Microsoft does see the importance of cloud computing, and is actively building towards the cloud. But Microsoft's approach is based on the considerably pragmatic perspective that the world will not completely move into the cloud. We think on-premises and local/client software will still have value in the future, but the cloud will undoubtedly be a major component as well.

    The moniker that best describes Microsoft's approach is "Software Plus Services".

Basically, we expect the world to remain in a hybrid state, where commonly used services (like today's outsourcing of payroll to ADP) may move into the cloud, but many differentiating capabilities continue to be implemented and delivered towards the edge. Some potential factors:

    • User experience - browsers tend to constrict differentiation; client software is still best at delivering high-fidelity and robust user experiences
    • Not one-size-fits-all - not all things are suitable for the cloud; many capabilities are still best delivered on the client or on-premise
    • Organizations' need to innovate and differentiate from competitors using the same or similar commoditized services in the cloud
• Advancements in human-device interfaces will drive a new class of clients (e.g., Surface, Sphere, 3-D displays, multi-touch, etc.), which may drive entirely new ways to interact with information (with browsers becoming the legacy mode)
    • Merits of distributed computing may shift attention back towards the edge again, as certain tasks are more suitable for smaller and more distributed units than a few massive centralized clouds (such as disaster recovery, defense-in-depth, data privacy, etc.)

    So what is Microsoft doing to shift towards cloud computing? To list some interesting points:

    • Microsoft's recent surge in investments on building massive data centers around the world (for example, Quincy, WA; Chicago, IL; San Antonio, TX; Des Moines, IA; Dublin, Ireland; Siberia, Russia, etc.). Some of these data centers cost more than $500M each
    • Every one of Microsoft's server products will eventually be offered as subscription-based services in the cloud; though licensing-based on-premise versions will also continue to be offered to provide the full range of choices for customers (reinforcing "Software + Services")
• For example, Exchange, SharePoint, and Office Communications Server are already part of the Microsoft Online Services offerings; BizTalk and SQL Server are also being delivered as BizTalk Services and SQL Server Data Services; and there is Dynamics CRM Online
    • One publicized case is Coca-Cola Enterprises moving 30,000+ knowledge workers to Exchange Online (from on-premise Lotus Notes)
    • Significant investments in advertising solutions; both on-line and off-line
    • Significant investments in leveraging open source software
    • Research efforts in multi-core and massively distributed parallel processing capabilities
• Closing the gap in online search is just one component of the overall services strategy, which targets a full spectrum of audiences and different types of services (e.g., infrastructure/building blocks, complementary/attached services, consumer/finished services, etc.)

    Thus Microsoft definitely aspires to become a major player in the cloud. What remains to be seen is just how well Microsoft executes towards the cloud.

Interesting times ahead. One thing is certain - more changes are coming, and we can expect Microsoft to ramp up activities in this area very aggressively over the next few years. Don't count Microsoft out, just yet. :)

    This post is part of a series of articles on cloud computing and related concepts.


  • Architecture + Strategy

    Multi-Enterprise Business Applications (MEBA) as Cloud-Based Next-Generation B2B Business Processes

    • 8 Comments

    Multi-Enterprise Business Applications (MEBA) are a new class of applications that can be used to support business processes that span enterprise and organizational boundaries. MEBAs leverage best practices and patterns from service-oriented architecture (SOA) techniques and technologies, and specifically cloud-based platforms, to facilitate the next-generation B2B (or multi-enterprise) collaboration.

This is a project I had the privilege of participating in for the past few months, along with my esteemed colleague Wade Wegner, under Jack Greenfield's leadership, as part of Microsoft's Platform Architecture Team. The project was highlighted at Microsoft’s Professional Developers Conference (PDC2008) in Los Angeles a few weeks ago, in the RedPrairie keynote demos that showcased Microsoft’s cloud computing platform, and discussed in more depth in one of the breakout sessions (see Wade's blog for more info).

And more recently, we took a more architectural look at MEBAs at Microsoft's Strategic Architect Forum 2008 (SAF08).


So what do we mean by “multi-enterprise business applications”? They are a new class of applications, different from the traditional data-driven applications that focus on managing data and resources and providing access to end users. MEBAs focus instead on implementing business processes that span enterprises, such as traditional B2B integration and collaboration scenarios. They leverage and build upon message-based integration with well-defined protocols and roles - the fundamental approaches and technologies used in enterprise service-oriented architecture initiatives. In a way, they represent an approach to extending enterprise SOA beyond the four walls of each enterprise, to integrate and work more seamlessly with other enterprises.

At the same time, multi-enterprise business applications also have some differentiating requirements. Scenarios may include participants distributed throughout different parts of the world. And since MEBAs are intended to support mission-critical business activities, they need a very robust architecture that can ensure high availability, high reliability, and a high level of security, plus auditing, reporting, regulatory compliance, and so on - not significantly different from enterprise IT architecture concerns in that respect.

And unlike traditional SOA applications, which focus more on functional capabilities within one enterprise, MEBAs extend SOA concepts and technologies to business processes that span multiple enterprises. In addition, because MEBAs operate between organizations, their primary concerns are also different: community management, identity management, process execution management, multi-enterprise governance, etc.

Moreover, unlike the incumbents in the B2B software and service provider spaces, whose implementations today consist mostly of customizing proprietary products and services, MEBA defines an architecture in which foundational, common services are implemented, and describes how they can be composited into applications that define business processes. The evolution and maturity of MEBAs also depend more on the community than on any one vendor maintaining a solution. MEBAs are a new class of applications, not a new class of products or solutions.

Thus MEBAs can be used across many different industries, especially ones that, from a traditional B2B perspective, tend to have a lot of cross-organizational collaboration needs. During this phase of the MEBA project, we took a more detailed look at the supply chain industry, and found many capabilities that today leverage various technologies and implementation approaches to facilitate communication and integration across multiple partners on supply chain networks - in effect, across multiple enterprises. For our project, we chose one specific area, supply chain orchestration, to investigate further and to evaluate how MEBAs can meet its specific requirements. At the next lower level of detail, we identified a set of common scenarios in supply chain orchestration, then chose a few to prototype - such as search for capacity and product recall - by building on Microsoft's Azure Services Platform.

    Challenges Today

Now, taking a few steps back, let’s talk about how some of these scenarios and capabilities are implemented today across the various industries. First of all, we have multiple protocols intended to standardize the interaction models and the data being exchanged. Some are industry-specific, such as RosettaNet, HL7 for healthcare, and FIX for financial services, while others are designed to be more general-purpose, such as ebXML, WS-BPEL (formerly BPEL4WS), WS-Choreography, Java Business Integration (JBI), and many others.

But this also hints at one of the challenges we have today: in a way, there are simply too many standards. And many organizations are in the process of developing more standards to define how multi-enterprise collaboration should be facilitated in their industries - automotive and auto parts distribution, just to name a couple.

And the observation today is that B2B - integration and collaboration between multiple enterprises, or inter-enterprise SOA - is still relatively complex and difficult to implement and manage.

For example, we have a very diverse set of technologies, accumulated over the past 25 years or so of attempts at optimizing or streamlining communications between organizations: everything from EDI, FTP, on-premises software, integration service providers, and B2B gateways, to the current class of SOA solutions that can be extended to support B2B scenarios.

And beyond the underlying technologies, enterprises have very sophisticated needs and concerns in many areas, such as security, data ownership, management, and governance. What’s more, these needs and concerns are amplified, and become more complex, when viewed from a cross-organizational perspective.

    Why MEBA?

So integration, and enabling business processes across organizational boundaries, have been complex and difficult to accomplish for many years. Why do we think we now have a better chance at simplifying and streamlining efforts in this area - and why do we think MEBAs, as a new class of applications, may have a better chance at doing so?

Traditional B2B or multi-enterprise communication and collaboration was complex to implement, and often unreliable and error-prone. Organizations had to deal with a multitude of technologies and standards, plus additional sets of technologies and implementations for each partner organization on a one-to-one basis. MEBAs aim to streamline and simplify these inherent complexities by providing an architecture that builds on existing technologies and abstracts away infrastructure concerns.

    From a timing perspective, the growing inter-dependence and always-connected environment for businesses, and availability of key new technologies (Web services, SOA, cloud computing, etc.), are showing signs that the time is right to take a new approach at multi-enterprise (or B2B) interactions. MEBAs build on the best practices and proven technology models of today, and offer the potential to greatly streamline and simplify efforts of implementing business processes that span multiple enterprises.

In general, MEBAs leverage many of these best practices and build on many of the latest trends - in particular, the concept of an “Internet Service Bus” (ISB) providing an architecture and platform upon which multi-enterprise business applications can be constructed, simplifying and streamlining many aspects of today’s B2B integration while meeting the needs of the organizations and participants involved. We think this ISB concept will significantly transform the architecture, and how we design and implement approaches to facilitate multi-enterprise business applications.

And of course, this doesn’t mean we intend to replace the work organizations have already done with protocols and standards (such as RosettaNet, ebXML, HL7, FIX, etc.). In fact, we think we can leverage that work, using MEBAs to provide implementations of these protocols, as well as an environment/platform to facilitate their orchestration and management.

    Thus MEBAs provide the potential of these high-level benefits:

Business benefits – improved business agility, bottom-line results (reduced errors and costs, faster response), and top-line revenue (higher automation, improved relationships with partners and customers)

Technical benefits – simplified and streamlined connectivity, improved visibility and governance, higher quality of service, a focus on delivering business processes instead of infrastructure, standards-based technologies, the ability to bridge multiple disparate identity systems, etc.

    MEBA Reference Architecture


Now let us show you what a reference architecture for multi-enterprise business applications may look like. Just as you would expect, there are several layers of capabilities.

At the base, these applications need a foundational services layer, which provides core infrastructure capabilities such as a compute/runtime environment, an identity management system, workflow execution and management, a robust messaging infrastructure, a data management solution, and operational management services.

On top of that, we can build a layer of higher-level, more functional services, providing capabilities such as community management (extending trading-party management from a traditional B2B perspective), services orchestration, business process management, party management, and so on.

Finally, we can build various types of communities, or partner networks, that define working relationships between multiple organizations. For example, each organization can belong to multiple communities, and the administrators of a particular community can invite and add new partners. We would also like to use model-driven techniques to define business processes, and to be able to provision one or more instances of a given business process, each supporting a specific community of trading partners.
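As a rough illustration of that community layer, here is a hedged Python sketch of community membership and per-community process provisioning. The data model and the partner names (other than RedPrairie, mentioned above) are hypothetical, not the actual MEBA implementation.

from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class Community:
    """A partner network with an administering organization."""
    name: str
    admin: str
    members: Set[str] = field(default_factory=set)

    def invite(self, inviter: str, org: str) -> None:
        if inviter != self.admin:
            raise PermissionError("only the community admin may invite partners")
        self.members.add(org)

@dataclass
class ProcessInstance:
    """One provisioned business process serving one community."""
    process: str           # e.g., "product-recall"
    community: Community

communities: Dict[str, Community] = {}
instances: List[ProcessInstance] = []

retail = Community(name="retail-supply", admin="RedPrairie")
retail.invite("RedPrairie", "Contoso Retail")      # hypothetical partners
retail.invite("RedPrairie", "Fabrikam Logistics")
communities[retail.name] = retail

# Provision an instance of the product-recall process for this community
instances.append(ProcessInstance("product-recall", retail))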

    Azure Services Platform

When we saw the plans for the Azure Services Platform, we studied them to understand the kinds of applications it would support. Our conclusion was that it would spawn a second generation of B2B applications, and we coined the name MEBA to describe this new class of applications. Our focus here is on MEBAs and how they can leverage the Azure Services Platform, so we will not get into the details of the platform itself, but we do want to articulate specifically how some of its components help support the MEBA concept.

    Windows Azure – the cloud compute platform, provides the underlying runtime environment for the foundational services and MEBA implementation. It offers high scalability and reliability, and global reach to partners across the world

    .NET Services – with its Service Bus, Access Control Services, and Workflow Services, provides the fundamental “Internet Service Bus” that can effectively address common concerns around identity and security, connectivity, and service orchestrations between multiple organizations

SQL Services – provides the scalable and reliable cloud-based database solution, which helps establish databases in the cloud to manage the data shared across the multiple organizations, especially the data that supports the communities and their interactions

    Live Services – provides capabilities to connect to end users and support many aspects of human interaction needs

    Food for Thought

MEBAs have the potential to completely change the way enterprises interact with each other. If we further extend the MEBA vision of simplifying and streamlining the infrastructure concerns in facilitating business processes across organizations, we may see a world where business networks can be quickly and dynamically constructed to deliver business capabilities, simply by connecting the dots (or building with blocks). Business results delivered by the collective resources and capabilities of a network of enterprises will become true differentiating factors, more significant than anything one enterprise can deliver alone. New business models may also emerge, as enterprises increasingly participate in the larger scheme of things, sometimes as part of processes that span industries. Control of processes may begin to shift from being enterprise-centric to community-centric. Ultimately, enterprises can create new, more diverse, and more differentiated business offerings by leveraging the community of partners.

    This post is part of a series of articles on cloud computing and related concepts.

  • Architecture + Strategy

    Silverlight and Photosynth at Stargate Universe

• 1 Comment


    MGM and Microsoft partnered to build an application intended to give old and new fans of Stargate a first look at the main location of the new Stargate Universe show, an ancient starship called the Destiny. The sets for the Destiny are amongst the most sophisticated ever built for a TV show, accurately capturing the spirit of a spaceship that has been abandoned for millennia. 

MGM wanted to give fans an accurate experience of what it would be like to walk around and explore the Destiny, and, using Microsoft Silverlight and Photosynth, was able to deliver an immersive 3-D experience.

    UPDATE 2009.09.04 – a Microsoft Case Study and video has been published for this project. You can view the Case Study at http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?casestudyid=4000005102, and the video at http://www.microsoft.com/video/en/us/details/c0387435-dd01-4c5a-87c9-6e87cedeee15.

    Solution

Over the space of two days, several hundred photographs were taken of locations on the Destiny starship. These include the fan-favorite Gate Room, from where the crew embarks on their adventures through the Stargate, as well as the ship’s control center, observation deck, and main shuttle.

    These photographs were used to create a number of 3D scenes using Microsoft Photosynth technology.  This technology analyzes pictures and creates a virtual 3D space using the different viewpoints from different pictures. Because the pictures were shot in high resolution, fans can zoom in and see the sets in great detail.


    See the Stargate Universe Photosynth set at http://stargate.mgm.com/photosynth/index.html

    Application Scenario

Now, Photosynth does not require any programming effort. All we need to do is take pictures - lots of pictures - and upload them to Photosynth.net. Photosynth does the rest: it interprets textures in each photo, correlates the photos by matching similar textures, and figures out where each photo is located spatially. Finally, all of the photos in one set are stitched into a 3D environment for visitors to navigate through.
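As a toy illustration of the photo-correlation step described above (and emphatically not Photosynth's actual pipeline), the sketch below uses OpenCV in Python to detect local features in two overlapping photos and count the matches between them; the filenames are hypothetical.

import cv2  # pip install opencv-python

# Hypothetical overlapping shots of the same set
img1 = cv2.imread("gate_room_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("gate_room_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect scale-invariant features and their descriptors in each photo
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep matches clearly better than the runner-up
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

print(f"{len(good)} shared features suggest the photos overlap in space")

Systems like Photosynth take such correspondences across hundreds of photos and solve for the camera positions, which is what produces the navigable 3D scene.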

Compare Photosynth to traditional approaches for delivering 360-degree views of items, rooms, etc., which require carefully planned photo shoots, then writing or obtaining specialized software to render the results and let viewers interact with them. The time and effort required to create a “synth” can literally be minutes. Then, with just a few HTML-level changes to embed the Silverlight viewer for Photosynth into a web page, anyone visiting that page can interact with the synth.

[Screen shot: the Photosynth gallery view]

It’s kind of like a photo gallery on steroids. However, Photosynth is not just a slideshow: it allows viewers to traverse the set of pictures in a non-linear manner, and to see the spatial perspectives between pictures (where each photo was taken relative to the others).

At the same time, specific considerations apply when taking photos for Photosynth: basically, cover lots of angles with lots of overlap for the overview shots, then take lots of close-ups for the detailed shots. Here’s a short tutorial on how to shoot for Photosynth.

For the Stargate Universe synth, extra attention was paid to the physical architecture of the Destiny set, and the series of shots was taken so that the resulting synth would give viewers the experience of walking through the set.

Thus Photosynth can be leveraged in many scenarios, especially when a project has physical content that can be shared online - product showcases, exterior environments, interior walk-throughs, etc. It is another simple and elegant way to enhance user acquisition and retention on websites.

