Architecture + Strategy

Musings from David Chou - Architect, Microsoft

  • Architecture + Strategy

    Building Highly Scalable Java Applications on Windows Azure (JavaOne 2010)

    • 14 Comments

    JavaOne has always been one of my favorite technology conferences, and this year I had the privilege to present a session there. Given my background in Java, previous employment at Sun Microsystems, and the work I’m currently doing with Windows Azure at Microsoft, it’s only natural to try to piece them together and find more ways to use them. Well, honestly, this also gives me an excuse to attend the conference, plus the co-located Oracle OpenWorld, along with 41,000 other attendees. ;)

    A related article published on InfoQ may also provide some context around this presentation: http://www.infoq.com/news/2010/09/java-on-azure-theory-vs-reality. Plus my earlier post on getting Jetty to work in Azure - http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx - which goes into a bit more technical detail on how a Java application can be deployed and run in Windows Azure.

    Java in Windows Azure

    So at the time of this writing, deploying and running Java in Windows Azure is conceptually analogous to launching a JVM and running a Java app from files stored on a USB flash drive (or files extracted from a zip/tar file without any installation procedures). This is primarily because Windows Azure isn’t a simple server/VM hosting environment. The Windows Azure cloud fabric provides a lot of automation and abstraction so that we don’t have to deal with server OS administration and management. For example, developers only have to upload application assets, including code, data, content, policies, configuration files, service models, etc.; while Windows Azure manages the underlying infrastructure:

    • application containers and services, distributed storage systems
    • service lifecycle, data replication and synchronization
    • server operating system, patching, monitoring, management
    • physical infrastructure, virtualization, networking
    • security
    • “fabric controller” (automated, distributed service management system)

    The benefit of this cloud fabric environment is that developers don’t have to spend time and effort managing the server infrastructure; they can focus on the application instead. However, the higher abstraction level also means we are interacting with sandboxes and containers, and there are constraints and limitations compared to the on-premise model where the server OS itself (or middleware and app server stack we install separately) is considered the platform. Some of these constraints and limitations include:

    • dynamic networking – requires interaction with the fabric to figure out the networking environment available to a running application. And as documented, at this moment, the NIO stack in Java is not supported because of its use of loopback addresses
    • no OS-level access – cannot install software packages
    • non-persistent local file system – have to persist files elsewhere, including log files and temporary and generated files

    These constraints impact Java applications because the JVM is a container itself and needs this higher level of control, whereas .NET apps can leverage the automation enabled in the container. The good news is, the Windows Azure team is working hard to deliver many enhancements to help with these issues, and interestingly, in both directions: adding more higher-level abstractions as well as providing more lower-level control.

    Architecting for High Scale

    So at some point we will be able to deploy full Java EE application servers and enable clustering and stateful architectures, but for really large-scale applications (at the level of Facebook and Twitter, for example), the current recommendation is to leverage shared-nothing and stateless architectures. This is largely because, in cloud environments like Azure, the vertical scaling ceiling for physical commodity servers is not very high, and adding more nodes to a cluster architecture means we don’t get to leverage the automated management capabilities built into the cloud fabric. Plus, we need to design for system failures (service resiliency), as opposed to assuming a fully redundant hardware infrastructure as we typically do with large on-premise server environments.


    (Pictures courtesy of LEGO)

    The top-level recommendation for building a large-scale application in commodity-server-based clouds is to apply more distributed computing best practices, because we’re operating in an environment with more, smaller servers, as opposed to fewer, bigger servers. The last part of my JavaOne presentation goes into some of those considerations. Basically: small pieces, loosely coupled. It’s not like traditional server-side development, where we’d try to get everything accomplished within the same process/memory space, per user request. Applications can scale much better if we defer (async) and/or parallelize as much work as possible; very similar to Twitter’s current architecture. So we could end up having many front-end Web roles that just receive HTTP requests, persist some data somewhere, fire off event(s) into a queue, and return a response. Then another layer of Worker roles can pick up the messages from the queue and do the rest of the work in an event-driven manner. This model works great in the cloud because we can scale the front-end Web roles independently of the back-end Worker roles, and we don’t have to worry about physical capacity.
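    To make that a bit more concrete, here is a minimal sketch (not from the talk itself) of the Web role / Worker role hand-off using the Windows Azure Storage queue API in the StorageClient library. The queue name, payload, and class names are just illustrative:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public class OrderEvents
    {
        private readonly CloudQueue queue;

        public OrderEvents(CloudStorageAccount account)
        {
            // "order-events" is an illustrative queue name
            queue = account.CreateCloudQueueClient().GetQueueReference("order-events");
            queue.CreateIfNotExist();
        }

        // Called from the front-end Web role: validate/persist, fire the event, return.
        public void Publish(string orderId)
        {
            queue.AddMessage(new CloudQueueMessage(orderId));
        }

        // Called from the Worker role's Run() loop: pick up the message and do the rest of the work.
        public void ProcessNext()
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message != null)
            {
                // ... perform the longer-running business processing here ...
                queue.DeleteMessage(message);   // delete only after successful processing
            }
        }
    }

    Because the Web roles and Worker roles share only the queue, each tier can be scaled out (or in) independently by changing its instance count.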


    In this model, applications need to be architected with these fundamental principles:

    • Small pieces, loosely coupled
    • Distributed computing best practices
      • asynchronous processes (event-driven design)
      • parallelization
      • idempotent operations (handle duplicates)
      • de-normalized, partitioned data (sharding)
      • shared nothing architecture
      • optimistic concurrency
      • fault-tolerance by redundancy and replication
      • etc.

    Thus traditionally monolithic, sequential, and synchronous processes can be broken down into smaller, independent/autonomous, and loosely coupled components/services. As a result of the smaller process footprint and loosely coupled interactions, the overall architecture will exhibit better system-level resource utilization (it is easier to handle more, smaller, and faster units of work), improved throughput and perceived response time, and superior resiliency and fault tolerance; leading to higher scalability and availability.

    Lastly, even though this conversation advocates a different way of architecting Java applications to support high scalability and availability, the same fundamental principles apply to .NET applications as well.

  • Architecture + Strategy

    Designing for Cloud-Optimized Architecture

    • 13 Comments

    I wanted to take the opportunity to talk about cloud-optimized architecture, the implementation model, as opposed to the popular perception of cloud computing as simply a deployment model. This is because, while cloud platforms like Windows Azure can run a variety of workloads, including many legacy/existing on-premises software and application migration scenarios that can run on Windows Server, I think Windows Azure’s platform-as-a-service model offers a few additional distinct technical advantages when we design an architecture that is optimized (or targeted) for the cloud platform.

    Cloud platforms differ from hosting providers

    First off, the major cloud platforms (regardless how we classify them as IaaS or PaaS) at the time of this writing, impose certain limitations or constraints in the environment, which makes them different from existing on-premises server environments (saving the public/private cloud debate to another time), and different from outsourced hosting managed service providers. Just to cite a few (according to my own understanding, at the time of this writing):

    • Amazon Web Services
      • EC2 instances are inherently stateless; that is, their local storage is non-persistent and non-durable
      • Little or no control over infrastructure that are used beneath the EC2 instances (of course, the benefit is we don’t have to be concerned with them)
      • Requires systems administrators to configure and maintain OS environments for applications
    • Google App Engine
      • Non-VM/OS instance-aware platform abstraction, which further simplifies code deployment and scaling, though it imposes some technical constraints (or requirements) as well. For example:
      • Stateless application model
      • Requires data de-normalization (although Hosted SQL will mitigate some concerns in this area)
      • If the application can't load into memory within 1 second, it may fail to load and return 500 error codes (from Carlos Ble)
      • No request can take more than 30 seconds to run, otherwise it is stopped
      • Read-only file system access
    • Windows Azure Platform
      • Windows Azure instances are also inherently stateless – round-robin load balancer and non-persistent local storage
      • Also due to the need to abstract infrastructure complexities, little or no control for the underlying infrastructure is offered to applications
      • SQL Azure has individual DB sizing constraints due to its 3-replica synchronization architecture

    Again, this is just based on my understanding, and I’m really not trying to paint a “who’s better or worse” comparative perspective. The point is, these so-called “differences” exist because of many architectural and technical decisions and trade-offs made to provide the abstractions from the underlying infrastructure. For example, the list above is representative of the most common infrastructure approach of using homogeneous, commodity hardware and achieving performance through scale-out of the cloud environment (there’s another camp of vendors advocating big-machine, scale-up architectures that are more similar to existing on-premises workloads). Also, the list above may seem unfair to Google App Engine, but on the flip side of those constraints, App Engine is an environment that forces us to adopt distributed computing best practices, develop more efficient applications, and have them operate in a highly abstracted cloud that benefits from automatic scalability, without having to be concerned at all with the underlying infrastructure. Most importantly, the intention is to highlight that there are a few common themes across the list above – stateless application model, abstraction from infrastructure, etc.

    Furthermore, if we take a cloud computing perspective, instead of trying to apply the traditional on-premises architecture principles, then these are not really “limitations”, but more like “requirements” for the new cloud computing development paradigm. That is, if we approach cloud computing not from a how to run or deploy a 3rd party/open-source/packaged or custom-written software perspective, but from a how to develop against the cloud platform perspective, then we may find more feasible and effective uses of cloud platforms than traditional software migration scenarios.

    Windows Azure as an “application platform”

    Fundamentally, this is about looking at Windows Azure as a cloud platform in its entirety, not just a hosting environment for Windows Server workloads (which works too, but the focus of this article is on the cloud-optimized architecture side of things). In fact, Windows Azure got its name because it is something a little different from Windows Server (at the time of this writing). Technically, even though the Windows Azure guest VM OS is still Windows Server 2008 R2 Enterprise today, the application environment isn’t exactly the same as having your own Windows Server instances (even with the new VM Role option). It is more about leveraging the entire Windows Azure platform, as opposed to building solely on top of the Windows Server platform.

    For example, below is my own interpretation of the platform capabilities baked into the Windows Azure platform, which includes SQL Azure and Windows Azure AppFabric as first-class citizens of the Windows Azure platform, not just Windows Azure itself.

    (diagram: the Windows Azure platform capabilities, spanning Windows Azure, SQL Azure, and Windows Azure AppFabric)

    I prefer using this view because I think there is value in looking at the Windows Azure platform holistically. And instead of thinking first about its compute (or hosting) capabilities in Windows Azure (where most people tend to focus), it’s actually more effective/feasible to think first from a data and storage perspective, as ultimately, code and applications mostly follow data and storage.

    For one thing, the data and storage features in the Windows Azure platform are also a little different from having our own on-premises SQL Server or file storage systems (whether distributed or local to Windows Server file systems). The Windows Azure Storage services (Table, Blob, Queue, Drive, CDN, etc.) are highly distributed applications themselves that provide near-infinitely scalable storage that works transparently across an entire data center. Applications just use the storage services, without needing to worry about their technical implementation and upkeep. By contrast, with traditional outsourced hosting providers that don’t yet have their own distributed storage systems, we’d still have to figure out how to implement and deploy a highly scalable and reliable storage system when deploying our software. Of course, the Windows Azure Storage services require us to use new programming interfaces and models (primarily REST-based APIs), hence the difference from existing on-premises Windows Server environments.
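    As a small, hedged illustration of what this looks like in practice, the snippet below persists a generated file to the Blob service through the StorageClient library (which wraps the REST API). The connection string, container name, and blob name are made up for this example:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    // Development storage is used here just to keep the example self-contained.
    CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
    CloudBlobClient blobClient = account.CreateCloudBlobClient();

    CloudBlobContainer container = blobClient.GetContainerReference("generated-reports");
    container.CreateIfNotExist();

    CloudBlob blob = container.GetBlobReference("2010/11/report.txt");
    blob.UploadText("report contents");      // issued as a REST PUT under the covers
    string roundTrip = blob.DownloadText();  // issued as a REST GET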

    SQL Azure, similarly, is not just a plethora of hosted SQL Server instances dedicated to customers/applications. SQL Azure is actually a multi-tenant environment where each SQL Server instance can be shared among multiple databases/clients, and for reliability and data integrity purposes, each database has 3 replicas on different nodes and has an intricate data replication strategy implemented. The Inside SQL Azure article is a very interesting read for anyone who wants to dig into more details in this area.

    Besides, in most cases, a piece of software that runs in the cloud needs to interact with data (SQL or NoSQL) and/or storage in some manner. And because the data and storage options in the Windows Azure platform are a little different from their seeming counterparts in on-premises architectures, applications often require some changes as well (in addition to the differences in Windows Azure alone). However, if we look at these differences simply as requirements (what we have) in the cloud environment, instead of constraints/limits (what we don’t have) compared to on-premises environments, then it will take us down the path of building cloud-optimized applications, even though it might rule out a few application scenarios as well. And the benefit is that, by leveraging the platform components as they are, we don’t have to invest the engineering effort to architect, build, and deploy highly reliable and scalable data management and storage systems (e.g., build and maintain your own implementations of Cassandra, MongoDB, CouchDB, MySQL, memcached, etc.) to support applications; we can just use them as native services in the platform.

    The platform approach allows us to focus our efforts on designing and developing the application to meet business requirements and improve user experience, by abstracting away the technical infrastructure for data and storage services (and many other interesting ones in AppFabric such as Service Bus and Access Control), and system-level administration and management requirements. Plus, this approach aligns better with the primary benefits of cloud computing – agility and simplified development (less cost as a result).

    Smaller pieces, loosely coupled

    Building for the cloud platform means designing for cloud-optimized architectures. And because the cloud platforms are a little different from traditional on-premises server platforms, this results in a new development paradigm. I previously touched on this topic with my presentation at JavaOne 2010, then later on at Cloud Computing Expo 2010 Santa Clara; I’m just adding some more thoughts here. To clarify, this approach is more relevant to the current class of “public cloud” platform providers, such as the ones identified earlier in this article, as they all employ homogeneous, commodity servers, with one of the goals being to greatly simplify and automate deployment, scaling, and management tasks.

    Fundamentally, cloud-optimized architecture is one that favors smaller and loosely coupled components in a highly distributed systems environment, more than the traditional monolithic, accomplish-more-within-the-same-memory-or-process-or-transaction-space application approach. This is not just because, from a cost perspective, running 1000 hours’ worth of processing in one VM costs relatively the same as running one hour each in 1000 VMs on cloud platforms (whereas the cost differential between 1 server and 1000 servers is far greater in an on-premises environment). But also, for a similar cost, that one unit of work can be accomplished in approximately one hour (in parallel), as opposed to ~1000 hours (sequentially). In addition, the resulting “smaller pieces, loosely coupled” architecture can scale more effectively and seamlessly than a traditional scale-up architecture (and usually costs less too). Thus, there are some distinct benefits we can gain by architecting a solution for the cloud (lots of small units of work running on thousands of servers), as opposed to trying to do the same thing we do in on-premises environments (fewer, larger transactions running on a few large servers in HA configurations).

    I like using the LEGO analogy here. From this perspective, the “small pieces, loosely coupled” fundamental design principle is sort of like building LEGO sets. To build bigger sets (from a scaling perspective), with LEGO we’d simply use more of the same pieces, as opposed to trying to use bigger pieces. And of course, the same pieces can allow us to scale down the solution as well (and not having to glue LEGO pieces together means they’re loosely coupled). ;)

    But this architecture also has some distinct impacts to the way we develop applications. For example, a set of distributed computing best practices emerge:

    • asynchronous processes (event-driven design)
    • parallelization
      • idempotent operations (handle duplicates)
    • de-normalized, partitioned data (sharding)
    • shared nothing architecture
    • fault-tolerance by redundancy and replication
    • etc.

    Asynchronous, event-driven design – This approach advocates off-loading as much work from user requests as possible. For example, many applications simply incur the work to validate/store the incoming data, record it as an occurrence of an event, and return immediately. In essence it’s about divvying up the work that makes up one unit of work in a traditional monolithic architecture, as much as possible, so that each component only accomplishes what is minimally and logically required. The rest of the end-to-end business tasks and processes can then be off-loaded to other threads, which, in cloud platforms, can be distributed processes running on other servers. This results in a more even distribution of load and better utilization of system resources (plus improved perceived performance from a user’s perspective), thus enabling simpler scale-out scenarios, as additional processing nodes and instances can simply be added to (or removed from) the overall architecture without any complicated management overhead. This is nothing new, of course; many applications that leverage Web-oriented architectures (WOA), such as Facebook, Twitter, etc., have applied this pattern in practice for a long time. Lastly, of course, this also aligns well with the common stateless “requirement” in the current class of cloud platforms.

    Parallelization – Once the architecture is running in smaller and loosely coupled pieces, we can leverage parallelization of processes to further improve the performance and throughput of the resulting system architecture. Again, this wasn’t so prevalent in traditional on-premises environments, because creating 1000 additional threads on the same physical server doesn’t get us that much more of a performance boost when it is already bearing a lot of traffic (even on really big machines). But in cloud platforms, this can mean running the processes on 1000 additional servers, and for some processes this results in very significant differences. Google’s Web search infrastructure is a great example of this pattern; it is publicized that each search query gets parallelized to the degree of ~500 distributed processes, and then the individual results get pieced together by the search rank algorithms and presented to the user. And of course, this also aligns with the de-normalized data “requirement” in the current class of cloud platforms, as well as SQL Azure’s implementation, which resulted in some sizing constraints and the consequent best practice of partitioning databases; parallelized processes can map to database shards while avoiding a significant increase in concurrency levels on individual databases, which can still degrade overall performance.
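    As an illustrative sketch (not from the presentation), the fragment below fans a query out over a set of data partitions using the .NET 4 Task Parallel Library and then merges the partial results; in Windows Azure the same fan-out is often done across Worker role instances via queues, but the principle is the same. The Partition, Result, Search, and Merge names are hypothetical placeholders:

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public static Result QueryAllShards(IEnumerable<Partition> partitions, string query)
    {
        var partialResults = new ConcurrentBag<Result>();

        // Each partition is searched independently and concurrently.
        Parallel.ForEach(partitions, partition =>
        {
            partialResults.Add(Search(partition, query));   // hypothetical per-shard search
        });

        // Piece the partial results back together, analogous to the ranking/merge step.
        return Merge(partialResults);
    }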

    Idempotent operations – Now that we run in a distributed but stateless environment, we need to make sure that the same request, when routed to multiple servers, doesn’t result in multiple logical transactions or business state changes. There are processes that can tolerate, or even prefer, duplicate transactions, such as ad clicks; but there are also processes where multiple requests should not be handled as duplicates. The stateless (and, in Windows Azure, round-robin load-balancing) nature of cloud platforms requires us to put more thought into scenarios such as a user managing to send multiple submits from a shopping cart, as these requests would get routed to different servers (as opposed to stateful architectures where they’d get routed back to the same server with sticky sessions) and each server wouldn’t know about the existence of the process on the other server(s). There is no easy way around this, as the application ultimately needs to know how to handle conflicts due to concurrency. The most common approach is to implement some sort of transaction ID that uniquely identifies the unit of work (as opposed to simply relying on user context), then choose between last-writer-wins, first-writer-wins, or optimistic locking (though any form of locking starts to reduce the effectiveness of the overall architecture).
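    Here is a minimal sketch of the transaction ID approach, assuming a hypothetical ITransactionLog store (e.g., a table keyed by the transaction ID) that records an ID exactly once; the Order type is also just a placeholder:

    public interface ITransactionLog
    {
        // Returns false if the transaction ID was already recorded by another instance.
        bool TryRecord(string transactionId);
    }

    public class OrderSubmissionHandler
    {
        private readonly ITransactionLog log;

        public OrderSubmissionHandler(ITransactionLog log)
        {
            this.log = log;
        }

        public void Submit(string transactionId, Order order)
        {
            // A duplicate submit routed to a different instance carries the same
            // transaction ID, so it becomes a no-op instead of a second state change.
            if (!log.TryRecord(transactionId))
                return;

            // ... apply the business state change exactly once ...
        }
    }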

    De-normalized, partitioned data (sharding) – Many people perceive the sizing constraints in SQL Azure (currently at 50GB – also note it’s the DB size and not the actual file size which may contain other related content) as a major limitation in Windows Azure platform. However, if a project’s data can be de-normalized to a certain degree, and partitioned/sharded out, then it may fit well into SQL Azure and benefit from the simplicity, scalability, and reliability of the service. The resulting “smaller” databases actually can promote the use of parallelized processes, perform better (load more distributed than centralized), and improve overall reliability of the architecture (one DB failing is only a part of the overall architecture, for example).
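    As a simple sketch of how an application might route data to a shard, the fragment below maps a partition key to one of a fixed set of databases using a stable hash; the shard connection strings are illustrative, and real shard maps usually also handle re-balancing:

    public static class ShardMap
    {
        // Illustrative connection strings, one per shard/database.
        private static readonly string[] Shards =
        {
            "Server=tcp:myserver.database.windows.net;Database=shard0;...",
            "Server=tcp:myserver.database.windows.net;Database=shard1;...",
            "Server=tcp:myserver.database.windows.net;Database=shard2;..."
        };

        public static string GetConnectionString(string partitionKey)
        {
            // Deterministic hash so the same key always lands on the same shard
            // (string.GetHashCode is not guaranteed to be stable across processes).
            int hash = 0;
            foreach (char c in partitionKey)
            {
                hash = (hash * 31 + c) & 0x7FFFFFFF;
            }
            return Shards[hash % Shards.Length];
        }
    }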

    Shared nothing architecture – This means a distributed computing architecture in which each node is independent and self-sufficient, and there is no single point of contention across the system. With data sharding and maintained in many distributed nodes, the application itself can and should be developed using shared-nothing principles. But of course, many applications need access to shared resources. It is then a matter of deciding whether a particular resource needs to be shared for read or write access, and different strategies can be implemented on top of a shared nothing architecture to facilitate them, but mostly as exceptions to the overall architecture.

    Fault-tolerance by redundancy and replication – This is also “design for failures”, as referred to by many cloud computing experts. Because of the use of commodity servers in these cloud platform environments, system failures are a common thing (hardware failures occur almost constantly in massive data centers), and we need to make sure we design the application to withstand system failures. Similar to the thoughts around idempotency above, designing for failures basically means allowing requests to be processed again; “try again”, essentially.
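    A minimal “try again” sketch is shown below; a real implementation would catch only transient exception types and pair the retry with the idempotency approach above so that re-processing a request is safe:

    using System;
    using System.Threading;

    public static void ExecuteWithRetry(Action work, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                work();        // the (idempotent) unit of work
                return;
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw;     // give up and let the caller (or the fabric) handle it

                // Simple back-off before trying again on this or another node.
                Thread.Sleep(TimeSpan.FromSeconds(attempt));
            }
        }
    }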

    Lastly, each of the topic areas above is worthy of an individual article and detailed analysis, and lots of content is available on the Web that provides much more insight. The point here is, each of the principles above actually has some relationship with, and dependency on, the others. It is the combination of these principles that contributes to an effective distributed computing and cloud-optimized architecture.

  • Architecture + Strategy

    Run Java with Jetty in Windows Azure

    • 11 Comments
    Windows Azure

    [Update 2011.01.17] NIO is no longer an issue in Windows Azure with SDK 1.3 (see post for more details)

    [Update 2010.03.28] Included Jetty configuration information (see “Configure Jetty” section below)

    Jetty is a Java-based, open source Web server which provides an HTTP server and Servlet container capable of serving static and dynamic content, in either standalone or embedded instantiations. Jetty is used by many popular projects such as the Apache Geronimo Java EE-compliant application server, Apache ActiveMQ, Apache Cocoon, Apache Hadoop, Apache Maven, BEA WebLogic Event Server, Eucalyptus, FioranoMQ Java Messaging Server, Google App Engine and Web Toolkit plug-in for Eclipse, Google Android, RedHat JBoss, Sonic MQ, Spring Framework, Sybase EAServer, Zimbra Desktop, etc. (just to name a few).

    The Jetty project provides:

    • Asynchronous HTTP Server
    • Standard based Servlet Container
    • Web Sockets server
    • Asynchronous HTTP Client
    • OSGi, JNDI, JMX, JASPI, AJP support

    From an application container perspective, Jetty can be used as an alternative deployment approach for the most popular frameworks in Java, such as Spring (and many of its sub-projects), EJB containers, integration with Java EE servers as mentioned above, and a lot more, mostly supported via configuration as opposed to code-level integration.

    Java Support in Windows Azure

    Since PDC09 (Professional Developers Conference), we’ve announced support for running Java and Tomcat in Windows Azure, and highlighted a project at Domino’s Pizza that ran a version of their online pizza ordering website in Windows Azure. Below is a short list of current resources that provide information on Java support in Windows Azure:

    However, this doesn’t mean that Tomcat is the only Java application container supported in Windows Azure. In fact, the approach basically consists of rolling in your own JRE (Java runtime), and any Java package that can be instantiated via the command line (instead of needing to install into the O/S). This Java application can then be packaged into a Worker Role application, then deployed into Windows Azure.

    So why a Worker Role when we have a Web Role in Windows Azure? A Web Role essentially uses an IIS front-end, thus it supports ASP.NET applications, and any FastCGI extensions (Java is supported there too, but I’ll save that for another post). But a Worker Role gives us a bit more flexibility, as a Web Role may define a single HTTP endpoint and a single HTTPS endpoint for external clients, whereas a Worker Role may define any number of external endpoints using HTTP, HTTPS, or TCP. Each external endpoint defined for a role must listen on a unique port.

    Thus things like Tomcat (and Jetty), which want to do their own listening on ports they define, are more suitable for Worker Roles in Windows Azure. And to do that, a bit of plumbing is required. The biggest challenge is actually hooking up the physical port that the Windows Azure fabric controller assigns to an instance of the Worker Role, even though logically the pool of Worker Roles is intended to receive external HTTP traffic on port 80 (or any port of your choosing). This is because, internally in the Windows Azure environment, multiple VMs can reside on a single physical box, and the load balancer may need to forward requests to dynamically provisioned instances residing in different locations; thus the fabric controller uses a different set of internal ports for this internal portion of the communication. Again, for Web Roles we don’t need to be concerned with this, as the fabric controller automatically configures IIS to listen on the correct internal port.

    And this plumbing is exactly what the Tomcat Solution Accelerator provides (essentially making a call to the Worker Role environment to find the right port, making that change in server.xml for Catalina in Tomcat to pick up, then starting the server to listen on that port). But if we want to do something else, like Jetty, then we need to do this plumbing ourselves. Don’t worry, though; it’s actually pretty simple, and it really opens up the opportunity to deploy all kinds of stuff into Windows Azure.

    Run Jetty in Windows Azure in <10 Minutes

    Below is a screenshot of my little Worker Role in Windows Azure running a Jetty Web server on port 80 (don’t try the URL; I didn’t leave the application running).

    (screenshot: Jetty Web server running in a Windows Azure Worker Role)

    Worker Role Implementation

    To do this, basically just start with a new Cloud project in Visual Studio, and add one Worker Role (just the same as a “Hello World” Worker Role app). And below is the only code I added inside of the Run() method in the WorkerRole class (minus the tracing code which I removed from this view):

    string response = "";
    try
    {
        System.IO.StreamReader sr;

        // Ask the role environment for the internal port that the load balancer
        // forwards to, for the "HttpIn" endpoint defined in ServiceDefinition.csdef.
        string port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["HttpIn"].IPEndpoint.Port.ToString();

        // The JRE and Jetty files are deployed as application assets under approot\app.
        string roleRoot = Environment.GetEnvironmentVariable("RoleRoot");
        string jettyHome = roleRoot + @"\approot\app\jetty7";
        string jreHome = roleRoot + @"\approot\app\jre6";

        // Launch the JRE as a sub-process, pointing Jetty at its home directory
        // and at the port assigned to this Worker Role instance.
        Process proc = new Process();
        proc.StartInfo.UseShellExecute = false;
        proc.StartInfo.RedirectStandardOutput = true;
        proc.StartInfo.FileName = String.Format("\"{0}\\bin\\java.exe\"", jreHome);
        proc.StartInfo.Arguments = String.Format("-Djetty.port={0} -Djetty.home=\"{1}\" -jar \"{1}\\start.jar\"", port, jettyHome);
        proc.EnableRaisingEvents = false;
        proc.Start();

        // Block on Jetty's console output so the Worker Role stays alive while the server runs.
        sr = proc.StandardOutput;
        response = sr.ReadToEnd();
    }
    catch (Exception ex)
    {
        response = ex.Message;
        Trace.TraceError(response);
    }

    Essentially, all this does is start a new sub-process in the Worker Role instance to host and run the JRE and Jetty. The rest is figuring out how to provide the correct arguments to the process so that Jetty loads properly and listens on the port that this particular Worker Role instance is assigned, as the forwarding port from the load balancer.

    And of course, this is a pretty simplified scenario (and also because Jetty is pretty easy to launch), though more complicated configurations can also be supported, such as setting environment variables, changing XML configuration files (like what we have to do with Tomcat’s server.xml), executing a .bat file or other scripts, starting up multiple processes, etc.
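    For example, here is a hedged sketch of what the environment-variable and script variations might look like, reusing the port, roleRoot, jettyHome, and jreHome variables from the snippet above; the startserver.bat script name is hypothetical:

    Process proc = new Process();
    proc.StartInfo.UseShellExecute = false;                  // required when setting environment variables
    proc.StartInfo.EnvironmentVariables["JAVA_HOME"] = jreHome;
    proc.StartInfo.EnvironmentVariables["JETTY_HOME"] = jettyHome;
    proc.StartInfo.FileName = roleRoot + @"\approot\app\startserver.bat";  // a bootstrap script instead of java.exe
    proc.StartInfo.Arguments = port;                         // the script can pass the port on to Jetty
    proc.Start();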

    Add the Java and Jetty Assets

    Next, copy and paste the JRE and Jetty files into the WorkerRole’s folder. In my case, I put them under another folder called “app”, in their individual folders “jre6” and “jetty7”. Under Solution Explorer in Visual Studio, this looks like:

    (screenshot: Solution Explorer showing the app\jre6 and app\jetty7 folders under the Worker Role project)

    Configure the External Input Endpoint

    And the last significant change is to ServiceDefinition.csdef, by adding the below to the “WorkerRole” element:

    <Endpoints>
      <InputEndpoint name="HttpIn" port="80" protocol="tcp" />
    </Endpoints>

    This is required to tell the fabric controller that the external input endpoint for this application is port 80.

    Configure Jetty (updated 2010.03.28)

    [Update 2011.01.17] NIO is no longer an issue in Windows Azure with SDK 1.3 (see post for more details). There is no need to replace the NIO connector with the BIO connector as described below.

    There are many ways Jetty can be configured to run in Azure (such as an embedded server, or starting from scratch instead of using the entire distribution with demo apps), so earlier I didn’t include how my deployment was configured, as each application can use different configurations and I didn’t think my approach was necessarily the most ideal solution. Anyway, here is how I configured Jetty to run for the Worker Role code shown in this post.

    First, I had to change the default NIO ChannelConnector that Jetty was using, from the new/non-blocking I/O connector to the traditional blocking IO and threading model BIO SocketConnector (because the loopback connection required by NIO doesn’t seem to work in Windows Azure).

    This can be done in etc/jetty.xml, where we set connectors, by editing the New element to use org.eclipse.jetty.server.bio.SocketConnector instead of the default org.eclipse.jetty.server.nio.SelectChannelConnector, and remove the few additional options for the NIO connector. The resulting block looks like this:

    <Call name="addConnector">
      <Arg>
        <New class="org.eclipse.jetty.server.bio.SocketConnector">
          <Set name="host"><SystemProperty name="jetty.host" /></Set>
          <Set name="port"><SystemProperty name="jetty.port" default="8080" /></Set>
          <Set name="maxIdleTime">300000</Set>
        </New>
      </Arg>
    </Call>

    Now, I chose the approach of using the JRE and Jetty files simply as application assets (read-only), mostly because this is a simpler exercise, and I wanted to streamline the Worker Role implementation as much as possible, without having to do more configuration on the .NET side than necessary (such as allocating a separate local resource/storage for Jetty, copying files over from the deployment, and using that as a runtime environment where Jetty can write to files).

    As a result of this approach, I needed to make sure that Jetty doesn’t need to write to any files locally when the server is started. Description of what I did:

    • etc/jetty.xml – commented out the default “RequestLog” handler so that Jetty doesn’t need to write to that log
    • etc/jetty.xml – changed addBean “org.eclipse.jetty.deploy.WebAppDeployer”’s “extract” property to “false” so that it doesn’t extract the .war files
    • contexts/test.xml – changed <Set name="extractWAR">’s property to “false” so that it doesn’t extract the .war files as well

    The step above is optional, as you can also create a local storage resource and copy the jetty and jre directories over, then launch the JRE and Jetty using those files. But you’ll also need more code in the Worker Role to support that, as well as a different location from which to find the JRE and Jetty when launching the sub-process (the Tomcat Solution Accelerator used that approach).
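    For reference, a rough sketch of that alternative looks like the following: declare a local storage resource in ServiceDefinition.csdef (for example <LocalStorage name="JettyStorage" sizeInMB="512" />, where the name and size are just examples), then resolve its writable path at runtime and launch Jetty from there after copying the files over:

    using Microsoft.WindowsAzure.ServiceRuntime;

    // Resolve the writable, per-instance local storage declared in the service definition.
    LocalResource local = RoleEnvironment.GetLocalResource("JettyStorage");
    string runtimeRoot = local.RootPath;
    string writableJettyHome = System.IO.Path.Combine(runtimeRoot, "jetty7");

    // ... copy approot\app\jetty7 (and the JRE) into writableJettyHome, then point
    // -Djetty.home at this location so Jetty can extract .war files and write logs ...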

    Download the project

    The entire solution (2.3MB) including the Jetty assets as configured (but minus the ~80MB of JRE binaries), can be downloaded from Skydrive.

    Summary

    The approach I described in this post uses JRE and Jetty files simply as application assets, since from a development process perspective I was thinking that development and testing of the Java applications to be deployed in Jetty should be done outside of the Windows Azure environment, and Windows Azure can be viewed as the staging/production environment for the applications. From that perspective, we shouldn’t need to have hot deployment of applications into Jetty, and other things like log files written and data/content used by Jetty and the applications, should be persisted to and retrieved from Windows Azure Storage and/or SQL Azure services.

    The implementation is really just intended for deployment into the Windows Azure cloud environment (or the local dev fabric for testing purposes). For developing applications to run in Jetty, we can use any tools we prefer in the Java ecosystem – Eclipse, NetBeans, IntelliJ, Emacs, etc., and do all of the unit testing and packaging into supported forms (such as .war packages). That way, development efforts on the Java side can be just as productive, while the only work that we really need to do is a one-time integration and configuration of the staging/production runtime, if we choose to use Windows Azure as the cloud platform to host an application.

    Similarly, for developing and unit testing the C# managed code inside the Worker Role, we also don’t have to always work inside the Windows Azure environment. In fact, it may be quicker to have the same code written as a Console application and test it there first, then port it over to the Worker Role, as by that time we’d only need to be concerned with integrating with specific items inside the Windows Azure environment. And of course, we can refactor the code to be more parameter-driven so the Java integration code can be nicely decoupled from the Windows Azure integration code.

    Anyway, simple exercise done. Now we can try some more complicated things such as Geronimo, GlassFish, Hadoop, etc. Stay tuned for upcoming posts about these efforts.

    Lastly, these are the tools I used:

  • Architecture + Strategy

    Web 2.0 - A Platform Perspective

    • 10 Comments

    Background & Primer

    "Web as a Platform" has been a much discussed topic since Tim O'Reilly used it as a tagline in the first Web 2.0 conference back in October of 2004, then described in more detail in a 2005 article, and the subsequent "Mind Map" graphic:

    (the Web 2.0 “Mind Map” graphic)

    Since then, many interpretations of the "Web platform" have emerged, ranging from technical perspectives focused on tools such as AJAX, RSS, REST, SOAP, mashups, and composite applications; user-generated content and collective intelligence such as Wikipedia and YouTube; social bookmarking/syndication such as del.icio.us and Digg; to social networks such as Facebook, MySpace, etc. These are just a few examples; the list of sites, and categories of sites, that exemplify Web 2.0 principles has grown explosively in the past few years.

    Collectively, the rich cluster of "Web 2.0" sites on the internet forms a services foundation upon which applications and functionality can be built, without needing any additional dedicated infrastructure. This marks a significantly different approach from "Web 1.0" site implementations, where each organization had to procure dedicated hardware, software, hosting environments, etc. in order to provision a new application on the internet. As a result, the collection of cloud-based services forms a new kind of "platform" to create a new breed of applications.

    Understanding Web as a Platform

    Without making this yet another attempt at defining the specifics of Web 2.0 (or even Web 3.0, for that matter) and the internet platform, delegating that to those who focus on semantics, I think we can look at "Web as a Platform" in its broadest terms. That is, a platform that provides some sort of framework which allows people to build stuff upon it, while encapsulating (or hiding) some of the underlying complexities.

    But this doesn't point directly to technical solutions; it really encompasses many categories of "stuff" (such as media, social interactions, implicit relationships, semantic connections, monetization methods, etc.) that can be leveraged and implemented on the Web today. I liked how Fred Wilson said it:

    I believe the web is a platform. And that everything we need for an open ad market, or an open data architecture, or frankly most anything else, is available on the "web platform" today.

    So what can we do with the Web platform? There are many perspectives on this as well, such as Marc Andreessen's "layered" perspective:

    Level 1 - API access - Flickr, Delicious, Twitter, etc.
    Level 2 - API plug-in - Facebook
    Level 3 - Runtime environment - Ning, Salesforce.com, etc.

    And Alex Iskold's "building blocks" perspective:

    Storage Services - Amazon S3, GDrive, Windows Live Skydrive, etc.
    Messaging Services - Amazon Simple Queue Service, BizTalk Services, etc.
    Compute Services - Sun Grid
    Information Services - Amazon E-Commerce, Yahoo! Answers, Virtual Earth, etc.
    Search Services - Google Search API, Alexa Search Platform, Live Search, etc.
    Web 2.0 Services - del.icio.us, Flickr, Basecamp, etc.

    Again, without questioning the validity of these categorizations used (as there are lots of discussion about that as well), I think from a general sense, both perspectives are valid. I think that building blocks do exist, but at the same time, there are multiple layers of building blocks (or categories) in the Web platform.

    What this means is that building blocks in each layer can be utilized in various combinations/permutations to create the next layer up. These layers span between two extremes - information and people. The layers closer to information consist of Web application platforms as we know them today, such as ASP.NET, Silverlight, LAMP, Java, Ruby on Rails, etc., which require more expert knowledge in development and technology but appeal to a smaller part of the overall population. The layers closer to people are still being formed as we speak, but in general they rely on higher forms of abstraction that provide services closer to our lives, while enabling the broad reach of larger pools of audiences (consumerization and democratization of technology come to mind). And today we are seeing higher and higher layers of platforms being created that allow people to connect, to organize, to find and use resources, to be social, and to basically "live" on the Web.

    Of course, the word "platform" is being used very loosely today, and new "platforms" and layers of platforms are being created almost on a daily basis. Marshall Kirkpatrick took a real brief look at some of the most hyped new platforms today. For example, the most recent and significant incarnations of higher-level Web platforms are probably Facebook Platform and Google OpenSocial.

    From a platform layer perspective, the Facebook Platform and Google OpenSocial, even though aimed at doing different things (lots of debate on this too), are built on top of other existing layers. Applications built on top of the Facebook Platform use a combination of traditional Web app technologies like HTML, CSS, JavaScript, XML, etc., but their benefits are derived from the building blocks available on the Facebook Platform, in the form of mashups of external service building blocks, explicit foundation blocks (such as News Feeds, Status, Events, FBML, FQL, configuration and provisioning systems, etc.), and implicit foundation blocks (social graphs, a software distribution/dissemination channel, monetization, a 50+ million and still growing user base, etc.). A major characteristic of this platform is that it is very easy to develop against, which democratizes development and allows more and more people to participate in the social experience. In essence, this platform further narrows the gap between technology and people (and is thus categorized as a higher-layer platform). This has resulted in a wildly viral and vital platform that has accounted for more than 5,000 applications deployed today, and growing exponentially.

    From a higher level, it seems that a "Web OS" of some sort is starting to take shape, as we can draw many parallels to the layered, subsystem, and componentized approaches in modern computer operating system and software architectures. But I am not yet sure that it would be of value to try to apply traditional thinking in defining a "standard" Web platform stack, needlessly preempting more knowledgeable people and risking further fragmenting the evolution.

    In general though, today we can definitely see the Web maturing into a very viable platform. News such as Amazon S3 exceeding 99.99% uptime should remove most doubts about the reliability of cloud-based services. But I think it is a platform with a spectrum of choices (layers and building blocks) where people with different skillsets can look to leverage and add value. The choices available in the full spectrum are all relevant, despite some idealists' claim that newer and higher-level models (such as the higher layers of the platform used in the context of this post) will completely commoditize and subsume older and lower-level models. I tend to think that, while it is true that more and more attention will be focused on newer and higher-level models, we will continue to see lots of innovation on the lower-level layered platforms. We will just see more and more people involved in the overall ecosystem, with a large infusion of participants with non-technical skillsets increasingly involved at the higher levels. This, I think, is the true goal of Web 2.0: connecting people and democratizing/bridging the technology chasm.

    What's Next?

    It's always interesting to try to take a peek at what may be possible in the future.

    Democratization in software development - Recent advances in the Web platform (raising layers of abstraction), model-driven architectures, etc., will increasingly simplify software development efforts for the higher level platforms. Two very notable examples are Yahoo! Pipes and Microsoft Popfly.

    The Implicit Web - Increasing specialization in making sense of the dynamic aspects of user behaviors and activities in the online world. For example, search engines to finally grasp user intent (via click streams, combinational media consumption habits, etc.). This is also an area where the Facebook Platform may be able to glean from the reactions its applications can elicit from the members, based on the static social graphs.

    Privacy Controls - With so much attention on enabling the "read-write" Web, and increasing openness, a need for better privacy control will inevitably arise. Web idealists argue that traditional data silos (or intellectual property as we know it today) will need to be opened up and interoperate in the new world. Again, I believe a hybrid model somewhere between the two extremes (of fully open and completely closed architectures) usually works out better to the benefit of its users. From this perspective, yes, the highly protected enterprise data silos of today will need to open up, but just enough to add value for the users. To do that, some kind of interoperable privacy control is required.

    Ubiquitous Access w/ Rich User Experiences - A consistent and seamless experience for people accessing their information, applications, and services, across a full spectrum of connected devices and systems. At the same time, highly targeted user experiences implemented for the appropriate form factors are available to take advantage of the latest hardware and device innovations.

    There are many more, such as the data/semantic Web, evolutionary intelligence, changes in social trends, etc. It'll be interesting to see how things pan out in this space.



  • Architecture + Strategy

    Popfly as a Web Platform

    • 9 Comments

    popfly-small-logo

    Primer

    Microsoft Popfly (www.popfly.com), currently in beta since October 2007, is a web site and tool to help people create and share web sites, mashups, and other kinds of experiences.

    This service, in my opinion, is a really interesting and innovative product Microsoft has delivered this year. From an architect's perspective, Popfly can be considered as a Web platform, along with the many other interesting ones created this year, such as the Facebook Platform.

    Many people also saw Popfly's potential as a Web platform. For example, Mary Jo Foley correlated it to Yahoo! Pipes, Tom Foremski described how easy it is to build a Facebook app with Popfly, John Mullinax provided a business perspective on how to leverage Popfly, and Denny Boynton with some architectural thoughts.

    A Web Platform

    In an earlier blog post I talked about "Web as a Platform" (in Web 2.0's context) and briefly described a layered and componentized perspective in looking at the Web platform in general. Popfly fits in that perspective very well, and can be categorized into a composition tools layer that doesn't seem to have received a lot of attention from the general Web 2.0 community. Specifically, in the programmable Web aspect of Web 2.0, the focus has been on creating the APIs, frameworks, runtime environments, standards, etc. to facilitate the various kinds of applications and social interactions. But the tasks of developing these applications still rely on traditional code-based environments. Popfly represents a major innovation on the composition tool side, and does it in an elegant way that transformed the bootstrapping requirements of various kinds of services and APIs available in the cloud, into, literally, building blocks that people without any technical background can piece together (like LEGO!) and create all kinds of composite applications (or mashups). It also offers a provisioning and syndication system so these applications can be deployed (or embedded into web pages) anywhere on the Web (and coined the term "mashout").

    Popfly has been compared to Yahoo! Pipes, which provides a very elegant composition tool for aggregating and manipulating syndicated content (and a wickedly cool implementation of JavaScript in its development environment). It is a very powerful platform in terms of programmability in the context of mashing up data. Another is Google Mashup Editor, which is also a very powerful tool that helps people quickly create mashup applications. Without turning this into a comparison of the three tools, in general I think each provides a distinct value and meets different needs. For example, Yahoo! Pipes provides a graphical drag-and-drop development model for using syndicated data, and Google Mashup Editor provides a code environment particularly targeted at utilizing Google services and products; though the target audience for both of them tends to be developers.

    Popfly differs in its approach to democratize development by raising the level of abstraction and narrowing down options in block configurations. This greatly simplifies the process of piecing together building blocks, and it is this simplicity that offers Popfly's greatest advantage at making development social, and potentially more appealing to a wider audience.

    The public beta provides many kinds of building blocks - display, fun & games, images & video (media), local information, maps, news & RSS, shopping, social networks, tools (programming utilities such as RegExp), and others. These building blocks represent configurable components that map to many different kinds of cloud-based service APIs, such as Flickr, Facebook, Live Search, AOL Video Search, Yahoo! Videos, Virtual Earth, Yahoo! Traffic, Digg, Yahoo! News, Twitter, Technorati, etc.; the list goes on. The rich collection (and growing) of building blocks allows not just the mashup of functions and data, but also adding an interchangeable visualization and interaction layer to the applications.

    Popfly bootstrapped these cloud-based service APIs, and exposed their methods, input parameters, and results as configurable elements in each building block. In addition, Popfly also pre-defines and maintains compatible relationships between these APIs, so in many cases default configurations are sufficient for creating a mashup without requiring the user to perform any configuration changes. Simply dragging, dropping, and connecting the dots will do.

    Popfly itself is implemented using a combination of traditional Web application technologies (ASP.NET, AJAX, JavaScript, HTML, etc.) hosted in a highly available server infrastructure, and a Silverlight implementation of the in-browser development environment.

    The challenge for Popfly is reaching critical mass in adoption. Just like the Facebook Platform, which is really a software distribution platform that harnesses its power from Facebook's lively communities, Popfly can achieve similar goals if its adoption can be turned into a self-propelling virtuous cycle, where healthy growth in adoption can be facilitated.

    Thus Popfly really is a platform in the Web 2.0 world. It provides an environment where people without a significant technical background can build stuff, and it hides the complexities of the underlying infrastructure. It also embodies many of the Web 2.0 principles, such as enabling participation and harnessing collective intelligence, leveraging the long tail, lightweight development models, rich user experiences, etc. For businesses and organizations looking to open up their data and services, or to interact with user communities, participating in the Popfly ecosystem could be a simple way to enable viral adoption in the distribution channel (and, for some, to utilize the monetization methods).

    A 1-Minute Mashup Application

    To illustrate Popfly's simple elegance, I created a mashup between a Flickr picture set and a visualization block that uses Silverlight. A snapshot of the application in edit mode is shown below.

    (screenshot: the Flickr + Carousel mashup in Popfly's edit mode)

    Without going into a detailed step-by-step replay, all I did was drag/drop the Flickr block, configure it with the Flickr set ID that contains the pictures I want to use, drag/drop the Carousel block, then drag/drop a connector from the Flickr block to the Carousel block. Hooking up the output from Flickr with the input parameters in Carousel was done automatically and seamlessly. That's it! And the application is now ready to be deployed across the Web.

    The resulting mashup application is embedded below. I picked a presentation block that uses Silverlight, but there are blocks that are pure HTML and JavaScript too.

     


