• Interoperability @ Microsoft

    More news from MS Open Tech: announcing the open source Metro style theme for jQuery Mobile


    Starting today, the Metro style theme for jQuery Mobile, the popular open source mobile user interface framework, is available for download on GitHub and can be used as a NuGet package in Visual Studio.

    The theme enables HTML5 pages to adapt automatically to the Metro design style when rendered on Windows Phone 7.5. The theme is open source and available for download here. Its development was sponsored by Microsoft Open Technologies, Inc., working closely with Sergei Grebnov, an Apache Cordova committer and jQuery Mobile developer.

    The theme looks just gorgeous, doesn’t it?


    The CSS and JavaScript theme detects the current theme used in Windows Phone and applies the right styling to the jQuery Mobile controls. This allows mobile HTML5 web sites and hybrid applications to integrate naturally into the Windows Phone Metro style experience. Developers can rapidly integrate the theme into their existing applications, and they can also contribute to this open source project through GitHub.

    You can see an extensive demo of the theme on this page, and you can learn more on this site, where we are publishing new articles, references and source code samples for developing with Apache Cordova and the Metro style theme for jQuery Mobile.
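    As a rough sketch of how that adaptation could look from application code (the function name and swatch letters below are hypothetical illustrations, not the theme's actual API), a page script might map the phone's dark or light background onto a jQuery Mobile swatch letter:

```typescript
// Hypothetical mapping from the Windows Phone background setting to a
// jQuery Mobile theme swatch letter; the real theme handles this via CSS.
type PhoneBackground = "dark" | "light";

function metroSwatch(background: PhoneBackground): string {
  // Assume swatch "a" styles controls for the dark background
  // and swatch "b" for the light one.
  return background === "dark" ? "a" : "b";
}

// The result would typically be applied as a data-theme attribute,
// e.g. <div data-role="page" data-theme="a">.
```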

    This is another milestone in our continuous engagement with the community. Our team has been working closely with the Windows Phone division to support the mobile HTML5 and JavaScript open source communities over the last year to bring popular open source projects to Windows Phone:

    • A few months ago, we sponsored the development of full Windows Phone support for PhoneGap (now Apache Cordova), the open source framework that lets applications be built for iOS, Android, Windows Phone and other mobile platforms using HTML5, CSS and JavaScript.
    • At the same time significant improvements were brought to jQuery Mobile (read more about this in our previous blog post): feedback from the community has been great and was partly responsible for our decision to expand our engagement with jQuery Mobile and sponsor this work.

    We believe it is important for developers to have choices when targeting Windows Phone, and we also want them to be able to deliver a good experience to the users of their applications, especially when making the choice of using Web standards (HTML5, CSS and JavaScript) to target multiple mobile platforms by picking solutions such as Apache Cordova.

    To do so, developers already enjoy a selection of Apache Cordova plugins that give their applications a Windows Phone touch, such as Social Share, Bing Map launcher and Live Tile. Now developers can use the new open source Metro style theme for jQuery Mobile to give their mobile apps and websites the Metro style look and feel, and offer end users an experience similar to the one they get with native applications.

    As usual we are very interested in hearing from developers and gathering feedback about the experience of developing HTML5-based applications and websites on Windows Phone. Let us know what other features, tools and frameworks you’d like to see.

    Abu Obeida Bakhach
    Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Announcing one more way Microsoft will engage with the open source and standards communities


    I am really excited to be able to share with you today that Microsoft has announced a new wholly owned subsidiary known as Microsoft Open Technologies, Inc., to advance the company’s investment in openness – including interoperability, open standards and open source.

    My existing Interoperability Strategy team will form the nucleus of this new subsidiary, and I will serve as President of Microsoft Open Technologies, Inc.

    The team has worked closely with many business groups on numerous standards initiatives across Microsoft, including the W3C’s HTML5, IETF’s HTTP 2.0, cloud standards in DMTF and OASIS, and in many open source environments such as Node.js, MongoDB and Phonegap/Cordova.

    We help provide open source building blocks for interoperable cloud services and collaborate on cloud standards in DMTF and OASIS; support developer choice of programming languages to enable Node.js, PHP and Java in addition to .NET in Windows Azure; and work with the PhoneGap/Cordova and jQuery Mobile and other open source communities to support Windows Phone.

    It is important to note that Microsoft and our business groups will continue to engage with the open source and standards communities in a variety of ways, including working with many open source foundations such as Outercurve Foundation, the Apache Software Foundation and many standards organizations. Microsoft Open Technologies is further demonstration of Microsoft’s long-term commitment to interoperability, greater openness, and to working with open source communities.

    Today, thousands of open standards are supported by Microsoft, and many open source environments, including Linux, Hadoop, MongoDB, Drupal and Joomla, run on our platform.

    The subsidiary provides a new way of engaging in a more clearly defined manner. This new structure will help facilitate the interaction between Microsoft’s proprietary development processes and the company’s open innovation efforts and relationships with open source and open standards communities.

    This structure will make it easier and faster to iterate and release open source software, participate in existing open source efforts, and accept contributions from the community. Over time the community will see greater interaction with the open standards and open source worlds.

    As a result of these efforts, customers will have even greater choice and opportunity to bridge Microsoft and non-Microsoft technologies together in heterogeneous environments.

    I look forward to sharing more on all this in the months ahead, as well as to working not only with the existing open source developers and standards bodies we work with now, but with a range of new ones.



  • Interoperability @ Microsoft

    Sublime Text, Vi, Emacs: TypeScript enabled!


    TypeScript is a new open and interoperable language for application-scale JavaScript development, created by Microsoft and released as open source on CodePlex. You can learn about this typed superset of JavaScript that compiles to plain JavaScript by reading Soma’s blog.
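    As a quick illustration of what “typed superset of JavaScript” means: strip the interface and the type annotations from the snippet below, and what remains is plain JavaScript, which is essentially what the TypeScript compiler emits.

```typescript
// Optional static types: the compiler checks them, then erases them.
interface Greeter {
  name: string;
}

function greet(g: Greeter): string {
  return "Hello, " + g.name;
}

// greet({ name: "TypeScript" }) returns "Hello, TypeScript";
// passing { name: 42 } would be a compile-time error.
```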

    At Microsoft Open Technologies, Inc., we are thrilled that the discussion with the community on the language specification is now open: you can play with the bits (or, even better, start developing with TypeScript), read the specification and provide your feedback on the discussion forum. We also wanted to make it possible for developers to use their favorite editor to write TypeScript code, in addition to the TypeScript online playground and the Visual Studio plugin.

    Below you will find sample syntax files for Sublime Text, Vi and Emacs that will add syntax highlighting to the files with a .ts extension. We want to hear from you on where you think we should post these files for you to be able to optimize them and help us make your TypeScript programming an even greater experience, so please comment on this post or send us a message.


    TypeScript support for Sublime Text
    TypeScript support for Vi
    TypeScript support for Emacs

    Olivier Bloch
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    How to develop for Windows Phone 8 on your Mac


    UPDATE 01/07/13: added instructions to enable Hyper-V in Parallels Desktop VM


    Interested in developing apps for Windows Phone 8, but you are developing on a Mac? No problem: check out the guide below to find a variety of options.

    First you should consider whether to build native WP8 applications or Web applications. Native applications run directly on the phone platform and deliver advanced performance and a fully integrated experience to the end user. Web applications developed with HTML5 and JavaScript take advantage of the Web standards support of Internet Explorer 10 and the cross-platform nature of HTML5 applications. There is a lot of debate about which way to go, native app or Web app with HTML5, and I would say that the answer is… it depends. In this post, I will present the main options to go one way or the other, based on the assumption that you have a Mac and want to stick to it.

    WP8 application development on a Mac

    To build applications for Windows Phone, you need Visual Studio 2012 and the WP8 SDK. There is a free version that bundles these two and that allows you to do pretty much all you need to build and publish an application to the Windows Phone store:

    • Write and debug code in an advanced code editor
    • Compile to app package
    • Test the application in an emulator leveraging advanced features
    • Connect to and deploy on an actual device, with cross-debugging and performance analysis
    • … and these are only the basic features available, there are plenty more!

    Visual Studio 2012 runs on Windows 8 and Windows 7, but the Windows Phone emulator relies on Hyper-V, which is available only on 64-bit Windows 8. So basically, you need a 64-bit Windows 8 install if you want to leverage the emulator, and you need a way to have Hyper-V enabled in that install.

    Using a recent Macintosh, you have a couple of options to run Windows 8:

    1. Run Windows 8 on your Mac natively using Boot Camp
    2. Run Windows 8 in a virtual environment using software like VMware Fusion 5 or Parallels Desktop 8 for your Mac

    There is plenty of documentation online on how to set up the environments for both options to get Windows to run on your Mac, and you can also find details on MSDN here.

    Boot Camp

    If you want to go the Boot Camp way, once you have set up Windows 8, you can go ahead and follow the default instructions to download and install the WP8 SDK.

    VMWare Fusion 5 or Parallels Desktop 8

    If you want to use VMWare Fusion or Parallels and still be able to use the WP8 Emulator, here are the steps you need to follow:

    • Install VMware Fusion 5 or Parallels Desktop 8 if you don’t have it yet
    • Download the Windows 8 64-bit ISO:
      • You can find the evaluation version on the evaluation center here.
      • The retail version is a little trickier on a Mac, as there is no way to download the retail ISO directly. The trick is to install the evaluation version of Windows 8 in a VMware Fusion or Parallels VM following the instructions below. Then, from Windows 8, run the Windows 8 setup (a link is available in the first lines of the email you receive after purchasing Windows 8), which offers the option of downloading the retail ISO after you enter your product key, as described in this article.
    • Create a new VM setting up the below parameters:
      • On VMware Fusion 5:
        • ensure that you have the following settings (be sure to check the “Enable hypervisor applications in this Virtual machine” option):


        • Important:
          • Hyper-V requires at least 2 cores to be present.
          • The Windows Phone emulator will use 256MB or 512MB of virtual memory; therefore, don’t be shy with the memory assigned to the VM and assign at least 2 GB.
          • In the advanced settings, ensure you have selected “Preferred virtualization engine: Intel VT-x with EPT” option
        • Modify the .vmx file to add or modify the following settings:
          • hypervisor.cpuid.v0 = "FALSE"
          • mce.enable = "TRUE"
          • vhv.enable = "TRUE"
      • On Parallels Desktop 8:
        • Ensure that you have the following settings for the new VM (go into VM Settings>General>CPUs):


        • Still in settings, you need to enable virtualization pass-thru support in Options>Optimizations>Nested Virtualization



    • Install Windows 8 on your VMware Fusion or Parallels Desktop VM (you can find plenty of guides online on how to install a VM from an ISO)
    • Once Windows 8 is installed, download and install the WP8 SDK.


    The SDK install will set up the Hyper-V environment and everything you need to use the emulator within the VMware Fusion or Parallels Desktop image.


    You are now set to build, debug and test WP8 applications. You can start your development and debugging by leveraging the emulator and its tools, and you can also use an actual Windows Phone 8 device by plugging it into your Mac and setting things up so that the USB device shows up in the VM.

    You can find extensive information on how to use Visual Studio 2012 for Windows Phone 8 development, along with its emulator, and how to publish an application, get samples, as well as everything a developer needs here.

    WP8 Web applications development on a Mac

    Here we are talking about two different things:

    • Development for mobile websites that will render well in the Windows Phone 8 browser.
    • HTML5 application development using the Web Browser control hosted by a native application, a model used by frameworks and tools such as Apache Cordova (a.k.a. PhoneGap); these are also known as hybrid applications.

    Windows 8 offers a “native HTML5/JS” model that allows you to develop applications in HTML5 and JavaScript that will execute directly on top of the application platform, but we will not discuss this model here as Windows Phone 8 proposes a slightly different model for HTML5 and JS applications development.

    On Windows Phone 8, in both cases mentioned above, the HTML5/JavaScript/CSS code is rendered and executed by the same Internet Explorer 10 engine. This means that whether you are writing a mobile website or a PhoneGap-type application, you can work in your usual tool or editor all the way down to the debugging and testing phases.

    While you can do a lot of debugging of your HTML5/JS code in a Web browser, you will need to do actual tests and debugging on the actual platform (the WP8 emulator and/or an actual device). Even if you are using Web standards, you need to consider that the level of support might not be the same on all platforms. And if you are using third-party code, you also need to ensure that the code doesn’t contain platform-specific elements; for example, you need to get rid of any dependencies on WebKit specifics.
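    One way to avoid hard-coding WebKit-specific names, sketched below with a hypothetical helper, is to probe for whichever vendor variant of a CSS property the current engine actually supports (the style object is typed loosely here so the logic can be exercised outside a browser):

```typescript
// Return the supported name of a CSS property on a style object, trying the
// unprefixed form first and then the common vendor-prefixed variants.
function supportedStyleProperty(
  style: Record<string, unknown>,
  property: string
): string | null {
  const capitalized = property.charAt(0).toUpperCase() + property.slice(1);
  const candidates = [
    property,
    "webkit" + capitalized,
    "ms" + capitalized,
    "Moz" + capitalized,
    "O" + capitalized,
  ];
  for (const candidate of candidates) {
    if (candidate in style) {
      return candidate;
    }
  }
  return null;
}

// In a browser you would pass an element's style object,
// e.g. supportedStyleProperty(document.body.style, "transform").
```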

    Making sure your Web code is not platform specific

    When writing this code, you need to consider the various platforms on which your mobile Web application will be used. Obviously, the fewer platform specifics there are, the better for you as a developer! The good news is that HTML5 support is getting better and better across modern mobile browsers. IE10 in Windows Phone 8 is no exception, and brings extended standards support, hardware acceleration and great rendering performance. You can take a look at the following site directly from your Windows Phone 8 device to check that out: www.atari.com/arcade


    To learn more on how to make sure your mobile Web code will render well on Internet Explorer 10 on Windows Phone 8 as well as on other modern mobile browsers, you can read this article.

    Testing and debugging your Web application for WP8 on a Mac

    Once you have clean HTML5 code that runs and renders well in a Web browser, you will need to test it on IE10 on a Windows Phone 8 device or emulator.

    Desktop IE10 includes powerful debugging tools (“F12”), which is not the case on Windows Phone 8. One of the recommended approaches is to leverage the “F12” debugging capabilities of desktop IE10, which cover most if not all of the debugging and testing cases for a mobile Web application for Windows Phone 8. On a Mac, you will need to look into the options for installing a Windows 8 virtual machine mentioned at the beginning of this article, and load your code in Internet Explorer 10 within Windows 8. Once IE is launched, press the "F12" key or go to the settings menu and select “F12 Developer tools.”


    In the debugging tool at the bottom, you can then change the user agent string and the resolution from the “Tools” menu to match what IE10 on Windows Phone 8 exposes.


    Once you have done these tests on Internet Explorer 10 desktop, you can deploy and test on an actual Windows Phone 8 device or on the emulator (see previous chapters on how to set things up to make the emulator work on a Mac).

    Now what?

    With these steps you should be set to start developing and deploying Windows Phone 8 applications from your Mac.

    But there are certainly other tips and tricks that you have figured out or already know. We would love to hear from you to make this post even more useful for developers wishing to expand their reach to the Windows Phone 8 platform. Do not hesitate to comment on this post with your suggestions, ideas and tips.

  • Interoperability @ Microsoft

    Roadmap for Outlook Personal Folders (.pst) Documentation


    By Paul Lorimer, Group Manager, Microsoft Office Interoperability

    [UPDATE: 05/24/2010, Two open source projects to facilitate interoperability with Outlook .pst data files]
    [UPDATE: 02/20/2010, New Office Documentation Now Publicly Available]

    Data portability has become an increasing need for our customers and partners as more information is stored and shared in digital formats. One scenario that has come up recently is how to further improve platform-independent access to email, calendar, contacts, and other data generated by Microsoft Outlook.

    On desktops, this data is stored in Outlook Personal Folders, in a format called a .pst file. Developers can already access the data stored in the .pst file, using Messaging API (MAPI) and the Outlook Object Model—a rich set of connections to all of the data stored by Outlook and Exchange Server—but only if Outlook is installed on the desktop.

    In order to facilitate interoperability and enable customers and vendors to access the data in .pst files on a variety of platforms, we will be releasing documentation for the .pst file format. This will allow developers to read, create, and interoperate with the data in .pst files in server and client scenarios using the programming language and platform of their choice. The technical documentation will detail how the data is stored, along with guidance for accessing that data from other software applications. It also will highlight the structure of the .pst file, provide details like how to navigate the folder hierarchy, and explain how to access the individual data objects and properties.

    This documentation is still in its early stages and work is ongoing. We are engaging directly with industry experts and interested customers to gather feedback on the quality of the technical documentation to ensure that it is clear and useful. When it is complete, it will be released under our Open Specification Promise, which will allow anyone to implement the .pst file format on any platform and in any tool, without concerns about patents, and without the need to contact Microsoft in any way.

    Designing our high volume products to enable such data portability is a key commitment under our Interoperability Principles, which we announced in early 2008. We support this commitment through our product features, documented formats, and implementation of standards. The move to open up the portability of data in .pst files is another step in putting these principles in action.

    Over the past year, Microsoft Office has taken several steps toward increased openness and document interoperability. We’re proud of the work we’ve done around document interoperability, offering customers a choice of file formats and embracing a comprehensive approach that includes transparency into our engineering methods, collaboration with industry stakeholders, and shared stewardship of industry standards.

    We’re excited about the possibilities created for our customers and partners by this kind of effort, and we look forward to continued collaboration with the industry in our pursuit of improved interoperability with Microsoft Office. Stay tuned.

    Paul Lorimer, Group Manager, Microsoft Office Interoperability.


  • Interoperability @ Microsoft

    Greater Interoperability for Windows Customers With HTML5 Video


    Google recently announced that its Chrome web browser will stop supporting the H.264 video format. At Microsoft we respect that Windows customers want the best experience of the web including the ability to enjoy the widest range of content available on the Internet in H.264 format.

    Today, as part of the interoperability bridges work we do on this team, we are making available the Windows Media Player HTML5 Extension for Chrome, which is an extension for Google Chrome to enable Windows 7 customers who use Chrome to continue to play H.264 video.

    We believe that Windows customers should be able to play mainstream HTML5 video and, as we’ve described in previous posts, Internet Explorer 9 will support playback of H.264 video as well as VP8 video when the user has installed a VP8 codec.

    We are committed to ensuring that Windows customers have the best Web experience, and we have been offering for several years now the extremely popular Windows Media Player plug-in for Firefox, which is downloaded by millions of people a month who want to watch Windows Media content.

    We also recently provided an add-on for Windows 7 customers who choose Firefox to play H.264 video so as to enable interoperability across IE, Firefox and Chrome using HTML5 video on Windows.

    For many reasons, which you can read about in other blog posts here and here, H.264 is an excellent and widely used video format that serves the web very well today. As such, we will continue to ensure that developers and customers have an optimal Web experience.

    Claudio Caldato,

    Principal Program Manager, Interoperability Strategy Team

  • Interoperability @ Microsoft

    Using LucidWorks on Windows Azure (Part 1 of a multi-part MS Open Tech series)


    LucidWorks Search on Windows Azure delivers a high-performance search service based on Apache Lucene/Solr open source indexing and search technology. This service enables quick and easy provisioning of Lucene/Solr search functionality on Windows Azure without any need to manage and operate Lucene/Solr servers, and it supports pre-built connectors for various types of enterprise data, structured data, unstructured data and web sites.

    In June, we shared an overview of the LucidWorks Search service for Windows Azure. For this post, the first in a series, we’ll cover a few of the concepts you need to know to get the most out of the LucidWorks search service on Windows Azure. In future posts we’ll show you how to set up a LucidWorks service on Windows Azure and demonstrate how to integrate search with Web sites, unstructured data and structured data.

    Options for Developers

    Developers can add search to their existing Web Sites, or create a new Windows Azure Web site with search as a central function.  For example, in future posts in this series, we’ll create a simple Windows Azure web site that will use the LucidWorks search service to index and search the contents of other Web sites.  Then we’ll enable search from the same demo Web site against a set of unstructured data and MySQL structured data in other locations.

    Overview:  Documents, Fields, and Collections

    LucidWorks creates an index of unstructured and structured data.  Any individual item that is indexed and/or searched is called a Document.  Documents can be a row in a structured data source or a file in an unstructured data source, or anything else that Solr/Lucene understands.

    An individual item in a Document is called a Field.  Same concept – fields can be columns of data in a structured source or a word in an unstructured source, or anything in between.  Fields are generally atomic, in other words they cannot be broken down into smaller items.

    LucidWorks calls groups of Documents that can be managed and searched independently of each other Collections. Searching, by default is on one collection at a time, but of course programmatically a developer can create search functionality that returns results for more than one Collection.
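    As a hedged sketch of what searching a Collection over HTTP could look like (the /solr/&lt;collection&gt;/select path follows common Solr conventions; the actual endpoint of a LucidWorks on Windows Azure service may differ), a client might build the request URL like this:

```typescript
// Build a Solr-style select URL for a given collection and query string.
// baseUrl is a placeholder for wherever your search service is hosted.
function buildSelectUrl(
  baseUrl: string,
  collection: string,
  query: string
): string {
  return (
    baseUrl +
    "/solr/" +
    encodeURIComponent(collection) +
    "/select?q=" +
    encodeURIComponent(query) +
    "&wt=json"
  );
}

// buildSelectUrl("http://example.cloudapp.net", "web", "azure")
// yields "http://example.cloudapp.net/solr/web/select?q=azure&wt=json"
```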

    Security via Collections and Filters

    Collections are a great way to restrict access for a group of users, controlled both by access to Windows Azure Web sites and by LucidWorks.  In addition, LucidWorks admins can create Filters inside a Collection.  User identity can be integrated with an existing LDAP directory, or managed programmatically via API.

    Additional LucidWorks Features

    LucidWorks adds value to Solr/Lucene with some very useful UI enhancements that can be enabled without programming. 

    Persistent Queries and Alerts, Auto-complete, spellcheck and similar terms.

    Users can create their own persistent queries.  Search terms are automatically monitored, and alerts are delivered to a specified email address using the name of the alert as the subject line. You can also specify how often the persistent query should check for new data and how often alerts are generated.

    Search term Typeahead can be enabled via LucidWorks’ auto-complete functionality. Auto-complete tracks the characters the user has already entered and displays terms that start with those characters.
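    The behavior described above can be sketched in a few lines (a toy illustration of prefix matching, not the LucidWorks implementation):

```typescript
// Return the indexed terms that start with the characters typed so far,
// ignoring case, the way an auto-complete dropdown would.
function autoComplete(terms: string[], typed: string): string[] {
  const prefix = typed.toLowerCase();
  return terms.filter((t) => t.toLowerCase().indexOf(prefix) === 0);
}

// autoComplete(["Azure", "Apache", "Solr"], "a") returns ["Azure", "Apache"]
```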

    When results are displayed, LucidWorks can spell-check queries and offer alternative terms based on similar spellings of words and synonyms in the query.


    Search engines use stopwords to remove common words, like “a”, “and”, or “for”, from queries and query indexes, since they add no value to searches.   LucidWorks has an editable list of stopwords that is a great start to increasing search relevance.

    Increasing Relevance with Click Scoring

    Click scoring tracks common queries and their results, records which results are most often selected for given query terms, and scores relevance based on that comparison.  Results with a higher relevance are placed higher in search result rankings, based on actual user activity.
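    A toy model of the idea (not the actual LucidWorks scoring algorithm) is to count how often each result is clicked for a query and rank by that count:

```typescript
// Rank results for a query by how often each has been clicked, most-clicked
// first; results never clicked keep their original relative order.
function rankByClicks(
  results: string[],
  clicks: Record<string, number>
): string[] {
  return results
    .slice()
    .sort((a, b) => (clicks[b] || 0) - (clicks[a] || 0));
}

// rankByClicks(["docs", "blog", "forum"], { blog: 5, forum: 2 })
// returns ["blog", "forum", "docs"]
```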

    LucidWorks on Windows Azure – Easy Deployment

    The best part of LucidWorks is how easily Enterprise Search can be added as a service.  In our next LucidWorks blog post we’ll cover how to quickly get up and running with Enterprise search by adding a LucidWorks service to an existing Windows Azure Web site. 

  • Interoperability @ Microsoft

    Speed and Mobility: An Approach for HTTP 2.0 to Make Mobile Apps and the Web Faster


    This week, face-to-face meetings begin at the IETF on how to approach HTTP 2.0 and improve the Internet. How the industry moves forward together on the next version of HTTP – the protocol through which every application and service on the web communicates today – can positively impact user experience, operational and environmental costs, and even the battery life of the devices you carry around.

    As part of this discussion of HTTP 2.0, Microsoft will submit to the IETF a proposal for “HTTP Speed+Mobility." The approach we propose focuses on all the web’s end users – emphasizing performance improvements and security while at the same time accounting for the important needs of mobile devices and applications.

    Why HTTP 2.0?

    Today’s HTTP has historical limitations based on what used to be good enough for the web. Because of this, the HTTPbis working group in the Internet Engineering Task Force (IETF) has approved a new charter to define HTTP “2.0” to address performance limitations with HTTP. The working group’s explicit goal is to keep compatibility with existing applications and scenarios, specifically to preserve the existing semantics of HTTP.

    Why this approach?

    Improving HTTP starts with speed. There is already broad consensus about the need to make web browsing much faster.

    We think that apps—not just browsers—should get faster too. More and more, apps are how people access web services, in addition to their browser.

    Improving HTTP should also make mobile better. For example, people want their mobile devices to have better battery life. HTTP 2.0 can help decrease the power consumption of network access. Mobile devices also give people a choice of networks with different costs and bandwidth limits. Embedded sensors and clients face similar issues. HTTP 2.0 can make this better.

    This approach includes keeping people and their apps in control of network access. Specifically, the client remains in control over the content that it receives from the web. This extends a key attribute of the existing HTTP protocol that has served the Web well. The app or browser is in the best position to assess what the user is currently doing and what data is already locally available. This approach enables apps and browsers to innovate more freely, delivering the most relevant content to the user based on the user’s actual needs.

    We think that rapid adoption of HTTP 2.0 is important. To make that happen, HTTP 2.0 needs to retain as much compatibility as possible with the existing Web infrastructure. Awareness of HTTP is built into nearly every switch, router, proxy, load balancer, and security system in use today. If the new protocol is “HTTP” in name only, upgrading all of this infrastructure would take too long. By building on existing web standards, the community can set HTTP 2.0 up for rapid adoption throughout the web.

    Done right, HTTP 2.0 can help people connect their devices and applications to the Internet fast, reliably, and securely over a number of diverse networks, with great battery life and low cost.


    The HTTP Speed+Mobility proposal starts from both the Google SPDY protocol (a separate submission to the IETF for this discussion) and the work the industry has done around WebSockets.

    SPDY has done a great job raising awareness of web performance and taking a “clean slate” approach to improving HTTP to make the Web faster. The main departures from SPDY are to address the needs of mobile devices and applications.

    Looking ahead

    We are looking forward to a vigorous, open discussion within the IETF around the design of HTTP 2.0. We are excited by the promise of an HTTP 2.0 that will serve the Internet for decades to come. As the effort progresses, we will continue to provide updates on this blog. Consistent with our other web standards engagements, we will also provide early implementations of the HTTP 2.0 specification on the HTML5 Labs site.

    - Sandeep Singhal, Group Program Manager, Windows Core Networking

    - Jean Paoli, General Manager, Interoperability Strategy

  • Interoperability @ Microsoft

    Azure + Java = Cloud Interop: New Channel 9 Video with GigaSpaces Posted


    Today Microsoft is hosting the Learn Windows Azure broadcast event to demonstrate how easy it is for developers to get started with Windows Azure. Senior Microsoft executives like Scott Guthrie, Dave Campbell, Mark Russinovich and others will show how easy it is to build scalable cloud applications using Visual Studio.  The event is being broadcast live and will also be available on demand.

    For Java developers interested in using Windows Azure, one particularly interesting segment of the day is a new Channel 9 video with GigaSpaces. Their Cloudify offering helps Java developers easily move their applications to Windows Azure, without any code or architecture changes.

This broadcast follows yesterday’s updates to Windows Azure around an improved developer experience, interoperability, and scalability. A significant part of that was an update on a wide range of Open Source developments on Windows Azure, the latest incremental improvements delivering on our commitment to working with developer communities so that they can build applications on Windows Azure using the languages and frameworks they already know.

    We understand that developers want to use the tools that best fit their experience, skills, and application requirements, and our goal is to enable that choice. In keeping with that, we are extremely happy to be delivering new and improved experiences for popular OSS technologies such as Node.js, MongoDB, Hadoop, Solr and Memcached on Windows Azure.

    You can find all the details on the full Windows Azure news here, and more information on the Open Source updates here.

  • Interoperability @ Microsoft

    OpenXML Document Viewer v1 released: viewing .DOCX files as HTML


[05/18 Update: this translator was highlighted at today's Document Interoperability Initiative (DII) event in London.]

The OpenXML Document Viewer project idea came from discussions with the participants of the Document Interoperability Initiative (DII) workshops (in particular last year’s Cambridge event). The goal was to find a simple way to view Open XML files as HTML. Following up, Microsoft provided funding to start the Open XML Viewer project, an open source project developed by MindTree Limited. The first beta version was unveiled at the last DII in Brussels, giving a first peek of the viewer (see a demo here).

Today I’m excited to announce version 1.0 of the Open XML Document Viewer. It provides direct translation of Open XML documents (.DOCX) to HTML, enabling access to information in the Open XML format from any platform with a Web browser. The project, which already includes plug-ins for Firefox, Internet Explorer 7 and 8, and now also offers a plug-in for Opera, allows users to view Open XML documents (.DOCX) within the browser on Windows and Linux platforms without needing to install Microsoft Office or other productivity products.
    Check out the demo my colleague Jean-Christophe Cimetiere has recorded to see the Open XML Document Viewer in action from the end user perspective:

    For more detail on the supported features go visit the project site http://www.openxmlviewer.com 

    In principle, the functionality of the viewer is simply to translate OpenXML files into HTML for direct consumption in a web browser.

    Here’s a scenario (the sample document is attached):

    · You have an Open XML document (.DOCX). Let’s view it in Office Word 2007 first:


    · Then, let’s say you email this file to your friend who’s using OpenSUSE Linux. Your friend saves the document on the desktop and drags & drops it into the Opera browser:


    · The Open XML Document Viewer kicks off and creates the HTML that’s displayed by the browser:


The experience is similar with Firefox on Linux, and with Internet Explorer 7/8, Firefox 3.0.x, and Opera 9.x on Windows:


    Next let’s examine the high level architecture:


    The core of the project is the Translation Engine that does most of the work, meaning opening the .DOCX document, reading, mapping and transforming to HTML. The Translation engine is exposed as a client side browser plug-in with support for Firefox, Opera, and Internet Explorer, and as a cross platform command line translator for use in server side applications.
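To make the mapping idea concrete, here is a toy sketch in JavaScript of the kind of transformation the Translation Engine performs. The data model and function names are invented for illustration; the real engine works on the full WordprocessingML markup inside the .DOCX package and handles styles, images, tables and much more.

```javascript
// Toy sketch (not the real Translation Engine): map a simplified
// in-memory model of WordprocessingML paragraphs and runs to HTML.
// Real .DOCX handling also involves unzipping the package and
// resolving styles, which is omitted here.
function runToHtml(run) {
  let html = run.text
    .replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  if (run.bold) html = "<b>" + html + "</b>";   // maps w:b
  if (run.italic) html = "<i>" + html + "</i>"; // maps w:i
  return html;
}

function paragraphsToHtml(paragraphs) {
  // Each w:p paragraph becomes a <p>, each w:r run becomes inline text.
  return paragraphs
    .map(p => "<p>" + p.runs.map(runToHtml).join("") + "</p>")
    .join("\n");
}

const doc = [
  { runs: [{ text: "Hello, ", bold: false }, { text: "world", bold: true }] }
];
console.log(paragraphsToHtml(doc)); // <p>Hello, <b>world</b></p>
```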

    The result is a translator that enables Open XML document (.DOCX) visibility within browser applications without the use of any of the usual office productivity or word processing applications, across multiple platforms and environments, as either a server side application or as a client side end user solution. Developers, Independent Software Vendors (ISVs), Solutions Integrators & Mobile Solution providers can use these tools to enable their customers to view Open XML documents on heterogeneous platforms and browser applications. Be sure to check out the Demo web site. It showcases server side document processing scenarios that represent very typical use cases.

    We’re very excited with this new version and look forward to your feedback.

    Join us at http://www.codeplex.com/OpenXMLViewer

    Sumit Chawla, Technical PM/Architect, Microsoft Interoperability Team

  • Interoperability @ Microsoft

    Katana 2 is Available!


Microsoft Open Technologies, Inc. is pleased to share the news of a new version of Katana: Katana 2.0.0!

    Recent Updates

We’ve mentioned a few developments over the last couple of months leading up to this release – specifically, as part of updates to the MS Open Tech Hub projects, the new MVC 5 security feature based on OWIN authentication middleware was provided by the Katana team. Also, the new server implementation for the IETF HTTP/2.0 draft announced earlier this month made several new endpoints available using Katana server components.

    There has also been news coverage of OWIN and Katana on ASP.NET and Michael Desmond at MSDN Magazine provides a good overview as well. You can also see a great video overview of Katana via Web Camps TV on Channel 9.

    Introduction to Katana

Katana creates a server implementation of the work done in the independent open source project called the Open Web Interface for .NET (OWIN). OWIN defines interactions between Web servers and application components. The vision for Katana is a broad and vibrant ecosystem of Microsoft .NET Framework-based Web servers and application components. Katana adds some of these OWIN-based capabilities with built-in bindings to frameworks such as SignalR and the ASP.NET Web API.

Developers are able to pick and choose the features they want to use in their applications by selecting middleware components and installing them into their project via NuGet. Katana middleware are then added to an application pipeline where they can handle incoming requests. Steps for adding packages and configuring the pipeline are documented here.
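To illustrate the pipeline idea, here is a conceptual sketch in plain JavaScript (not the Katana/OWIN API, which is .NET-based): each middleware component receives the request environment and either handles it or passes it on to the next component.

```javascript
// Conceptual sketch only: OWIN/Katana middleware form a pipeline where
// each component sees the request environment and either handles it or
// passes it on. Plain JavaScript to show the shape, not the Katana API.
function buildPipeline(middleware) {
  // Compose right-to-left so middleware[0] runs first.
  return middleware.reduceRight(
    (next, mw) => env => mw(env, next),
    env => { env.status = 404; } // terminal "app" if nothing handles it
  );
}

const logger = (env, next) => { env.log = ["saw " + env.path]; next(env); };
const hello  = (env, next) => {
  if (env.path === "/hello") { env.status = 200; env.body = "Hello OWIN"; }
  else next(env);
};

const app = buildPipeline([logger, hello]);
const env = { path: "/hello" };
app(env);
// env.status is now 200 and env.body is "Hello OWIN"
```

Adding or removing a component is just editing the array passed to `buildPipeline`, which mirrors the NuGet pick-and-choose model described above.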

This model reduces interactions between Web servers and local applications to a very simple, portable interface. A great overview of the features and syntax is in this ASP.NET article - Getting Started with the Katana Project - and more details for developers are in this White Paper.

    Features of Katana 2.0

Katana 2.0.0 has a number of features of note for IT pros and developers. A full list is in the project Roadmap, but here are some of the highlights:

    Build your own OWIN Server: Microsoft.Owin.Hosting provides default services and helper types for building your own OWIN-compatible host and Microsoft.Owin provides a framework-agnostic set of types for working with HTTP and Web socket requests and responses. Also Microsoft.Owin.Host.HttpListener provides a non-IIS HTTP server to OWIN applications, which can be hosted on a Katana or a custom host application (such as a console application or Windows service).

    Basic Diagnostics for OWIN: Microsoft.Owin.Diagnostics - Middleware components that provide some rudimentary tracing and diagnostics capabilities, as well as a default startup page.

Authentication: options for authentication, including cookie-based forms authentication, OAuth2, and Facebook, Google, Twitter and Microsoft Live accounts.

    Running and Debugging in Visual Studio: The OwinHost NuGet package gives developers the ability to have a complete F5 experience in Visual Studio 2013 using its new custom server capability.

    Get Involved

Our community is small, but dedicated and growing! Add your name to our CodePlex project, join in on our discussions, and contribute via our new contributor repository on GitHub. See you there!

  • Interoperability @ Microsoft

    Open Source OData Tools for MySQL and PHP Developers


To enable more interoperability scenarios, Microsoft today released two open source tools that provide support for the Open Data Protocol (OData) for PHP and MySQL developers working on any platform.

    The growing popularity of OData is creating new opportunities for developers working with a wide variety of platforms and languages. An ever increasing number of data sources are being exposed as OData producers, and a variety of OData consumers can be used to query these data sources via OData’s simple REST API.
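As a concrete illustration of that REST API, here is a small JavaScript sketch of how a consumer might compose an OData query URI from system query options. The service root and entity set names are made up for the example.

```javascript
// Illustrative only: an OData consumer queries a producer by composing
// system query options ($filter, $orderby, $top, ...) onto the entity
// set URI. The service URL below is a made-up example, not a real endpoint.
function buildODataQuery(serviceRoot, entitySet, options) {
  const pairs = Object.entries(options)
    .map(([k, v]) => "$" + k + "=" + encodeURIComponent(v));
  return serviceRoot + "/" + entitySet +
         (pairs.length ? "?" + pairs.join("&") : "");
}

const url = buildODataQuery("http://example.com/odata.svc", "Products", {
  filter: "Price gt 10",
  orderby: "Name asc",
  top: 5
});
// http://example.com/odata.svc/Products?$filter=Price%20gt%2010&$orderby=Name%20asc&$top=5
```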

    In this post, we’ll take a look at the latest releases of two open source tools that help PHP developers implement OData producer support quickly and easily on Windows and Linux platforms:

    • The OData Producer Library for PHP, an open source server library that helps PHP developers expose data sources for querying via OData. (This is essentially a PHP port of certain aspects of the OData functionality found in System.Data.Services.)
    • The OData Connector for MySQL, an open source command-line tool that generates an implementation of the OData Producer Library for PHP from a specified MySQL database.

    These tools are written in platform-agnostic PHP, with no dependencies on .NET.

    OData Producer Library for PHP


Last September, my colleague Claudio Caldato announced the first release of the OData Producer Library for PHP, an open source, cross-platform PHP library available on CodePlex. This library has evolved in response to community feedback, and the latest build (Version 1.1) includes performance optimizations, finer-grained control of data query behavior, and comprehensive documentation.

    OData can be used with any data source described by an Entity Data Model (EDM). The structure of relational databases, XML files, spreadsheets, and many other data sources can be mapped to an EDM, and that mapping takes the form of a set of metadata to describe the entities, associations and properties of the data source. The details of EDM are beyond the scope of this blog, but if you’re curious here’s a simple example of how EDM can be used to build a conceptual model of a data source.

    The OData Producer Library for PHP is essentially an open source reference implementation of OData-relevant parts of the .NET framework’s System.Data.Services namespace, allowing developers on non-.NET platforms to more easily build OData providers. To use it, you define your data source through the IDataServiceMetadataProvider (IDSMP) interface, and then you can define an associated implementation of the IDataServiceQueryProvider (IDSQP) interface to retrieve data for OData queries. If your data source contains binary objects, you can also implement the optional IDataServiceStreamProvider interface to handle streaming of blobs such as media files.

    Once you’ve deployed your implementation, the flow of processing an OData client request is as follows:

    1. The OData server receives the submitted request, which includes the URI to the target resource and may also include $filter, $orderby, $expand and $skiptoken clauses to be applied to the target resource.
    2. The OData server parses and validates the headers associated with the request.
    3. The OData server parses the URI to the target resource, parses the query options to check their syntax, and verifies that the current service configuration allows access to the specified resource.
    4. Once all of the above steps are completed, the OData Producer for PHP library code is ready to process the request via your custom IDataServiceQueryProvider and return the results to the client.

These processing steps are the same in .NET as they are in the OData Producer Library for PHP, but in the .NET implementation a LINQ query is generated from the parsed request. PHP doesn’t have support for LINQ, so the producer instead provides hooks that, by default, generate a PHP expression from the parsed expression tree. In the case of a MySQL data source, for example, a MySQL query expression would be generated.
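The idea behind those hooks can be sketched as a walk over the parsed expression tree. The following JavaScript is purely illustrative (the real library is PHP and defines its own expression classes), but it shows how a $filter tree maps to a SQL-style predicate:

```javascript
// Sketch of the idea behind the producer's expression hooks: walk a
// parsed $filter expression tree and emit a SQL-style predicate.
// The node shapes here are invented for illustration.
const ops = { eq: "=", ne: "<>", gt: ">", lt: "<", and: "AND", or: "OR" };

function toSql(node) {
  switch (node.kind) {
    case "binary":   // e.g. eq, gt, and, or
      return "(" + toSql(node.left) + " " + ops[node.op] + " " +
             toSql(node.right) + ")";
    case "property": // a column reference
      return node.name;
    case "literal":  // quote strings, pass numbers through
      return typeof node.value === "string"
        ? "'" + node.value + "'" : String(node.value);
  }
}

// Tree for: $filter=Price gt 10 and Category eq 'Books'
const tree = {
  kind: "binary", op: "and",
  left:  { kind: "binary", op: "gt",
           left: { kind: "property", name: "Price" },
           right: { kind: "literal", value: 10 } },
  right: { kind: "binary", op: "eq",
           left: { kind: "property", name: "Category" },
           right: { kind: "literal", value: "Books" } }
};
console.log(toSql(tree)); // ((Price > 10) AND (Category = 'Books'))
```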

The net result is that PHP developers can offer the same querying functionality on Linux and other platforms as a .NET developer can offer through System.Data.Services. Here are a few other details worth noting:

    • In C#/.NET, the System.Linq.Expressions namespace contains classes for building expression trees, and the OData Producer Library for PHP has its own classes for this purpose.
    • The IDSQP interface in the OData Producer Library for PHP differs slightly from .NET’s IDSQP interface (due to the lack of support for LINQ in PHP).
    • System.Data.Services uses WCF to host the OData provider service, whereas the OData Producer Library for PHP uses a web server (IIS or Apache) and urlrewrite to host the service.
    • The design of the Writer (which serializes the returned query results) is the same for both .NET and PHP, allowing serialization of either .NET objects or PHP objects as Atom/JSON.

For a deeper look at some of the technical details, check out Anu Chandy’s blog post on the OData Producer Library for PHP or see the OData Producer for PHP documentation available on CodePlex.

    OData Connector for MySQL

    The OData Producer for PHP can be used to expose any type of data source via OData, and one of the most popular data sources for PHP developers is MySQL. A new code generator tool, the open source OData Connector for MySQL, is now available to help PHP developers implement OData producer support for MySQL databases quickly and simply.

    The OData Connector for MySQL generates code to implement the interfaces necessary to create an OData feed for a MySQL database. The syntax for using the connector is simple and straightforward:

    php MySQLConnector.php /db=mysqldb_name /srvc=odata_service_name /u=db_user_name /pw=db_password /h=db_host_name

The MySQLConnector generates an EDMX file containing metadata that describes the data source, and then asks the user whether to continue with code generation or stop to allow manual editing of the metadata before the code generation step.

    EDMX is the Entity Data Model XML format, and an EDMX file contains a conceptual model, a storage model, and the mapping between those models. In order to generate an EDMX from a MySQL database, the OData Connector for MySQL needs to be able to do database schema introspection, and it does this through the Doctrine DBAL (Database Abstraction Layer). You don’t need to understand the details of EDMX in order to use the OData Connector for MySQL, but if you’re curious see the .edmx File Overview article on MSDN.
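As a rough illustration of what that introspection step produces, here is a JavaScript sketch that turns invented table metadata into a minimal EDM entity definition. The real connector emits a complete EDMX with conceptual and storage models plus the mapping between them; this shows only the flavor of the conceptual-model output.

```javascript
// Toy illustration of the connector's first step: turn introspected
// table metadata into a minimal EDM entity definition. The table shape
// below is invented for the example.
function tableToEntityXml(table) {
  const props = table.columns
    .map(c => `  <Property Name="${c.name}" Type="${c.edmType}"` +
              (c.nullable === false ? ' Nullable="false"' : "") + " />")
    .join("\n");
  return `<EntityType Name="${table.name}">\n` +
         `  <Key><PropertyRef Name="${table.key}" /></Key>\n` +
         props + "\n</EntityType>";
}

const customers = {
  name: "Customer", key: "Id",
  columns: [
    { name: "Id", edmType: "Edm.Int32", nullable: false },
    { name: "Name", edmType: "Edm.String" }
  ]
};
console.log(tableToEntityXml(customers));
```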

    If you’re familiar with EDMX and wish to have very fine-grained control of the exposed OData feeds, you can edit the metadata as shown in the diagram, but this step is not necessary. You can also set access rights for specific entities in the DataService::InitializeService method after the code has been generated, as described below.

    If you stopped the process to edit the EDMX, one additional command is needed to complete the generation of code for the interfaces used by the OData Producer Library for PHP:

    php MySQLConnector.php /srvc=odata_service_name

    Note that the generated code will expose all of the tables in the MySQL database as OData feeds. In a typical production scenario, however, you would probably want to fine-tune the interface code to remove entities that aren’t appropriate for OData feeds. The simplest way to do this is to use the DataServiceConfiguration object in the DataService::InitializeService method to set the access rights to NONE for any entities that should not be exposed. For example, you may be creating an OData provider for a CMS, and you don’t want to allow OData queries against the table of users, or tables that are only used for internal purposes within your CMS.

    For more detailed information about working with the OData Connector for MySQL, refer to the user guide available on the project site on Codeplex.

    These tools are open-source (BSD license), so you can download them and start using them immediately at no cost, on Linux, Windows, or any PHP platform. Our team will continue to work to enable more OData scenarios, and we’re always interested in your thoughts. What other tools would you like to see available for working with OData?

  • Interoperability @ Microsoft

    MPEG-DASH Tutorial: Embedding an adaptive streaming video within your HTML5 application


    Poor quality streaming video solutions resulted in an estimated $2.16 Billion of lost revenue in 2012 (according to the 2013 Conviva Viewer Experience Report). That’s a LOT of zeros!

Since we at Microsoft Open Technologies, Inc. (MS Open Tech) believe this is simply unacceptable, we’d like to share some ways in which developers can leverage open source code to ensure their own delivery of video is of the highest possible standard.

    For this tutorial, we have chosen to use the dash.js player to deliver MPEG-DASH video to any browser that supports the W3C Media Source Extensions (MSE).

What are MPEG-DASH and dash.js?

    MPEG-DASH is an ISO standard for the adaptive streaming of video content, which offers significant benefits for those who wish to deliver high-quality, adaptive video streaming output. Many of these benefits directly address opportunities for improving user engagement identified in the Conviva report, such as:

    • 226% increase in video consumption given a buffer-less experience
    • Fourfold increase in likelihood of watching the video if start-up is less than two seconds
    • 25% increase in consumption for higher quality streams

With MPEG-DASH, the video stream will automatically drop to a lower definition when the network becomes congested. This reduces the likelihood of the viewer seeing a "paused" video while the player downloads the next few seconds to play (aka buffering). As network congestion eases, the player in turn returns to a higher-quality stream. This ability to adapt the bandwidth required also results in a faster start time for video: the first few seconds can be played from a fast-to-download, lower-quality segment and then step up to a higher quality once sufficient content has been buffered.
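The adaptation decision described above can be sketched in a few lines of JavaScript. The bitrate ladder and safety factor below are invented numbers for illustration; real adaptive bitrate rules (including those in dash.js) also weigh buffer levels and download history.

```javascript
// Simplified sketch of adaptive bitrate selection. The representations
// and the 0.9 safety factor are invented numbers for illustration.
const representations = [ // available encodings, in bits per second
  { id: "240p",  bandwidth: 400000 },
  { id: "480p",  bandwidth: 1200000 },
  { id: "1080p", bandwidth: 4500000 }
];

function pickRepresentation(measuredBps) {
  const safety = 0.9; // leave headroom so playback doesn't stall
  const usable = representations.filter(r => r.bandwidth <= measuredBps * safety);
  // Fall back to the lowest quality if even that exceeds the measurement.
  return usable.length ? usable[usable.length - 1] : representations[0];
}

pickRepresentation(500000).id;  // "240p"  (congested network)
pickRepresentation(6000000).id; // "1080p" (plenty of bandwidth)
```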

Dash.js is an open source MPEG-DASH video player written in JavaScript. Its goal is to provide a robust, cross-platform player that can be freely reused in applications that require video playback. It provides MPEG-DASH playback in any browser that supports the W3C Media Source Extensions (MSE); today that means Chrome and IE11, and other browsers have indicated their intent to support MSE.

    Creating a browser-based streaming video player

In simple terms, the intention of the example below is to demonstrate how easy it can be to build an MPEG-DASH player into your website. If you have any problems applying this example to your own real-world use case, pop over to the dash.js community mailing list, where we will be happy to help you out.

To create a simple web page that displays a video player with the expected controls such as play, pause, and rewind, you will need to:

    • Create an HTML page
      • Add the video tag
    • Add the dash.js player
    • Initialize the player
    • Add some CSS style
    • View the results in a browser that implements MSE

The only part of this process that may be new to most of you is the "Initialize the player" step. This step can be completed in just a handful of lines of JavaScript code. Using dash.js, it really is that simple to embed MPEG-DASH video in your browser-based application - including native applications that use HTML and JavaScript!

    Creating the HTML page

The first step is to create a standard HTML page containing the <video> element; save this file as basicPlayer.html. You will tell the player to display its controls (by including the controls attribute), but won’t need to initialize any other aspects of the player in the HTML. This configuration could be managed completely in JavaScript, if desired.

    Here is the HTML you should have in basicPlayer.html:

    <!DOCTYPE html>
    <html>
      <head><title>Adaptive Streaming in HTML5</title></head>
      <body>
        <h1>Adaptive Streaming with HTML5</h1>
        <video id="videoplayer" controls></video>
      </body>
    </html>

Since there is nothing unusual about this HTML, let’s move quickly on to the dash.js player code.

    Adding the dash.js player

To add the dash.js reference implementation to the application, you’ll need to grab the dash.all.js file from the 1.0 release of the dash.js project. Save it in the JavaScript folder of your application. dash.all.js is a convenience file that pulls together all the necessary dash.js code into a single file. If you have a look around the dash.js repository, you will find the individual files, test code and much more, but if all you want to do is use dash.js, then dash.all.js is what you need.

If you prefer, you could use the code from the master branch, which contains the latest fully tested version of the code. At the time of writing this tutorial works with the master branch, and this should always be the case. The adventurous might want to use the development branch, which contains all the latest changes that have been accepted by the project community. This is the code that will go into the next release of dash.js; while there has been some testing on this branch, it is development code to be used at your own risk.

Should you encounter any problems with any version of the code, please discuss them on the project’s mailing list. If you uncover any bugs, please report them via our issue tracker. In addition, as an open source project we welcome appropriate code contributions; please fork the project on GitHub and issue a pull request.

Whichever version of dash.all.js you choose to use, you will need to load it into your application. To do this, add a script tag to the head section of basicPlayer.html:

    <!-- DASH-AVC/265 reference implementation -->
    <script src="js/dash.all.js"></script>

    Next, create a function to initialize the player when the page loads. Add the following script after the line in which you load dash.all.js:

    // setup the video element and attach it to the Dash player
    function setupVideo() {
      var url = "http://wams.edgesuite.net/media/MPTExpressionData02/BigBuckBunny_1080p24_IYUV_2ch.ism/manifest(format=mpd-time-csf)";
      var context = new Dash.di.DashContext();
      var player = new MediaPlayer(context);
      player.startup();                                          // make sure the player is ready to play
      player.attachView(document.querySelector("#videoplayer")); // hand the player the video element
      player.attachSource(url);                                  // tell the player which MPD to play
    }

    This function first creates a DashContext. This is used to configure the application for a specific runtime environment. From a technical point of view, it defines the classes that the dependency injection framework should use when constructing the application. In most cases, you will use Dash.di.DashContext.

Next, instantiate the primary class of the dash.js framework, MediaPlayer. This class contains the core methods needed, such as play and pause, manages the relationship with the video element, and also manages the interpretation of the Media Presentation Description (MPD) file, which describes the video to be played. You will be working with this MediaPlayer from now on.

    The startup() function of the MediaPlayer is called to ensure that the player is ready to play video. Amongst other things this function ensures that all the necessary classes (as defined by the context) have been loaded. Once the player is ready, you can attach the video element to it using the attachView() function. This enables the MediaPlayer to inject the video stream into the element and also control playback as necessary. Finally, pass the URL of the MPD file to the MediaPlayer so that it knows about the video it is expected to play.

    The setupVideo() function just created will need to be executed once the page has fully loaded. Do this by using the onload event of the body element. Change your <body> element to:

    <body onload="setupVideo()">

Finally, set the size of the video element using CSS. In an adaptive streaming environment this is especially important, because the size of the video being played may change as playback adapts to changing network conditions. For this simple demo, just force the video element to be 80% of the available browser window by adding the following CSS to the head section of the page:

    video {
      width: 80%;
      height: 80%;
    }

    Playing video

That's it. You now have a fully functional JavaScript MPEG-DASH player that will work in any browser that supports MSE. Point your browser at your basicPlayer.html file, click play on the video controls, and watch your video in all its adaptive streaming glory.

    From here it is a relatively small step to create, for example, a Windows Store application using dash.js. Or you could create a module for Drupal, Joomla, WordPress or some other content management system.

    Our goal with dash.js is to make it as reusable as possible. If you need any help getting it to work for you contact us through the project mailing list. We will be happy to help!

  • Interoperability @ Microsoft

    HTML5 Video and Interoperability: Firefox Add-On Provides H.264 Support on Windows


    As you know, Microsoft is committed to interoperability, and the IE team has previously blogged about and provided developer previews and samples showing “Same Markup” – the same HTML, CSS, and script working across browsers – in action.

    Today, as part of the interoperability bridges work we do on this team, we’re making available a new Firefox add-on that enables Firefox users on Windows to play H.264-encoded video on HTML5 by using the built-in capabilities found in Windows 7.

For several years now, Microsoft has offered the Windows Media Player plug-in for Firefox, which is downloaded by millions of people a month who want to watch Windows Media content.

    This new plug-in, known as the HTML5 Extension for Windows Media Player Firefox Plug-in, is available for download here at no cost.

It extends the functionality of the earlier plug-in for Firefox, and enables web pages that offer video in the H.264 format using standard W3C HTML5 to work in Firefox on Windows. Because H.264 video on the web is so prevalent, this interoperability bridge is important for Firefox users who are Windows customers.

    H.264 is a widely-used industry standard, with broad and strong hardware support. This standardization allows users to easily take what they've recorded on a typical consumer video camera, put it on the web, and have it play in a web browser on any operating system or device with H.264 support, such as on a PC with Windows 7.

    H.264 is also a very well established and widely supported video compression format, developed for use in high definition systems such as HDTV, Blu-ray and HD DVD as well as low resolution portable devices. It also offers better quality at lower file sizes than both MPEG-2 and MPEG-4 ASP (DivX or XviD).

    The HTML5 Extension for Windows Media Player Firefox Plug-in continues to offer our customers value and choice, since those who have Windows 7 and are using Firefox will now be able to watch H.264 content through the plug-in.

    Microsoft is already deeply engaged in the HTML5 process with the W3C as we believe that HTML5 will be important in advancing rich, interactive web applications and site design.


    Claudio Caldato,

    Principal Program Manager, Interoperability Strategy Team

  • Interoperability @ Microsoft

    MS Open Tech Open Sources Rx (Reactive Extensions) – a Cure for Asynchronous Data Streams in Cloud Programming


    Erik Meijer, Architect, Microsoft Corp.
    Claudio Caldato, Lead Program Manager, Microsoft Open Technologies, Inc.

    Updated with a quote from Ferranti Computer Systems NV

    Updated: added quotes from Netflix and BlueMountain Capital Management

If you are a developer who writes asynchronous code for composite applications in the cloud, you know what we are talking about. For everybody else: Rx is a set of libraries that makes asynchronous programming a lot easier. As Dave Sexton describes it, “If asynchronous spaghetti code were a disease, Rx is the cure.”

Reactive Extensions (Rx) is a programming model that allows developers to glue together asynchronous data streams. This is particularly useful in cloud programming because it helps create a common interface for writing applications that consume data from diverse sources, e.g., stock quotes, Tweets, computer events, Web service requests.
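To see the shape of the model, here is a minimal, dependency-free JavaScript sketch of the observable pattern Rx is built on. It is only an illustration: real Rx and RxJS add error and completion handling, schedulers, and a large LINQ-style operator set on top of this core.

```javascript
// Minimal sketch of the pattern Rx is built on (no Rx dependency):
// a push-based sequence that observers subscribe to, with composable
// operators. The stock-quote source below is simulated data.
function observable(producer) {
  return {
    subscribe: onNext => producer(onNext),
    map:    f => observable(onNext => producer(v => onNext(f(v)))),
    filter: p => observable(onNext => producer(v => { if (p(v)) onNext(v); }))
  };
}

// A source that pushes simulated stock quotes to each subscriber.
const quotes = observable(onNext => [99, 101, 103].forEach(onNext));

const out = [];
quotes.filter(q => q > 100)   // keep only interesting quotes...
      .map(q => "MSFT@" + q)  // ...projected into display strings
      .subscribe(v => out.push(v));
// out is ["MSFT@101", "MSFT@103"]
```

The same filter/map composition works whether the stream is a finite array, UI events, or messages arriving from the network, which is exactly the "common interface" idea described above.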

    Today, Microsoft Open Technologies, Inc., is open sourcing Rx. Its source code is now hosted on CodePlex to increase the community of developers seeking a more consistent interface to program against, and one that works across several development languages. The goal is to expand the number of frameworks and applications that use Rx in order to achieve better interoperability across devices and the cloud.

    Rx was developed by Microsoft Corp. architect Erik Meijer and his team, and is currently used on products in various divisions at Microsoft. Microsoft decided to transfer the project to MS Open Tech in order to capitalize on MS Open Tech’s best practices with open development.

    There are applications that you probably touch every day that are using Rx under the hood. A great example is GitHub for Windows.

    According to Paul Betts at GitHub, "GitHub for Windows uses the Reactive Extensions for almost everything it does, including network requests, UI events, managing child processes (git.exe). Using Rx and ReactiveUI, we've written a fast, nearly 100% asynchronous, responsive application, while still having 100% deterministic, reliable unit tests. The desktop developers at GitHub loved Rx so much, that the Mac team created their own version of Rx and ReactiveUI, called ReactiveCocoa, and are now using it on the Mac to obtain similar benefits."

    And Scott Weinstein with Lab49 adds, “Rx has proved to be a key technology in many of our projects. Providing a universal data access interface makes it possible to use the same LINQ compositional transforms over all data whether it’s UI based mouse movements, historical trade data, or streaming market data send over a web socket. And time based LINQ operators, with an abstracted notion of time make it quite easy to code and unit test complex logic.”

    Netflix Senior Software Developer Jafar Husain explained why they like Rx. "Rx dramatically simplified our startup flow and introduced new opportunities for performance improvements. We were so impressed by its versatility and quality, we used it as the basis for our new data access platform. Today we're using both the Javascript and .NET versions of Rx in our clients and the technology is required learning for new members of the team."

    And Howard Mansell, Quantitative Strategist with BlueMountain Capital Management added, “We are very pleased that Microsoft are Open-Sourcing the Reactive Extensions for .NET. This will allow users to better reason about performance and optimize their particular use cases, which is critical for performance and latency sensitive applications such as real-time financial analysis.”

From Belgium, Guido Van de Velde, Director of the MECOMS Product Organisation for Ferranti Computer Systems NV, explains how Rx is important for their global company: “Ferranti uses Rx in its vertical solution for the utility market, MECOMS™, to process and manage all data and events from the Smart Grid. Its architecture allows the setup of data processing pipelines which can scale and deliver excellent performance. Performance testing together with Microsoft showed that this architecture supports up to hundreds of millions of smart meters and other sensors, running on commodity hardware. Thanks to Rx we can focus on component functionalities and don’t have to worry about interfaces and connections between the different components, saving significant development time.”

    Part of the Rx development team will be on assignment with the MS Open Tech Hub engineering program to accelerate the open development of the Rx project and to collaborate with open source communities. Erik will continue to drive the strategic direction of the technology and leverage MS Open Tech Hub engineering resources to update and improve the Rx libraries. With community contributions, we want to see Rx adopted on other platforms. Our goal is to build an open ecosystem of Rx-compliant libraries that will help developers tackle the complexity of asynchronous programming and improve interoperability.

    We are also happy to see that our decision is welcomed by open source developers.

    “Open sourcing Rx just makes sense. My hope is that we’ll see a couple of virtuous side-effects of this decision. Most likely will be faster releases for bug fixes and performance improvements, but the ability to understand the inner workings of the Rx code should encourage the creation of additional tools and Rx providers to remote data sources,” said Lab49’s Scott Weinstein.

    According to Dave Sexton, http://davesexton.com/blog, “It’s a solid library built around core principles that hides much of the complexity of controlling and coordinating asynchrony within any kind of application. Opening it will help to lower the learning curve and increase the adoption rate of this amazing library, enabling developers to create complex asynchronous queries with relative ease and without any spaghetti code left over.”

    Starting today, the following libraries are available on CodePlex:

    • Reactive Extensions
      • Rx.NET: The Reactive Extensions (Rx) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators.
      • RxJS: The Reactive Extensions for JavaScript (RxJS) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators in JavaScript which can target both the browser and Node.js.
      • Rx++: The Reactive Extensions for Native (RxC) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators in both C and C++.
    • Interactive Extensions
      • Ix: The Interactive Extensions (Ix) is a .NET library which extends LINQ to Objects to provide many of the operators available in Rx but targeted for IEnumerable<T>.
      • IxJS: An implementation of LINQ to Objects and the Interactive Extensions (Ix) in JavaScript.
      • Ix++: An implementation of LINQ for native developers in C++.
    • Bindings
      • Tx: a set of code samples showing how to use LINQ to events, such as real-time standing queries and queries on past history from trace and log files, which targets ETW, Windows Event Logs and SQL Server Extended Events.
      • LINQ2Charts: an example of Rx bindings. Similar to existing APIs like LINQ to XML, it allows developers to use LINQ to create/change/update charts in an easy way. We would love to see more Rx bindings like this one.

    With these libraries we are giving developers open access to both push-based and pull-based data via LINQ in Microsoft’s three fundamental programming paradigms (native, JavaScript and managed code).
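The common thread across these libraries is the duality Rx is built on: a push-based observable sequence can be queried with the same LINQ-style operators as a pull-based enumerable. The following is a toy JavaScript sketch of that idea (not the actual Rx/RxJS implementation):

```javascript
// Minimal sketch of the push-based model Rx formalizes: an "observable" is
// just a function that accepts an observer, wrapped with query operators
// (map, filter) that compose over it -- LINQ-style operators over a push
// source instead of a pull source.
function observable(subscribe) {
  return {
    subscribe,
    map(f)    { return observable(obs => subscribe(v => obs(f(v)))); },
    filter(p) { return observable(obs => subscribe(v => { if (p(v)) obs(v); })); }
  };
}

// A source that pushes the values 1..5 to each subscriber.
const numbers = observable(obs => [1, 2, 3, 4, 5].forEach(v => obs(v)));

// Compose a LINQ-style query over the push-based source.
const results = [];
numbers
  .filter(n => n % 2 === 0)   // keep even values
  .map(n => n * 10)           // project each value
  .subscribe(v => results.push(v));

console.log(results); // [20, 40]
```

The same shape of query runs unchanged whether the source is a timer, a stream of mouse events, or a web socket feed — which is what makes the operator set portable across Rx.NET, RxJS and Rx++.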

    We look forward to seeing you use the library, share your thoughts, and contribute to the evolution of this fantastic technology.

  • Interoperability @ Microsoft

    Redis on Windows – stable and reliable


    The MS Open Tech team has been spending quite a bit of time testing the latest build of Redis for Windows (available for download from the MS Open Tech GitHub repo). As we approach completion of our test plan, we thought we’d share some very promising results.

    In phase I of our stress testing, we put Redis on Windows through various tests with execution times ranging from 1 to 16 days, and configurations ranging from a simple single-master setup to more complex configurations such as the one shown below, with one master and four replicas. You can find an overview of the testing strategy and configurations that we used on the wiki page here.


    The results are encouraging – we found only one bug, which we’ve already fixed.

    These tests were done with the port of Redis 2.6.8 from Linux to Windows, and this version includes all of the work we announced in January, including 64-bit support. Our goal is to ensure that developers can trust Redis on Windows in scenarios where reliability is an important requirement, and we plan to keep testing the code in more demanding scenarios to make sure we haven’t missed anything.

    If you have comments or recommendations on any scenario we should add to our test plan, or any other suggestions on how we can improve our testing strategy, please let us know. We’ll be happy to consider using any app or scenario that Redis developers feel would be a good test case for Redis.

    Claudio Caldato
    Principal Program Manager Lead
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Here’s to the first release from MS Open Tech: Redis on Windows


    The past few weeks have been very busy in our offices as we announced the creation of Microsoft Open Technologies, Inc. Now that the dust has settled it’s time for us to resume our regular cadence in releasing code, and we are happy to share with you the very first deliverable from our new company: a new and significant iteration of our work on Redis on Windows, the open-source, networked, in-memory, key-value data store.

    The major improvements in this latest version involve the process of saving data on disk. Redis on Linux uses an OS feature called fork/copy-on-write. This feature is not available on Windows, so we had to find a way to mimic the same behavior without completely changing the save-on-disk process, so as to avoid future integration issues with the Redis code.

    The version we released today implements copy-on-write at the application level: instead of relying on the OS, we added code to Redis so that some data structures are duplicated in such a way that Redis can still serve requests from clients while saving data on disk (thus achieving the same effect that fork/copy-on-write provides automatically on Linux).
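The idea can be illustrated with a small sketch (in JavaScript for readability; the actual implementation is C code inside Redis, and the names and structure here are illustrative only): during a save, the first write to an existing key preserves its old value, so the save routine sees a consistent point-in-time view while clients keep reading and writing the live data.

```javascript
// Illustrative sketch of application-level copy-on-write for a key-value
// store: writes during a save duplicate the affected entries instead of
// mutating the snapshot, so the snapshot stays point-in-time consistent.
class CowStore {
  constructor() {
    this.live = new Map();   // data served to clients
    this.snapshot = null;    // old values preserved during a save
    this.snapKeys = null;    // keys that existed when the save began
  }
  set(key, value) {
    // First overwrite of a pre-existing key during a save: keep the old
    // value aside before mutating the live map.
    if (this.snapshot && this.snapKeys.has(key) && !this.snapshot.has(key)) {
      this.snapshot.set(key, this.live.get(key));
    }
    this.live.set(key, value);
  }
  get(key) { return this.live.get(key); }
  beginSave() {
    this.snapshot = new Map();
    this.snapKeys = new Set(this.live.keys());
  }
  // What the background save sees: each value as of beginSave().
  saveRead(key) {
    if (!this.snapKeys.has(key)) return undefined;  // created after save began
    return this.snapshot.has(key) ? this.snapshot.get(key) : this.live.get(key);
  }
  endSave() { this.snapshot = null; this.snapKeys = null; }
}
```

For example, if a key is overwritten while a save is in flight, `saveRead` still returns the pre-save value while `get` returns the new one — reads and writes never block on the save.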

    You can find the code for this new version on the new MS Open Tech repository in GitHub, which is currently the place to work on the Windows version of Redis as per guidance from Salvatore Sanfilippo, the original author of the project. We will also continue working with the community to create a solid Windows port.

    We do not consider this production-ready code yet, but rather a solid code base to share with the community to solicit feedback: as such, while we pursue stabilization, we are keeping the older version as the default/stable one on the GitHub repository. To try out the new code, please go to the bksavecow branch.

    In the next few weeks we plan to extensively test the code so that developers can use it for more serious testing. In the meantime, we will keep looking at the ‘save on disk’ process to find out if there are other opportunities to make the code perform even better. We will promote the bksavecow branch to master as soon as we (and you!) are confident the code is stable.

    Please send your feedback, file suggestions and issues to our GitHub repository. We look forward to further iterations and to working with the Redis community at large to make the Windows experience even better.

    Claudio Caldato

    Principal Program Manager

    Microsoft Open Technologies, Inc.

    A subsidiary of Microsoft Corporation.

  • Interoperability @ Microsoft

    phpBB: Available for the Microsoft Web Platform


    From the Microsoft Web Platform Team Blog:

    “Today Microsoft is announcing that the Windows Web Application Gallery and Web Platform Installer (Web PI) now support the download of the new phpBB release, which supports Windows, IIS and SQL Server.

    Version 3.0.7-PL1 of phpBB takes advantage of a number of features for PHP applications on the Microsoft Web Platform with Windows, IIS and SQL Server including:

    • SQL Server Driver for PHP 1.1, which provides key interoperability for PHP applications to use SQL Server for data storage. Released under the OSI-approved MS-PL license, it is available on CodePlex.
    • WinCache Extension for PHP 1.0.1, which provides increased performance for PHP applications on Windows and IIS. Released under the BSD license, it is available from the PHP Extension Community Library (PECL) website.”

    More on the Microsoft Web Platform Team Blog: Announcing phpBB: Available for the Microsoft Web Platform.

  • Interoperability @ Microsoft

    Microsoft working with Joyent and the Node community to bring Node.js to Windows


    I am pleased to announce that Microsoft has joined Joyent and Ryan Dahl in their effort to make Windows a supported platform in Node.

    Our first goal is to add support for the high-performance IOCP (I/O Completion Ports) API to Node, to give developers on Windows the same high performance and scalability that Node is known for, since IOCP enables many simultaneous asynchronous input/output operations.

     At the end of this initial phase of the project, we will have official binary node.exe releases on nodejs.org, meaning that Node.js will run on Windows Azure, Windows 2008 R2, Windows 2008 and Windows 2003.

    You can read more about all this on nodejs.org as well as on Joyeur.com.

    While this is just the beginning of the journey to make Node.js on Windows a great platform for Node developers, I’m really excited about making this happen.

    So, stay tuned, as there’s a lot more to come!

    Claudio Caldato,

    Principal Program Manager, Interoperability Strategy Team

  • Interoperability @ Microsoft

    SQL Server Reporting Services SDK for PHP: adding business intelligence and reporting features to PHP applications


    Wouldn’t it be awesome if PHP developers building reporting applications could use a wide range of ready-to-use tools and services to create, deploy, and manage their reports? Today the SQL Server Reporting Services SDK for PHP turns that scenario into a simple reality, enabling PHP developers to easily create reports and integrate them in their web applications.

    Announcing the SQL Server Reporting Services SDK (Software Development Kit) for PHP

    I’m excited to announce that the first version of the SQL Server Reporting Services SDK for PHP is available today on CodePlex as an open source project: http://ssrsphp.codeplex.com.

    This SDK enables PHP applications to simply utilize SQL Server Reporting Services, Microsoft’s Reporting and Business Intelligence solution. Best of all, these scenarios can be done using the free (as in “free beer”!) SQL Server 2008 Express with Advanced Services edition. This edition includes the SQL Server 2008 Express database engine as well as graphical administration tools and the Reporting Services server components for creating, managing, and deploying tabular, matrix, graphical, and free-form reports (SQL Server 2008 Express Advanced can be downloaded here).

    SQL Server Reporting Services SDK for PHP in a nutshell

    The SDK offers a simple Application Programming Interface (API) to interoperate with SQL Server Reporting Services. The API provides simple methods to perform the most common operations:

    • list available reports within a PHP application,
    • provide custom parameters from a PHP web form,
    • manage the rendering of the reports within a PHP application.


    The API is built on top of the SQL Server Reporting Services Web Service API using SOAP as the underlying communication mechanism. PHP applications can then manage reports, parameters, credentials, and output formats with SQL Server 2008 Reporting Services.

    The design of the report is created with Business Intelligence Development Studio which comes with SQL Server 2008 Express with Advanced Services. Developers can alter the style of the output formats to fit their needs.

    From Reporting Services in SQL Server Express Edition, your access to remote data sources (SQL Server, OLEDB, ODBC, MySQL, Oracle and others) goes through a SQL Server Express instance installed on the same server, using either:

    • the linked server feature, which allows your application to connect to a data source by creating views pointing to the original database, or
    • data import, which lets you extract data from the original data source and import it into the SQL Server Express instance.

    The Hello World demo scenario

    Using the SQL Server Reporting Services SDK for PHP, we’ve created a simple scenario showcasing how to manage reports within a PHP application. This sample is part of the package that you download from the project site.

    The application first displays the list of available reports:


    Once the user picks a report, they can select parameters which have been predefined for the report, for example:


    For the developer, it’s fairly simple to build such a form: it only requires calling the “GetReportParameters” method provided by the SDK, then parsing the result and associating the appropriate HTML controls.
    Here’s a snippet (the full Hello World demo is part of the SDK download):
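To illustrate the flow, here is a hypothetical sketch in JavaScript (the SDK itself is PHP, and the shape of the parameter data below is assumed for illustration, not the SDK’s actual return format): take the parameter definitions that a call like GetReportParameters returns and turn each one into an HTML select control.

```javascript
// Hypothetical sketch: render one <select> per report parameter.
// The parameter shape (name + validValues) is an assumption made for
// this example, not the SDK's actual data structure.
function renderParameterForm(parameters) {
  return parameters.map(p => {
    const options = p.validValues
      .map(v => `<option value="${v}">${v}</option>`)
      .join("");
    return `<label>${p.name}<select name="${p.name}">${options}</select></label>`;
  }).join("\n");
}

// Example of what a parameters call might return.
const params = [
  { name: "Year",   validValues: ["2008", "2009", "2010"] },
  { name: "Region", validValues: ["East", "West"] }
];

const formHtml = renderParameterForm(params);
```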


    Finally, when the user confirms their choices, the report is generated on the server side and returned to the PHP application, which does the final processing to display the information in the context of the application. Here is the HTML output for our sample report:


    Join the conversation

    Today, I’m actually presenting the SDK at the Jump In! Developer Web Camp event underway in Zurich. I’m sure I’ll get a lot of comments from the PHP experts attending. But what about you: does the SQL Server Reporting Services SDK for PHP respond to your scenarios?
    Of course, feedback is welcome!

    To join the conversation, please visit SQL Server Reporting Services SDK for PHP on CodePlex: http://ssrsphp.codeplex.com.

    Claudio Caldato, Senior Program Manager, Interoperability Strategy Team.

  • Interoperability @ Microsoft

    Customizable, Ubiquitous Real Time Communication over the Web (CU-RTC-Web)


    UPDATE: See our latest W3C WebRTC Working Group blog post on 01-17-2013 http://aka.ms/WebRTCPrototypeBlog describing our new CU-RTC-Web prototype that you can download on HTML5 Labs.


    Matthew Kaufman - Inventor of RTMFP, the most widely used browser-to-browser RTC protocol on the web
    Principal Architect, Skype, Microsoft Corp.

    Martin Thomson
    Senior Architect, Skype, Microsoft Corp.

    Jonathan Rosenberg - Inventor of SIP and SDP offer/answer
    GM Research Product & Strategy, Skype, Microsoft Corp.

    Bernard Aboba
    Principal Architect, Lync, Microsoft Corp.

    Jean Paoli
    President, Microsoft Open Technologies, Inc.

    Adalberto Foresti
    Senior Program Manager, Microsoft Open Technologies, Inc.



    Today, we are pleased to announce Microsoft’s contribution of the CU-RTC-Web proposal to the W3C WebRTC working group.

    Thanks in no small part to the exponential improvements in broadband infrastructure over the last few years, it is now possible to leverage the digital backbone of the Internet to create experiences for which dedicated media and networks were necessary until not too long ago.

    Inexpensive, real time video conferencing is one such experience.

    The Internet Engineering Task Force and the World Wide Web Consortium created complementary working groups to bring these experiences to the most familiar and widespread application used to access the Internet: the web browser. The goal of this initiative is to add a new level of interactivity for web users with real-time communications (Web RTC) in the browser.

    While the overarching goal is simple to describe, there are several critical requirements that a successful, widely adoptable Web RTC browser API will need to meet:

    • Honoring key web tenets – The Web favors stateless interactions which do not saddle either party of a data exchange with the responsibility to remember what the other did or expects. Doing otherwise is a recipe for extreme brittleness in implementations; it also considerably raises the development cost, which reduces the reach of the standard itself.
    • Customizable response to changing network quality – Real time media applications have to run on networks with a wide range of capabilities varying in terms of bandwidth, latency, and packet loss.  Likewise these characteristics can change while an application is running. Developers should be able to control how the user experience adapts to fluctuations in communication quality.  For example, when communication quality degrades, the developer may prefer to favor the video channel, favor the audio channel, or suspend the app until acceptable quality is restored.  An effective protocol and API should provide developers with the tools to tailor the application response to the exact needs of the moment.
    • Ubiquitous deployability on existing network infrastructure – Interoperability is critical if WebRTC users are to communicate with the rest of the world: with users on different browsers, VoIP phones, and mobile phones, from behind firewalls, and across routers and equipment that is unlikely to be upgraded to the current state of the art anytime soon.
    • Flexibility in its support of popular media formats and codecs as well as openness to future innovation – A successful standard cannot be tied to individual codecs, data formats or scenarios. They may soon be supplanted by newer versions that would make such a tightly coupled standard obsolete just as quickly. The right approach is instead to support multiple media formats and to bring the bulk of the logic to the application layer, enabling developers to innovate.

    While a useful start at realizing the Web RTC vision, we feel that the existing proposal falls short of meeting these requirements. In particular:

    • No ubiquitous deployability: it shows no signs of offering real-world interoperability with existing VoIP phones and mobile phones, from behind firewalls and across routers, and instead focuses on video communication between web browsers under ideal conditions. It does not allow an application to control how media is transmitted on the network. As a result, implementing innovative, real-world applications like security consoles, audio streaming services or baby monitoring through this API would be unwieldy, assuming it could be made to work at all. A Web RTC standard must equip developers with the ability to implement all scenarios, even those we haven’t thought of.
    • No fit with key web tenets: it is inherently not stateless, as it takes a significant dependency on the legacy of SIP technology, which is a suboptimal choice for use in Web APIs. In particular, the negotiation model of the API relies on the SDP offer/answer model, which forces applications to parse and generate SDP in order to effect a change in browser behavior. An application is forced to only perform certain changes when the browser is in specific states, which further constrains options and increases complexity. Furthermore, the set of permitted transformations to SDP are constrained in non-obvious and undiscoverable ways, forcing applications to resort to trial-and-error and/or browser-specific code. All of this added complexity is an unnecessary burden on applications with little or no benefit in return.


    The Microsoft Proposal for Customizable, Ubiquitous Real Time Communication over the Web

    For these reasons, Microsoft has contributed the CU-RTC-Web proposal that we believe does address the four key requirements above.

    • This proposal adds a real-time, peer-to-peer transport layer that empowers web developers by having greater flexibility and transparency, putting developers directly in control over the experience they provide to their users.
    • It dispenses with the constraints imposed by unnecessary state machines and complex SDP and provides simple, transparent objects.
    • It elegantly builds on and integrates with the existing W3C getUserMedia API, making it possible for an application to connect a microphone or a camera in one browser to the speaker or screen of another browser. getUserMedia is an increasingly popular API that Microsoft has been prototyping and that is applicable to a broad set of applications with an HTML5 client, including video authoring and voice commands.

    The following diagram shows how our proposal empowers developers to create applications that take advantage of the tremendous benefits offered by real-time media in a clear, straightforward fashion.


    We are looking forward to continued work in the IETF and the W3C, with an open and fruitful conversation that converges on a standard that is both future-proof and an answer to today’s communication needs on the web. We would love to get community feedback on the details of our CU-RTC-Web proposal document and we invite you to stay tuned for additional content that we will soon publish on http://html5labs.com in support of our proposal.

  • Interoperability @ Microsoft

    OData interoperability with .NET, Java, PHP, iPhone and more


    Wouldn’t it be cool to have more ways to unlock your data and free it from application silos?
    Today at MIX10, we presented how the Open Data Protocol (OData) can contribute to a more programmable web, through demos consuming a Netflix OData feed in various scenarios. We also announced a series of new and updated OData SDKs for PHP, Java, Objective C (iPhone & Mac) and JavaScript (AJAX and Palm WebOS). The SDKs can be found on the www.odata.org website.

    OData SDKs for PHP, Java, Objective C (iPhone & Mac) and JavaScript (AJAX and Palm WebOS)

    Today we are announcing a new version of the OData SDK for PHP (previously called the Toolkit for PHP with ADO.NET/WCF Data Services). This version includes new features such as the capability to handle large result sets using an automated paging mechanism, and a new sample built on top of an OData feed exposing the Netflix catalog, which we cover in detail in this blog post. More details are available on the OData SDK for PHP page.

    We also announced today the new OData SDK for Objective C, which facilitates the development of applications for iPhone and Mac OS X connecting to OData services. This early version is a Community Technology Preview (CTP): it supports read operations only and has been tested on a limited set of scenarios. The download includes a sample iPhone application to browse the new Netflix OData service hosted in Azure.


    Link for more details on the OData SDK for Objective C Community Technology Preview (CTP).

    Finally, Noelios has just updated the Restlet Extension for OData – a set of tools and libraries for Java. Read Jerome Louvel’s post Restlet supports OData, the Open Data Protocol for more details. Noelios has also released a new detailed tutorial for developers who want to access OData services in Java.

    The list of OData SDKs is available at http://www.odata.org/developers/odata-sdk.

    About the Open Data Protocol (OData)

    In essence, the purpose of OData is to feed the web with more consumable data and give developers and entrepreneurs more power to create new scenarios.

    OData enables data integration across a broad range of clients, servers, services, and tools. OData builds on a few conventions popularized by AtomPub to create REST-based data services. These services allow resources, identified using Uniform Resource Identifiers (URIs) and defined in an abstract data model, to be read and edited by web clients using simple HTTP messages. For more details, consult the protocol documentation on the OData site, where you will also find a list of services and products that are already using OData.
    Read "Open Data for the Open Web" by Doug Purdy for more perspective on OData.
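Those conventions map queries directly onto the request URI through query options such as $filter, $orderby and $top. A small JavaScript sketch of building such a URI (the service root below is illustrative):

```javascript
// Sketch of OData's URI conventions: a resource is addressed by URI, and
// system query options ($filter, $orderby, $top, ...) shape the result set
// on the server side.
function odataQuery(serviceRoot, entitySet, options) {
  const query = Object.entries(options)
    .map(([k, v]) => `$${k}=${encodeURIComponent(v)}`)
    .join("&");
  return `${serviceRoot}/${entitySet}` + (query ? `?${query}` : "");
}

// Illustrative query against a Netflix-style catalog service.
const uri = odataQuery("http://odata.netflix.com/Catalog", "Titles", {
  filter: "ReleaseYear ge 2000", // OData filter expression
  orderby: "Name",
  top: 10
});
// → http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear%20ge%202000&$orderby=Name&$top=10
```

Because everything is plain HTTP plus these conventions, any client that can issue a GET — PHP, Java, Objective C or JavaScript — can consume the same feed.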

    The Netflix demo scenario

    Today at MIX10, Doug Purdy demoed how you can quickly build a simple application consuming OData feeds with Silverlight, and also showed a demo running on Palm webOS leveraging the OData JavaScript library. Following up, using the OData SDK for PHP and the OData feed exposed by Netflix, we’ve built a web application that allows users to search through the Netflix movie archives.

    The demo starts with a search form with multiple pull-down menus you can use to narrow the search for titles in the catalog. To keep the demo simple, we deliberately limited the set of fields that can be used to build an advanced search on the OData service. We actually use only the “Genre” and “Language” options, which are prepopulated with values coming from the Netflix OData feed, and the “Name” (movie title):


    Once the user has selected their criteria and hit search, the PHP application calls the Netflix OData feed through a simple method call, highlighted below:


    A list of corresponding titles is then returned by the Netflix OData feed. The result set is filtered and sorted by the Netflix service; you just have to display the data in a pretty HTML page:


    Netflix’s OData backend runs on Windows Azure and SQL Azure to produce the OData feeds. OData being an open specification, there are many ways to build a “data producer”. Here are a few applications and services exposing OData feeds:

    • SharePoint 2010
    • SQL Azure
    • Windows Azure Table Storage
    • IBM Websphere

    The complete list of currently available solutions is here: http://www.odata.org/producers. We clearly expect to see more OData producers coming for various platforms and languages.

    How did we build the sample application?

    You can watch the following Channel9 video with Claudio Caldato demoing and explaining the PHP sample. Claudio has been instrumental in driving the development of cross platform OData SDKs and building the OData community with Microsoft partners.

    Get Microsoft Silverlight

    Using the OData SDK for PHP to consume an OData feed is really quick and easy. There are two main steps:

    1. Generate the proxy classes: the SDK includes a tool that reads the definition of the OData service and creates the corresponding PHP proxy classes, one class per collection exposed by the service. You can see here all the collections available in the Netflix service:

    2. Write the application logic: your code calls the PHP proxy classes so that you can easily program against the OData service.
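The proxy pattern behind these two steps can be sketched as follows (a hypothetical JavaScript rendering; the SDK generates PHP classes, and the class and method names here are invented for illustration):

```javascript
// Hypothetical sketch of a generated proxy: one class per collection exposed
// by the service, so application code programs against objects rather than
// hand-built URIs. A real proxy would also issue the HTTP GET and
// materialize the response into objects.
class TitlesProxy {
  constructor(serviceRoot) {
    this.url = `${serviceRoot}/Titles`;
  }
  // Build the request URI for an optional filter expression.
  queryUri(filter) {
    return filter ? `${this.url}?$filter=${encodeURIComponent(filter)}` : this.url;
  }
}

const titles = new TitlesProxy("http://odata.netflix.com/Catalog");
const upQuery = titles.queryUri("Name eq 'Up'");
```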

    The process is very similar with all the OData SDKs, whether for PHP, Java, Objective C (iPhone & Mac), or JavaScript (AJAX and Palm WebOS): they all work the same way. To summarize, here’s the OData SDK for PHP architecture diagram, which shows the key steps and elements:


    Join the conversation

    We’ve been working hard to get OData support on as many platforms as we can so a developer on any platform can both consume and produce these feeds. It’s only the beginning of the journey, and you can expect more to come. Of course, feedback is welcome!

    To join the conversation, please visit www.odata.org.

    Additional information to bookmark: two MIX10 sessions.

    -- Jean-Christophe Cimetiere, Sr. Technical Evangelist

  • Interoperability @ Microsoft

    Latest WebSockets Release Interoperates with Firefox, Eclipse's Jetty


    We have updated the WebSockets prototype on our HTML5 Labs site, which brings the implementation in line with the recently released WebSockets 06 Protocol Specification.  

    We have extended our interoperability testing: along with LibWebSockets, we have now tested interoperability with Jetty, an open-source project developed by the Eclipse community that provides an HTTP server, HTTP client, and javax.servlet container, and we accepted the invitation of Patrick @Docksong.com to test our code against a Firefox Minefield build with an implementation of the 06 protocol specification.

    We tested the WebSockets interoperability between our HTML5 Labs prototype client and Jetty server, which recently added support for the 06 version of the spec (you can find the Jetty code here.)

    We also tested the WebSockets interoperability with a test Firefox build that supports the 06 protocol specification. We are hosting a chat demo page on Azure, which can be opened in Firefox and will use native browser WebSocket instead of the Silverlight-based one. 

    WebSockets is a technology designed to simplify much of the complexity around bi-directional, full-duplex communications channels, over a single Transmission Control Protocol (TCP) socket. It can be implemented in web browsers, web servers as well as used by any client or server application.

    This fourth update of our WebSocket prototype brings ping-pong support: an automatic client-to-server ping every 50 seconds. It also now supports the binary frames and fragmentation defined in the WebSocket protocol specification, but these are not yet exposed to JavaScript because the W3C API working group is still defining a set of APIs that can work with binary data.
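One concrete detail the 06 draft added is frame masking: every client-to-server payload octet is XORed with one byte of a 4-byte masking key (octet i with key[i % 4]), and applying the same mask twice restores the original payload. A minimal sketch of that transform:

```javascript
// Sketch of the client-to-server frame masking required by the 06 draft of
// the WebSocket protocol: XOR each payload byte with the masking key,
// cycling through the key's 4 bytes. Masking is its own inverse.
function maskPayload(payload, key) {
  return payload.map((byte, i) => byte ^ key[i % 4]);
}

const key = [0x12, 0x34, 0x56, 0x78];           // 4-byte masking key
const data = Array.from("Hello", c => c.charCodeAt(0));

const masked = maskPayload(data, key);          // what goes on the wire
const unmasked = maskPayload(masked, key);      // server recovers the payload
```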

    Jetty Testing

    Our testing involved setting up a Jetty server on a Windows Server 2008 machine and hosting a chat WebSocket endpoint with the same functionality as this chat sample.

    We then directed our existing chat client web page to use the Jetty-hosted endpoint (instead of the WCF-hosted endpoint), and we confirmed that the chat app was fully functional.

    This screenshot shows the chat page opening a WebSocket connection to ws://localhost:4502/chat



    This screenshot shows Jetty server accepting the WebSocket connection from the browser


    This screenshot shows the chat page connected to Jetty WebSocket connection


    And, as I said earlier, we are hosting the Jetty chat endpoint on Azure, and have updated our existing chat demo to use it. To deploy the Jetty endpoint in Azure, we used the recently released Windows Azure Starter Kit for Java, developed by our Interoperability team.

    Firefox Testing

    Our testing involved hosting a chat WebSocket endpoint using the WCF-based HTML5 Labs prototype.

    We modified our existing chat page to use the native browser WebSocket API (instead of the HTML5 Labs WebSocketDraft API), and we confirmed that the chat app was fully functional.

    This screenshot shows the chat page works in Firefox using native browser WebSocket API 

    This prototype is part of our HTML5 Labs web site, a place where we prototype early, not-yet-stable drafts of specifications developed by the W3C and other standards organizations. The WebSocket API is currently being standardized by the W3C, and the WebSocket protocol is being standardized by the IETF.

    Building these prototypes in a timely manner will also help us have informed discussions with developer communities, and give us implementation experience with the draft specifications that will generate feedback to improve the eventual standards.

    Claudio Caldato,

    Principal Program Manager, Interoperability Strategy Team

  • Interoperability @ Microsoft

    ActorFx Applied in Dynamically Distributed Social Media Apps



    From the ActorFx team:

    Brian Grunkemeyer, Senior Software Engineer, Microsoft Open Technologies Hub

    Joe Hoag, Senior Software Engineer, Microsoft Open Technologies Hub


    Today we’d like to talk about yet another ActorFx example that shows the flexibility and tests the scalability of the ActorFx Framework. 

    ActorFx provides an open source, non-prescriptive, language-independent model of dynamic distributed objects for building highly available data structures and other logical entities via a standardized framework and infrastructure. ActorFx is based on the idea of the mathematical Actor Model for cloud computing.


    The example is a social media example called Fakebook that demonstrates a simple way to apply actors to large-scale problems by deconstructing them into small units.  This demo highlights ActorFx’s ability to dramatically simplify cloud computing by showing you that surprisingly few concepts are necessary to build a scalable and interesting application with a relatively simple programming model.  The example is part of the latest build on the ActorFx CodePlex site.


    ActorFx Framework Background

    The ActorFx Framework lets developers easily build objects for the cloud, called actors.  These actors allow developers to send messages, respond to messages, and create other actors.  Our Actor Framework adds in a publish/subscribe mechanism for events as well.  Using these, we have built the start of a Base Class Library for the Cloud, including CloudList<T> and CloudDictionary<TKey, TValue> modelled after .NET collections.  We’ve harnessed the ActorFx Framework’s pub/sub mechanism to build an ObservableCloudList<T> that is useful for data binding collections to UI controls.  These primitives are sufficient to build a useful system we call Fakebook, a sample social media application. 


    Fakebook Design

    Social media apps have many properties that map quite well to actors.  Most data storage is essentially embarrassingly parallel – users have their own profile, their own photos, their own list of friends, etc.  Whenever operations span multiple users, such as making friends or posting news items, this requires calling into other people’s objects or publishing messages to alert them to new information.  All of these map well to the Actor Framework’s concept of actors, including message passing between actors and our publish/subscribe event pattern.

    Let’s decompose a social network into its constituent parts.  At the core, a social network provides a representation of a person’s profile, their friends, their photos, and a newsfeed.  All of this is tied together using a user name as an identifier.  Other features could be added on top of this, such as graph search, a nice timeline view, the ability to play games, or even advertising.  However, those are out of scope for our Fakebook sample, which is demonstrating that the core parts could be built using actors. 

    See Figure 1 for a clear breakdown of the UI into constituent parts.


    Figure 1 - Fakebook UI Design


    The core idea of Fakebook is to model a user as a set of actors.  In our model, a Fakebook user is represented by the following actors:

    • A person actor that tracks their name and all basic profile information (like gender, birthday, residence, etc.).

    • A friends list, which is a CloudList<String> containing the account names of all friends.

    • A photos dictionary, which is a CloudStringDictionary<PictureInfo>, mapping image names to a type representing a picture and its interesting characteristics (such as author, licensing information, etc.).

    • A set of news posts made by the user, as a CloudList<String>.

    • A newsfeed actor that aggregates posts from all of a person’s friends’ news feeds.

    Interaction between actors can be done in one of two ways: actor-to-actor method calls (either via IActorProxy’s Request or Command methods), or via events sent via our publish/subscribe mechanism.  For Fakebook, we use a mix of approaches.  When constructing other actors or adding friends, we use IActorProxy’s Request. 

    The newsfeed actor works entirely based on publish/subscribe events.  The newsfeed actor is subscribed to all of the person’s friends’ news posts.  Similarly, the newsfeed actor is also subscribed to the person’s friends list, so whenever a friend is added or removed (unfriended), the newsfeed actor knows to subscribe or unsubscribe as appropriate.  For interacting with the client, we also use publish/subscribe messages.  The client can data-bind an ObservableCloudList<T> to a UI control like a ListBox.  This allows users to seamlessly push updates from the actor in the cloud to the client-side UI.  See Figure 2 for an example showing how publish/subscribe messages are sent between actors as well as from actors to the client UI.
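    The observable-collection pattern that makes this data binding possible can be illustrated with a small sketch. Note this is a simplified, JavaScript illustration of the idea (ActorFx itself is .NET, and this is not its actual API): subscribers are notified on every change, which is what lets a client keep a UI control in sync with a cloud list.

```javascript
// Simplified illustration of an observable list: every mutation
// publishes a change event to all subscribers, mimicking the way
// an ObservableCloudList pushes updates to data-bound UI controls.
class ObservableList {
  constructor() {
    this.items = [];
    this.subscribers = [];
  }
  subscribe(callback) {
    this.subscribers.push(callback);
  }
  add(item) {
    this.items.push(item);
    // publish an "Add" change event to every subscriber
    this.subscribers.forEach((cb) => cb({ action: "Add", item }));
  }
  removeAt(index) {
    const [item] = this.items.splice(index, 1);
    // publish a "Remove" change event so bound views can update
    this.subscribers.forEach((cb) => cb({ action: "Remove", item }));
  }
}
```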



    Figure 2 - Fakebook Publish/Subscribe messages:  Actor-to-Actor and Actor-to-Client


    We’ve extended support for LINQ queries to allow transforming observable collections from one type to another.  This is very useful for bridging from cloud-based data types (such as account names stored as strings) to rich client-side view model types (such as a FakebookPerson object).  LINQ’s Select can be used to change an ObservableCloudList<String> into an IEnumerable<FakebookPerson>, and we’ve done the engineering work to preserve the event stream of updates as well.  Consider the following code:


        private ObservableCloudList<String> _friends = …;


        // Here, we need an ObservableCloudList<String> converted to an observable sequence

        // that also passes through INotifyCollectionChanged events.  When exploring how to

        // build this, we settled on using a Select LINQ operator for our type.  Due to

        // some problems with C# method binding rules, we had to add an interface

        // to express the right type information.  But, this works.

        var people = _friends.Select(accountName =>

            new FakebookPerson(App.FabricAddress, accountName, App.ConnectThroughGateway));

        Dispatcher.Invoke(() => FriendsListBox.ItemsSource = people);


    By using client-side view model types, a client application’s data binding code can easily access properties such as a person’s first & last names as well as profile picture.  This information is not as easily obtainable via just the user’s account name, so a transformation like this helps keep the level of abstraction high when writing client apps while preserving the dynamic nature of the underlying data. 
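    The same idea can be sketched outside of LINQ: wrap an observable source so that both its current items and its future change events pass through a mapping function. This is an illustrative sketch of the event-preserving projection, not the ActorFx implementation; it only assumes a source object with an items array and a subscribe method:

```javascript
// Project an observable list through a mapping function while
// preserving the stream of change notifications, so views bound
// to the mapped sequence still see adds and removes.
function selectObservable(source, mapFn) {
  return {
    // current contents, transformed element by element
    items: () => source.items.map(mapFn),
    // forward each change event with its item transformed
    subscribe: (callback) =>
      source.subscribe((change) =>
        callback({ action: change.action, item: mapFn(change.item) })
      )
  };
}
```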


    Deploying Code

    Fakebook’s NewsFeed and Person actors were deployed in an unusual way.  For our collection actors, we built the application and then used a complex, actor-specific deployment directory structure, with many supporting files in place to describe each actor.  This has the advantage that our infrastructure knows, in great detail, what a list actor is.  For Fakebook, however, we employed a more flexible approach.  The NewsFeed and Person actors each define a .NET assembly with a type containing a number of actor methods, and we use our empty actor implementation to deploy these assemblies to the respective actors.  This makes it easy to update the implementation, and is a bit easier to manage.

    For the Actor Runtime’s infrastructure, we create a service consisting of the empty actor app, called “fabric:/actor/fakebook”.  We then create instances of that service with names like “fabric:/actor/fakebook/People” and “fabric:/actor/fakebook/Newsfeed”.  Using a partitioning scheme, we then create individual IActorStates for individual accounts.  So an account name like “Bob.Smith” is used as a partition key for the People and Newsfeed actors.  Fakebook makes use of similarly partitioned list and dictionary actors as well.



    The Actor Framework provides for high availability by allowing multiple replicas of an actor to run within the cloud.  If a machine holding one replica crashes, then we already have other replicas to choose from to continue running the service.  The number of replicas is configurable, and for Fakebook we are using a total of two replicas per service.  The Actor Runtime will designate one as the primary and a second as the secondary.  Additional replicas all become secondaries. 

    As you may have read in the Actor Framework documentation, actors achieve a limited transaction-like set of semantics via a quorum commit.  Changes are not committed to the primary until they are first written to a quorum of secondaries.  After that, the changes are committed to the primary and acknowledged to the client. 

    Meanwhile, requests from the client to an actor are automatically idempotent.  Consider a client that makes a request and never receives an acknowledgement.  This could happen for multiple reasons: the server could have crashed before completing the operation, or after completing the operation but before acknowledging it.  Similarly, network connectivity could have been lost.  The Actor Framework solves this by assigning a unique sequence number to every request, and storing the result of the last request in the actor’s replicated state.  By doing this, if a client issues request #5 and loses its network connection, it can reconnect and then safely re-issue request #5.  If the operation already ran, the client will get the previously cached result from actor state.  If the request didn’t successfully complete the first time, the request will now execute.  Importantly, the complexity of handling this is built into the client-side and server-side logic of the Actor Framework, so users do not need to think about it.  They simply call a method.
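    The sequence-number scheme can be sketched as follows. This is a deliberately simplified, single-client illustration in JavaScript (the real framework keeps this bookkeeping in replicated actor state, not a local variable):

```javascript
// Server side: remember the last request's sequence number and result,
// so a re-issued request returns the cached result instead of running
// the operation a second time.
function createIdempotentHandler(operation) {
  let lastSeq = null;
  let lastResult = null;
  return function handle(seq, arg) {
    if (seq === lastSeq) {
      return lastResult; // duplicate request: operation already ran once
    }
    lastResult = operation(arg);
    lastSeq = seq;
    return lastResult;
  };
}
```

    A client that loses its connection after sending request #5 can simply reconnect and resend #5; either the cached result comes back, or the operation runs for the first time, but it never runs twice.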

    One not-yet-implemented feature is persistence.  All data in actors is made highly available via replication across machines, but all that data is stored in memory.  Actor state is not currently persisted to disk or any other network storage mechanism like an Azure blob or table.  One consequence is that if the power goes out to the entire cluster, you lose everything.  This is an active area for future development.


    Scalability & Performance

    A traditional database system might model a user using several tables, with one row for profile information and multiple rows representing friends & images for each person.  Scalability challenges would be hit once you exceed the number of operations possible on an individual database.  (For example, SQL Server’s new Hekaton engine may max out around 60,000 operations/second.)  While databases can go a long way, enterprises need systems capable of scaling much further.  Fakebook is an attempt at building a competing vision for data storage.  While scale-up is important, an actor model with relatively easily separable data, like Fakebook’s, should be ideal for scale-out.  Each actor can maintain its own state in memory, local to that actor.  The hope is that a database acting as a single point of failure is not necessary.

    One common criticism of actor models is that while they are easy to write, they require a significant amount of optimization.  We found the same problem, and needed to invest in multi-tenant actors to improve the scalability.  Performance is still very much a work in progress for the Actor Framework, but we hope to show some of the optimizations we found particularly valuable, as well as some we hope to make in the future.



    One of the challenges is to get the most use out of each machine.  There are a finite number of sockets on each machine.  Similarly, new actors require allocating state and entries in a naming table that can slow things down.  The creation of service instances on the Actor Runtime is unfortunately slow and can gate our performance.  Mapping one actor to one process is a horrible idea, and mapping one actor to shared state within a process can often be insufficient.  To solve this, we looked into partitioned service instances for hosting multi-tenant actors.

    Think of partitioning as creating buckets for actors on various machines.  Those buckets can be empty, or you can fill them with multiple actors.  By doing this, the cost of creating the buckets is amortized over the number of partitions.  Each individual actor is created within its appropriate bucket, and receives its own isolated version of actor state.  See Figure 3.


    Figure 3 - Partitioned List Actors

    This makes actor creation vastly faster, by about three orders of magnitude. 

    Now let’s draw a more complete picture.  The Actor Runtime will use processes on various machines for each service type (like a list service, a dictionary service, our empty actor service, etc).  Each process hosts zero or more service instances (both primaries and secondaries, though let’s ignore secondaries for now).  Processes take a while to spin up and service instances are expensive to create.  In a naïve hosting scenario without partitioning, consider a list service and a dictionary service with many instances, mapping to one collection each.  Here is what will be running on a cluster.  Service instances are in blue below.


    Figure 4 - Non-Partitioned Hosting

    In the picture above, creating new individual collections requires creating new service instances, which is an expensive operation.  With partitioning in the picture, we create fewer service instances. 



    Figure 5 - Partitioned Hosting


    Replication is done in the Actor Framework at the service instance level.  So in this picture, “List A” and “List B” both share the same IActorState in terms of replication.  However, this could lead to conflicts if both lists used a field of the same name, allowing one list to scribble over values from a separate list.  The Actor Framework provides a further level of state isolation (an IsolatedActorState) that is a sub-space within an IActorState.  So partitioned actors share the same replication mechanism & characteristics, but get their own isolated view of their state.

    The mapping from the name of an individual list to its location is also affected by partitioning.  Without partitioning, the Actor Runtime load-balances at the service instance level and allocates instances to machines in a reasonable way.  When using partitioning, however, the mapping from the name of a list to a list partition is done by hashing the name and then mapping the hash onto a range.  For example, a name like “List A” may hash to a value between 1 and 100, and all hash values in the range 1-25 are mapped to list partition 1, 26-50 to list partition 2, and so on.
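    That name-to-partition mapping can be sketched in a few lines. The hash function here is an illustrative stand-in (the actual hash used by the Actor Runtime is not specified in this post):

```javascript
// Map an actor name to one of N partitions by hashing the name
// and folding the hash onto the partition range, so the same name
// always lands in the same bucket.
function partitionFor(name, partitionCount) {
  let hash = 0;
  for (let i = 0; i < name.length; i++) {
    hash = (hash * 31 + name.charCodeAt(i)) >>> 0; // simple string hash
  }
  return hash % partitionCount; // bucket index in [0, partitionCount)
}
```

    With a scheme like this, an account name such as “Bob.Smith” deterministically selects the partition hosting that user’s actors, with no per-actor lookup table required.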

    Fakebook employs partitioning for each of the actors it uses – lists, dictionaries, person actors & news feed actors.  This substantially reduced the cost to create new people.


    Proxy Problems

    Another challenge involves actor-to-actor communication, where we use IActorProxy to represent a communications channel between two actors.  Our initial design required a unique proxy object for every actor-to-actor communication, and cached proxies for quick reuse later.  In a simple example, I envisioned 256 people on Fakebook, each of whom would be a friend of all 255 other people.  However, this approach was flawed: keeping those IActorProxy objects cached required keeping just under 64K sockets open.  We tried developing this on one machine and ran out of sockets! 

    To fix this, we are exploring two approaches – sharing sockets when talking between actor partition buckets, as well as ditching the caching & adding in async support for establishing new actor proxies.  We anticipate this will contribute substantial wins.


    Batching Data Transfers

    One of the most significant ways to improve performance is to reduce the chattiness of your protocols over the network.  As a simple example, one of our Actor Framework performance tests adds one million integers to a CloudList<T> and sorts them.  Adding one million elements one at a time was horribly inefficient.  Instead, we needed to change the protocol to support batching to get a higher level of performance.  Additionally, readers must note that method calls that turn into network calls are significantly less reliable than normal method calls.  These considerations led to a new interface, IListAsync<T>:

    namespace System.Cloud.Collections
    {
        public interface IListAsync<T> : ICollectionAsync<T>
        {
            Task<T> GetItemAsync(int index);
            Task SetItemAsync(int index, T value);
            Task<int> IndexOfAsync(T item);
            Task InsertAsync(int index, T item);
            Task RemoveAtAsync(int index);

            // Less chatty versions
            Task AddAsync(IEnumerable<T> items);
            Task RemoveRangeAsync(int index, int count);
        }
    }
    The AddAsync and RemoveRangeAsync methods will perform much better.  Additionally, since they are async methods, they allow clients to more easily adopt async method calls as a common design pattern, then use continuations to resume execution when those operations have completed.

    A similar change was made to actor methods.  In the example of setting up friendships between multiple people, changing the ForceMakeFriends method to take an array of friends led to a substantial improvement.
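    The payoff of batching can be illustrated with a toy client that counts simulated network round trips. The method names below are hypothetical stand-ins for illustration, not the actual ActorFx API:

```javascript
// Toy actor client: every call to the actor counts as one simulated
// network round trip, to show why batched methods are less chatty.
function createListClient() {
  let roundTrips = 0;
  const items = [];
  return {
    addOne(item) {            // chatty add: one round trip per item
      roundTrips++;
      items.push(item);
    },
    addBatch(batch) {         // batched add: one round trip for N items
      roundTrips++;
      items.push(...batch);
    },
    stats: () => ({ roundTrips, count: items.length })
  };
}
```

    Adding 100 items one at a time costs 100 round trips; the batched call moves the same data in one, which is the effect the AddAsync and array-taking ForceMakeFriends changes exploit.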

    More details on how to run the Fakebook example can be found in the CodePlex site, and we are continuing work on the v0.60 release.   Subscribe to the CodePlex feed and our Blog to be kept up to date on the latest developments.


    For those of you who are using ActorFx in implementations, we’d always like to hear more about how it’s useful to you and how it can be improved. We are looking forward to your comments/suggestions, and stay tuned for more cool stuff coming in our next release!

  • Interoperability @ Microsoft

    //build/ today with open source frameworks on Windows Phone 8


    Added support for Windows Phone 8 in Apache Cordova, Sencha Touch, Cocos2D, Ogre3D and other open source frameworks.

    The cool news for developers keeps on rolling at //build/ 2012. We’re thrilled to relay announcements from a broad range of open source communities that their support for Windows Phone 8, alongside that of other partners, goes live on “Day 1” of the SDK availability. There are several open source frameworks to choose from today.

    The Windows Phone team and Microsoft Open Technologies, Inc. engaged early in the process with open source communities to enable Windows Phone 8 in these popular open source and cross platform frameworks. We provided technical support and information, gave early access to the tools and MS Open Tech contributed code to the Cocos 2D and Ogre3D projects.

    The market opportunity just got bigger and easier for all developers with this news. We believe it is important that developers have choices and can reuse their skills and code to build Windows Phone 8 applications.

    This added support for Windows Phone 8 in diverse open source and cross platform frameworks was made possible thanks to new features in Windows Phone 8: native C++ programming and Internet Explorer 10’s expanded HTML5 support.

    Developers who have applications based on these frameworks can publish them to the Windows Phone Store in record time. And this applies to various domains, like gaming with C++ or C# frameworks such as Cocos 2D, Ogre 3D and SharpDX, or cross platform development with HTML5 and JavaScript leveraging Apache Cordova, Trigger.io, Sencha Touch or jQuery Mobile. Developers using popular open source tools and frameworks such as SQLite or GalaSoft MVVM toolkit will also be able to reuse their code and skills.

    “Nearly 50% of Sencha customers have expressed interest in building apps for Windows Phone 8 in the next 6-12 months. Supporting Windows Phone 8 is a natural choice for Sencha to enable our customers to build universal apps for mobile devices.” - Abraham Elias, CTO Sencha Inc.

    Jay Garcia, CTO at Modus Create, and his team are developing a mobile companion application for the game Diablo III:
    “Using Blizzard’s Diablo III web APIs in combination with PhoneGap and Sencha Touch, we were able to hugely increase the game’s fan base because we could build and publish our application to both iOS and Android with the same HTML5 and JavaScript code base. It literally took us a few days to get the same code to run on Windows Phone 8 thanks to this newly added support.”

    You can read more about Modus Create’s work to migrate their application to Windows Phone 8 in their blog post.

    Craig Walker, CTO at Xero commented on the new support for Windows Phone 8 in Sencha Touch:
    “Using web standards-based technologies such as Sencha Touch and Apache Cordova for our mobile accounting software application Xero Touch helped us target a wide range of platforms so our customers could focus on their business, not the underlying technology. Support for these technologies in Windows Phone 8 tools made it an easy Xero Touch build for our dev team, and a smart addition for our customers who need flexibility managing their business on the go.”

    Microsoft Open Technologies, Inc., supported the jQuery Mobile and Sencha Touch communities to deliver themes that will allow developers to integrate their applications into the Windows Phone 8 user experience.

    As Craig Walker from Xero stresses, it is crucial for developers to be able to deliver a seamless consumer experience integrated into the platform. You can see below a video demonstrating the Sencha Touch theme for Windows Phone 8.

    Brett Nagy, Technical Director at Microgroove, and his team got a chance to try the Windows Phone 8 tools and the early Sencha Touch support for Windows Phone 8:
    “Our apps have been making companies more productive for well over a decade. Sencha Touch support for Windows Phone 8 has made our engineering team more productive by allowing us to easily re-use code from one mobile platform to another.
    Within a couple of hours, we had a basic Windows Phone 8 themed version of an existing app without requiring any changes to its JavaScript codebase. Now that producing builds that run on Windows Phone 8 is part of our regular workflow, the next step is to build out functionality that really takes advantage of that platform. Knowing that we can do that in HTML + JS allows us to extend our reach beyond iOS and Android with minimal change to our project timelines.”

    For developers using jQuery Mobile, Sergey Grebnov from Akvelon, who previously published a jQuery Mobile theme for Windows Phone 7.5, is releasing a new jQuery Mobile theme for Windows Phone 8. You can see below a short demo of how to apply the theme to a Windows Phone 8 application.

    This is the first time so many open source and cross platform frameworks are on board with Windows Phone on the first day of a new SDK version release. It is great to see how eager these communities are to work with Windows Phone.

    And today is just the beginning. We want to continue this effort to help open source developers enable their frameworks on Windows Phone 8. It’s important for developers to reuse their skills, expand the market opportunity to make money on our devices, and build the next generation of apps. Imagine the possibilities.

    Go check out the various frameworks and let us know if you think of other ones you would love to be able to use to build Windows Phone 8 applications.

  • Interoperability @ Microsoft

    Prototyping Early W3C HTML5 Specifications


    Today we launched the HTML5 Labs Web site, a place where we prototype early, not yet fully stable drafts of specifications developed by the W3C and other standards organizations. 

    These prototypes will help us have informed discussions with developer communities, and give implementation experience with the draft specifications that will generate feedback to improve the eventual standards. It also lets us give the community some visibility on those specifications we consider interesting from a scenario point of view, but which are still not at the stage where we can consider them ready for official product support.

    Microsoft's approach with Internet Explorer as outlined in a blog post by Dean Hachamovitch, the Corporate Vice President for Internet Explorer, is to implement standards as they become site-ready for broader adoption.

    Writing Sites to IE Based on Stable HTML5

    For developers, this means that they can write sites to Internet Explorer and be confident that it is based on stable HTML5 and will work in future browser upgrades.  For users, it means that sites continue to work as they upgrade their browsers and they don't get locked in to older browsers.

    At the same time, Microsoft sees an important need in continuing to drive experimentation and testing of new specifications in the standards organizations. It is part of the process of ensuring that specifications are actually ready for real-world usage.

    This new HTML5 Labs Web site is the place where our Interoperability Labs will publish prototype implementations of certain unstable and in-progress W3C, IETF, ECMA and other standards specifications still undergoing a lot of change. So, developers should expect that code and web pages based on these prototypes will have to be re-written as the specifications mature.

    First Prototypes: WebSockets and IndexedDB

    The first two prototypes we are delivering today are WebSockets and IndexedDB.

    WebSockets is a technology designed to simplify much of the complexity around bi-directional, full-duplex communications channels, over a single Transmission Control Protocol (TCP) socket. It can be implemented in web browsers, web servers as well as used by any client or server application. The WebSocket API is currently being standardized by the W3C and the WebSocket protocol is being standardized by the IETF.

    For its part, IndexedDB is a developing W3C Web standard for the storage of large amounts of structured data in the browser, as well as for high performance searches on this data using indexes. IndexedDB can be used for browser implemented functions like bookmarks, as well as for web applications like email. IndexedDB also enables offline scenarios where the browser might be disconnected from the Internet or server.

    We chose these two specifications because they are the ones we currently believe the community stands to benefit from the most, yet both are potentially very useful while still unstable and in flux. 

    The details of the HyBi protocol underlying WebSockets are being hotly debated in IETF right now, and the IndexedDB spec will soon be updated to reflect decisions made at a recent W3C working group meeting.

    A Call to Action

    So please experiment with these prototypes and tell us and other working group participants whether the APIs are usable. We are making them available to help improve the final specifications. 

    Other implementers can use these prototypes to determine whether we have interpreted the specifications in the same way, and a larger audience can get a better sense of what potential will be unlocked when these specifications have stabilized into interoperable implemented standards.

    Also, please participate in the appropriate standards bodies to help finalize the specifications.

    Many thanks,

    Jean Paoli

    GM: Interoperability Strategy

  • Interoperability @ Microsoft

    Open source release from MS Open Tech: Pointer Events initial prototype for WebKit



    Adalberto Foresti, Principal Program Manager, Microsoft Open Technologies, Inc.
    Scott Blomquist, Senior Development Engineer, Microsoft Open Technologies, Inc.

    It’s great to see that the W3C Pointer Events Working Group has expanded its membership and published its first working draft last week, as part of the process of standardizing a single input model across all types of devices. To further contribute to the technical discussions, today Microsoft Open Technologies, Inc., published an early open source HTML5 Labs Pointer Events prototype of the W3C Working Draft for WebKit. We want to work with the WebKit developer community to enhance this prototype. Over time, we want this prototype to implement all the features that will be defined by the W3C Working Group’s Pointer Events specification. The prototype will also help with interoperability testing with Internet Explorer.

    The Web today is fragmented into sites designed for only one type of input. The goal of a Pointer Events standard is to let Web developers code to a single pointer input model across all types of devices, and to have that code work across multiple browsers. Google, Microsoft, Mozilla, Nokia and Zynga are among the industry members working to solve this problem in the W3C Pointer Events WG.
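    A small sketch of what coding to a single input model looks like: one pointerdown handler covers mouse, touch and pen, instead of three separate event families. The element and handler names are illustrative, and the event's pointerType field is what identifies the input device per the working draft:

```javascript
// One handler for all input types: the pointer event's pointerType
// field reports whether it came from a "mouse", "touch", or "pen".
function attachUnifiedInput(element, onPoint) {
  element.addEventListener("pointerdown", function (e) {
    onPoint(e.clientX, e.clientY, e.pointerType);
  });
}
```

    Without a unified model, the same behavior would need parallel mousedown and touchstart handlers with device-specific coordinate handling.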

    Microsoft submitted the Pointer Events specification to the W3C just three months ago. The working group is using Microsoft’s Member submission as a starting point for the specification, which is based on the APIs available today in IE10 on Windows 8 and Windows Phone 8.

    Our team developed this Pointer Events prototype of the W3C Working Draft for WebKit as a starting point for testing interoperability between Internet Explorer and WebKit in this space. As we have done in the past on HTML5 Labs, the prototype intends to inform discussions and provide information grounded on implementation experience. Please provide feedback on this initial implementation in the comments of this blog and in the WebKit mailing lists. We also would love to get some advice on how/when to submit this patch to the main WebKit trunk.

    Overall, we believe that we are on a solid path forward in this standardization process. In a short time, we have a productive working group, a first W3C Working Draft specification, and an early proof of concept for WebKit that should provide valuable insights. We’re looking forward to working closely with the community to develop this open source code in WebKit so we can start testing interoperability with Internet Explorer.

  • Interoperability @ Microsoft

    Full Support for PhoneGap on Windows Phone is Now Complete!


    Congratulations to all the people involved in the PhoneGap community for the recent release of version 1.3 of their HTML5 open source mobile framework.

    This release includes many new features, and you can find more details here. You may remember that we announced back in September that Microsoft was helping to bring Windows Phone support to PhoneGap: I am happy to say we can now check this box!

    We’re also pleased to note that all features in PhoneGap 1.3 are now supported for Windows Phone, as you can see on their site here.

    Also, beyond the core PhoneGap features, developers can enjoy a selection of PhoneGap plugins that support social networks - including Facebook, LinkedIn, Windows Live and Twitter - and a solid integration into Visual Studio Express for Windows Phone.

    We have also developed further plugins to give HTML5 developers a feel for Windows Phone’s unique features like Live Tile Update and Bing Maps Search.

    Please check out the blog of Jesse MacFadyen, PhoneGap’s dev lead, on his experiences developing PhoneGap on Windows Phone.

    For more technical details of using the framework, see Glen and Jesse’s technical walk-through blogs. For a quick spin of what PhoneGap and Visual Studio allow you to do, see this WP7 and Android camera app created in 3 minutes! Bits are located here; plugins are here.

    Looking ahead:

    As mentioned in PhoneGap’s announcement blog post, the next PhoneGap 1.4 release will be from the Cordova incubation project at Apache.  We at Microsoft are proud to be members of this project and to offer technical resources.  We welcome the involvement of Adobe, IBM and RIM and look forward to collaboratively growing PhoneGap at its new home in Apache while helping evolve an open web for any device.

    Microsoft’s commitment to HTML5 in IE9 has been instrumental in achieving this level of support. We are also building on our HTML5 investment through initiatives like bringing jQuery Mobile support, as we outlined a few weeks ago. Partnering with open source communities to bring this level of openness continues to be an important goal here at Microsoft.

    So, stay tuned for more news on our support for popular mobile open source frameworks on WP7.5!

    Abu Obeida Bakhach

    Interoperability Strategy Program Manager

  • Interoperability @ Microsoft

    First Stable Build of Node.js on Windows Released


    Great news for all Node.js developers wanting to use Windows: today we reached an important milestone, v0.6.0, the first official stable build that includes Windows support.

    This comes some four months after our June 23rd announcement that Microsoft was working with Joyent to port Node.js to Windows. Since then we’ve been heads down writing code.

    Those developers who have been following our progress on GitHub know that there have been Node.js builds with Windows support for a while, but today we reached the all-important v0.6.0 milestone.

    This accomplishment is the result of a great collaboration with Joyent and its team of developers. With the dedicated team of Igor Zinkovsky, Bert Belder and Ben Noordhuis under the leadership of Ryan Dahl, we were able to implement all the features that let Node.js run natively on Windows.

    And, while we were busy making the core Node.js runtime run on Windows, the Azure team was working on iisnode to enable Node.js to be hosted in IIS. Among other benefits, native Windows support gave Node.js significant performance improvements, as reported by Ryan on the Node.js.org blog.
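    The practical upshot is that the canonical Node.js example now runs unchanged on Windows. A minimal sketch (the port choice is arbitrary; 0 asks the OS for any free port):

    ```javascript
    // Minimal Node.js HTTP server - the same code now runs unchanged
    // on Windows, Linux and macOS.
    var http = require('http');

    function handler(req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from Node.js on Windows\n');
    }

    var server = http.createServer(handler);
    server.listen(0, function () {
      console.log('Server listening on port ' + server.address().port);
    });
    server.unref(); // don't keep the process alive just for this example
    ```

    The same script, started with `node server.js`, behaves identically whether launched from cmd.exe or a Unix shell.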

    Node.js developers on Windows will also be able to rely on NPM to install the modules they need for their applications. Isaac Schlueter from the Joyent team is currently working on porting NPM to Windows, and an early experimental version is already available on GitHub. The good news is that soon we’ll have a stable build integrated into the Node.js installer for Windows.

    So stay tuned for more news on this front.

    Claudio Caldato,

    Principal Program Manager, Interoperability Strategy Team


  • Interoperability @ Microsoft

    New Office Documentation Now Publicly Available


    By Paul Lorimer, Group Manager, Microsoft Office Interoperability 

    [UPDATE: 05/24/2010, Two open source projects to facilitate interoperability with Outlook .pst data files]

    Customers of Microsoft Office often work in heterogeneous IT environments, so their need for interoperability among disparate business systems is critical. They expect their trusted vendors to work together to make it happen, and Microsoft has demonstrated a strong commitment to the pursuit of interoperability: collaborating with industry players, providing open access and transparency when it comes to our intellectual property in this area, participating in the creation and maintenance of industry standards, and building our products in a way that makes interoperating with them easier by design.

    Ever since we released Microsoft Office 2007 SP2, we have been releasing tools and publishing tens of thousands of pages of documentation to help developers (including competitors) interoperate with the various products in the Office suite. Today we took another big step in our commitment to open access and transparency, delivering some highly anticipated documentation we’ve promised over the past year or so:

    • More documentation for Microsoft Office 2010. In July 2009, when Office 2010 was still in technical preview, we published thousands of pages of detailed technical documentation for the protocols used by our products to communicate with Office 2010. This enabled third parties to develop software that interoperates with Office 2010, informed by how other Microsoft products do so, while Office 2010 was still several months away from broad release. Today, as promised, we added thousands more pages to the canon. The addition of this new documentation will help other vendors bring interoperable products to market faster, increasing customer choice and satisfaction and driving better business results.
    • Brand new documentation for Microsoft Outlook files. Data portability is increasingly important for our customers and partners as more information is stored and shared in digital formats. One particular request we’ve heard is for improved access to email, calendar, contacts, and other data generated by Microsoft Outlook. On desktops, this data is stored in Outlook Personal Folders, in a format called a .pst file. Last fall we promised to release documentation that would make it easier for developers to read, create, and interoperate with the data in .pst files across a variety of platforms, using the programming language of their choice. After seeking input on the documentation from the community, today we delivered on that promise (here's the link to the documentation on MSDN: http://msdn.microsoft.com/en-us/library/ff385210.aspx).

    This kind of transparency provides exciting possibilities for our customers and partners. We’re proud of the work we’ve done in this area, and you can count on Microsoft to continue its vigorous pursuit of interoperability through a comprehensive approach, of which transparency is one of the keystones.

    Paul Lorimer, Group Manager, Microsoft Office Interoperability


  • Interoperability @ Microsoft

    jQuery Adds Support for Windows Store Apps, Creates New Opportunities for JavaScript Open Source Developers


    The popular open source JavaScript Web framework jQuery is adding full support for Windows Store applications in the upcoming v2.0 release, thanks to recent contributions from appendTo with technical support from Microsoft Open Technologies, Inc. (MS Open Tech). Considering the opportunity Windows Store apps represent for developers, this is great news for JavaScript developers, who can now develop apps for Windows 8 using what they already know along with their existing JavaScript code, hopefully leading to a new wave of jQuery-based Windows Store applications.

    The Windows 8 application platform introduced support for HTML5 and JavaScript development, leveraging the same standards-based HTML5 and JavaScript engines as Internet Explorer. As developers would expect, some popular open source JavaScript frameworks can already be used in the context of a Windows Store application, such as Backbone.js, Knockout.js and YUI. You can learn more about how to build a Windows 8 app with YUI in this YUI blog post from Jeff Burtoft, HTML5 evangelist for Microsoft.

    Windows 8 provides access to all the WinRT APIs within the HTML5 development environment. Developers should be aware that there are some additional security features to consider when developing Windows 8 applications or HTML5-based cross platform applications for Windows. You can learn more about these features on MSDN.

    jQuery paves the way for open source JavaScript frameworks use in Windows Store applications

    According to the builtwith.com site, jQuery is the most widely used JavaScript framework on the Web. This makes it even more exciting that jQuery 2.0 will fully support Windows Store applications, as this will benefit developers who already use jQuery and also demonstrates how other JavaScript frameworks can be integrated into the Windows 8 application model.

    “The jQuery team is excited about the new environments where jQuery 2.0 can be used. HTML and JavaScript developers want to take their jQuery knowledge with them to streamline the development process wherever they work. jQuery 2.0 gives them the ability to do that in Windows 8 Store applications. We appreciate the help from appendTo for both its patches and testing of jQuery 2.0 and MS Open Tech for its technical support.” — Dave Methvin, president, jQuery Foundation

    appendTo, long-time JavaScript and Web development experts and jQuery contributors, extended its expertise to Windows 8 application development, working with the jQuery community with technical support from MS Open Tech to enable jQuery support for the Windows 8 application model.

    While jQuery meets the language criterion for Windows Store applications, Windows 8 exposes all the WinRT APIs within the HTML5 development environment, which comes with a new security model under which some common jQuery code and practices are flagged as unsafe in the context of a Windows Store application. “appendTo reviewed and re-authored portions of jQuery core to bring it into alignment with the Windows security model, as well as identified key areas where alternative patterns would need to be substituted for actually-used conventions,” said Jonathan Sampson, director of support at appendTo.

    appendTo submitted code directly to the jQuery Core project, which will integrate this support, and the alternative patterns mentioned by Sampson were submitted to the net.tuts+ site to help jQuery developers understand the Windows 8 security model and easily build Windows 8 applications using jQuery. You can read appendTo’s blog post with more details on this work.

    Although these patterns apply to the jQuery framework, most of them transfer to other JavaScript frameworks and will definitely help if you are planning to use your favorite open source JavaScript framework to build Windows 8 applications.
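    As a hedged illustration of the kind of substitution involved (this is not appendTo's actual patch - their changes live in jQuery core and the net.tuts+ articles): the Windows Store security model can reject injecting raw markup strings that could carry script, and a common alternative is to build DOM nodes and assign text instead of HTML.

    ```javascript
    // Illustrative only - not appendTo's actual patch. In a Windows Store app,
    // an idiom like  $('#log').html('<b>' + userText + '</b>')  can be flagged
    // as unsafe because the markup string could carry script. Building nodes
    // and assigning text avoids the dynamic-HTML check entirely.
    function appendLogEntry(container, userText) {
      var entry = document.createElement('b');
      entry.textContent = userText;   // text assignment is never parsed as markup
      container.appendChild(entry);
      return entry;
    }
    ```

    The same idea generalizes: wherever a framework concatenates strings into markup, a node-construction path keeps the Windows security model happy.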

    Mobile cross platform development frameworks and tools

    HTML5 is now supported on all modern mobile platforms, and open source tools such as Apache Cordova (aka PhoneGap) allow developers to publish applications built with HTML5 and JavaScript to multiple platforms with minimal effort and maximum code reuse. As in all HTML5/JavaScript development, developers love being able to use their favorite frameworks, whether to help with their MVC model, database, UI or simply their JavaScript code structure.

    Developers can already use some of these mobile cross-platform development frameworks and tools on Microsoft devices, as we mentioned in a previous post about Windows Phone 8 support added to popular open source tools and frameworks. MS Open Tech continuously engages with open source communities (contributing code, providing technical support, getting developers early access to future versions of the platforms, helping with test devices, etc.), and we’ve found that developers are eager to publish their HTML5 apps to the Windows 8 and Windows Phone 8 Stores.

    "At HP IT, we use Enyo to build apps for conference attendees. Our Enyo-based conference apps deliver a first-class user experience on Windows 8 and Windows Phone 8 — not to mention iOS, Android and a host of other platforms. The ability to serve users across platforms and device types with a single app is a huge win for us." — Sharad Mathur, senior director, Software Architecture & Business Intelligence, Printing & Personal Systems, HP IT

    Here are some recent notable developments in HTML5 mobile cross platform development:

    If you are an HTML5 and JavaScript developer, you should definitely consider building Windows 8 applications, leveraging not only your development experience and skills but also your existing JavaScript code and libraries. Take a look at the new jQuery patterns proposed by appendTo, and start coding for Windows — who knows, you might be sitting on the next Cut the Rope!

  • Interoperability @ Microsoft

    New bridge broadens Java and .NET interoperability


    [Update: more details from Noelios Technologies as well as a complete tutorial]

    Much of the work that we have collaborated on in the past several months has been centered around PHP, but rest assured we have been focused on other technologies as well. Take Java, for example. A big congratulations goes out this week to Noelios Technologies, which just released a new bridge for Java and .NET.

    Noelios Technologies is shipping a new version of the Restlet open source project, a lightweight REST framework for Java, that includes the Restlet Extension for ADO.NET Data Services. The extension makes it easier for Java developers to take advantage of ADO.NET Data Services.

    Microsoft collaborated with the France-based consulting services firm and provided funding to build this extension to the Restlet Framework. It’s always very exciting for me, as a French citizen living in the United States, to witness French companies like Noelios collaborating with Microsoft to develop new scenarios and bridges between different technologies. Noelios specializes in Web technologies like RESTful Web, Mobile Web, cloud computing, and Semantic Web, and offers commercial licenses and technical support plans for the Restlet Framework to customers around the world.

    ADO.NET puts data sources within reach

    For those who are relatively new to ADO.NET Data Services, it is a set of recently added .NET Framework features that provides a simple way to expose a wide range of data sources, such as relational databases, XML files, and so on, through a RESTful service interface. Formerly known as “Project Astoria,” ADO.NET Data Services defines a flexible addressing and query interface using a URL convention, and supports the usual resource manipulation methods for data sources, including the full range of Create, Read, Update, and Delete operations.
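    The addressing convention is easy to picture. Here is a hedged sketch in JavaScript showing how a query URL is composed from `$filter`/`$orderby`/`$top` options; the service root and entity set names are hypothetical examples, while the `$`-prefixed query options follow the ADO.NET Data Services URL convention.

    ```javascript
    // Sketch of the URL addressing convention used by ADO.NET Data Services.
    // The service root and entity set below are hypothetical examples.
    function buildQueryUrl(serviceRoot, entitySet, options) {
      var parts = [];
      if (options.filter)  parts.push('$filter='  + encodeURIComponent(options.filter));
      if (options.orderby) parts.push('$orderby=' + encodeURIComponent(options.orderby));
      if (options.top)     parts.push('$top='     + options.top);
      var url = serviceRoot + '/' + entitySet;
      return parts.length ? url + '?' + parts.join('&') : url;
    }

    // buildQueryUrl('http://example.com/northwind.svc', 'Customers',
    //               { filter: "Country eq 'France'", top: 10 })
    // -> "http://example.com/northwind.svc/Customers?$filter=Country%20eq%20'France'&$top=10"
    ```

    Any HTTP client on any platform can issue such a GET and receive the entity data back, which is exactly what makes the Restlet Extension possible.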

    Microsoft Visual Studio 2008 SP1 and the upcoming Visual Studio 2010 fully support ADO.NET Data Services, including the capability to create and consume data services directly from the development environment. If you want more information about ADO.NET Data Services, look here. I recommend the “How do I…” videos; the links are located on the right side of the page.

    A closer look at the Restlet Extension architecture

    The Restlet Extension for ADO.NET Data Services provides a high-level client API that extends the Restlet Framework’s core capability by providing access to remote data services that are hosted on ASP.NET servers or the Windows Azure cloud computing platform.

    Java developers use the extension’s code generator to create Java classes that correspond to data entities exposed through ADO.NET Data Services. The Java application is then able to access the data via a simple method call. The runtime components in the Restlet engine and the extension take care of the communication between the Java client application and ADO.NET Data Services.


    REST makes it all possible

    The Restlet Extension project is a great example of the infinite possibilities that REST affords. Java developers using the Restlet Extension for ADO.NET Data Services can now connect their applications to a .NET platform with relative ease, which means more choices for Java developers and new opportunities for Microsoft.

    Looking beyond just the Java-Microsoft bridge, REST is a truly compelling architecture model for enabling interoperability between all kinds of different platforms, regardless of whether the applications run on premises or in the cloud. We’ve recently presented several scenarios that leverage REST (“Viewing government data with Windows Azure and PHP: a cloud operability scenario using REST,” and “A new bridge for PHP developers to .NET through REST: Toolkit for PHP with ADO.NET Data Services”), and we plan to continue sharing similar scenarios between various technologies.

    A big thanks to Stève Sfartz, Jerome Louvel and Thierry Boileau

    A very big thanks goes out to my French colleague Stève Sfartz in the DPE Division at Microsoft France. Stève was instrumental in initiating and driving the collaboration during the Restlet Extension project. He has been working for quite some time with Noelios Technologies cofounders Jerome Louvel and Thierry Boileau, using the Restlet Framework to illustrate interoperability scenarios between Java and Microsoft technologies using REST.

    If you’re interested in being part of or contributing to the Restlet community, visit www.restlet.org/community/.

    And if you want more information about Java interoperability, take a look at the list of Java-Microsoft interoperability projects at www.interoperabilitybridges.com/projects/tag/Java.aspx. It includes Apache POI (OpenXML Java API), Apache Stonehenge (practical SOA/Web services interoperability across platforms), Azure .NET Services SDK for Java, and Eclipse Tools for Silverlight.

    The Restlet Extension for ADO.NET Data Services represents yet another bridge added to our growing list of interoperability solutions, and we are very happy about this!

    —Jean-Christophe Cimetiere, Sr. Technical Evangelist

  • Interoperability @ Microsoft

    Announcing PHP SDK for Windows Azure… and much more!


    I’ve just arrived at TechEd India, where I’m going to talk about interoperability in my sessions “Build Mission Critical Applications on the Microsoft Platform Using Eclipse, Java & Ruby” and “Developing PHP Applications using Microsoft Software & Services”. In addition to presenting the ongoing activities that Microsoft is driving to strengthen interoperability, I’m excited to be able to demo a new set of interoperability projects related to PHP. I’m going to give you a glimpse of these projects in this post for those who are unable to join us in India.

    The first PHP interoperability bridge that we’re announcing is the PHP SDK for Windows Azure. This SDK is the result of an open source development project by RealDolmen, for which Microsoft is providing funding. I’d like to personally thank Maarten Balliauw of RealDolmen for his work on the project. The goal of the SDK is to provide high-level abstractions that enable PHP developers to interoperate readily with Windows Azure.

    Keep in mind that the Azure Services Platform has been designed to be open, standards-based and interoperable.

    The Azure Services Platform’s support for XML, REST and SOAP standards means that any of the Azure services can be called from other platforms and programming languages. To facilitate interoperability between the Azure Services Platform and non-Microsoft languages and technologies, Microsoft has provided funding for two other SDK projects that support third-party programming languages: the Java SDK for Microsoft .NET Services and the Ruby SDK for Microsoft .NET Services.

    The PHP SDK for Windows Azure focuses on REST and provides the following core features:

    • PHP classes for Windows Azure blobs, tables & queues
    • Helper Classes for HTTP transport, AuthN/AuthZ, REST & error management
    • Manageability, instrumentation & logging support


    Windows Azure is the foundation of the Azure Services Platform, and it includes the services hosting environment for the platform. At MIX 2009, Microsoft announced the inclusion of FastCGI in Windows Azure’s hosting environment. The FastCGI protocol enables developers to run web applications on Windows Azure that were written using third-party programming languages, including PHP. This opens up new options for PHP developers to deploy their applications. For example, in the context of the PHP SDK for Windows Azure, you have the following two options for deploying your PHP web applications:


    A Technology Preview of the PHP SDK for Windows Azure will be released by RealDolmen under a “BSD” license. This version of the SDK supports interoperability with Windows Azure blob storage. A functionally complete version of the SDK - additionally supporting tables and queues - is expected to be available from the project download site by the fall of 2009. Of course, you're welcome to try out the SDK and provide suggestions and feedback by joining the user forum.

    The second announcement I’m excited to make is the launch of a series of third-party projects that offer samples and toolkits enabling PHP developers to easily include the following Microsoft technologies in their web applications:


    Features for PHP developers:

    • Embedding Silverlight in PHP – Include Silverlight controls in PHP web applications
    • Web Slices and Accelerators in PHP – Include IE Web Slices & Accelerators in PHP web applications
    • SQL CRUD Application Wizard for PHP – Automatically generate a simple “Create, Read, Update, Delete” (CRUD) PHP application from a table in SQL Server
    • Virtual Earth Integration Kit for PHP – Include Microsoft Virtual Earth maps in PHP web applications

    Microsoft is providing funding for a series of projects, the first batch of which has been developed by Accenture. These third-party projects are available on Codeplex.com under a BSD license:

    More to come; stay tuned, and once again I encourage you to take a look. Feedback is very welcome.

    Vijay Rajagopalan, Principal Architect, Microsoft Corp.

  • Interoperability @ Microsoft

    A new bridge for PHP developers to .NET through REST: Toolkit for PHP with ADO.NET Data Services


    [Update - March 16, 2010: the toolkit is now called "OData SDK for PHP", and "ADO.NET Data Services" is now called "WCF Data Services". Check related posts on OData]

    Today, I’m excited to announce that we are releasing a new project that bridges PHP and .NET. More precisely, we are releasing the Toolkit for PHP with ADO.NET Data Services, which makes it easier for PHP developers to take advantage of ADO.NET Data Services, a set of features recently added to the .NET Framework. ADO.NET Data Services offers a simple way to expose any sort of data in a RESTful way. The Toolkit for PHP with ADO.NET Data Services is an open source project funded by Microsoft and developed by Persistent Systems Ltd., and it is available today on Codeplex: phpdataservices.codeplex.com

    You can see an overview and quick demo of the toolkit in the following Channel9 video with Pablo Castro (software architect of ADO.NET Data Services) and me:


    A little bit more about ADO.NET Data Services

    ADO.NET Data Services (formerly known as Project “Astoria”) is a technology used to expose a wide range of data sources through a RESTful service interface. Data sources can be relational databases, XML files, and so on. ADO.NET Data Services defines a flexible addressing and query interface using a URL convention, as well as the usual resource manipulation methods on data sources (it supports the full range of Create/Read/Update/Delete operations).

    There is full support for ADO.NET Data Services in Visual Studio 2008 SP1 as well as in the upcoming Visual Studio 2010; this includes support for both creating and consuming data services directly from the development environment. You can find more information about ADO.NET Data Services here (I recommend the “How do I…” videos).

    Architecture of the Toolkit for PHP with ADO.NET Data Services

    You should consider two aspects of the PHP Toolkit:

    • At design time: the PHP Toolkit generates proxy classes based on the metadata exposed by the ADO.NET Data Service (built with Visual Studio, including the Express editions).
    • At run time: your code calls the PHP proxy classes, so you can easily program against the ADO.NET Data Service using a set of local PHP classes that represent the structure of the remote data. The PHP proxy classes and the Toolkit libraries take care of the communication between the PHP application and ADO.NET Data Services, using RESTful services over HTTP - but of course you can look at (or edit) this code.


    Running the Toolkit for PHP with ADO.NET Data Services step by step

    In the following steps, we assume that you have already created an ADO.NET Data Service on top of the Northwind sample SQL Server database (check this “How do I…” video). The service I created exposes data like this, through a simple URL:


    The next step is to use the PHPDataSvcUtil.php utility that is part of the toolkit, and point it to the URL of the Data Service. It will read the Data Service metadata and create the PHP proxy classes (called northwinddb.php in our example):


    The code generated (northwinddb.php) looks like this:


    At runtime, you simply include in your code the northwinddb.php file and the URL of the data service:


    And then you can start writing your PHP code to access the data collections. Note the first highlighted line: it defines the query over the data service. Many options are available; the full description of the query format can be found here.


    And here is the result:


    I hope you enjoyed this quick introduction to the Toolkit for PHP with ADO.NET Data Services. Feel free to check the project site on Codeplex: phpadodataservices.codeplex.com. As always, your feedback is welcome!

    Claudio Caldato, Senior Program Manager,
    Interoperability Technical Strategy team.

  • Interoperability @ Microsoft

    Introducing the WebSockets Prototype


    As we launch our new HTML5 Labs today, this is one of two guest blogs about the first two HTML5 prototypes. It is written by Tomasz Janczuk, a Principal Development Lead in Microsoft’s Business Platform Division.

    In my blog post from last summer I wrote about a prototype .NET implementation of two drafts of the WebSockets protocol specification - draft-hixie-thewebsocketprotocol-75 and  draft-hixie-thewebsocketprotocol-76 - making their way through the IETF at that time.

    Since then, there have been a number of revisions to the protocol specification, and it is time to revisit the topic. Given the substantial demand for code to experiment with, we are sharing the Windows Communication Foundation server and Silverlight client prototype implementation of one of the latest proposed drafts of the WebSockets protocol: draft-montenegro-hybi-upgrade-hello-handshake-00.

    You can read more about the effort and download the .NET prototype code at the new HTML5 Labs site.

    What is WebSockets?

    WebSockets is one of the HTML5 working specifications, driven by the IETF, that defines a duplex communication protocol for use between web browsers and servers. The protocol enables applications to exchange messages between the client and the server with communication characteristics that cannot be met by the HTTP protocol.

    In particular, the protocol enables the server to send messages to the client at any time after the WebSockets connection has been established and without the HTTP protocol overhead. This contrasts WebSockets with technologies based on the HTTP long polling mechanism available today.

    For this early WebSockets prototype we are using a Silverlight plug-in on the client and a WCF service on the server. In the future, you may see HTML5 Labs using a variety of other technologies.
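    From the page's point of view, the prototype behaves like a socket object. Here is a hedged sketch of client code; the constructor is passed in as a parameter because the WebSocketDraft object is supplied by the prototype's jQuery extension at runtime, and the endpoint URL is a hypothetical example (note the 4502-4534 port range required by Silverlight's socket security policy, listed in the restrictions below).

    ```javascript
    // Hedged sketch of using a WebSocket-style object exposed by the prototype.
    // The constructor is injected so the sketch works with any implementation;
    // the endpoint URL is a hypothetical example.
    function connectChat(WebSocketImpl, url, onPush) {
      var socket = new WebSocketImpl(url);
      socket.onopen = function () {
        socket.send('hello');            // client -> server, at any time
      };
      socket.onmessage = function (event) {
        onPush(event.data);              // server -> client, no HTTP polling
      };
      return socket;
    }

    // e.g. connectChat(WebSocketDraft, 'ws://example.com:4502/chat', handler);
    ```

    The key point is the last callback: the server can push a message whenever it likes, with no pending HTTP request to piggyback on.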

    What are we making available?

    Along with the downloadable .NET prototype implementation of the WebSocket proposed draft-montenegro-hybi-upgrade-hello-handshake specification, we are also hosting a sample web chat application based on that prototype in Windows Azure here. The sample web chat application demonstrates the following components of the prototype:  

    1. The server side of the WebSocket protocol implemented using Windows Communication Foundation from .NET Framework 4. The WCF endpoint the sample application communicates with implements the draft WebSocket proposal.
    2. The client side prototype implementation consisting of two components:
      1. A Silverlight 4 application that implements the same draft of the WebSocket protocol specification.
      2. A jQuery extension that dynamically adds the Silverlight 4 application above to the page and creates a set of JavaScript WebSocketDraft APIs that delegate their functionality to the Silverlight application using the HTML bridge feature of Silverlight.

    The downloadable package contains a .NET prototype implementation consisting of the following components:

    1. A WCF 4.0 server side binding implementation of the WebSocket specification draft.
    2. A prototype of the server side WCF programming model for WebSockets.
    3. Silverlight 4 client side implementation of the protocol.
    4. .NET 4.0 client side implementation of the protocol.
    5. An HTML bridge from Silverlight to JavaScript that enables use of the prototype from JavaScript applications running in browsers that support Silverlight.
    6. Web chat and stock quote samples.

    Given the prototype nature of the implementation, the following restrictions apply:

    1. A Silverlight client (and a JavaScript client, via the HTML bridge) can only communicate using the proposed WebSocket protocol using ports in the range 4502-4534 (this is related to Network Security Access Restrictions applied to all direct use of sockets in the Silverlight platform).
    2. Only text messages under 126 bytes in length (UTF-8 encoded) can be exchanged.
    3. There is no support for web proxies in the client implementation.
    4. There is no support for SSL.
    5. The server side implementation limits the number of concurrent WebSocket connections to 5.

    This implementation has been tested to work on Internet Explorer 8 and 9.

    Why is this important?

    Through access to emerging specifications like WebSockets, the HTML5 Labs sandbox gives you implementation experience with draft specifications, helps enable faster iteration on Web specifications without getting locked in too early with a specific draft, and gives you the opportunity to provide feedback to improve the specification. Even as an early, unstable prototype, it has the potential to benefit a broad audience.

    We want your feedback

    As you try this implementation we welcome your feedback and we are looking forward to your comments!


  • Interoperability @ Microsoft

    MS Open Tech publishes HTML5 Labs prototype of a Customizable, Ubiquitous Real Time Communication over the Web API proposal


    Prototype with interoperability between Chrome on a Mac and IE10 on Windows


    Martin Thomson
    Senior Architect, Skype, Microsoft Corp.

    Bernard Aboba
    Principal Architect, Lync, Microsoft Corp.

    Adalberto Foresti
    Principal Program Manager, Microsoft Open Technologies, Inc.

    The hard work continues at the W3C WebRTC Working Group, where we collectively aim to define a standard for customizable, ubiquitous Real Time Communication over the Web. In support of our earlier proposal, Microsoft Open Technologies, Inc. (MS Open Tech) is now publishing a working prototype implementation of the CU-RTC-Web proposal on HTML5 Labs to demonstrate a real-world interoperability scenario: in this case, voice chatting between Chrome on a Mac and IE10 on Windows via the API.

    By publishing this working prototype in HTML5 Labs, we hope to:

    • Clarify the CU-RTC-Web proposal with interoperable working code so others can understand exactly how the API could be used to solve real-world use cases.
    • Show what level of usability is possible for Web developers who don’t have deep knowledge of the underlying networking protocols and interface formats.
    • Encourage others to show working example code that shows exactly how their proposals could be used by developers to solve use cases in an interoperable way.
    • Seek developer feedback on how the CU-RTC-Web addresses interoperability challenges in Real Time Communications.
    • Provide a source of ideas for resolving open issues with the current draft API, where the CU-RTC-Web proposal is cleaner and simpler.

    Our earlier CU-RTC-Web blog described critical requirements that a successful, widely adoptable Web RTC browser API will need to meet:

    • Honoring key web tenets – The Web favors stateless interactions that do not saddle either party of a data exchange with the responsibility to remember what the other did or expects. Doing otherwise is a recipe for extreme brittleness in implementations; it also raises considerably the development cost, which reduces the reach of the standard itself.
    • Customizable response to changing network quality – Real time media applications have to run on networks with a wide range of capabilities varying in terms of bandwidth, latency, and packet loss. Likewise, these characteristics can change while an application is running. Developers should be able to control how the user experience adapts to fluctuations in communication quality. For example, when communication quality degrades, the developer may prefer to favor the video channel, favor the audio channel, or suspend the app until acceptable quality is restored. An effective protocol and API should provide developers with the tools to tailor the application response to the exact needs of the moment.
    • Ubiquitous deployability on existing network infrastructure – Interoperability is critical if WebRTC users are to communicate with the rest of the world: with users on different browsers, VoIP phones, and mobile phones, from behind firewalls, and across routers and equipment that is unlikely to be upgraded to the current state of the art anytime soon.
    • Flexibility in its support of popular media formats and codecs as well as openness to future innovation – A successful standard cannot be tied to individual codecs, data formats or scenarios; these may soon be supplanted by newer versions that would make such a tightly coupled standard obsolete just as quickly. The right approach is instead to support multiple media formats and to bring the bulk of the logic to the application layer, enabling developers to innovate.

    CU-RTC-Web extends the media APIs of the browser to the network. Media can be transported in real time to and from browsers using standard, interoperable protocols.


    CU-RTC-Web starts with the network. The RealtimeTransportBuilder coordinates the creation of a RealtimeTransport, which connects a browser with a peer, providing a secured, low-latency path across the network.

    At the network layer, CU-RTC-Web demonstrates the benefits of a fully transparent API, providing applications with first class access to this layer. Applications can interact directly with transport objects to learn about availability and utilization, or to change transport characteristics.

    The CU-RTC-Web RealtimeMediaStream is the link between media and the network. RealtimeMediaStream provides a way to convert the browser's internal MediaStreamTrack objects – an abstract representation of the media that might be produced by a camera or microphone – into real-time flows of packets that can traverse networks.

    Rather than using an opaque and indecipherable blob of Session Description Protocol (SDP, RFC 4566) text, CU-RTC-Web allows applications to choose how media is described to suit application needs. The relationship between streams of media and the network layer they traverse is not some arcane combination of SDP m= sections and a=mumble lines: applications build a real-time transport and attach media to that transport.
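    To make that flow concrete, here is a hypothetical JavaScript sketch. RealtimeTransportBuilder, RealtimeTransport, and RealtimeMediaStream are object names from the proposal, but the tiny mock implementations below are written only to show the shape of the API; they are not the prototype's actual code.

```javascript
// Hypothetical sketch of the CU-RTC-Web flow: build a transport, then attach
// media to it. The mock objects below stand in for the real browser API.

// Mock transport: in a real implementation this would negotiate a secure,
// low-latency path to the peer across the network.
function RealtimeTransport(local, remote) {
  this.local = local;
  this.remote = remote;
  this.state = "connected"; // mock: assume negotiation succeeded
  this.streams = [];
}

// Mock builder: coordinates the creation of a RealtimeTransport.
function RealtimeTransportBuilder(local, remote) {
  this.build = function () {
    return new RealtimeTransport(local, remote);
  };
}

// Mock media stream: links a media track to a transport -- no SDP blob.
function RealtimeMediaStream(track, transport) {
  transport.streams.push(track);
  this.track = track;
  this.transport = transport;
}

// Usage: the addresses and track name here are invented for illustration.
var builder = new RealtimeTransportBuilder("192.0.2.1:5000", "198.51.100.2:5002");
var transport = builder.build();
var stream = new RealtimeMediaStream("camera-track", transport);
console.log(transport.state, transport.streams.length); // connected 1
```

Note how the application holds first-class references to the transport and the stream, which is what gives it the direct access to availability and transport characteristics described below.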

    Microsoft made this API proposal to the W3C WebRTC Working Group in August 2012, and revised it in October 2012, based on our experience implementing this prototype. The proposal generated both positive interest and healthy skeptical concern from working group members. One common concern was that it was too radically different from the existing approach, which many believed to be almost ready for formal standardization. It has since become clear, however, that the existing approach (the RTCWeb protocol and WebRTC APIs specifications) is far from complete and stable, and needs considerable refinement and clarification before formal standardization and before it’s used to build interoperable implementations.

    The approach proposed in CU-RTC-Web would also allow existing rich solutions to more easily adopt and support the eventual WebRTC standard. A good example is Microsoft Lync Server 2013, which is already embracing Web technologies like REST and hypermedia with a new API called the Microsoft Unified Communications Web API (UCWA; see http://channel9.msdn.com/posts/Lync-Developer-Roundtable-UCWA-Overview). UCWA can be layered on the existing draft WebRTC API; however, it would interoperate more easily with WebRTC implementations if the adopted standard followed the cleaner CU-RTC-Web proposal.

    The prototype can be downloaded from HTML5Labs here. We look forward to receiving your feedback: please comment on this post or send us a message once you have played with the API, including the interop scenario between Chrome on a Mac and IE10 on Windows.

    We’re pleased to be part of the process and will continue to collaborate with the working group to close the gaps in the specification in the coming months as we believe the CU-RTC-Web proposal can provide a simpler and thus more easily interoperable API design.

  • Interoperability @ Microsoft

    The WebSockets Prototype Gets Another Update


    We have just updated the WebSockets prototype on our HTML5 Labs site, bringing the implementation in line with the recently released IETF WebSockets 09 Protocol Specification.

    This latest release updates both the server and client prototype implementations based on the IETF 09 specification, and brings no significant feature changes.

    We will release additional HTML5 labs prototypes if there are further changes to the specification.


    Claudio Caldato,

    Principal Program Manager, Interoperability Strategy Team

  • Interoperability @ Microsoft

    Two open source projects to facilitate interoperability with Outlook .pst data files


    Microsoft today announced the availability of two new open source projects that complement technical documentation recently released for Microsoft Outlook Personal Folders (.pst). From the press release:

    “Combined, the documentation and tools advance interoperability with data stored in .pst files, reflecting customer requests for greater access to data stored and shared in digital formats generated by Microsoft Outlook and for enhanced data portability.”

    The two open source projects, available on Codeplex.com under the Apache 2.0 license, are the following:

    • The PST Data Structure View Tool (http://pstviewtool.codeplex.com/) is a graphical tool that allows developers to browse the internal data structures of a PST file. The primary goal of this tool is to assist people who are learning the .pst format and help them better understand the documentation.
    • The PST File Format SDK (http://pstsdk.codeplex.com/) is a cross-platform C++ library for reading .pst files that can be incorporated into solutions that run on top of the .pst file format. The capability to write data to .pst files is on the roadmap and will be added to the SDK.

    To get more details about how these two projects came to life and understand what type of scenarios they enable, watch this video with Daniel Ko, development manager in the Outlook team.


    If you’re specifically interested in potential scenarios enabled by the SDK, watch this segment of the video:


    -- Jean-Christophe Cimetiere, Sr. Technical Evangelist, @openatmicrosoft

  • Interoperability @ Microsoft

    New open source options for Windows Azure web sites: MediaWiki and phpBB


    Need to set up a powerful wiki quickly? Looking for an open source bulletin board solution for your Windows Azure Web Site? Today, we are announcing the availability of MediaWiki and phpBB in the Windows Azure Web Applications gallery. MediaWiki is the open source software that powers Wikipedia and other large-scale wiki projects, and phpBB is the most widely used open source bulletin board system in the world.

    You can deploy a free Windows Azure Web Site running MediaWiki or phpBB with just a few mouse clicks. Sign up for the free trial if you don’t already have a Windows Azure subscription, and then select the option to create a new web site from the gallery.

    This will take you to a screen where you can select from a list of applications to be automatically installed by Windows Azure on the new web site you’re creating. You’ll see many popular open source packages there, including MediaWiki and phpBB. Select the option you’d like, and then you’ll be prompted for a few configuration details such as the URL for your web site and database settings for the application:

    Fill in the required fields, click the Next button, and you’ll soon have a running ready-to-use web site that is hosting your selected application.

    The Windows Web App Gallery also includes MediaWiki and phpBB, so you can deploy either of them on-premises as well. See the MediaWiki and phpBB entries in the gallery.

    The MediaWiki project now includes the Windows Azure Storage extensions that allow you to store media files on Windows Azure. You can use this functionality for MediaWiki sites deployed to Windows Azure Web Sites, or for other deployments as well. More information can be found on the MediaWiki wiki.

    A big thanks to everyone who helped to make MediaWiki and phpBB work so well on Windows Azure! Markus Glazer, volunteer developer at Wikimedia Foundation, submitted the MediaWiki package to the Windows Azure Web Sites Gallery and integrated MediaWiki with Windows Azure Storage. Nils Adermann from the phpBB community submitted the updated phpBB 3.0.11 package to the Windows Azure Web Sites Gallery with the necessary changes for integration with Windows Azure.

    The addition of phpBB and MediaWiki is a great example of Windows Azure’s support for open source software applications, frameworks, and tools. We’re continuing to work with these and other communities to make Windows Azure a great place to host open source applications. What other open source technologies would you like to be able to use on Windows Azure?

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    New open source releases: Reactive Extensions (Rx) libraries for Python and Ruby


    Microsoft Open Technologies, Inc. is releasing two new open source libraries for Reactive Extensions (Rx) today that support Python and Ruby.

    Rx is a programming model that allows developers to use a common interface on multiple platforms to interact with diverse data sources and formats, such as stock quotes, Tweets, real-time events, streaming data, and Web services. Developers can use Rx to create observable sequences, and applications can subscribe to these sequences and receive asynchronous notifications as new data arrives. Rx was open-sourced by MS Open Tech in November 2012, and has since become an important behind-the-scenes component in several high-availability services, including Netflix and GitHub.

    Developers direct an Observer to observe (subscribe to) a data source, which in Rx is called an Observable. The Observer waits for and then reacts to pushed data until the Observable signals that there is no more data to react to. An Observable maintains a list of dependent Observers and notifies them automatically of any state changes. Employing such a model is useful for performance and reliability in many scenarios, especially in UI-heavy client environments in which the UI thread would otherwise be blocked while waiting for events.
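    The push model just described can be sketched in a few lines of JavaScript. This is an illustration of the Observer/Observable pattern, not the Rx library's actual implementation; the onNext/onCompleted names follow common Rx convention.

```javascript
// Minimal sketch of the push model: an observable keeps a list of observers
// and notifies them as data arrives, until it signals completion.
function Observable() {
  this.observers = [];
}
Observable.prototype.subscribe = function (observer) {
  this.observers.push(observer);
};
Observable.prototype.push = function (value) {
  this.observers.forEach(function (o) { o.onNext(value); });
};
Observable.prototype.complete = function () {
  this.observers.forEach(function (o) { o.onCompleted(); });
};

// An observer reacts to pushed data instead of blocking while waiting for it.
var received = [];
var quotes = new Observable(); // e.g. a stream of stock quotes
quotes.subscribe({
  onNext: function (q) { received.push(q); },        // react to each new value
  onCompleted: function () { received.push("done"); } // no more data to react to
});
quotes.push(101.5);
quotes.push(102);
quotes.complete();
console.log(received); // [ 101.5, 102, 'done' ]
```

The subscribing code never polls or blocks; it simply reacts as values are pushed, which is the property that keeps UI threads responsive.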

    Rx is available for different platforms such as .NET, JavaScript, C/C++, and Windows Phone frameworks, and as of today, Ruby and Python as well. You can download the libraries, as well as learn about their prerequisites at the Rx MSDN Developer Center.

    You can find the projects on CodePlex: Rx for Ruby is available here, and Rx for Python is available here. Try them out and please share feedback!  This is our initial effort for both Ruby and Python and we are looking forward to working actively with the Ruby and Python communities to make sure that implementing Rx is as easy and flexible as possible. You can leave comments here, or start a discussion on CodePlex for Ruby or Python.

  • Interoperability @ Microsoft

    New SDK and Sample Kit demonstrate how to leverage the scalability of Windows Azure with PHP


    From the floor of the PHP Tek Conference in Chicago, with my colleague Peter Laudati, we’re excited to announce the availability of the Windows Azure SDK for PHP version 3.0. This Open Source SDK gives PHP developers a “speed dial” library to take full advantage of Windows Azure’s coolest features. On top of many improvements and bug fixes for this version (see the list from Maarten Balliauw’s preview), we’re particularly excited about the new service management possibilities and the new logging infrastructure.

    Beyond the new features, we also feel that version 3.0 of this SDK marks an important milestone because we’re not only starting to witness real-world deployment, but also seeing more people joining the project and contributing. We’ve been talking a lot to Maarten Balliauw from RealDolmen, the lead developer on this open source project, and he shares the same sentiment: “It’s interesting to see the Windows Azure SDK for PHP mature: people are willing to contribute to it and incorporate their experience with the SDK and the platform.”

    The most compute intensive part of Facebook app www.hotelpeeps.com is powered by PHP on Windows Azure

    My colleague Todi Pruteanu from Microsoft Romania worked with Lucian Daia and Alexandru Lapusan from Zitec to help them get started with PHP on Windows Azure. The result is impressive. The most compute-intensive part of the Hotel Peeps Facebook application is now running on Windows Azure, using the SDK for PHP, as well as SQL Azure. Read the interview with Alexandru to get the details on what they did and how (you can also check out the case study here). I like this quote from the interview: “HotelPeeps Trends running on the Windows Azure platform is the epitome of interoperability. Some people think that a PHP application running on Microsoft infrastructure is science fiction, but that’s not the case.”
    Another interesting aspect is also the subsequent contribution by Zitec of an advanced “logging” component to the Windows Azure SDK for PHP. This new component provides the possibility of storing logs from multiple instances in a centralized location, namely Azure Tables.

    More contributions from the community

    As the SDK gets more widely adopted, there is an exciting trend toward more community involvement. For example, Damien Tournoud from the CommerceGuys, who is working on the Drupal integration module for Windows Azure, recently contributed a patch fixing bugs related to inconsistencies in URL-encoding of parameters in the HTTP_Client library. As we continue to improve the SDK to ensure great interoperability with popular applications like WordPress, Drupal and Joomla!, we look forward to engaging more deeply with those communities to make the experience even better.

    New! Windows Azure Sample Kit for PHP

    Today we are also announcing the Windows Azure Sample Kit for PHP. It is a new project hosted on GitHub that will be the primary repository for all sample PHP code and apps that developers can use to learn how to take advantage of the various features of Windows Azure in PHP. Today we are releasing two samples to the repository: the Guestbook application (an example of how to use the Windows Azure storage objects – blobs, queues and tables – as well as a simple web/worker pattern) and “Deal of the Day” (more on this one later). We look forward to feedback on the samples, and I am also hoping to see some forks and new samples coming from the community!

    New features to easily manage auto-scaling of applications on Windows Azure

    As I mentioned, version 3.0 of the Windows Azure SDK for PHP includes a new “service management” library, which provides easy ways to monitor the activity of your running instances (Windows Azure web role and worker role virtual machines), and to automatically start and stop instances based on usage. It then becomes easy for you to decide which parameters (CPU, bandwidth, number of connections, etc.) and thresholds to use to scale up and down, and to maintain the optimum quality of service for your web applications.
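    A minimal sketch of such a threshold-driven scaling decision is shown below, in JavaScript for illustration. The metric names, thresholds, and function are invented for this example; they are not the SDK's actual service management API.

```javascript
// Illustrative threshold logic for deciding how many role instances to run.
// The real SDK's service management library exposes its own monitoring API;
// this sketch only shows the decision shape.
function decideInstanceCount(current, metrics, limits) {
  if (metrics.cpuPercent > limits.cpuHigh && current < limits.maxInstances) {
    return current + 1; // scale up under load to maintain quality of service
  }
  if (metrics.cpuPercent < limits.cpuLow && current > limits.minInstances) {
    return current - 1; // scale down to minimize cost
  }
  return current; // within thresholds: leave capacity as-is
}

var limits = { cpuHigh: 75, cpuLow: 25, minInstances: 2, maxInstances: 8 };
console.log(decideInstanceCount(2, { cpuPercent: 90 }, limits)); // 3
console.log(decideInstanceCount(3, { cpuPercent: 10 }, limits)); // 2
console.log(decideInstanceCount(2, { cpuPercent: 10 }, limits)); // 2 (at minimum)
```

The same shape works for any of the parameters mentioned above (bandwidth, number of connections, and so on): pick a metric, pick high/low thresholds, and bound the instance count.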

    The scenario is simple: let’s say you are running an e-commerce site and you want to run daily promotions to get rid of overstocked items. So you’re going to offer crazy deals every day starting at 8am, each deal being advertised to your subscribers by an email blast. You will have to be ready to absorb a major spike in traffic, but the exact time is difficult to predict, as news of the deal may take some time to travel through Twitter. When the traffic does materialize, you want the site to run and scale independently – providing service assurance but also minimizing your costs (by shutting down unnecessary capacity as load goes down). This is the scenario for the “Deal of the Day” sample application.

    What’s the “Deal of the Day” (DotD) sample app and what to expect?

    Deal of the Day (DotD) is a sample application written in PHP to show how to utilize Windows Azure’s scalability features from within PHP. We’ve kept it simple and built it in a way that’s easy to deconstruct and learn from.

    As a sample application, DotD did not undergo extensive testing, nor does the code include all the error catching, security verifications, and so on that an application designed for real production use would require. So, do expect glitches. And if you do witness issues, send us a screenshot showing the error messages along with a description. I’ll get a prize to the first 100 bug trackers!

    However, to give you an opportunity to see the sample application working, we’ve decided to deploy a live version on Windows Azure to let you test it for real and give you the chance to win actual fun prizes! (Sorry for our friends outside of the USA, but prizes can be shipped only to a US address.)

    Wanna play? Just go this way: http://dealoftheday.cloudapp.net/
    Looking for the code? Just get it on GitHub here: https://github.com/Interop-Bridges/Windows-Azure-Sample-Kit-4-PHP/tree/master/dealoftheday_sample

    Architecture of the DotD sample app

    The DotD sample app comprises several pieces which fit together to create the overall experience:

    • Storage – responsible for containing all business data (product information and images, comments) and monitoring data (diagnostic information). All data is stored in Windows Azure Tables, Queues, and Blobs.
    • Web Roles – the application's point of interaction with visitors. The number of active Web Roles varies depending on the load. They are all identical, running the core of the application's logic, producing the user interface (HTML), and handling user input. All Web Roles share the storage elements described above.
    • Worker Roles – Worker Roles sit in the background processing events, managing data, and providing load balancing for scale-out. The diagram shows two Worker Roles, one for managing the application's “scalability” (adding/removing Web Roles) and one for asynchronously processing some of the application's tasks in the background (another way to achieve scalability).
    • Content Delivery Network (CDN) – global content distribution that provides fast content delivery based on visitor location.
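    The web/worker split described above can be sketched as follows. This JavaScript illustration uses a plain array where the real sample uses a Windows Azure Queue, and the task shapes are invented for the example.

```javascript
// Sketch of the web/worker pattern: web roles enqueue tasks and return to the
// visitor immediately; a worker role drains the queue in the background.
var queue = []; // stands in for a Windows Azure Queue in this sketch

// Web role side: hand off slow work instead of doing it in the request.
function enqueueTask(task) {
  queue.push(task);
}

// Worker role side: process whatever is waiting, asynchronously to the web UI.
function drainQueue(handler) {
  var processed = 0;
  while (queue.length > 0) {
    handler(queue.shift());
    processed++;
  }
  return processed;
}

enqueueTask({ type: "resize-image", product: 42 });
enqueueTask({ type: "send-email", to: "subscriber@example.com" });
var done = drainQueue(function (task) { /* handle one background task */ });
console.log(done, queue.length); // 2 0
```

Because the queue decouples the two sides, web roles stay responsive during a traffic spike while workers catch up at their own pace; this is the second scalability mechanism the Worker Roles bullet mentions.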

    Each of these parts is essential to the performance and scalability of DotD. For more details, I invite you to read this introduction article, and then to dig deeper by reading Part I (Performance Metrics) and Part II (Role Management) of our “Scaling PHP applications on Windows Azure” series. We will expand the series with additional in-depth articles; the next one will cover monitoring the performance of your app.

    We look forward to your feedback on the SDK and the Sample Kit.  Once again the URL is https://github.com/Interop-Bridges/Windows-Azure-Sample-Kit-4-PHP


    Craig Kitterman
    Web: http://craig.kitterman.net

  • Interoperability @ Microsoft

    MS Open Tech Contributes Support for Windows ETW and Perf Counters to Node.js


    Here’s the latest about Node.js on Windows. Last week, working closely with the Node.js core team, we checked into the open source Node.js master branch the code to add support for ETW and Performance Counters on Windows. These new features will be included in v0.10 when it is released. You can download the source code now and build Node.js on your machine if you want to try out the new functionality right away.

    Developers need advanced debugging and performance monitoring tools. After working to ensure that Node.js can run on Windows, our focus has been to provide instrumentation features that developers can use to monitor the execution of Node applications on Windows. For Windows developers this means having the ability to collect Event Tracing for Windows® (ETW) data and use Performance Counters to monitor application behavior at runtime. ETW is a general-purpose, high-speed tracing facility provided by the Windows operating system. To learn more about ETW, see the MSDN article Improve Debugging And Performance Tuning With ETW.


    With ETW, Node developers can monitor the execution of Node applications and collect data on key metrics to investigate performance and other issues. One typical scenario for ETW is profiling the execution of the application to determine which functions are most expensive (i.e., the functions where the application spends the most time). Those functions are the ones developers should focus on in order to improve the overall performance of the application.

    In Node.js we added the following ETW events, representing some of the most interesting metrics to determine the health of the application while it is running in production:

    • NODE_HTTP_SERVER_REQUEST: node.js received a new HTTP Request
    • NODE_HTTP_SERVER_RESPONSE: node.js responded to an HTTP Request
    • NODE_HTTP_CLIENT_REQUEST: node.js made an HTTP request to a remote server
    • NODE_HTTP_CLIENT_RESPONSE: node.js received the response from an HTTP Request it made
    • NODE_NET_STREAM_END: TCP Socket close
    • NODE_GC_START: V8 starts a new GC
    • NODE_GC_DONE: V8 finished a GC

    For Node.js ETW events we also added some additional information about the JavaScript stack trace at the time the ETW event was generated. This is important information that developers can use to determine what code was executing when the event was generated.


    Most Node developers are familiar with Flamegraphs, which are a simple graphical representation of where time is spent during application execution. The following is an example of a Flamegraph generated using ETW.


    For Windows developers we built the ETWFlamegraph tool (based on Node.js) that can parse .etl files, the log files that Windows generates when ETW events are collected. The tool can convert the .etl file to a format that can be used with the Flamegraph tool that Brendan Gregg created.

    To generate a Flamegraph using Brendan’s tool, you need to follow the simple instructions listed on the ETWFlamegraph project page on GitHub. Most of the steps involve processing the ETW files so that symbols and other information are aggregated into a single file that can be used with the Flamegraph tool.

    ETW relies on a set of tools that are not installed by default. You’ll either need to install Visual Studio (for instance, Visual Studio 2012 installs the ETW tools by default) or you need to install the latest version of the Windows SDK tools. For Windows 7 the SDK can be found here.

    To capture stack traces:

    1. xperf -on Latency -stackwalk profile
    2. <run the scenario you want to profile, ex node.exe myapp.js>
    3. xperf -d perf.etl
    4. SET _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/downloads/symbols
    5. xperf -i perf.etl -o perf.csv -symbols

    To extract the stacks for the node.exe process and fold them into perf.csv.fold (which includes all the function-name information that will be shown in the Flamegraph), run:

    node etlfold.js perf.csv node.exe (etlfold.js is the file found in the ETWFlamegraph project on GitHub).

    Then run the flamegraph script (requires Perl) to generate the SVG output:

    flamegraph.pl perf.csv.fold > perf.svg

    If the Node ETW events for JavaScript symbols are available, the procedure becomes the following:

    1. xperf -start symbols -on NodeJS-ETW-provider -f symbols.etl -BufferSize 128
    2. xperf -on Latency -stackwalk profile
    3. run the scenario you want to profile.
    4. xperf -d perf.etl
    5. xperf -stop symbols
    6. SET _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/downloads/symbols
    7. xperf -merge perf.etl symbols.etl perfsym.etl
    8. xperf -i perfsym.etl -o perf.csv -symbols

    The remaining steps are the same as in the previous example.
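    To show what the folding step produces, here is a small sketch of the folded-stack format that flamegraph.pl consumes. This illustrates the format only; it is not the actual etlfold.js code.

```javascript
// Sketch of stack folding: collapse repeated stack samples into
// "frameA;frameB;frameC count" lines, the input format flamegraph.pl expects.
function foldStacks(samples) {
  var counts = {};
  samples.forEach(function (stack) {
    var key = stack.join(";"); // root-to-leaf frames joined with ';'
    counts[key] = (counts[key] || 0) + 1;
  });
  return Object.keys(counts).map(function (key) {
    return key + " " + counts[key];
  });
}

// Invented example stacks; in practice these come from the processed ETW data.
var samples = [
  ["main", "handleRequest", "parseBody"],
  ["main", "handleRequest", "parseBody"],
  ["main", "handleRequest", "writeResponse"]
];
console.log(foldStacks(samples).join("\n"));
// main;handleRequest;parseBody 2
// main;handleRequest;writeResponse 1
```

The counts become the widths of the boxes in the Flamegraph, which is how the graph shows where the application spends the most time.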

    Note: for more advanced scenarios where you may want stack traces that include the Node.js core code executed at the time the event is generated, you need to include node.pdb (the debugging information file) in the symbol path so the ETW tools can resolve those symbols and include them in the Flamegraph.


    In addition to ETW, we also added Performance Counters (PerfCounters). Like ETW, Performance Counters can be used to monitor critical metrics at runtime, the main differences being that they provide aggregated data and that Windows provides a great tool to display them. The easiest way to work with PerfCounters is to use the Performance Monitor console, but PerfCounters are also used by System Center and other data center management applications. With PerfCounters, a Node application can be monitored by those management applications, which are widely used for instrumentation of large cloud and enterprise applications.

    In Node.js we added the following performance counters, which closely mirror the ETW events:

    • HTTP server requests: number of incoming HTTP requests
    • HTTP server responses: number of responses
    • HTTP client requests: number of HTTP requests generated by node to a remote destination
    • HTTP client responses: number of HTTP responses for requests generated by node
    • Active server connections: number of active connections
    • Network bytes sent: total bytes sent
    • Network bytes received: total bytes received
    • %Time in GC: % V8 time spent in GC
    • Pipe bytes sent: total bytes sent over Named Pipes.
    • Pipe bytes received: total bytes received over Named Pipes.
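    As a rough illustration of what “aggregated data” means here, the sketch below keeps running totals for a few of the counters listed above. The real implementation registers the counters with Windows so that Performance Monitor and System Center can read them; this mock only shows the aggregation idea.

```javascript
// Sketch of aggregated counters like those listed above (names from the list;
// the increment logic is illustrative, not the actual Node.js implementation).
var counters = {
  "HTTP server requests": 0,
  "HTTP server responses": 0,
  "Network bytes sent": 0,
  "Network bytes received": 0
};

function increment(name, by) {
  counters[name] += (by === undefined ? 1 : by);
}

// Simulate serving two requests: counters accumulate rather than record
// individual events, which is what distinguishes them from ETW traces.
increment("HTTP server requests");
increment("Network bytes received", 512);
increment("HTTP server responses");
increment("Network bytes sent", 2048);

increment("HTTP server requests");
increment("Network bytes received", 256);
increment("HTTP server responses");
increment("Network bytes sent", 1024);

console.log(counters["HTTP server requests"], counters["Network bytes sent"]); // 2 3072
```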

    All Node.js performance counters are registered in the system so they show up in the Performance Monitor console.


    While the application is running, it’s easy to see what is happening through the Performance Monitor console:


    The Performance Monitor console can also display performance data in a tabular form:


    Collecting live performance data at runtime is an important capability for any production environment. With these new features we have given Node.js developers the ability to use a wide range of tools that are commonly used in the Windows platform to ensure an easier transition from development to production.

    More on this topic very soon, stay tuned.

    Claudio Caldato
    Principal Program Manager Lead
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    News from MS Open Tech: Initial HTTP Speed+Mobility Open Source Prototype Now Available for Download


    Microsoft Open Technologies, Inc. has just published an initial open source prototype implementation of HTTP Speed+Mobility. The prototype is available for download on html5labs.com, where you will also find pointers to the source code.

    The IETF HTTPbis workgroup met in Paris at the end of March to discuss how to approach HTTP 2.0 in order to meet the needs of an ever larger and more diverse web. It would be hard to overstate the importance of this work: it will impact how billions of devices communicate over the internet for years to come, from low-powered sensors, to mobile phones, to tablets, to PCs, to network switches, to the largest datacenters on the planet.

    Prior to that IETF meeting, Jean Paoli and Sandeep Singhal announced in their post to the Microsoft Interoperability blog that Microsoft has contributed the HTTP Speed+Mobility proposal as input to that conversation.

    The prototype implements the WebSocket-based session layer described in the proposal, as well as parts of the multiplexing logic incorporated from Google’s SPDY proposal. The code does not support header compression yet, but it will in upcoming refreshes.

    The open source software comprises a client implemented in C# and a server implemented in Node.js running on Windows Azure. The client is a command line tool that establishes a connection to the server and can download a set of web pages that include html files, scripts, and images. We have made available on the server some static versions of popular web pages like http://www.microsoft.com and http://www.ietf.org, as well as a handful of simpler test pages.

    We invite you to inspect the open source code directly in order to familiarize yourself with how everything works; we have also made available a readme file at this location describing the various options available, as well as the meaning of the output returned to the console.

    So, please download the prototype, try it out, and let us know what you think: every developer is a stakeholder in the HTTP 2.0 standardization process. We look forward to hearing your feedback, and to applying it to upcoming iterations of the prototype code.

    Adalberto Foresti
    Senior Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Beta of Windows Phone Toolkit for Amazon Web Services released


    I am pleased to announce the beta release of the Windows Phone Toolkit for Amazon Web Services (AWS). Built by Microsoft as an open source project, this toolkit provides developers with a speed dial that lets them quickly connect and integrate Windows Phone applications with AWS (S3, SimpleDB, and SQS cloud services).

    To create cloud-connected mobile applications, developers want to have choice and be able to reuse their assets and skills. For developers familiar with AWS, whether they’ve been developing for Android, iOS or any other technology, this toolkit will allow them to comfortably port their applications to the Windows Phone Platform.

    Terry Wise, Director of Business Development for Amazon Web Services, welcomes the release of the Windows Phone Toolkit for Amazon Web Services to the developer community.

    “Our approach with AWS is to provide developers with choice and flexibility to build applications the way they want and give them unlimited storage, bandwidth and computing resources, while paying only for what they use. We welcome Windows Phone developers to the AWS community and look forward to providing customers with new ways to build and deploy Windows Phone applications,” he says.

    Jean Paoli, General Manager of Interoperability Strategy at Microsoft, adds that Windows Phone was engineered from the get-go to be a Cloud-friendly phone.

    “The release of the Windows Phone Toolkit for AWS Beta proves that Microsoft’s goal of building a Cloud-friendly phone is true across vendor boundaries. It literally takes minutes to create a Cloud-ready application in C# with this toolkit. We look forward to this toolkit eventually resulting in many more great apps in the rapidly growing Windows Phone marketplace,” he said.

    Developers can download the toolkit, along with the complete source code, under the Apache license. A Getting Started guide can be found on the Windows Phone Interoperability Bridges site along with other resources.

    And as always your feedback on how to improve this beta is welcome!

  • Interoperability @ Microsoft

    Windows Azure Command-Line Tool for Mac and Linux


    Yesterday, Bill Laing of the Windows Azure team announced support for virtual machines running the Windows Server operating system as well as Linux distros such as Ubuntu, CentOS, and OpenSUSE. Now you can run existing Linux payloads on Windows Azure virtual machines, with no need to change any of your code. This capability makes Windows Azure a great platform for IaaS deployment of applications that run on Windows or Linux servers. You can find more information about what's new in Windows Azure on Scott Guthrie's blog post today and the MeetWindowsAzure event that he'll be kicking off this afternoon.

    There are two ways to work with the new virtual machine and web site capabilities of Windows Azure: through the management portal, or at the command line. This article covers the concepts behind the command-line tool, but those who prefer to use a GUI can also provision web sites and deploy Windows or Linux virtual machines from the Windows Azure portal. An easy-to-use GUI takes you through every step of the process.

    Many developers prefer the power and flexibility of command-line tools, however, which can be automated via a scripting language. If you’re working exclusively on Windows machines, the Windows PowerShell cmdlets are your best option, but for mixed environments, the Windows Azure command-line tool for Mac and Linux provides a consistent experience across Linux, Mac OS, and Windows desktops.

    Installation of the command-line tool is very simple. If you’re working on a Mac OS X machine, you can use the Mac installer; for Windows or Linux, you’ll just need to install the latest version of Node.js and then type this command:

    npm install azure --global

    That will install the Windows Azure SDK for Node.js, which includes the command-line tool. Alternatively, you can download the command line tools or the Windows PowerShell cmdlets from this download page.

    To verify that you have the tool installed and ready to use, type the command azure --help and you’ll see the output shown to the right. This screen tells you which version of the tool you’re using, and how to get information about each of the commands.

    The first thing to understand is the basic structure of the commands. In general terms, you type azure followed by a topic (what you’re working with), a verb (what to do), and various optional parameters to provide additional information. Here’s a diagram that provides a general framework for understanding the command-line syntax.

    Some commands have other required command-line parameters in addition to what’s shown here. For more information about specific command syntax, see the reference documentation.
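
    To make the topic/verb shape concrete, here is a dry-run sketch. The run() helper just prints each command instead of executing it, so no subscription is needed; the flags shown are illustrative, so check azure help for the authoritative syntax.

```shell
# Dry run: run() prints each command rather than executing it.
run() { echo "+ $*"; }

run azure vm list                    # topic: vm,   verb: list
run azure site create mysite --git   # topic: site, verb: create, option: --git (illustrative)
```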

    The command-line tool allows you to provision new web sites and virtual machines, and that activity needs to be associated with a Windows Azure subscription. So before you start using the tool, you’ll need to download a publish settings file from the Windows Azure portal and then import it as a local configuration setting. For more information about how to do this, see the how-to guide How to use the Windows Azure Command-Line Tools for Mac and Linux, which also covers the basics of deploying web sites and virtual machines.
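
    The one-time account setup described above looks roughly like this dry run (the run() helper prints each command instead of executing it; the .publishsettings filename is a placeholder):

```shell
# Dry run: run() prints each command rather than executing it.
run() { echo "+ $*"; }

run azure account download                      # opens the portal page that serves the file
run azure account import mysub.publishsettings  # imports it as a local configuration setting
```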

    Let’s take a look at some of the other things you can do with the command-line tool …

     Locations and affinity groups. When you deploy a virtual machine, you must tell Windows Azure the location where you’d like for your virtual machine to be deployed – North Central US, for example. The azure vm location list command provides a list of available locations that you can use.

    You can also use an affinity group to specify the location. You can create your own affinity groups (here’s how) and then use an affinity group instead of a location when you deploy a virtual machine, cloud service, or storage account. The use of an affinity group tells Windows Azure “please host these services as close together as possible,” with a goal of reducing network latency. The azure account affinity-group list command lists your available affinity groups.
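
    Putting the two commands above together with a deployment, a dry-run sketch looks like this (run() prints rather than executes; the VM, image, and group names are placeholders, and the --affinity-group flag is an assumption to verify against the reference documentation):

```shell
# Dry run: run() prints each command rather than executing it.
run() { echo "+ $*"; }

run azure vm location list
run azure account affinity-group list
run azure vm create myapp my-linux-image azureuser --affinity-group mygroup
```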

    Cloning a customized virtual machine. After you’ve deployed a virtual machine and customized it by installing and configuring software via SSH or other means, you may want to deploy additional instances of that virtual machine that will include your customizations. To do this, stop the virtual machine and use the vm capture <vm-name> <target-image-name> command to capture a cloned copy of it. Then you can deploy new instances of your customized virtual machine through the vm create command.
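
    Here is the capture/redeploy cycle as a dry run (run() prints each command instead of executing it, since these need a live subscription; the VM and image names are placeholders, and the exact flags may vary by tool version):

```shell
# Dry run: run() prints each command rather than executing it.
run() { echo "+ $*"; }

run azure vm shutdown myvm                          # stop the customized VM first
run azure vm capture myvm my-golden-image --delete  # capture it as a reusable image
run azure vm create myvm2 my-golden-image azureuser --location "West US"
```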

    Virtual machine data disks. When you deploy a virtual machine, you may want to attach a separate data disk, which is a .vhd file in Windows Azure blob storage that provides additional storage for a virtual machine. The azure vm disk command provides options for creating data disks and attaching them to virtual machines. Use the azure help vm disk command to list the available options.
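
    As a dry-run sketch of the disk commands (run() prints rather than executes; the VM name and 100 GB size are placeholders, and the attach-new subcommand shape is an assumption to check against azure help vm disk):

```shell
# Dry run: run() prints each command rather than executing it.
run() { echo "+ $*"; }

run azure help vm disk                 # list the available disk subcommands
run azure vm disk attach-new myvm 100  # create a new 100 GB data disk and attach it to myvm
```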

    Virtual machine endpoints. When you deploy multiple instances of a virtual machine, you need to set up port mapping between the virtual machines and the load balancer. The load balancer uses an internal IP address to route traffic to each virtual machine, and these mappings are defined through the azure vm endpoint create command. In a blog post later this month, we’ll take a hands-on look at the details of configuring multiple virtual machines behind a load balancer.
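
    A dry-run sketch of defining one such mapping (run() prints rather than executes; the VM name and port numbers are placeholders):

```shell
# Dry run: run() prints each command rather than executing it.
run() { echo "+ $*"; }

run azure vm endpoint create myvm 80 8080  # public port 80 -> local port 8080 on myvm
```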

     Windows Azure cloud services. Although the main focus of the command-line tool is working with virtual machines and web sites (IaaS scenarios), it can also be used to view the cloud services that you have deployed through web roles and worker roles. The azure service list command lists your cloud services, and the azure service delete command deletes a cloud service.

    Working with Linux virtual machines. The command-line tool supports both Linux and Windows operating systems for deployment on virtual machines, and for most of the commands there is no difference between working with Windows and working with Linux. Some differences are inherent in the operating system itself, however. For example, Windows uses RDP whereas Linux uses SSH. The article An Introduction to Linux on Windows Azure provides an overview of what you need to know to take full advantage of Linux virtual machines on Windows Azure.

    Write custom service management tools and workflows in Node.js. You can provision and manage virtual machines from your own code, through the new iaasClient module that provides access to the service management API from Node.js. For more information, see the reference documentation.
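
    As a rough sketch of what this can look like, the snippet below lists data center locations from Node.js. The entry point and method names follow the Windows Azure SDK for Node.js of this era and should be treated as assumptions (consult the reference documentation); the subscription ID and certificate paths are placeholders, and the code is guarded so it stays inert where the SDK is not installed.

```javascript
// Hedged sketch: driving the service management API from Node.js.
// API names are assumptions based on the Windows Azure SDK for Node.js.
function listVmLocations() {
  var azure;
  try {
    azure = require("azure"); // installed via: npm install azure --global
  } catch (e) {
    return "azure SDK not installed";
  }
  // Placeholders: your subscription ID and management certificate files
  var svc = azure.createServiceManagementService("<subscription-id>", {
    keyfile: "/path/to/management.key",
    certfile: "/path/to/management.pem"
  });
  svc.listLocations(function (err, response) {
    if (!err) {
      console.log(response.body); // available data center locations
    }
  });
  return "request issued";
}

console.log(listVmLocations());
```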

    As you can see, the Windows Azure command-line tool for Mac and Linux opens a whole new world of possibilities for developers. Working from a Linux or Mac desktop, you can now deploy and manage virtual machines and web sites on Windows Azure. You can also migrate an existing Linux application to Windows Azure without changing a line of code, and then begin taking advantage of powerful Windows Azure services at any time. It’s all about developer choice: your choice of client operating system, server operating system, programming language, frameworks, and tools – all supported by Windows Azure!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    SQL Server Driver for PHP 2.0 CTP adds PHP's PDO style data access for SQL Server


    Hi, I’m Ashay Chaudhary, Program Manager at Microsoft. Today at DrupalCon SF 2010, we are reaching an important milestone by releasing a Community Technology Preview (CTP) of the new SQL Server Driver for PHP 2.0, which includes support for PHP Data Objects (PDO). Alongside our efforts, the Commerce Guys, a company providing ecommerce solutions with Drupal, is also presenting a beta version of Drupal 7 running on SQL Server using the new PDO application programming interface (API) in the SQL Server Driver for PHP 2.0.

    The SQL Server Driver for PHP 2.0 with PDO will enable popular PHP applications like Drupal 7 to use the PDO “PHP style” and interoperate smoothly with Microsoft’s SQL Server database.

    For PHP developers, this will reduce the complexity of targeting multiple databases and will make it easier to take advantage of SQL Server features (like business intelligence & reporting) as well as SQL Azure features (like exposing OData feeds).

    About the SQL Server Driver for PHP 2.0 CTP with PDO

    My team and I have been working hard over the past months to introduce PDO into the existing SQL Server Driver for PHP. The decision to add PDO was a direct result of the feedback we received from the PHP community.

    The new version now supports the API defined by PDO. Of course, we continue to maintain the existing SQL Server native API. To provide better support and consistency for both APIs, we are creating a common layer containing the core features shared across the two APIs, as shown on the architecture diagram below:

    [Architecture diagram: SQL Server Driver for PHP 2.0 with PDO]

    The SQL Server Driver for PHP 2.0 CTP is available for download at the Microsoft Download Center (installation through Web PI available as well: http://www.microsoft.com/web/drupal/). 
    Don’t be surprised if you don’t find the source yet: we have packaged only the binaries for now. Rest assured that the SQL Server Driver for PHP 2.0 will be available as a shared source project (like current version 1.1) in our next public release. We expect the final version in the second half of this year. Stay tuned!

    Porting Drupal 7 to SQL Server using the SQL Server Driver for PHP 2.0

    Putting the SQL Server Driver for PHP 2.0 to the test in real-world applications is a key aspect of our development process. We started a discussion with the Commerce Guys (a company providing ecommerce solutions with Drupal), who were interested in porting the upcoming version 7 of Drupal to SQL Server, and quickly realized we had a great opportunity to partner with the Drupal community. Microsoft provided some funding and initial support, through technical specifications and early builds of the driver, so that Commerce Guys could independently develop updates to the code for a contributed module for Drupal 7. After initial success getting Drupal 7 working with SQL Server, Commerce Guys reported that the Views module, one of the most popular contributed modules for Drupal, also works well with SQL Server.

    If you happen to be at DrupalCon SF, join us for the “Drupal 7 and Microsoft SQL Server” session (this afternoon at 4:15pm) to see it in action.

    For more details about the work Commerce Guys did on Drupal 7, I invite you to read their blog: http://www.commerceguys.com/about/news/

    Give it a try, send your feedback

    Microsoft is very excited about this milestone and the early success we’ve seen with Drupal 7. That said, we are not done yet and will continue to polish the driver. We plan to ship CTPs on a regular basis, so stay tuned!

    Of course, we appreciate feedback, which you can submit by visiting our SQL Server Driver for PHP forum or by visiting SQL Server’s Connect site.

    Ashay Chaudhary
    Program Manager, SQL Server Driver for PHP

  • Interoperability @ Microsoft

    The IndexedDB Prototype Gets an Update


    I'm happy to be able to give you an update today on the IndexedDB prototype, which we released late last year.

    The version 1.0 prototype that we released in December was based on an editor's draft specification from November 2, 2010. I'm happy to announce that this new version includes some of the changes made to the specification since then, bringing it in line with the latest version of the spec available on the W3C web site. However, it is important to note that while this prototype is very close to the latest spec, it is not 100 percent compliant.

    The prototype forms part of our HTML5 Labs Web site, a place where we prototype early, not-yet-stable drafts of specifications developed by the W3C and other standards organizations.  These prototypes will help us have informed discussions with developer communities, and give us implementation experience with the draft specifications that will generate feedback to improve the eventual standards. It also lets us give the community some visibility into those specifications we consider interesting from a scenario point of view, but which are not yet at the stage where we can consider them ready for official product support.

    The goal of IndexedDB is to introduce a relatively low-level API that allows applications to store data locally and retrieve it efficiently, even if there is a large amount of it. The API is low-level to keep it really simple and to enable higher-level libraries to be built in JavaScript and follow whatever patterns Web developers think are useful as things change over time.

    Folks from various browser vendors have been working together on this for a while now, and Microsoft has been working closely with the teams at Mozilla, Google and other W3C members that are involved in this to design the API together.

    If you notice that this prototype of IndexedDB behaves differently and doesn't work with code you have written, it may be due to some of the following changes:

    • The VERSION_CHANGE transaction is implemented as described in the spec, except for one feature: the versionchange event that notifies other open database connections is NOT implemented. The workaround is to avoid opening the same database from two Internet Explorer tabs.
    • The createObjectStore() method of the asynchronous database object is now a synchronous operation as described in the specification. Also, this method can only be called from within the onsuccess() handler of the IDBVersionChangeRequest object returned by the setVersion() method. See the samples in the CodeSnippets folder for the exact syntax.
    • The deleteObjectStore() method of the asynchronous database object can only be called from within the onsuccess() handler of the IDBVersionChangeRequest object returned by the setVersion() method. See the samples in CodeSnippets folder for examples.
    • The transaction method of the asynchronous database object now accepts parameters as described in the specification. See the sample in the CodeSnippets folder for examples.
    • The asynchronous transaction object now implements auto-commit. The JavaScript code needs to call the close() method on the asynchronous database object for auto-commit to work. See the samples in the CodeSnippets folder for examples.
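
    Putting those bullet points together, the setVersion/createObjectStore pattern looks roughly like the sketch below. The database and store names are illustrative, the API follows the draft spec this prototype tracks (the exact global may differ), and the code is guarded so it is inert outside a browser hosting the prototype.

```javascript
// Sketch of the call pattern described above; "library"/"books" are
// illustrative names, and the API shape follows the 2011 draft spec.
function openLibrary() {
  if (typeof indexedDB === "undefined") {
    return "IndexedDB is not available in this environment";
  }
  var openReq = indexedDB.open("library");
  openReq.onsuccess = function () {
    var db = openReq.result;
    var verReq = db.setVersion("1"); // returns an IDBVersionChangeRequest
    verReq.onsuccess = function () {
      // createObjectStore() is synchronous and only valid inside this handler
      var store = db.createObjectStore("books", { keyPath: "isbn" });
      store.add({ isbn: "0123456789", title: "Sample" });
      db.close(); // required for auto-commit in this prototype
    };
  };
  return "open requested";
}

console.log(openLibrary());
```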

    The goal of the prototypes is to enable early access to the API and get feedback from Web developers, as well as to keep it up to date with the latest changes in the specifications as they are published. But, since these are early days, remember that there is still time to change and adjust things as needed.

    You can find out more about this experimental release and download the binaries from this archive, which contains the actual API implementation plus samples to get you started.

    Claudio Caldato,

    Principal Program Manager, Interoperability Strategy Team

  • Interoperability @ Microsoft

    Discussing Microsoft’s Openness at Linux Tag


    LinuxTag is the leading meeting place for Linux and open source in Europe. Held in Berlin last week, it was the place to learn about new innovations and trends, as well as to connect with core expertise for professional users, decision makers, developers, beginners and, of course, the community. I was lucky enough to attend in support of Microsoft Deutschland.

    I spent the majority of the four days discussing and demonstrating the depth and breadth of Microsoft’s engagement in open source and open standards. As I assist the efforts at Microsoft Open Technologies, Inc. (MS Open Tech), which is proud to support Microsoft Corporation’s commitment to openness, it’s always a joy because it’s such a positive story. This is particularly true at large events like LinuxTag, where it is impossible to predict who we will be talking to next. It is a great feeling to be able to point to concrete contributions that MS Open Tech has made, and continues to make, to projects relevant to almost any individual’s interests.

    Even more pleasing is the obvious desire of the key open source community members to work with us. A significant number of visitors to our stand were keen to understand how they can work with Microsoft to ensure interoperability between solutions. That’s an important reason why MS Open Tech exists, and why we were at LinuxTag.

    Many visitors wanted to see how the VM Depot site could help their projects. VM Depot is a site on which the community can publish freely redistributable virtual machine images for Windows Azure. Once I’d explained how it worked, community leaders and core developers quickly identified significant value for their projects. Some saw it as an opportunity to increase the visibility of their open source products, others as a way to provide evaluation installs, and others as an opportunity to empower their development communities.

    My thanks go to all the team at Microsoft Deutschland and LinuxTag for making it possible to meet so many great open source leaders in such a short time. We look forward to continuing the conversations.

  • Interoperability @ Microsoft

    Using Drupal on Windows Azure: Hands-On with 4 new Drupal Modules


    Since the launch of Windows Azure a couple years ago, we’ve been working on driving Interoperability scenarios that enable various developers to harness the power of the Windows Azure cloud platform. In parallel, we’ve supported interoperability projects, in particular on PHP and Drupal, in which the focus is showing how to simply bridge different technologies, mash them up and ultimately offer new features and options to the developers (azurephp.interoperabilitybridges.com).

    Today, I’d like to show you the result of some hands-on work with Drupal on Windows Azure: we are announcing the availability of 4 new Drupal modules, Bing Maps, Windows Live ID, OData and the Silverlight PivotViewer, which can be used with Drupal running on Windows Azure. The modules are developed by Schakra and MindTree.

    To showcase this work, the new Drupal 7 was deployed on Windows Azure with the Windows Azure Companion: check out the Drupal & Windows Azure Companion tutorial.
    IMPORTANT NOTE - July 13, 2011
    The Windows Azure Companion was an experimental tool to provide a simple experience installing and configuring platform elements (PHP runtime, extensions) and web applications on Windows Azure.  Based on the feedback and results, Microsoft has decided to stop any further development of the Windows Azure Companion; instead, we recommend using the new tools available at http://azurephp.interoperabilitybridges.com/downloads to deploy applications to Windows Azure.

    On top of this Drupal instance running on Windows Azure, we deployed the four new generic modules that allow Drupal administrators and developers to provide their users with new features:

    The Bing Maps Module for Drupal provides easy and flexible embedding of Bing Maps in Drupal content types, such as a technical article or story.



    The Windows Live ID Module for Drupal allows Drupal users to associate their Drupal accounts with their Windows Live ID, and then to log in to Drupal with their Windows Live ID.



    The OData Module for Drupal allows developers to include data sources based on OData in Drupal content types. The generic module includes a basic OData query builder and renders data in a simple HTML Table. In this case, we are taking the Netflix OData catalog and using a simple visual query engine, generating a filtered query to display on our Drupal “Article” page.



    The Silverlight PivotViewer Module for Drupal enables easy and flexible embedding of the Silverlight PivotViewer in Drupal content types, using a set of preconfigured data sources such as OData producers or existing Pivot collections.


    In this example, we are using the wedding venues pivot collection exposed on http://beta.hitched.co.uk to render the interactive Silverlight PivotViewer of that collection with deep zoom image support and a complete visual query experience.


    These modules are independently developed and contributed by Schakra and MindTree, with funding provided by Microsoft. The modules have all been made available on GitHub, and we hope to see them moved to the Drupal module gallery in the near future. As always I look forward to your comments and feedback.


    Craig Kitterman, Sr. Interoperability Evangelist, Microsoft

  • Interoperability @ Microsoft

    Open XML made easier for Java developers with Apache POI


    [05/18 Update:
    The Apache POI project is highlighted at today's Document Interoperability Initiative (DII) event in London]

    When developers are tasked with handling document file formats, it can be challenging to do the right thing if you don’t have good experience with a particular format and need to crack it open and understand all the details.

    For Java developers working with Microsoft Office file formats, there’s a very interesting solution in the Apache POI project, which provides a Java API to access Microsoft Office formats. Last year Microsoft and Sourcesense announced that they would collaborate to add support for the Open XML file format to the Apache POI project, and the resulting Open XML support has been integrated as part of POI 3.5 beta 5.

    The end result: Good news for Java developers who need to manipulate the Office Open XML files (.XLSX, .DOCX, .PPTX), because it really makes it easier for them to do the job!

    To illustrate the point, let me walk you through a demo scenario that uses Apache POI Java Libraries and actually combines it with the PHPExcel project (for PHP developers) and the Open XML Format SDK 2.0 (for .NET developers). My goal is just to give you a sense of the type of scenarios you can easily develop using multiple languages and multiple platforms.

    We will make that demo available with more explanation in an article on http://openxmldeveloper.org/. Before we get into the demo itself I want to thank Julien Chable and Maarten Balliauw for their help in building this demo.

    For now, let me walk you through the scenario. For the sake of our demonstration, we are going to show how raw data can be consumed by a Java web application using Apache POI to create an .XLSX file from scratch; how that file can then be accessed and modified by a PHP application (with PHPExcel); and finally how the resulting file can be digitally signed and finalized via the .NET Framework using the Open XML Format SDK.

    Here’s the data flow:


    Step 1 of the scenario starts in the Java Web applications:


    Once the “Create Spreadsheet” button is pressed, it creates the files:


    And does some processing to inject the initial XML data and formatting. The result looks like this:


    Most of the Java code required to do this fits in this code snippet:


    Step 2, moving to the PHP application, the UI is similar:


    This step adds cell protection, renames the .XLSX file, changes cell formatting, and inserts additional content formatting. The result looks like this:


    And the code to accomplish it looks like this:


    Step 3, finally, from the ASP.NET web applications using the Open XML Format SDK:


    Where the code for adding the digital signature looks like this:


    Easy, don’t you think?
    Stay tuned, as I said earlier, we will follow up on http://openxmldeveloper.org/ with a more detailed article.

    Additional background on PHPExcel and the Open XML SDK:

    The PHPExcel project is an open source project available on CodePlex. It consists of a set of classes for PHP that enables PHP applications to read and write to various file formats. These formats include HTML, PDF, and the relevant one for our demonstration… Excel 2007’s .XLSX format. This class set supports features such as setting spreadsheet metadata (author, title, description ...), multiple worksheets, different fonts and font styles, cell borders, fills, gradients, and adding images to spreadsheets. In parallel to this project, there is also the sister project PHPPowerPoint, which is intended to operate along similar lines as the PHPExcel application but with a focus on the .PPTX file formats. Both of these projects are built around the Open XML standard and the PHP framework. Read this nice article: Use PHP to create Open XML Spreadsheet reports

    The Open XML Format SDK provides methods for .NET developers to access and manipulate XML content, including XML data contained in OXML document formatted files. It provides strongly typed part classes to manipulate Open XML documents. The SDK also uses the .NET Framework Language-Integrated Query (LINQ) technology to provide strongly typed object access to the XML content inside the parts of Open XML documents. The April 2009 CTP release also adds support for the validation of Open XML documents.
    Read Brian Jones' blog to go deep on Open XML SDK. 

    Jean-Christophe Cimetiere  - Sr. Technical Evangelist

  • Interoperability @ Microsoft

    Apache incubator project, Stonehenge: showcasing Web Services interoperability


    I am a Principal Program Manager in Jean Paoli’s Interoperability Technical Strategy Team. Among other things, I am also the lead for Microsoft’s participation in the Apache incubator project, Stonehenge. I am really excited about Microsoft’s participation in this effort and look forward to our continued involvement with it.

    As Jean discussed in his post, Microsoft has been working on many open source projects but this is the first time that Microsoft is participating as a code contributor in an Apache project! This has been a very valuable learning experience for us here at Microsoft that will significantly inform and influence many future projects, I am sure.


    In November, I wrote on port25 about ApacheCon and the Stonehenge incubator project. Lots of activities have taken place since then around Stonehenge. It was approved as an incubator project within Apache Software Foundation, and WSO2 and Microsoft have already contributed code for a web-services based sample application (called StockTrader) to this effort. Our code can be found here, along with the contributions from WSO2.


    We have three committers from Microsoft on the Stonehenge incubator project. Most of the credit must go to Greg Leake, who wrote the original StockTrader application, and Drew Baird, who worked to get it ready for contribution to Stonehenge. Mike Champion is also going to play an active role in this effort, as he mentioned in his recent blog where he describes how “Stonehenge can help wire up the "last mile…"


    Projects like Stonehenge are very important to enhance interoperability between different software implementations. Standards organizations do a great job and the roll out of various WS-* standards is a testimonial to the fact that they can work efficiently. But interoperability work doesn’t stop at the end of the standardization process… in fact, that is where it really starts.


    It is important for customers and the industry to have multiple implementations of these standards and have the ability to choose the best ones for their scenarios and requirements. This will encourage competition and ensure the production of better quality software in response to market forces. Interoperability work within an open community generates both competition and collaboration. Customers will be able to get working code on multiple platforms and vendors will be able to catch bugs and test interoperability issues in an open manner.


    Stonehenge has attracted some very prominent committers so far and I hope that the momentum will be sustained. I am looking forward to seeing code contributions from other folks and seeing the StockTrader sample application enhanced with new features. I also hope that new sample applications will be developed to cover other areas of the WS-* standards that are not best represented by the StockTrader application. I look forward to participating in this discussion with the Stonehenge community.


    I also want to thank the folks at WSO2 Inc. for their leadership and guidance in driving the Stonehenge project. Congratulations are due to Paul Fremantle, Sanjiva Weerawarana, Jonathan Marsh and their dev team for successfully launching and steering this project so far. We are happy to follow and work with other participants in making it successful.


    I would like to hear comments and feedback on the Stonehenge project and also discuss ideas around other interoperability projects of similar nature. Looking forward to the conversation!

  • Interoperability @ Microsoft

    Prototypes of JavaScript Globalization & Math, String, and Number extensions


    As the HTML5 platform becomes more fully featured, web applications become richer, and scenarios that require server-side interaction for trivial tasks become more tedious.  This brings deficits in the capabilities of JavaScript as a runtime into focus.

    Microsoft is committed to advancing the JavaScript standard. Through active participation in the Ecma TC39 working group, we have endorsed and pushed for the completion of proposed standards which provide extensions to the intrinsic Math, Number, and String libraries and introduce support for Globalization. We shared the first version of prototypes for the libraries at the standards meeting on the Microsoft campus in July, and shared our Globalization implementation at the standards meeting last week at Apple’s Cupertino campus. In addition, we are also releasing these reference implementations so that the JavaScript community can provide feedback from applying them in practice.

    What’s in this drop

    This drop includes extensions to the Math, Number, and String built-in libraries, including:

    • Math: cosh, sinh, tanh, acosh, asinh, atanh, log1p, log2, log10
    • String: startsWith, endsWith

    To illustrate, a simple code sample using some of these functions is included below:

    var aStr = "24-";
    var aStrR = aStr.reverse();
    var num = aStrR * 1;
    if (Number.isInteger(num)) {
        console.log("The sign of " + num + " is " + Math.sign(num));
    }

    This drop also includes an implementation of the evolving Globalization specification. Globalization is the software discipline that makes sure that applications can deal correctly with changes in number and date formats, for example. It’s a part of the localization of an application to run in a local language. With this library, you can show date and numbers in the specified locale and specify collation properties for the purposes of sorting and searching in other languages. You can also set standard date and number formats to use alternate calendars like the Islamic calendar or formats to show currency as a Chinese Yuan. Again, a code sample illustrates below:

    var nf = new Globalization.NumberFormat(localeList, {
        style: "currency",
        currency: "CNY",
        currencyDisplay: "symbol",
        maximumFractionDigits: 1
    });

    nf.format(100); // "¥100.0"

    var dtf = new Globalization.DateTimeFormat(
        new Globalization.LocaleList(["ar-SA-u-ca-islamic-nu-latin"]), {
            weekday: "long"
        });

    dtf.format(); // today's date
    dtf.format(new Date("11/15/2011")); // "الثلاثاء, ١٢ ١٩ ٣٢"

    How to get the bits

    The prototypes should install automatically if you view the Intrinsics Extensions demo and the Globalization demo. Or to install the prototype, run the MSIs found here.

    Note that as with all previous releases of HTML5 labs, this is an unsupported component with an indefinite lifetime. This should be used for evaluation purposes only and should not be used for production level applications.

    Providing Feedback

    We’ve created a couple of sample applications so you can see what this functionality enables.  Once you’ve installed the bits, view the Intrinsics Extensions demo and the Globalization demo to see the APIs in action. 

    As usual, we encourage you to play with the sample apps, download the prototype, and develop your own app to see how it feels. Once you’ve tried it out, let us know if you have any feedback or suggestions. We look forward to improving JavaScript and making it ever easier to build great web applications using standard APIs.

    Thanks for your interest!

    Claudio Caldato, Adalberto Foresti – Interoperability Strategy Team


  • Interoperability @ Microsoft

    Interoperability Elements of a Cloud Platform Outlined at OSCON


    OSCON Keynote Jean Paoli

    This week I’m in Portland, Oregon attending the O’Reilly Open Source Convention (OSCON). It’s exciting to see the great turnout as we look to this event as an opportunity to rub elbows with others and have some frank discussions about what we’re collectively doing to advance collaboration throughout the open source community. I even had the distinct pleasure of giving a keynote this morning at the conference.

    My presentation, titled “Open Cloud, Open Data,” described how interoperability is an essential component of a cloud computing platform. I personally think it’s critical to acknowledge that the cloud is intrinsically about connectivity. Because of this, interoperability is really the key to successful connectivity.

    We’re facing an inflection point in the industry: with the cloud still in a nascent state, we need to focus on removing the barriers to customer adoption and enhancing the value of cloud computing technologies. As a first step, we’ve outlined what we believe are the foundational elements of an open cloud platform.


    They include:

    • Data Portability:
      How can I keep control over my data?
      Customers own their own data, whether stored on-premises or in the cloud. Therefore, cloud platforms should facilitate the movement of customers’ data in and out of the cloud.
    • Standards:
      What technology standards are important for cloud platforms?
      Cloud platforms should support commonly used industry standards so as to facilitate interoperability with other software and services that support the same standards. New standards may be developed where existing standards are insufficient for emerging cloud platform scenarios.
    • Ease of Migration and Deployment:
      Will your cloud platforms help me migrate my existing technology investments to the cloud and how do I use private clouds?
      Cloud platforms should provide a secure migration path that preserves existing investments and should enable the co-existence of on-premises software and cloud services. This will enable customers to run “customer clouds” and partners (including hosters) to run “partner clouds” as well as take advantage of public cloud platform services.
    • Developer Choice:
      How can I leverage my developers’ and IT professionals’ skills in the cloud?
      Cloud platforms should offer developers a choice of software development tools, languages and runtimes.

    Through our ongoing engagement in standards and with industry organizations, open source developer communities, and customer and partner forums, we hope to gain additional insight that will help further shape these elements. We’ve also pulled together a set of related technical examples which can be accessed at www.microsoft.com/cloud/interop to support continued discussion with customers, partners and others across the industry.

    Interoperability Elements of a Cloud Platform

    In addition, we continue to work with others in the industry to deliver resources and technical tools to bridge non-Microsoft languages — including PHP and Java — with Microsoft technologies. As a result, we have produced several useful open source tools and SDKs for developers, including the Windows Azure Command-line Tools for PHP, the Windows Azure Tools for Eclipse and the Windows Azure SDK for PHP and for Java. Most recently, Microsoft joined Zend Technologies Ltd., IBM Corp. and others for an open source, cloud interoperability project called Simple API for Cloud Application Services, which will allow developers to write basic cloud applications that work in all of the major cloud platforms.

    Available today is the latest version of the Windows Azure Command Line Tools for PHP to the Microsoft Web Platform Installer (Web PI). The Windows Azure Command Line Tools for PHP enable developers to use a simple command-line tool without an Integrated Development Environment to easily package and deploy new or existing PHP applications to Windows Azure. Microsoft Web PI is a free tool that makes it easy to get the latest components of the Microsoft Web Platform as well as install and run the most popular free web applications.

    On the data portability front, we’re also working with the open source community to support the Open Data Protocol (OData), a REST-based Web protocol for manipulating data across platforms ranging from mobile to server to cloud. You can read more about the recent projects we’ve sponsored (see OData interoperability with .NET, Java, PHP, iPhone and more) to support OData. I’m pleased to announce that we’ve just released a new version of the OData Client for Objective-C (for iOS & MacOS), with the source code posted on CodePlex, joining a growing list of already available open source OData implementations.

    Microsoft’s investment and participation in these projects is part of our ongoing commitment to openness, from the way we build products, collaborate with customers, and work with others in the industry. I’m excited by the work we’re doing, and equally eager to hear your thoughts on what we can collectively be doing to support interoperability in the cloud.

    Jean Paoli, general manager for Interoperability Strategy at Microsoft

  • Interoperability @ Microsoft

    Microsoft, Zend and others announce Simple API for Cloud Application Services


    Zend Technologies today launched the Simple API for Cloud Application Services project, “a new open source initiative that allows developers to use common application services in the cloud, while enabling them to unlock value-added features available from individual providers.”

    The initial goal of the project is to provide a set of programming interfaces for PHP developers to facilitate the development of applications that have basic cloud storage needs.

    The project’s announcement includes a quote from Microsoft’s Doug Hauger, General Manager Windows Azure: “Microsoft is pleased to continue to work with Zend and join efforts with other contributors to this project. The Simple Cloud API is an example of Microsoft’s continued investment in the openness and interoperability of its platform. We’re excited to see how this project will foster adoption of cloud computing platforms by PHP developers and hope that many of these developers are encouraged to use Windows Azure.”

    What is the Simple API for Cloud Application Services?

    Cloud computing platforms are new technologies and the platform vendors are innovating rapidly in their platforms to address varied customer needs. Some projects do not require the richness provided by vendor-specific APIs and can instead be built with simple APIs that provide an abstraction layer across different platforms. From a developer’s perspective, simple APIs make it easier to write code that remains the same whatever the destination platform.


    This project is pragmatic. The first available implementation of the “Simple API for Cloud Application Services” is provided by Zend who will ship the “Zend Cloud” adapters that will target storage services such as:

    • File storage services, to enable all kinds of files to be stored
    • Document Storage services, to enable manipulation of structured data in a tabular form
    • Simple queue services, to enable storage and delivery of messages.


    The project encourages PHP developers to explore cloud computing by writing code that leverages commonalities across different platforms’ storage services. As developers become proficient and learn each platform, they will be further inclined to learn vendor-specific features to take advantage of richer functionality.

    Microsoft’s contribution to the project

    A few months ago, Microsoft started to work with Real Dolmen on a Windows Azure SDK for PHP developers. This SDK has been submitted to the Zend Framework (see “July CTP of Windows Azure for PHP Released and support in Zend Framework”) and it now forms the basis of Microsoft’s contribution to the Simple Cloud API project.

    PHP developers will be able to program against Windows Azure using the Simple Cloud API to access the main features of Windows Azure Storage:


    PHP developers who need Windows Azure-specific features that are not included in the Simple Cloud API (e.g., Windows Azure storage supports transactions, unlike some other cloud storage services) will be able to combine Simple Cloud API code with Windows Azure storage-specific code using the dedicated Windows Azure SDK for PHP. The goal is to allow “developers to use common application services in the cloud, while enabling them to unlock value-added features available from individual providers”.

    The Channel9 video provides more information on this announcement:


    Going Forward

    Windows Azure is an open platform. We believe that initiatives like the Simple Cloud API will benefit adoption of cloud computing platforms by developers. The Simple Cloud API gives PHP developers more choices and for Microsoft this is a great opportunity to encourage them to use Windows Azure.

    Let’s meet at www.simplecloud.org

    Vijay Rajagopalan, Principal Architect, Microsoft Corp.

  • Interoperability @ Microsoft

    The OData Producer Library for PHP is here



    I’m pleased to announce that today we released the OData Producer Library for PHP. In case you missed it, last year we released a client library that allows PHP applications to consume an OData feed, and with this new library it is now easy for PHP applications to generate OData feeds. PHP developers can now add OData support to their applications so they can be consumed by all clients and libraries that support OData.

    The library is designed to be used with a wide range of data sources, from databases such as SQL Server and MySQL to application-level data structures such as those in CMS systems. The library is available for download under the open source BSD license: http://odataphpproducer.codeplex.com/

    To keep the library generic enough to be used in a wide range of scenarios, we didn’t take any dependency on specific data structures or data sources. Instead, the library is based on three main interfaces that, when implemented by the developer for a specific data source, allow the library to retrieve the appropriate data and serialize it for the client. The library takes care of handling metadata, query processing and serialization/deserialization of the data streams.

    Two examples are included that show how a full OData service can be built using the library: the Northwind DB example uses a SQL Express database as its data source, and the WordPress example uses WordPress’s MySQL DB schema to expose a feed for Posts, Comments and Users.

    Quick Introduction to OData

    The Open Data Protocol (OData) is an open protocol for sharing data. It is built upon AtomPub (RFC 5023) and JSON. OData is a REST (Representational State Transfer) protocol, so a simple web browser can view the data exposed through an OData service.

    The basic idea behind OData is to use a well-known data format (Atom feed or JSON) to expose a list of entities.

    The OData technology has two main parts:

    • The OData data model, which provides a generic way to organize and describe data. OData uses the Entity Data Model (EDM). The EDM models data as entities and associations among those entities, so OData works with pretty much any kind of data.
    • The OData protocol, which lets a client make requests to and get responses from an OData service. Data sent by an OData service can be represented on the wire today either in the XML-based format defined by Atom/AtomPub or in JavaScript Object Notation (JSON).

    An OData client accesses data provided by an OData service using standard HTTP. The OData protocol largely follows the conventions defined by REST, which define how HTTP verbs are used. The most important of these verbs are:

    • GET : Reads data from one or more entities.
    • PUT : Updates an existing entity, replacing all of its properties.
    • MERGE : Updates an existing entity, but replaces only specified properties.
    • POST : Creates a new entity.
    • DELETE : Removes an entity.

    Each HTTP request is sent to a specific URI, identifying some resource in the target OData service's data model.
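    These URI conventions can be sketched with a small helper that composes system query options into a request URI. The service root and entity-set names below are illustrative, not taken from any real service:

    ```javascript
    // Sketch of the URI conventions: each request targets a resource path,
    // optionally refined by system query options such as $top or $orderby.
    // The service root and entity-set names are illustrative only.
    function odataUri(serviceRoot, entitySet, options = {}) {
      const query = Object.entries(options)
        .map(([name, value]) => "$" + name + "=" + encodeURIComponent(value))
        .join("&");
      return serviceRoot + "/" + entitySet + (query ? "?" + query : "");
    }

    odataUri("http://example.com/odata.svc", "Posts", { top: 5, orderby: "Title" });
    // → "http://example.com/odata.svc/Posts?$top=5&$orderby=Title"
    ```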

    The OData Producer Library for PHP

    The OData Producer Library for PHP is a server library that allows data sources to be exposed using the OData protocol.

    The OData Producer supports all Read-Only operations specified in the Protocol version 2.0:

    • It provides two formats for representing resources, the XML-based Atom format and the JSON format.
    • Servers expose a metadata document that describes the structure of the service and its resources.
    • Clients can retrieve a feed, Entry or service document by issuing an HTTP GET request against its URI.
    • Servers support retrieval of individual properties within Entries.
    • It supports pagination, query validation and system query options such as $format, $top, $inlinecount, $filter, $select, $expand, $orderby and $skip.
    • Users can access binary stream data (i.e., the library allows an OData server to give access to media content such as photos or documents in addition to all the structured data)

    How to use the OData Producer Library for PHP

    An application maps its data to the OData Producer through three interfaces. From there, the data is converted to the OData structure and sent to the client.

    The three required interfaces, plus one optional interface, are:

    • IDataServiceMetadataProvider: this is the interface used to map the data source structure to the Metadata format that is defined in the OData Protocol. Usually an OData service exposes a $metadata endpoint that can be used by the clients to figure out how the service exposes the data and what structures and data types they should expect.
    • IDataServiceQueryProvider: this is the interface used to map a client query to the data source. The library has the code to parse the incoming queries but in order to query the correct data from the data source the developer has to specify how the incoming OData queries are mapped to specific data in the data source.
    • IServiceProvider: this is the interface that deals with the service endpoint and allows defining features such as Page size for the OData Server paging feature, access rules to the service, OData protocol version(s) accepted and so on.
    • IDataServiceStreamProvider: This is an optional interface that can be used to enable streaming of content such as Images or other binary formats. The interface is called by the OData Service if the DataType defined in the metadata is EDM.Binary.
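    To make the role of the query provider concrete, here is a conceptual sketch, in JavaScript rather than PHP, of the kind of work it performs once the library has parsed the incoming query. All names here are illustrative, not the library's actual API:

    ```javascript
    // Conceptual sketch of what a query provider must do: translate parsed
    // $filter/$orderby/$skip/$top options into operations on the underlying
    // data source (here, a plain in-memory array).
    function applyQueryOptions(rows, { filter, orderby, skip = 0, top } = {}) {
      let result = rows.filter(filter || (() => true));
      if (orderby) {
        result = [...result].sort(
          (a, b) => (a[orderby] > b[orderby]) - (a[orderby] < b[orderby]));
      }
      return result.slice(skip, top !== undefined ? skip + top : undefined);
    }

    // e.g. the equivalent of "GET Posts?$orderby=id&$top=2":
    applyQueryOptions([{ id: 3 }, { id: 1 }, { id: 2 }], { orderby: "id", top: 2 });
    // → [{ id: 1 }, { id: 2 }]
    ```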

    If you want to learn more about the OData Producer Library for PHP, the User Guide included with the code provides detailed information on how to install and configure the library; it also shows how to implement the interfaces in order to build a fully functional OData service.

    The library is built using only PHP and it runs on both Windows and Linux.

    This is the first release of the Producer library; future versions may add write support for scenarios where the OData service needs to provide the ability to update data. We will also keep it up to date with future versions of the OData protocol.

    Claudio Caldato, Principal Program Manager, Interoperability Strategy Team

  • Interoperability @ Microsoft

    Now on IE and Firefox: Debug your mobile HTML5 page remotely with weinre (WEb INspector REmote)


    Great news for HTML5 mobile developers: the remote DOM inspector tool weinre is no longer restricted to WebKit-based browsers and can now be used with Internet Explorer 10 or Firefox, thanks to a community contribution with technical support from Microsoft Open Technologies, Inc.

    WEb INspector REmote

    Weinre (WEb INspector REmote) is an HTML5 debugging tool that addresses the challenge of testing and troubleshooting web pages on mobile devices. It is part of the Apache Cordova project and is intended to help developers debug their mobile web pages or Cordova-based mobile apps on actual devices. It allows remote DOM inspection and generally makes it much easier to validate that an HTML5 page will render and behave properly on real hardware.

    If you are familiar with the F12 tools in IE, Firebug in Firefox or the Web Inspector in Chrome, then you will feel right at home debugging your HTML5 pages on mobile devices.

    Removing WebKit dependencies

    Thanks to recent community work with MS Open Tech technical support, the WebKit dependencies were removed from weinre, and the tool now works perfectly on Internet Explorer and Firefox, giving you the option to use your favorite modern browser.

    For HTML5 developers wrapping web code into native applications with tools such as Apache Cordova (PhoneGap), this tool is a great addition: the native development tools do not allow DOM inspection of the HTML5 content encapsulated in their native apps. Using weinre, they can now test the HTML5 part of their apps in a real environment, no longer “simulating” actual devices with desktop web browsers.


    Check out the video below to see a short demo of Weinre used to debug an HTML5-based application on Windows Phone 8.

    Get Started with weinre on IE 10 and Windows Phone 8

    To get started with weinre, visit the weinre Apache project page.

    You can install weinre with npm using the following command in the Node.js command prompt: npm -g install weinre

    A debug server (running on Node.js) is launched on your development machine, and a debugger client web page allows you to inspect and manipulate the DOM elements of your HTML5 page. In the Node command prompt, just type weinre --boundHost xx.xx.xx.xx (where xx.xx.xx.xx is the IP address of the network adapter you want to use).

    You can also adjust a number of settings so that the server is bound to a specific network connection and uses a specific port.

    You can then access the debugger instructions web page by going to http://xx.xx.xx.xx:8080/.


    You can then start your debugger client page: http://xx.xx.xx.xx:8080/client.

    Instrument your mobile web page with the following script line:

    <script src="http://xx.xx.xx.xx:8080/target/target-script-min.js#anonymous"></script>

    Then bring up your mobile page on a connected device, whether in the browser or in your app. The client will show the new target connection and you will be able to play with your DOM elements!
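    Since this instrumentation script should not ship to end users, one common pattern is to inject it only when a debug flag is set. The helper below is a hypothetical sketch, not part of weinre itself; the host address is an assumption standing in for your own weinre server:

    ```javascript
    // Hypothetical helper: add the weinre target script only in debug
    // builds, so the instrumentation never reaches production users.
    function injectWeinre(doc, debug, host) {
      if (!debug) return null;
      var s = doc.createElement("script");
      s.src = "http://" + host + "/target/target-script-min.js#anonymous";
      doc.head.appendChild(s);
      return s;
    }

    // In a browser you would call, e.g.:
    //   injectWeinre(document, isDebugBuild, "192.168.0.10:8080");
    ```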


    Call for feedback

    Once you’ve given weinre a try to inspect your HTML5 markup on a Windows Phone 8 device, we would love to have your feedback and input. Please comment below to let us know what you think.

  • Interoperability @ Microsoft

    Migrating HTML5-based applications to Windows Phone overnight with Apache Cordova and jQuery Mobile


    Microsoft Open Technologies, Inc., the Apache Cordova team and the jQuery Mobile team recently met with over 20 top PhoneGap/Apache Cordova developers for a Hackathon in San Francisco to gather feedback on building HTML5 applications running on Windows Phone with Apache Cordova and the jQuery Mobile theme for Windows Phone (Metro style). The Windows Phone team also joined the party, asking developers about their HTML5 and JavaScript experiences on top of the Windows Phone Web Browser control.

    While some of the developers had backgrounds in start-ups and others were independent developers, every attendee had one thing in common: all of them had PhoneGap/Cordova applications published on Android and/or iPhone platforms. During the event, using Apache Cordova and jQuery Mobile, we helped attendees migrate their HTML5-based applications to Windows Phone. For many of the attendees, this was their first time to work with Windows Phone. The energy at the event was amazing as developers got to experience first-hand the ease of integrating Apache Cordova and jQuery Mobile with Windows Phone.

    You can read the report on the event from Jesse and Steve from the Apache Cordova team here.

    After a few hours of learning Visual Studio Express for Phone, coding and eating pizza, the first demos of HTML5 applications running on Windows Phone started to pop up. Developers saw their applications running on their new Windows Phone devices, which they received as part of the event along with AppHub tokens for them to publish applications on the Windows Phone marketplace.

    Developers from Learnzapp, with no previous experience in Windows Phone development, migrated their Cordova/jQuery Mobile Law School Admission Test application to Windows Phone and applied the jQuery Mobile theme for Windows Phone (Metro style) to their HTML5 controls in only a few hours. Those developers plan to submit the application to the Windows Phone marketplace in the next few days. You can read their own report on the event on their blog. Below is a screenshot of the LSAT application running on an Android device, a Windows Phone and an iPhone.


    Developers from Tiggzi, delivering a cloud based Builder for HTML5, jQuery Mobile and Apache Cordova applications, kicked off the addition of Windows Phone to the list of platforms their tool targets. They announced the added support for Windows Phone earlier this month, only 3 weeks after the event.

    The event was a success not only because developers left the Hackathon with functional HTML5-based Windows Phone applications after migrating them in a single night, but also because these experienced developers helped us identify the key aspects of the migration process, which will enable us to make HTML5 and JavaScript development for Windows Phone even better.

    We want to thank everyone who attended the event, and look forward to further engagement with this community. Be sure to take a look at the video below to see a short demo of an HTML5 application development with Apache Cordova, jQuery Mobile and the new jQuery Mobile theme for Windows Phone (Metro style).

    To learn more about HTML5 and JavaScript development for Windows Phone, visit this page, where you will find related resources, articles and tutorials.

    Abu Obeida Bakhach
    Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Windows Azure Platform gets easier for PHP developers to write modern cloud applications


    This week, I’m attending the Open Source India conference in Chennai, India, where I had the chance to participate in the opening keynote. During my talk, I gave a quick overview of the Interoperability Elements of a Cloud Platform, and I illustrated some of those elements through a series of demos. I used this opportunity to unveil a new set of developer tools and Software Development Kits (SDKs) for PHP developers who want to build modern cloud applications targeting the Windows Azure Platform:

    • Windows Azure Companion – September 2010 Community Technology Preview (CTP) – is a new tool that aims to provide a seamless experience when installing and configuring PHP platform elements (PHP runtime, extensions) and Web applications running on Windows Azure. This first CTP focuses on PHP, but it may be extended to be used for deploying any open source component or application that runs on Windows Azure. Read below for more details.
    • Windows Azure Tools for Eclipse for PHP – September 2010 Update – is a plug-in for PHP developers using the Eclipse development environment, which provides tools to create, test and deploy Web applications targeting Windows Azure.
    • Windows Azure Command-line Tools for PHP – September 2010 Update – is a command-line tool, which offers PHP developers a simple way to package PHP-based applications in order to deploy them to Windows Azure.
    • Windows Azure SDK for PHP – Version 2.0 – enables PHP developers to easily extend their applications by leveraging Windows Azure services (like blobs, tables and queues) in their Web applications, whether they run on Windows Azure or on another cloud platform.


    These pragmatic examples are good illustrations of Windows Azure interoperability. Keep in mind that Microsoft’s investment and participation in these projects is part of our ongoing commitment to openness, which spans the way we build products, collaborate with customers, and work with others in the industry.

    A comprehensive set of tools and building blocks to pick and choose from

    We’ve come a long way since we released the first Windows Azure SDK for PHP in May 2009, by adding complementary solutions with the Eclipse plug-in and the command line tools.

    The Windows Azure SDK for PHP gives PHP developers an easy way to extend their applications by leveraging Windows Azure services (like blobs, tables and queues), whether they run on Windows Azure or on another cloud platform. Maarten Balliauw, from RealDolmen, today released version 2.0 of the SDK. Check out the new features on the project site: http://phpazure.codeplex.com/.

    An example of how this SDK can be used is Windows Azure Storage for WordPress, which allows developers running their own instance of WordPress to take advantage of the Windows Azure Storage services, including the Content Delivery Network (CDN) feature. It provides a consistent storage mechanism for WordPress media in a scale-out architecture where the individual Web servers don’t share a disk.

    Today we are also announcing updates on the Windows Azure Tools for Eclipse for PHP and the Windows Azure Command-line Tools for PHP.

    Developed by Soyatec, the Windows Azure Tools for Eclipse plug-in offers PHP developers a series of wizards and utilities that allow them to write, debug, configure, and deploy PHP applications to Windows Azure. For example, the plug-in includes a Windows Azure storage explorer that allows developers to browse data contained in Windows Azure tables, blobs, or queues. The September 2010 Update includes many new features, such as enabling Windows Azure Drives, providing the PHP runtime of your choice, deploying directly to Windows Azure (without going through the Azure Portal), and integration of SQL CRUD for PHP, just to name a few. We will publish detailed information shortly; in the meantime, check out the project site: http://www.windowsazure4e.org/.

    We know that PHP developers use various development environments – or none at all – so that’s why we built the Windows Azure Command-line Tools, which let you easily package and deploy PHP applications to Windows Azure using a simple command-line tool. The September 2010 Update includes more deployment options, like new support for the Windows Azure Web and Worker roles.

    So you might think that, from the PHP developer’s point of view, you’re covered to write and deploy cloud applications for Windows Azure. The answer is both yes and no!

    Yes, because these tools cover most scenarios where developers are building and deploying one application at a time. But what if you want to deploy open source PHP SaaS applications on the same Windows Azure service? Or what if you are more of a Web applications administrator, and just want to deploy pre-built applications and simply configure them?

    This is where the Windows Azure Companion comes into the picture.

    A seamless experience when deploying PHP apps to Windows Azure


    IMPORTANT NOTE - July 13, 2011

    The Windows Azure Companion was an experimental tool to provide a simple experience installing and configuring platform elements (PHP runtime, extensions) and web applications on Windows Azure. Based on the feedback and results, Microsoft has decided to stop any further development of the Windows Azure Companion; instead, we recommend using the new tools available at http://azurephp.interoperabilitybridges.com/downloads to deploy applications to Windows Azure.


    The Windows Azure Companion – September 2010 CTP– is a new tool that aims to provide a seamless experience when installing and configuring PHP platform-elements (PHP runtime, extensions) and Web applications running on Windows Azure. This early version focuses on PHP, but it may be extended for deploying any open source component or application that runs on Windows Azure. Read below for more details.

    It is designed for developers and Web application administrators who want to more efficiently “manage” the deployment, configuration and execution of their PHP platform-elements and applications.

    The Windows Azure Companion can be seen as an installation engine that is running on your Windows Azure service. It is fully customizable through a feed which describes what components to install. Getting started is an easy three step process:

    1. Download the Windows Azure Companion package & set your custom feed
    2. Deploy Windows Azure Companion package to your Windows Azure account
    3. Using the Windows Azure Companion and your custom feed, deploy the PHP runtime, frameworks, and applications that you want


    So, how did we build the Windows Azure Companion? The Windows Azure Companion itself is a Web application built in ASP.NET/C#. Why C#? Why not PHP? The answer is simple: the application does some low-level work with the Windows Azure infrastructure. In particular, it spins up the Windows Azure Hosted Web Core Worker Role in which the PHP engine and applications are started and then executed. Doing these low-level tasks in PHP would be much more difficult, so we chose C# instead. The source and the installable package (.cspkg & config files) are available on the MSDN Code Gallery: http://code.msdn.microsoft.com/azurecompanion. And from a PHP developer’s perspective, all you need is the installable package – you don’t have to worry about the rest unless you are interested!

    All you need is in the feed

    The Windows Azure Companion Web application uses an ATOM feed as the data-source to display the platform-elements and Web applications that are available for installation. The feed provides detailed information about the platform element or application, such as production version, download location, and associated dependencies. The feed must be hosted on an Internet accessible location that is available to the Windows Azure Companion Web application. The feed conforms to the standard ATOM schema with one or more product entries as shown below:

    <?xml version="1.0" encoding="utf-8"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
      <title>Windows Azure platform Companion Applications Feed</title>
      <link href="http://a_server_on_the_internet.com/feed.xml" />
      <name>Interoperability @ Microsoft</name>
      <id>http://a_server_on_the_internet.com/feed.xml</id>
      <entry>
        <!-- UI elements shown in Windows Azure platform Companion -->
        <installCategory>Frameworks and SDKs</installCategory>
        <title>OData SDK for PHP</title>
        <summary>OData SDK for PHP</summary>
        <!-- Installation Information -->
        <installerFile url="http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=odataphp&amp;DownloadId=111099&amp;FileTime=129145681693270000&amp;Build=17027" version="2.0">
          <installationProperty name="downloadFileName" value="OData_PHP_SDK.zip" />
          <installationProperty name="applicationPath" value="framework" />
        </installerFile>
        <!-- Product dependencies -->
      </entry>
    </feed>
    If you want to see a sample feed in action and the process for building it, I invite you to check Maarten Balliauw’s blog: Introducing Windows Azure Companion Cloud for the masses. He has assembled a custom feed with interesting options to play with. And of course, the goal is to let you design the feed that contains the options and applications you need.

    We are on a journey

    Like I said earlier, we’ve come a long way in the past 18 months, understanding how to best enable various technologies on Windows Azure. We’re on a journey and there’s a lot more to accomplish. But I have to say that I’m very excited by the work we’re doing, and equally eager to hear your feedback.

    Vijay Rajagopalan, Principal Architect

  • Interoperability @ Microsoft

    Project Apache Stonehenge: progress and roadmap discussed at ApacheCon


    I, along with other Microsoft colleagues, participated in ApacheCon 2009 in Oakland, CA this week. This week also marks the 10th anniversary of the Apache Software Foundation, so congratulations to the ASF and the overall Apache community, which has steadily grown and sustained itself over the past decade!

    Microsoft is now actively participating in several Apache projects and becoming part of the core community. ASF President, Justin Erenkrantz, talked about Microsoft’s contributions recently, as well. Peter Galli has also blogged on other activities that we were engaged in at the ApacheCon this year.

    I have personally been involved in the Apache Stonehenge incubator project along with my colleague Kent Brown, and it is good to see all the progress that we have made in the last year. The original goal of Stonehenge was to provide a public forum to test the interoperability of WS-* protocols on different vendor stacks and to build sample applications that could provide best practices and coding guidelines for better interoperability. We are on a good path to achieving many of these goals, with the main sample application, Stock Trader, now having been implemented on .NET (by Microsoft), PHP (by WSO2), the WSAS Java stack (by WSO2), Metro (by Sun Microsystems), and Spring Web Services (by SpringSource).


    The Stock Trader application has also been extended to use the WS-Security and WS-Trust protocols for claims-based authentication scenarios. This allows end users to be authenticated through an independent Security Token Service (STS) that is trusted by the bank, and the resulting token to be passed to the broker to process the transaction.


    Moving forward, the Stonehenge dev community wants to focus on building multiple micro-samples, each focused on a specific set of WS-* protocols. I think this is a great idea because it will allow vendors, developers and customers to quickly test and learn the specific protocols of interest to them instead of going through one big application that covers most of them. It also allows individual developers to work on different samples and turn them around faster. One other idea being discussed is a dashboard that shows the results of interop tests across different stacks in an easy-to-understand way. This will be of great interest to our customers, who can see interop testing results across different vendors' implementations of the WS-* standards. Kent Brown, along with Prabath Siriwardena of WSO2, did a technical session on Stonehenge and talked about future plans. There seems to be good consensus building around these future plans for Stonehenge.

    In addition to the meetings to discuss the progress of Stonehenge with other contributors to the project, I had the privilege of meeting with the ASF executives and talking about other projects that Microsoft can work on. I look forward to doing more interop work and engagements with the Apache community over the next year. We also showed off several other interop-related projects that Microsoft has been engaged in recently:

  • Interoperability @ Microsoft

    Bienvenue - Benvenuto - Welcome to Microsoft Gianugo Rabellino, Senior Director, Open Source Communities.


    Fourteen years ago, I relocated from France to join Microsoft, where I expanded my dream to do big things with technology.  As General Manager for Interoperability Strategy at Microsoft today, I have the privilege of leading a talented team of people who draw inspiration from delivering the most value out of technology for the benefit of our customers. As I review the work Microsoft has been doing to support these efforts, I am continually impressed by the company’s commitment to addressing the customer realities of today’s mixed source IT environments. 

    You may know our team's accomplishments from our Interoperability Bridges & Lab Center, but we do much more. We strive to take a holistic approach to interoperability and openness, always exploring new ways to engage and build deeper technology connections. As part of that effort, I’m extremely pleased to announce a new addition to my team – Gianugo Rabellino, who will be a Senior Director engaged directly with the broader open source world. In his role, Gianugo will work to foster relationships with the open source communities worldwide. I expect he will be a tremendous resource in helping identify ways open source communities and Microsoft can better work together and in helping Microsoft product teams with their open source strategies.

    Many of you may already know Gianugo, as he’s been a well-known figure in open source communities for years. Given his previous roles as Founder and Chief Executive Officer of Sourcesense and Vice President of the Apache XML Project Management Committee, Gianugo possesses a deep understanding of open source technologies and platforms. When he joins Microsoft this coming month, Gianugo will bring his wealth of experience and knowledge to a group of passionate and committed individuals who share his enthusiasm for interoperability and openness between Microsoft and non-Microsoft platforms.

    Gianugo will be relocating from Italy to Redmond to do big things with technology.  And he’ll go big with the support of my team, our company and many of you who follow our blog.  Once he’s had time to settle into his new role, you can expect to hear from Gianugo directly about his plans around his new role, and the adventures in moving a young family from Italy to the Seattle area.

    Jean Paoli, General Manager for Interoperability Strategy

  • Interoperability @ Microsoft

    New CU-RTC-Web HTML5Labs Prototype from MS Open Tech Demonstrates Roaming between Cellular and Wi-Fi Connections


    Demonstrating a faster mobility scenario that would be more difficult with the current WebRTC draft

    Adalberto Foresti
    Principal Program Manager, Microsoft Open Technologies, Inc.

    Since we submitted the initial CU-RTC-Web proposal to the W3C WebRTC Working Group in August 2012, vibrant discussions over the proposed RTCWeb protocol draft and WebRTC API specifications have continued, both online and at face-to-face W3C and IETF Working Group meetings. The amount of energy in the industry around this subject is remarkable, though the road to converge on a quality, implementable spec that properly addresses real-world use cases remains long.

    Last month, our prototype of CU-RTC-Web demonstrated a real world interoperability scenario – voice chatting between Chrome on a Mac and IE10 on Windows via the API.

    Today, Microsoft Open Technologies, Inc. (MS Open Tech) is publishing an updated prototype implementation of CU-RTC-Web on HTML5Labs that demonstrates another important scenario: roaming between two different connections (e.g. Wi-Fi and 3G, or Wi-Fi and Ethernet) with negligible impact on the user experience.

    The simple, flexible, expressive APIs underlying the CU-RTC-Web architecture allowed us to implement this important scenario just by building the appropriate JavaScript code and without introducing any changes in the spec, because CU-RTC-Web is a lower level API than the current proposed WebRTC API draft.

    By comparison, the current high level proposed WebRTC API draft would not allow JavaScript developers to implement this scenario: the current draft would need to see modifications done ‘under the hood’ at the platform level by the developers modifying the browser capability itself. There is a proposal for addressing mobility cases in the IETF, but standardization of these mechanisms and subsequent implementation in the browser takes time.

    This example also illustrates that we should not assume everything that will ever be done with WebRTC is already known at the time the standard is developed. It is tempting to develop an opaque, high level API that is optimized for some well-understood scenarios, but that requires development of new, probably non-interoperable extensions to cover new scenarios - or creating yet another standard to enable such applications. We believe that web developers would prefer to be empowered by a lower level, general API that truly enables evolving, interoperable scenarios from day one. Our earlier CU-RTC-Web blog described critical requirements that a successful, widely adoptable Web RTC browser API will need to meet, particularly in the area of network transport. We mentioned how the RealtimeTransport class connects a browser with a peer, providing a secured, low-latency path across the network.

    Rather than using an opaque, indecipherable blob of Session Description Protocol (SDP, RFC 4566) text, CU-RTC-Web allows applications to choose how media is described to suit their needs. The relationship between streams of media and the network layer they traverse is not some arcane combination of SDP m= sections and a= mumble lines. Applications build a real-time transport and attach media to that transport.

    If you want to learn more about the challenges that SDP brings, some very insightful comments have recently been shared by Robin Raymond of Open Peer on the RTCWEB IETF mailing list. Go here to see Robin’s well-crafted Blog post on the issues – SDP the WebRTC Boat Anchor. As a community, it is important we continue to share these views as inaction will constitute a self-defeating choice, for which the industry would pay a high price for years to come.

    As with our previous release, we hope that publishing this latest working prototype in HTML5Labs provides guidance in the following areas:

    • Clarify the CU-RTC-Web proposal with interoperable working code so others can understand exactly how the API could be used to solve real-world use cases.
    • Encourage others to show working example code that shows exactly how their proposals could be used by developers to solve use cases in an interoperable way.
    • Seek developer feedback on how the CU-RTC-Web addresses interoperability challenges in Real Time Communications.
    • Provide a source of ideas for how to resolve open issues with the current draft API as the CU-RTC-Web proposal is cleaner and simpler.

    The prototype can be downloaded from HTML5Labs. We look forward to receiving your feedback: please comment on this post or send us a message once you have played with the API, and stay tuned for even more to come.

    We are proud to be part of the process and will continue to collaborate with the working group to close the gaps in the specification in the coming months. We remain persuaded that the general principles that governed CU-RTC-Web are valid and that a lower level API such as CU-RTC-Web is preferable to the higher level API within the current proposed WebRTC API draft.  This would result in the most agile and robust standard, one that will empower web developers to create innovative experiences for years and decades to come.

  • Interoperability @ Microsoft

    New release - Tx (LINQ to Logs and Traces)


    We are proud to announce the release of Tx (LINQ to Logs and Traces), an open source project to help with the debugging of software from logs/traces, and the building of real-time monitoring and alerting systems.  

    Tx is code that has been used within Microsoft, for example by the Windows Communication Foundation (WCF) and Service Bus teams. With this release, the Tx code is now available for use in your own projects.

    Tx allows the use of Language Integrated Query (LINQ) queries on raw event sources. LINQ is a Microsoft .NET Framework component that adds native data querying capabilities using any of the supported .NET languages.

    Tx enables the use of Reactive Extensions (Rx) on real event sources and provides support for multiplexed event sequences (a multiplexed sequence, as you might find in a typical log, is a single sequence containing events of different types in order of occurrence). Using Tx, it is possible to hide the heterogeneity of event sources and thus provide a single query across multiple sources. Such queries use the same API for both real-time and past history.

    When working on historical log/trace files, multiple queries can be performed with a single read. For example, a single pass over a file can count all “Warning” events, match “Begin” and “End” events, and calculate the average duration of each activity. This functionality is extremely useful when working with large files, as it is possible to perform the same real-time queries efficiently over historical data to gain additional insights.
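    To make the single-pass idea concrete, here is a rough plain-JavaScript analogue (not the actual Tx/LINQ API; the event shapes, names, and times are made up for illustration) of counting warnings and matching Begin/End pairs in one traversal:

```javascript
// Hypothetical event stream, as might be parsed from a trace file.
const events = [
  { type: "Begin", activity: "req-1", time: 100 },
  { type: "Warning", message: "slow disk", time: 110 },
  { type: "End", activity: "req-1", time: 150 },
  { type: "Begin", activity: "req-2", time: 160 },
  { type: "Warning", message: "retry", time: 170 },
  { type: "End", activity: "req-2", time: 260 },
];

// One pass feeds several "queries" at once:
// count warnings AND match Begin/End pairs to compute durations.
let warningCount = 0;
const begins = new Map();
const durations = [];

for (const e of events) {
  if (e.type === "Warning") warningCount++;
  else if (e.type === "Begin") begins.set(e.activity, e.time);
  else if (e.type === "End" && begins.has(e.activity)) {
    durations.push(e.time - begins.get(e.activity));
    begins.delete(e.activity);
  }
}

const avgDuration = durations.reduce((a, b) => a + b, 0) / durations.length;
console.log(warningCount, avgDuration); // 2 warnings, average duration 75
```

    Tx does the analogous work over real trace formats, with the queries expressed in LINQ rather than hand-rolled loops.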

    With this first release Tx (LINQ to Logs and Traces) provides:

    • Parsers that surface various trace/log formats as IObservables
    • LINQPad Driver, allowing the usage of LINQPad directly on files and real-time sessions
    • Samples illustrating how to use Reactive Extensions + LINQ to Objects on:
      • trace/log files that have no size restriction
      • real-time sessions

    This release also provides the following NuGet packages:

    • Tx.Core
      • Common components that are not tied to a particular tracing format and are reused across different formats.
    • Tx.Windows provides support for:
      • Event Tracing for Windows (ETW) which allows application programmers to start and stop event tracing sessions, instrument applications and consume trace events.
      • Event Logs (.evtx) and listening for changes in event logs
      • Performance counters from files (.blg, .csv, .tsv) and from real-time counter API
      • IIS text logs in W3C format
    • Tx.SqlServer
      • SQL Server Extended Events (XEvent)
    • Tx.All
      • A convenience package containing all the above

    Please check out the Tx project site on CodePlex for more information and the corresponding documentation.


    Georgi Chkodrov, Developer, Microsoft Corp.
    Ross Gardler, Senior Technical Evangelist, Microsoft Open Technologies, Inc.
  • Interoperability @ Microsoft

    Binary to Open XML (B2X) Translator: Interoperability for the Office binary file formats


    [05/18 Update:
    this translator is highlighted in today's Document Interoperability Initiative (DII) event that just happened in London ]

    In support of Microsoft’s ongoing efforts to increase the interoperability of its various technologies, we have partnered with Dialogika to create a translator that converts the Microsoft Office binary file formats (.DOC, .XLS, and .PPT) into the Office Open XML standard format (.DOCX, .XLSX, .PPTX).

    A majority of the world’s documents are available in the binary Office formats, and for developers working with these formats (.DOC, .PPT, and .XLS), Microsoft published the specifications under the Open Specification Promise (OSP) in June 2008.


    A new version of the Binary to Open XML (B2X) Translator has just been released; this version adds support for PowerPoint (.PPT) and Excel (.XLS) files:

    Supported .XLS Features

    Supported .PPT Features

    • Shared Formulas
    • String Formatting
    • Data Type Formatting (number, date, currency, etc.)
    • Cell Formatting


    • Textbox Formatting
    • Shapes
    • Animations
    • Notes (including Formatting)

    (Detailed feature list: http://b2xtranslator.sourceforge.net/architecture.html#mapping )

    From an architectural point of view, the translator can be seen as a series of pipelines during which transformation steps are applied to translate from the binary to Open XML format:


    (more details on http://b2xtranslator.sourceforge.net/architecture.html )

    While it has been possible to manually convert documents between formats by opening the file in the relevant application and saving in the other format, before the release of the translator there was no software tool to automate this task as a stand-alone application, or in a batch mode.

    So from the end-user point of view the translator offers two options: converting files directly from the Windows Explorer context menu, or running the command-line utilities.


    While using Windows’ context menus to translate files is self-explanatory (right-click, convert to…), doing so from the command line warrants a bit more study. The command-line utility consists of three separate executables, one for each file type (doc2x.exe for documents, xls2x.exe for spreadsheets, and ppt2x.exe for presentations). The executables use the same command-line syntax and support the usual basic options: the input filename, the output filename, and the level of debug verbosity. The resulting commands are easy to include in automation scripts and batch processes.

    The command-line architecture allows the translators to be integrated into existing systems such as document management systems running on a server.

    Using the source of the B2X translator, you can rebuild the executables (ppt2x.exe, doc2x.exe, xls2x.exe) with the .NET Framework on Windows or with Mono on Linux, thus ensuring portability across operating systems and platforms.

    As an open source project, the Translator is a solid foundation for engineering work around the Office binary format. Dialogika’s development team has put together a few “how to” guides, including the Freeform Shapes in the Office Drawing Format guide, that helps to explain the specification and give some valuable tips. For developers and ISVs the code of this translator can be reused in their own applications enabling a wide range of document interoperability solutions.

    We’re excited by this latest release making the translators more functional and addressing practical document conversion scenarios. Of course, there’s still work ahead of us! We are currently in the planning stage for the next version. In addition to the goals outlined above, it is very important to us that the translator adequately addresses practical user scenarios. To this end, we would love to hear feedback on this release as well as your feature requests for the next version. Please provide your feedback on the Sourceforge site.

  • Interoperability @ Microsoft

    WS-I Completes Web Services Interoperability Standards Work


    The final three Web services profiles developed by the Web Services Interoperability Organization (WS-I) have been approved by WS-I’s membership. Approval of the final materials for Basic Profile (BP) 1.2 and 2.0, and Reliable Secure Profile (RSP) 1.0 marks the completion of the organization’s work. Since 2002, WS-I has developed profiles, sample applications, and testing tools to facilitate Web services interoperability. These building blocks have in turn served as the basis for interoperability in the cloud. As announced today by WS-I, stewardship over WS-I’s assets, operations and mission will transition to OASIS (Organization for the Advancement of Structured Information Standards).

    It took a lot of work to get real products to fully interoperate using the standards. WS-I members have delivered an impressive body of work supporting deliverables in addition to the profiles (test tools, assertions, etc.). One might ask “why did it take so long, and what exactly did all this hard work entail?”

    When WS-I started up, interoperability of the whole stack of XML standards was fragile, especially of the SOAP and WSDL specifications at the top of the stack. It was possible for a specification to become a recognized standard with relatively little hard data about whether implementations of the specs interoperated. Specs were written in language that could get agreement by committees rather than in terms of rigorous assertions about formats and protocols as they are used in conjunction with one another in realistic scenarios. In other words, the testing that was done before a spec became a standard was largely focused on determining whether the spec could be implemented in an interoperable way, and not on whether actual implementations interoperated.

    At WS-I the web services community learned how to do this better. One of the first tasks was to develop profiles of the core specifications that turned specification language containing “MAY” and “SHOULD” descriptions of what is possible or desirable to “MUST” statements of what is necessary for interoperability, and removing altogether the features that weren’t widely implemented. We learned that it is important to do N-way tests of all features in a profile across multiple implementations, and not just piecewise testing of shared features. Likewise, since the SOAP based specs were designed to compose with one another, it is important to test specs in conjunction and not just in isolation.  During this period of learning and evolving, it was really necessary to go through the profiling process before the market would accept standards as “really done.”

    The underlying reality, especially in the security arena, is quite complex, a fact which also slowed progress. Different products support different underlying security technologies, and adopted the WS-* security-related standards at different rates. Also, there are many different ways to setup secure connections between systems, and it took considerable effort to learn how to configure the various products to interoperate. For example, even when different vendors support the same set of technologies, they often use different defaults, making it necessary to tweak settings in one or both products before they interoperate using the supported standards. The continuous evolution of security technology driven by the ‘arms race’ between security developers and attackers made things even more interesting.

    This work was particularly tedious and unglamorous over the last few years, when the WS-* technologies were no longer hot buzzwords. But now, partly due to the growing popularity of test-driven development in the software industry as a whole, and partly due to the hard-won lessons from WS-I, the best practices noted above are commonplace. Later versions of specifications, especially SOAP 1.2, explicitly incorporated the lessons learned in the Basic Profile work at WS-I. Other standards development organizations (SDOs) such as OASIS and the W3C have applied the techniques pioneered at WS-I, and newer standards are more rigorously specified and don’t need to be profiled before they can legitimately be called “done.” Newer versions of the WS-* standards, as well as CSS, ECMAScript, and the W3C Web Platform (“HTML5”) APIs, are much more tightly specified, better tested, and interoperable “out of the box” than their predecessors were 10 years ago.

    We at Microsoft, and the other companies who did the work at WS-I, learned a lot more about how to get our mutual customers’ applications to interoperate across our platforms than could be contained in the WS-I documents that were just released. To support this effort we are compiling additional guidance under a dedicated website: http://msdn.microsoft.com/webservicesinterop


    This site has a set of whitepapers that go into much more depth about how to achieve interoperability between our platforms and products and those from other vendors and open source projects. Available whitepapers include:

    Finally, it might be tempting to believe that the lessons of the WS-I experience apply only to the Web Services standards stack, and not the REST and Cloud technologies that have gained so much mindshare in the last few years. Please think again: First, the WS-* standards have not in any sense gone away, they’ve been built deep into the infrastructure of many enterprise middleware products from both commercial vendors and open source projects. Likewise, the challenges of WS-I had much more to do with the intrinsic complexity of the problems it addressed than with the WS-* technologies that addressed them. William Vambenepe made this point succinctly in his blog recently:

    But let’s realize that while a lot of the complexity in WS-* was unnecessary, some of it actually was a reflection of the complexity of the task at hand. And that complexity doesn’t go away because you get rid of a SOAP envelope …. The good news is that we’ve made a lot of the mistakes already and we’ve learned some lessons … The bad news is that there are plenty of new mistakes waiting to be made.

    We made some mistakes and learned a LOT of lessons at WS-I, and we can all avoid some new mistakes by a careful consideration of WS-I’s accomplishments.

    -- Michael Champion, Senior Program Manager

  • Interoperability @ Microsoft

    Breaking news: the HTML 5.0 and Canvas 2D specifications’ definition is complete!


    Today marks an important milestone for Web development, as the W3C announced the publication of the Candidate Recommendation (CR) version of the HTML 5.0 and Canvas 2D specifications.

    This means that the specifications are feature complete: no new features will be added to the final HTML 5.0 or the Canvas2D Recommendations. A small number of features are marked “at risk,” but developers and businesses can now rely on all others being in the final HTML 5.0 and Canvas 2D Recommendations for implementation and planning purposes. Any new features will be rolled into HTML 5.1 or the next version of Canvas 2D.

    It feels like yesterday when I was publishing a previous post on HTML5 progress toward a standard, as HTML5 reached "Last Call" status in May 2011. The W3C set an ambitious timeline to finish HTML 5.0, and this transition shows that it is on track. That makes me highly confident that HTML 5.0 can reach Recommendation status in 2014.

    The real-world interoperability of many HTML 5.0 features today means that further testing can be much more focused and efficient. As a matter of fact, the Working Group will use the “public permissive” criteria to determine whether a feature that is implemented by multiple browsers in an interoperable way can be accepted as part of the standard without expensive testing to verify.

    Work in this “Candidate Recommendation” phase will focus on analyzing current HTML 5.0 implementations, establishing priorities for test development, and working with the community to develop those tests. The WG will also look into the features tagged as “at risk” that might be moved to HTML 5.1 or the next version of Canvas2D if they don’t exhibit a satisfactory level of interoperability by the end of the CR period.

    At the same time, work on HTML 5.1 and the next version of Canvas2D is underway, and the W3C announced first working drafts that include features such as media and graphics. This work is on a much faster track than HTML5 has been, and 5.1 Recommendations are expected in 2016. The HTML Working Group will consider several sources of suggested new features for HTML 5.1. Furthermore, HTML 5.1 could incorporate the results of various W3C Community Groups such as the Responsive Images Community Group or the WHATWG. HTML 5.1 will use the successful approach that the CSS 3.0 family of specs has used to define modular specifications that extend HTML’s capabilities without requiring changes to the underlying standard. For example, the HTML WG already has work underway to standardize APIs for Encrypted Media Extensions, which would allow services such as Netflix to stream content to browsers without plugins, and Media Source Extensions to facilitate streaming content in a way that adapts to the characteristics of the network and device.

    Reaching Candidate Recommendation further indicates the high level of collaboration that exists in the HTML WG. I would especially like to thank the W3C Team and my co-chairs, Sam Ruby (IBM) and Maciej Stachowiak (Apple), for all their hard work in helping to get to CR. In addition, the HTML WG editorial team led by Robin Berjon deserves a lot of credit for finalizing the CR drafts and for their work on the HTML 5.1 drafts.


    Paul Cotton, Microsoft Canada
    W3C HTML Working Group co-chair

  • Interoperability @ Microsoft

    MongoDB Installer for Windows Azure


    Do you need to build a high-availability web application or service? One that can scale out quickly in response to fluctuating demand? Need to do complex queries against schema-free collections of rich objects? If you answer yes to any of those questions, MongoDB on Windows Azure is an approach you’ll want to look at closely.

    People have been using MongoDB on Windows Azure for some time (for example), but recently the setup, deployment, and development experience has been streamlined by the release of the MongoDB Installer for Windows Azure. It’s now easier than ever to get started with MongoDB on Windows Azure!


    MongoDB is a very popular NoSQL database that stores data in collections of BSON (binary JSON) objects. It is very easy to learn if you have JavaScript (or Node.js) experience, featuring a JavaScript interpreter shell for administrating databases, JSON syntax for data updates and queries, and JavaScript-based map/reduce operations on the server. It is also known for a simple but flexible replication architecture based on replica sets, as well as sharding capabilities for load balancing and high availability. MongoDB is used in many high-volume web sites including Craigslist, FourSquare, Shutterfly, The New York Times, MTV, and others.
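    To give a flavor of that JSON query style, here is a plain-JavaScript analogue (the collection, field names, and documents below are invented for illustration; the commented find() call shows the real shell syntax, but the filtering itself is done with an ordinary array, not the driver):

```javascript
// A schema-free collection: documents need not share the same fields.
const checkins = [
  { user: "ana", venue: "cafe", rating: 5 },
  { user: "bo", venue: "park" },              // no rating field at all
  { user: "cy", venue: "cafe", rating: 3 },
];

// In the mongo shell the equivalent query would be written as:
//   db.checkins.find({ venue: "cafe", rating: { $gte: 4 } })
// The same filter expressed in plain JavaScript:
const results = checkins.filter(
  (d) => d.venue === "cafe" && d.rating >= 4
);

console.log(results); // [{ user: "ana", venue: "cafe", rating: 5 }]
```

    Because documents are just objects, a missing field (like bo’s rating) simply fails the comparison rather than causing an error, which is part of what makes querying schema-free data comfortable.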

    If you’re new to MongoDB, the best way to get started is to jump right in and start playing with it. Follow the instructions for your operating system from the list of Quickstart guides on MongoDB.org, and within a couple of minutes you’ll have a live MongoDB installation ready to use on your local machine. Then you can go through the MongoDB.org tutorial to learn the basics of creating databases and collections, inserting and updating documents, querying your data, and other common operations.

    MongoDB Installer for Windows Azure

    The MongoDB Installer for Windows Azure is a command-line tool (Windows PowerShell script) that automates the provisioning and deployment of MongoDB replica sets on Windows Azure virtual machines. You just need to specify a few options such as the number of nodes and the DNS prefix, and the installer will provision virtual machines, deploy MongoDB to them, and configure a replica set.

    Once you have a replica set deployed, you’re ready to build your application or service. The tutorial How to deploy a PHP application using MongoDB on Windows Azure takes you through the steps involved for a simple demo app, including the details of configuring and deploying your application as a cloud service in Windows Azure. If you’re a PHP developer who is new to MongoDB, you may want to also check out the MongoDB tutorial on php.net.

    Developer Choice

    MongoDB is also supported by a wide array of programming languages, as you can see on the Drivers page of MongoDB.org. The example above is PHP-based, but if you’re a Node.js developer you can find the tutorial Node.js Web Application with Storage on MongoDB over on the Developer Center, and for .NET developers looking to take advantage of MongoDB (either on Windows Azure or Windows), be sure to register for the free July 19 webinar that will cover the latest features of the MongoDB .NET driver in detail.

    The team here at Microsoft Open Technologies is looking forward to working closely with 10gen to continue to improve the MongoDB developer experience on Windows Azure going forward. We’ll keep you updated here as that collaboration continues!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    IndexedDB Prototype Available for Internet Explorer


    As we launch our new HTML5 Labs today, this is one of two guest blogs about the first two HTML5 prototypes. It is written by Pablo Castro, a Principal Architect in Microsoft's Business Platform Division.

    With the HTML5 wave of features, Web applications will have most of the building blocks required to build full-fledged experiences for users, from video and vector graphics to offline capabilities.

    One of the areas that has seen a lot of activity lately is local storage in the browser, captured in the IndexedDB spec, where there is a working draft as well as a more current editor's draft.

    The goal of IndexedDB is to introduce a relatively low-level API that allows applications to store data locally and retrieve it efficiently, even if there is a large amount of it.

    The API is low-level to keep it really simple and to enable higher-level libraries to be built in JavaScript and follow whatever patterns Web developers think are useful as things change over time.
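
    The two core concepts the API exposes are an object store keyed by a primary key, plus secondary indexes for efficient lookups over large amounts of data. Here is a runnable in-memory model of those concepts in Node.js; it is a teaching sketch, not the real (asynchronous, transactional) IndexedDB API, and all names in it are made up.

```javascript
// In-memory model of IndexedDB's core ideas: an object store holding
// records by primary key, plus a secondary index for efficient lookups.
// A real store would also be asynchronous and transactional, and would
// remove stale index entries when a record is overwritten.
class ObjectStore {
  constructor(keyPath) {
    this.keyPath = keyPath;   // which property acts as the primary key
    this.records = new Map(); // key -> record
    this.indexes = new Map(); // indexName -> { field, map: value -> Set(keys) }
  }
  createIndex(name, field) {
    this.indexes.set(name, { field, map: new Map() });
  }
  put(record) {
    const key = record[this.keyPath];
    this.records.set(key, record);
    for (const { field, map } of this.indexes.values()) {
      const value = record[field];
      if (!map.has(value)) map.set(value, new Set());
      map.get(value).add(key);
    }
  }
  get(key) {
    return this.records.get(key);
  }
  // Look up by an indexed field without scanning every record.
  getByIndex(name, value) {
    const { map } = this.indexes.get(name);
    const keys = map.get(value) || new Set();
    return [...keys].map((k) => this.records.get(k));
  }
}

const store = new ObjectStore("id");
store.createIndex("byAuthor", "author");
store.put({ id: 1, title: "Notes", author: "pablo" });
store.put({ id: 2, title: "Drafts", author: "pablo" });
console.log(store.get(1).title);                        // key lookup
console.log(store.getByIndex("byAuthor", "pablo").length); // index lookup
```

    Higher-level JavaScript libraries can then layer friendlier query patterns on top of primitives like these, which is exactly the division of labor the spec is aiming for.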

    Folks from various browser vendors have been working together on this for a while now, and Microsoft has been working closely with the teams at Mozilla, Google and other W3C members involved in this effort to design the API together. Yeah, we even had meetings where all of us were in the same room, and no, we didn't spontaneously combust!

    The IE team's approach is to focus IE9 on providing site-ready HTML5 that web developers can use today without having to worry about what is stable and what is not, or being concerned about their site breaking as the specifications and implementations change. Here at HTML5 Labs, we are letting developers experiment with unstable standards before they are ready to be used in production sites.

    In order to enable that, we have just released an experimental implementation of IndexedDB for IE. Since the spec is still changing regularly, we picked a point in time for the spec (early November) and implemented that.

    The goal of this is to enable early access to the API and get feedback from Web developers on it. Since these are early days, remember that there is still time to change and adjust things as needed. And definitely don't deploy any production applications on it :)

    You can find out more about this experimental release and download the binaries from this archive, which contains the actual API implementation plus samples to get you started.

    For those of you who are curious about the details: we wanted to give folks early access to the API without disrupting their setup, so we built the prototype as a plain COM server that you can register on your box.

    That means we don't need to mess with IE configuration or replace files. The only visible effect of this is that you have to start with "new ActiveXObject(...)" instead of the regular window.indexedDB. That would of course go away if we implement this feature.
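
    A page that wants to work with both entry points can wrap the detection in one helper. This sketch is hypothetical: the post leaves the ActiveX ProgID unspecified, so it is taken as a parameter here rather than guessed, and the function names are invented for illustration.

```javascript
// Hypothetical helper: prefer the standard window.indexedDB entry point,
// fall back to the prototype's COM server if ActiveX is available.
// The ProgID is passed in rather than hard-coded, since the post
// doesn't spell it out.
function getIndexedDBFactory(globalObj, progId) {
  if (globalObj.indexedDB) {
    return globalObj.indexedDB;                 // standard entry point
  }
  if (typeof globalObj.ActiveXObject === "function") {
    return new globalObj.ActiveXObject(progId); // experimental prototype
  }
  return null;                                  // no IndexedDB available
}

// Outside a browser there are no such globals, so nothing is found:
console.log(getIndexedDBFactory({}, "Example.ProgId")); // null
```

    In a page you would call it as getIndexedDBFactory(window, yourProgId), so the ActiveX spelling stays in one place and can be deleted once the feature ships natively.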

    If you have feedback, questions or want to reach out to us for any other reason, please contact us here. We're looking forward to hearing from you.

    As a side note, and since this is a component of IE, if you want to learn more about how IE is making progress in the space of HTML5 and how we think about new features in this context, check out the IE blog here.


  • Interoperability @ Microsoft

    New resources for Android developers: getting started with Windows 8 development


    As a developer, having your apps and services accessible on more devices is critical. As an experienced Android developer, you can now access lots of resources on how to develop for Windows, allowing you to reach a new range of devices for your apps and services.

    Tons of new articles for Android developers are now live on the Windows Store apps Dev Center to help you learn how to build apps for Windows.

    Your entry point for all this new content is http://aka.ms/androidtowindows

  • Interoperability @ Microsoft

    Azure Toolkit for Eclipse with Java Update: Zulu, Java SDK, Bug Fixes


    Microsoft Open Technologies, Inc. (MS Open Tech), has released a new update to the Azure Toolkit for Eclipse with Java. Check out the details on msopentech.com.

  • Interoperability @ Microsoft

    Partnering to foster Eclipse and Microsoft platform interoperability


    I’m very happy about today’s announcements at the Eclipse Summit in Ludwigsburg, Germany. Microsoft, Tasktop and Soyatec announced a series of projects to help developers using the Eclipse platform do two things: take advantage of new features in Microsoft® Windows 7 and Windows Server 2008 R2, and reinforce Java and PHP interoperability with Windows® Azure and Microsoft® Silverlight.

    In the first of the four projects, Microsoft is partnering with Tasktop Technologies, a leading Eclipse-based solutions provider from Canada, to create an Eclipse “next-generation experience” on Windows 7 and Windows Server 2008 R2, which share the same user interface improvements. Tasktop Technologies will contribute enhancements to the Eclipse IDE, which will be available under the Eclipse Public License in Q1 of 2010.

    In addition, Microsoft has collaborated with Soyatec, a France-based IT solutions provider, to develop three solutions, which are detailed below.

    Microsoft is providing funding and architectural guidance for all four of the projects. Let’s take a look at some of the details.

    Eclipse “next-generation experience” on Windows 7

    Microsoft and Tasktop will collaborate to extend the Eclipse Rich Client Platform (RCP), and in particular the Standard Widget Toolkit (SWT), to include the mapping of new features offered by Windows 7. This will allow Eclipse developers to take advantage of the new user interface features offered by Windows 7, directly from the Eclipse IDE and from any desktop applications built on top of the Eclipse platform.

    Here are a couple of sample features that illustrate what I’m talking about:

    • Taskbar Progress integration. Windows 7 provides a new visual representation of the progress bar, which is included in the default behavior of the Windows 7 taskbar. The progress bar is actually part of the application icon, and shows progress horizontally across the icon.
      Here’s how it might look in the Eclipse IDE:
      And here is another view of the progress bar from an application built using the Eclipse RCP:
    • Taskbar Jump Lists. The redesigned Windows 7 taskbar allows applications to expose frequently used features or files that users can select directly. Eclipse-based applications will be able to leverage this feature. For example the next screenshot shows how you could launch Eclipse commonly used features (“New…” or “Synchronize”) directly from the taskbar:

    Of course, these features and screenshots are the result of early prototyping, so they may not precisely match the features that will be delivered during the first phase of the project. Microsoft and Tasktop Technologies are working together to establish the following list of features, which are currently entered as bugs in the Eclipse Bugzilla:

    These goals mark the beginning of a momentous journey for us. We expect to complete the first phase in Q1 2010.

    As always, feedback from the developer community about “most wanted” features is very important to us. So if you have ideas, don’t be shy about speaking up—we would love to hear them. I also encourage you to read Mik Kersten’s blog post (Mik is Tasktop’s CEO and project lead of Mylyn) to get his perspective on the project.

    Windows Azure Tools for Eclipse for PHP developers

    Microsoft worked with Soyatec on Windows Azure Tools for Eclipse, a project to produce an open source plug-in that enables PHP developers using Eclipse to create web applications targeting Windows Azure. Windows Azure Tools for Eclipse provides a series of wizards and utilities that allow developers to write, debug, configure, and deploy PHP applications to Windows Azure. It is available for download at www.windowsazure4e.org.


    Architecturally speaking, the plug-in leverages the PHP Development Tools (PDT) framework to provide PHP developers with an integrated development experience.

    The plug-in also bundles the existing Windows Azure SDK for PHP, which we introduced a few months ago. In a nutshell, this SDK provides a speed dial for PHP developers who use the Windows Azure storage component, making it very easy to use the blob, queue and table data storage features. If you need more details about this SDK, just visit the project site at http://phpazure.codeplex.com/.

    In the coming months, we’ll detail many of the additional features you’ll find in the Windows Azure Tools for Eclipse plug-in. For now, you can get a quick overview by watching a video we just recorded with Robert Hess for Channel9:

    Get Microsoft Silverlight

    Windows Azure SDK for Java developers

    First let me say that the Storage Explorer is really one of the coolest features of Windows Azure Tools for Eclipse—it allows developers to browse data contained in the Windows Azure storage component, including blobs, tables, and queues. Storage Explorer was developed in Java (like any Eclipse extension), and during the Windows Azure Tools for Eclipse development with Soyatec we realized that abstracting the RESTful communication between the Storage Explorer user interface and the Azure storage component made a lot of sense. This led us to package the Windows Azure SDK for Java developers as open source, available at www.windowsazure4j.org.

    The Windows Azure SDK for Java enables developers to easily leverage the Windows Azure storage service in their Java applications. The logical architecture is very simple:


    The Windows Azure Storage Explorer feature that is part of Windows Azure Tools for Eclipse perfectly illustrates a Java application using the SDK:

    Eclipse Tools for Silverlight

    The Eclipse Tools for Silverlight (Eclipse4SL) plug-in is an open source, cross-platform plug-in for the Eclipse development environment that enables Eclipse developers to build Silverlight Rich Internet Applications (RIAs).

    We have developed subsequent beta versions, including the Mac version, since announcing the Eclipse4SL project in October 2008. So, I’m very excited to announce that Microsoft and Soyatec have released version 1.0 of the Eclipse Tools for Silverlight plug-in, which can be downloaded here: http://www.eclipse4sl.org/

    Version 1.0 of Eclipse4SL targets Silverlight 2.0. We are working with Soyatec to add support for subsequent releases of Silverlight (Silverlight 3.0 was released in July). You can find a roadmap of the projected milestones on the project site: http://www.eclipse4sl.org/#roadmap. Video demo walkthroughs of the plug-in are available here and here (Mac version).

    We are always working hard to find new ways to provide more choice and opportunity for developers in our ongoing journey to foster interoperability between Microsoft products and other technologies. We are hoping that today’s announcements give developers the additional choices and opportunities they’re looking for, and that they amount to yet another reason why choosing Microsoft platforms means keeping all the options open.

  • Interoperability @ Microsoft

    Building Java applications on Windows Azure gets easier with the new version of the Eclipse plugin


    I’m pleased to announce that the June 2011 CTP (Community Technology Preview) of the Windows Azure Plugin for Eclipse with Java is now available for download. As the project manager and designer behind our Java tooling efforts for Windows Azure, I invite you to take a look at our latest release and share your feedback to help us make further progress in helping Java developers take advantage of the Windows Azure cloud. At the time this blog goes live, I'll be sleeping, but my colleague Gianugo Rabellino will have announced the new CTP during his keynote "Behind the scenes: Microsoft and Open Source" at the Jazoon conference in Zurich.

    This plugin is intended to help Eclipse users create and configure deployment packages of their Java applications for the Windows Azure cloud. Its key features include:

    • Windows Azure project creation wizard
    • Helpful project structure
    • Sample utility scripts for downloading or unzipping files, or logging errors in the startup script when running in the cloud
    • Shortcuts to test your deployment in the Windows Azure compute emulator
    • Ant-based builder
    • Project properties UI for configuring Windows Azure roles (instance count, size, endpoints, names, etc)
    • [New in this CTP] UI for easy remote access configuration for troubleshooting purposes, including ability to create self-signed certificates
    • [New in this CTP] Schema validation and auto-complete for *.cscfg and *.csdef files

    To install, just point Eclipse’s “Install New Software…” feature at http://webdownload.persistent.co.in/windowsazureplugin4ej/. Also make sure to install all the prerequisites, as explained in detail here or here. For those who have already been playing around with our Ant-based command-line tools, the Windows Azure Starter Kit for Java, note that your Starter Kit projects are compatible with this plugin; in fact, the plugin builds on top of the Starter Kit.

    We’re continuously working on new tutorials and feature additions in our Windows Azure tooling for Java developers, so keep checking back with our main portal at http://java.interopbridges.com/cloud for further updates.

    Martin Sawicki, Senior Program Manager, Interoperability Strategy team

  • Interoperability @ Microsoft

    Improving experience for Java developers with Windows Azure


    From the early days, Windows Azure has offered choices to developers. It allows the use of multiple languages (like .NET languages, PHP, Ruby or Java) and development tools (like Visual Studio or Eclipse) to build applications that run on Windows Azure, or to consume any of the Windows Azure platform services from any other cloud or on-premises platform. Java developers have had a few options to leverage Windows Azure, like the Windows Azure SDK for Java or the Tomcat Solution Accelerator.

    At PDC10, we introduced our plan to improve the experience for Java developers with Windows Azure. Today, we’re excited to release a Community Technology Preview (CTP) of the Windows Azure Starter Kit for Java, which enables Java developers to simply configure, package and deploy their web applications to Windows Azure. The goal for this CTP is to get feedback from Java developers and to nail down the right experience, particularly making sure that configuring, packaging and deploying to Windows Azure integrates well with common practices.

    What’s the Windows Azure Starter Kit for Java?

    This Starter Kit was designed to work as a simple command-line build tool or in the Eclipse integrated development environment (IDE). It uses Apache Ant as part of the build process, and includes an Ant extension that’s capable of understanding Windows Azure configuration options.
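
    As a rough illustration of the Ant-based approach, a build file might register and invoke the kit's packaging task along these lines. This fragment is hypothetical: the task name windowsazurepackage comes from the kit's .cspack.jar (described below), but the class name and attribute names here are invented for illustration, and the Starter Kit's own build.xml is the authoritative reference.

```xml
<!-- Hypothetical sketch only: the Starter Kit ships an Ant task named
     windowsazurepackage (implemented in .cspack.jar). The classname and
     attribute names below are illustrative, not the kit's actual ones;
     consult the Starter Kit's bundled build.xml for the real usage. -->
<project name="AzureDeploymentProject" default="package">
  <!-- Register the custom task from the Starter Kit's jar -->
  <taskdef name="windowsazurepackage"
           classname="example.WindowsAzurePackageTask"
           classpath=".cspack.jar"/>

  <target name="package">
    <windowsazurepackage
        projectdir="${basedir}"
        packagefilename="WindowsAzurePackage.cspkg"
        definitionfilename="ServiceDefinition.csdef"
        configurationfilename="ServiceConfiguration.cscfg"/>
  </target>
</project>
```

    The point of the design is that the same build.xml drives both a plain "ant" invocation at the command line and Eclipse's built-in Ant runner.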

    The Windows Azure Starter Kit for Java is an open source project released under the Apache 2.0 license, and it is available for download at: http://wastarterkit4java.codeplex.com/

    What’s inside the Windows Azure Starter Kit for Java?

    The Windows Azure Starter Kit for Java is a Zip file that contains a template project and the Ant extension. If you look inside this archive you will find the typical files that constitute a Java project, as well as several files we built that will help you test, package and deploy your application to Windows Azure.


    The main elements of the template are:

    • .cspack.jar: This contains the Java implementation of the windowsazurepackage Ant task.
    • ServiceConfiguration.cscfg: This is the Windows Azure service configuration file.
    • ServiceDefinition.csdef: This is the Windows Azure service definition file.
    • Helloworld.zip: This Zip is a placeholder for your Java application.
    • startup.cmd: This script is run each time your Windows Azure Worker Role starts.

    Check the tutorial listed below for more details.

    Using the Windows Azure Starter Kit for Java

    As mentioned above, you can use the Starter Kit from a simple command line or within Eclipse. In both cases, the steps are similar:

    1. Download and unzip the Starter Kit
    2. Copy your Java application into the approot folder
    3. Copy the Java Runtime Environment and server distribution (like Tomcat or Jetty) ZIPs into the approot folder
    4. Configure the Startup commands in startup.cmd (specific to the server distribution)
    5. Configure the Windows Azure configuration in ServiceConfiguration.cscfg
    6. Run the build and deploy commands

    For detailed instructions, refer to the following tutorials, which show how to deploy a Java web application running with Tomcat and Jetty:

    What’s next?

    Yesterday, Microsoft announced an Introductory Special offer that includes 750 hours per month (which is one server 24x7) of the Windows Azure extra-small instance, plus one small SQL Azure database and other platform capabilities - all free until June 30, 2011.  This is a great opportunity for all developers to see what the cloud can do - without any up-front investment!

    You can also expect continued updates to the development tools and SDK, and since the experience of Java developers is critical, now is the perfect time to provide your feedback. Join us on the forum at: http://wastarterkit4java.codeplex.com/

    -- Jean-Christophe Cimetiere, Sr. Technical Evangelist, @openatmicrosoft

  • Interoperability @ Microsoft

    June Update to the Azure Toolkit for Eclipse - Zulu, Java SDK, and More!


    Microsoft Open Technologies, Inc. (MS Open Tech), has released a minor June update to the Azure Toolkit for Eclipse. This update includes a few enhancements since our April 2014 release. 

    • Support for the Zulu OpenJDK v1.8
    • Updated versions of Zulu v1.6 and 1.7
    • Support extended for the Azure SDK for Java (v. 5.0)
    • A handful of user-requested bug fixes

    Have a look at the post on msopentech.com for full details.

  • Interoperability @ Microsoft

    W3C’s Web Platform Docs – Your “Go To” for All Things Web Development



    Jean Paoli, President, Microsoft Open Technologies, Inc.

    Michael Champion, Senior Program Manager, Microsoft Open Technologies, Inc.


    We are thrilled to share the news that the W3C announced the alpha release of Web Platform Docs. Adobe, Facebook, Google, HP, Microsoft, Mozilla, Nokia and Opera are among the stewards of the project. Together, we worked with the W3C on creating this wiki-styled site and contributed thousands of web documentation articles.

    W3C’s Web Platform Docs is a community site designed to be a comprehensive and authoritative resource for developers to help them build modern web applications that will work across browsers and devices, and share their own expertise, which will further the goal of web platform interoperability and same markup.

    Currently, developers need to do a lot of research about what technologies work on which platforms when building websites and applications with HTML5, CSS and other open web standards. It’s costly and inefficient for them to spend precious hours consulting multiple resources to understand how to employ web technologies in a way that functions across browsers, operating systems and devices. W3C’s Web Platform Docs addresses these issues by offering a single “go-to” source for web developer documentation, and providing a site that the community can continually edit and improve.

    Microsoft Open Technologies, Inc., represented by Michael Champion, and the Microsoft Internet Explorer team, represented by Eliot Graff, have been involved from the very inception of the project, as we strongly believe this community site is key in the journey to an interoperable web platform and same markup.

    As an initial contribution, Microsoft donated more than 3,200 topics from MSDN and will continue to add content moving forward. This is an open community – web developers can get an account at webplatform.org to make their own contribution – fill in gaps, correct errors, and flesh out the documentation with sample code to explain how to use the web platform to its full potential.

    So what does this mean for you, the developer?

    You will save time and resources, knowing you can consult with confidence a community-curated site to learn about standards, innovations and best practices including:

    • What technologies really interoperate across platforms and devices;
    • The standardization status of each technology specification;
    • The stability and implementation status of specific features in actual browsers.

    W3C’s Web Platform Docs is an open site where anyone can become a member and contribute. Microsoft and the other founding stewards helped boot up the wiki (and will continue to contribute new content), but YOU, the developer community, own the site. W3C convened the community and will administer webplatform.org in the future, but you don’t have to join W3C to participate in this effort.

    All materials on W3C’s Web Platform Docs are freely available and licensed to foster sharing and reuse.

    Begin simplifying your web development and check out W3C’s Web Platform Docs today. Better still, sign up for an account, find a topic of interest, and contribute your expertise!

  • Interoperability @ Microsoft

    More of Microsoft’s App Development Tools Goes Open Source


    Today marks a milestone in the short history of Microsoft Open Technologies, Inc. (MS Open Tech) as we undertake some important open source projects. We’re excited to share the news that MS Open Tech will be open sourcing the Entity Framework (EF), a database mapping tool useful for application development in the .NET Framework. EF will join the other open source components of Microsoft’s dev tools – MVC, Web API, and Web Pages with Razor Syntax – on CodePlex to help increase the development transparency of this project.

    MS Open Tech will serve as an accelerator for these projects by working with the open source communities through our new MS Open Tech CodePlex landing page. Together, we will help build out their source code until shipment of the next product version.

    This will enable everyone in the community to monitor and provide feedback on code check-ins, bug-fixes, new feature development, and build and test the products on a daily basis using the most up-to-date version of the source code.

    The newly opened EF will, for the first time, allow developers outside Microsoft to submit patches and code contributions that the MS Open Tech development team will review for potential inclusion in the products.

    We were happy to see the welcoming response when Scott Guthrie announced a similar open development approach with ASP.NET MVC4 and Web API in March. He said they have found it to be a great way to build an even tighter feedback loop with developers – and ultimately deliver even better products as a result. Check out what Scott has to say about this new EF news on his blog today.

    Together, this news further demonstrates how we want to enable our growing community of developers to build great applications. Take a look at the projects you’ll find on CodePlex:

    • Entity Framework – The ADO.NET Entity Framework is a widely adopted Object/Relational Mapping (ORM) framework that enables developers to work with relational data as domain-specific objects, eliminating the need for most of the data-access plumbing code that developers usually need to write.
    • ASP.NET MVC 4 – this is the newest release of the ASP.NET MVC (Model-View-Controller) framework. It is a web framework that applies the MVC pattern to build web sites that separate data, presentation and actions.
    • Web API – this is a framework that augments ASP.NET MVC to easily expose XML and JSON APIs consumable by websites or mobile devices. You can view it as a special kind of model that returns JSON or XML (data) instead of HTML (views).
    • Web Pages/Razor version 2 – this is a view engine for MVC. It is a way to mix HTML and server code so that you can bind HTML pages to code and data.

    We are proud to have created an engineering culture for open development through the people who work at MS Open Tech. We’ve grown into an innovative hub where engineers assemble to build, accept and contribute to open source projects. Today we profiled our new MS Open Tech Hub, where engineering teams across Microsoft may be temporarily assigned to collaborate with the community, work with MS Open Tech full-time employees, contribute to MS Open Tech projects, and create open source engineering best practices. Read more about our Hub on our Port 25 blog and meet the team working on the Entity Framework, MVC, Web API, and Web Pages with Razor Syntax projects at MS Open Tech. We’re nimble and we have a lot of fun in the process.

    Gianugo Rabellino
    Senior Director Open Source Communities
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Creating PHP CRUD Apps with SQL Server on your Server or in the Azure Cloud


    Do you know PHP and have data in a Microsoft SQL Server database? Well, we have an application wizard that will make your life a little easier. The project, open source and hosted on CodePlex, will help you build a simple CRUD (Create, Read, Update, Delete) application that works against Microsoft SQL Server, SQL Azure and Windows Azure Storage. The application is installable on Windows and supports data navigation, paging, sorting and UI customization using simple CSS.

    Here’s what you will need: a working PHP web server, a connection to the internet, and SQL Server 2005 or higher. You can also use the free “SQL Server Express” edition, which is available for download and installable as part of the Web Platform Installer. As an added bonus, you can also work against your Windows Azure Storage or SQL Azure database. Windows Azure tokens are available by registering for Windows Azure Services and are redeemable at http://windows.azure.com

    To begin, download the wizard, open the .zip and install it on your PC. There is a handy deployment guide that helps you get started. You will simply need to set up a database account with a username and password. Install the SQL Server 2005 Native Client DLL and the SQL Server 2005 PHP Driver 1.1, which will give you a .dll for the version of PHP you are running (5.2 or 5.3, thread-safe and non-thread-safe). Copy the appropriate .dll to your PHP extension directory (e.g. C:\php\ext) and add a reference to your PHP.ini file (e.g. C:\php\php.ini) to load the .dll (e.g. extension=php_sqlsrv_xx_yyy.dll).

    You will then want to use “SQL Server Management Studio” (ssms.exe) from the Start Menu (All Programs or Programs) or from C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE to create a new user and a database. The tutorials listed below are a quick way to get started.

    Following the steps above, I created a login to my SQL Server, created a database “sqlcruddemo” with a new table called “customer”, and associated a database user “demo” with it. In the table I then created three columns: ID, set as a non-null integer and “Primary Key”, plus Firstname and Lastname, both variable character strings of length 50. This is how it looks in the design tool.


    Once you have the database set up, you can run the file PhpSiteGenerator.exe either from the “PHP SQL Crud Application Builder” entry in your Start menu or from the installation folder (typically C:\Program Files\PHP SQL Crud Application Builder). The splash page shows up as below. You will want to enter the name of the database user you created (e.g. demo) and its password. Then click the “Find Database” button, which will populate the “Database” and “Table” drop-down menus. I selected the entries I created before (database “sqlcruddemo” and table “customer”) and the checkboxes for the columns (ID, Firstname, Lastname).

    Hit the “Next” button to get to the second screen, where you can set CSS styles for the table of the CRUD application. I decided to change the “.tblHeaderCell” tag, which sets the table headers, to use a blue background-color and bold fonts. The CSS ends up in the generated file “style.css”, and the form looks like this:



    Hitting “Generate Site” will yield the result in the following form, “index.php”, which I have populated with entries using the “Create New” button, which calls the “create.php” file, also displayed in the browser below.

    You can use these generated helper PHP files and forms as the building blocks for the database-driven application you would like to write against SQL Server. As you can see, most of the PHP code you need to manipulate data from SQL Server has been taken care of for you.

    Take a look!

    Jas Sandhu
    Technical Evangelist, Interop Vendor Alliance Manager, Interoperability Strategy Team
    Twitter@jassand, FriendFeed@jassand

  • Interoperability @ Microsoft

    MS Open Tech releases open source Jenkins plugin for using Windows Azure Blob service as a repository


    Continuous integration (CI), where software teams integrate their work continuously into frequent builds in an Agile environment, has been around for a relatively long time. Tools for managing the CI process have been around too, and have been gaining in popularity in the last few years, as the CI process becomes more complicated and the benefits of CI become more obvious. CI tools can be used in conjunction with existing SCM version control tools to manage today’s complex build, test and deployment processes that SCM tools and processes don’t cover completely on their own.

    Jenkins is a popular open source CI tool, with many installations and extensions, as well as strong community commitment. For this reason Microsoft Open Technologies, Inc. has released a Jenkins plugin for using Windows Azure’s Blob Storage service as a repository of build artifacts.

    Using our Jenkins plugin, you can improve your CI process by managing artifact storage in a Windows Azure Blob. Choosing the Windows Azure Blob service to store your build artifacts ensures that you have all the resources you need each time a build is required, all in a safe, reliable and centralized location with configurable access permissions. This takes a load off on-premises network bandwidth and storage, and improves continuous build performance.

    We’ve also open-sourced our plugin to share with the community. Source code for the plugin is available on GitHub here.

    Setting up a Jenkins Continuous Integration Server on Windows Azure

    The plugin works with any Jenkins CI installation. VM Depot, MS Open Tech’s community-driven repository of Linux Virtual Machines, also has several preconfigured Linux and Jenkins Virtual Machines ready to quickly get Jenkins up and running in a Windows Azure Linux VM. For more information on setting up VM Depot Virtual Machines on Windows Azure, follow this link.

    It’s also easy to set up a custom instance of Jenkins on a customized Windows Azure Virtual Machine. Here are some great resources to get started.

    For source code versioning and repository management, Jenkins on Windows Azure can use the built-in CVS or Subversion support that is downloaded with Jenkins, or you can connect to any code management repository for which a plugin exists, including Team Foundation Server (via the Jenkins TFS plugin) and GitHub (via the GitHub plugin).

    Once you have a code repository and a Jenkins instance set up, you’re ready to configure Jenkins for build management and deployment. We’ve created a detailed tutorial here on how to set up and use the plugin.

    Configure Jenkins Projects to manage Build Artifacts

    To install the plugin, go to Manage Jenkins > Manage Plugins, choose the Available Plugins tab, and select the Windows Azure Storage Plugin from the Artifact Uploaders category.


    After selecting Install without Restart, you should see a confirmation screen like this one when done:


    Set up your Windows Azure Storage Account Configuration

    After the plugin is installed, the first step is to configure one or more Windows Azure storage accounts for Jenkins to use. You do that on Jenkins’ Configure System page, in the Windows Azure Storage Account Configuration section:


    Configure Projects to use Windows Azure Blob Storage

    After you have configured your storage account(s), you can start adding this new Post-Build action to your jobs: Upload artifacts to Windows Azure Blob Storage:



    Selecting and configuring this option enables you to store and manage your build artifacts with the Azure Blob Storage service, which simplifies artifact management and speeds up integration. For more information on the configuration options, please refer to our tutorial.

    Next Steps

    We’re excited to be participating in the Jenkins ecosystem to enable build artifacts to be stored in Windows Azure storage. As always, we’re looking for ways to make it easier for developers to interact with Windows Azure services in any way we can, so if you have suggestions on what we can do to improve interoperability between Jenkins and Windows Azure, let us know!

  • Interoperability @ Microsoft

    WordPress on Windows Azure: A discussion with Morten Rand-Hendriksen


    I finally had the chance to sit down with Morten at MIX11 in Las Vegas last week to discuss the work he is doing on WordPress with Windows Azure to solve some common challenges with multi-site WordPress installations using traditional hosting.

    In Morten's words: "I am building a garden just for me and my clients...I control it...but the security and management of the garden is run by a very large company...they also will make sure that it works!"

    Read Morten's blog at http://www.designisphilosophy.com and find him on Twitter at @Mor10


    Craig Kitterman
    Web: http://craig.kitterman.net

  • Interoperability @ Microsoft

    Eclipse Tools for Silverlight (Eclipse4SL): now for Mac developers


    One more step for the Eclipse Tools for Silverlight (Eclipse4SL) project: the Customer Technology Preview (CTP) of Eclipse4SL with support for Macintosh is being delivered at MIX09, Microsoft’s conference for Web developers, designers, business and digital marketing professionals. With this plug-in, Mac developers using Eclipse can develop Rich Internet Applications (RIAs) using the Silverlight platform.

    If you’re new to Eclipse4SL, here’s a quick recap: “The Eclipse tools for Silverlight project, aka eclipse4SL, is an Eclipse plug-in that enables developers to use the Eclipse IDE to create applications that run on the Microsoft Silverlight runtime platform. Announced in October of last year, the project is led by Soyatec, an IT solutions provider based in France & China, and also an Eclipse Foundation member (Yves Yang, Soyatec President). Microsoft provides funding and architectural guidance (in particular my colleagues Vijay Rajagopalan and Stève Sfartz)” (read the full introduction at Eclipse and Silverlight, another interoperability journey has begun)

    The CTP not only enables the development experience on a Mac; it also includes many new features that are likewise available in the Windows version. To get the plug-in, go to http://www.eclipse4sl.org/download/.

    • Watch the demo for a quick walkthrough:

    The demo is also posted on YouTube and MSN Video.

    If you are attending MIX09, I encourage you to go to Vijay Rajagopalan’s session “Build Applications on the Microsoft Platform Using Eclipse, Java, Ruby and PHP!” (Friday, March 20, 10:45 AM-12:00 PM).

    Vijay will give an overview of how Microsoft has delivered multiple technologies that focus on interoperability with non-Microsoft and open source technologies.

    And of course he will also show the Eclipse Tools for Silverlight along with other interoperability scenarios, like combinations of Java, Ruby and PHP with the Azure Services Platform and the use of claims-based identity in support of heterogeneous identity systems.

    Going back to the Eclipse4SL plug-in, let me share a few screenshots showing the new features:

    • Eclipse4SL on Mac, overview: the Project explorer, the Silverlight rendering surface, the advanced XAML code editor, the Controls Palette
    • Code completion in the XAML editor
    • Code generation from the XAML editor, to generate the C# event handlers
    • Code generation in the C# editor

    While the Eclipse4SL plug-in brings Silverlight development capability to Eclipse, it also preserves the project structure to retain compatibility with other Microsoft tools (Visual Studio and Expression Blend), enabling collaboration between Eclipse developers (Java, PHP, etc…), .NET developers, and designers:


    Finally, if you have feedback, join the conversation at http://www.eclipse4sl.org/community/

    Jean-Christophe Cimetiere - Sr. Technical Evangelist

  • Interoperability @ Microsoft

    Celebrating the W3C & HTML5 With a New Logo Program


    W3C is the home of web standards

    The World Wide Web Consortium (W3C) has been the home of web standards since 1994 and is a unique place where every major browser vendor (Apple, Google, Microsoft, Mozilla, Opera) participates as one of the 322 W3C members.

    Logo now available

    Today, the W3C is introducing a new logo program for HTML5. A logo with a consistent visual design is an important indication of the growing maturity of many components of HTML5. As developers and site owners see this logo across the web, we hope it will signal that while there is still a lot of work to do before all the HTML5 technologies are ready, real sites are starting to take advantage of them today.

    The logo links back to W3C, the place for authoritative information on HTML5, including specs and test cases. It’s time to tell the world that HTML5 is ready to be adopted. You can find some examples of how real sites are using HTML5 today here.

    Microsoft and the W3C

    Microsoft, as part of its ongoing focus on interoperability, is committed to the W3C, and we currently have some 66 participants in 38 technical groups. We work closely with other members on a range of matters, from drafting early specifications to developing test suites to improve interoperability.

    Parts of HTML5 are ready to be used today

    HTML5 offers tremendous improvements in interactivity, graphics, typography and more. One question we often hear is “When should my site start embracing HTML5?” Our answer is simple. Today. But it’s important to recognize that HTML5 is not just one technology, but rather that it encompasses a broad set of technologies. So, while there are some parts that are very stable and are ready to be used in real sites today, there are also some parts that are still changing rapidly.

    With IE9 and HTML5 Labs, we are making this line clearer to encourage adoption now rather than waiting. In IE9, we have included the site-ready parts of HTML5: the parts that can be used today without worrying that a site will break as the specifications change, giving developers a stable foundation to build their experiences on, knowing that their sites will continue to work across updates.

    In the HTML5 Labs environment, we are building prototypes for unstable specifications where we can iterate quickly and freely as we make it clear to developers not to include these in sites as yet. Microsoft’s Interoperability Bridges & Labs Center has started publishing prototype implementations of unstable specifications where significant change is expected.

    Congratulations to the W3C on the new HTML5 logo program!

    Jean Paoli

    GM: Interoperability Strategy

  • Interoperability @ Microsoft

    Windows Azure Provisioning of Linux and Windows via Puppet


    Microsoft Open Technologies, Inc. (MS Open Tech) is pleased to announce the release of a new Windows Azure Puppet module that makes it possible to provision both Linux and Windows virtual machines on Windows Azure using the popular open source DevOps tool, Puppet. Support is provided in the form of a Windows Azure module for Puppet, published in the Puppet Forge. In addition, management of key services such as network configuration and databases is supported. As a result, Puppet users can now leverage over 1800 community-defined configurations found in the Puppet Forge on Windows Azure.

    MS Open Tech engineers have undertaken this work through our focus on enhancing interoperability across popular DevOps tools. DevOps focuses on the management of the intersection between software development and IT operations. It emphasizes collaboration and integration between the increasingly agile software development team (where rapid change is necessary) and the operations team, who are required to provide maximum uptime (where change may impact reliability). DevOps seeks to enable these two groups to communicate and collaborate more effectively. The contribution of a Puppet module for Windows Azure is an important step in ensuring that users of Puppet are able to leverage their skills in a Windows Azure environment.

    The Windows Azure Puppet module provides everything you need to provision the following Windows Azure services:

    • Virtual Machines – both Linux and Windows
    • Virtual Networks – create logically isolated sections of Azure and securely connect them to your on-premises clients and servers
    • SQL Server – create and maintain your SQL database

    In addition Windows Azure users will now be able to access more than 1800 existing community-defined modules in the Puppet Forge.

    “The ability to use Puppet to provision virtual machines on Windows Azure, and thus to leverage the extensive repository of community-provided modules in the Puppet Forge, should be compelling for many Puppet users,” said Mitch Sonies, Vice President of Business and Corporate Development of Puppet Labs, Inc. “We think this contribution is a great step toward driving adoption of Azure within the Puppet community, and we look forward to seeing community uptake and ecosystem contributions grow.”

    Getting Started with Puppet and Azure

    Puppet is open source software that automates the configuration, provisioning and management of IT infrastructure, both in development and production. Machine configurations are described in terms of a “desired state” using an easy-to-read declarative language. Puppet uses this description to bring systems into the desired state and keep them there. For more information about Puppet see the extensive documentation available on the Puppet Labs website.

    There are two parts to this MS Open Tech contribution. The first is the Puppet manifests that describe the Windows Azure resources that can be managed using Puppet. The second is a cross-platform command line interface (CLI). Using the CLI and manifests, it is easy to manage Linux and Windows virtual machines, virtual networks, affinity groups, and SQL servers. The goal is to maximize the performance of your development, test and deployment environments.

    Virtual Machine Management

    Virtual machines deliver on-demand, scalable compute infrastructure. Windows Azure provides both Windows Server and Linux servers in multiple configurations. To launch a new virtual machine and install the Puppet agent (so that it can later be managed by Puppet), you would use a command similar to the following:

    puppet azure_vm create \
    --management-certificate pem-or-pfx-file-path \
    --azure-subscription-id=your-subscription-id \
    --image b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-13_04-amd64-server-20130501-en-us-30GB \
    --location 'west us' \
    --vm-name vmname \
    --vm-user username \
    --password ComplexPassword \
    --puppet-master-ip yourPuppetMasterIPAddress

    The full list of actions is shown in the table below. To see the options available for an action, execute the command “puppet help node-azure ACTION-NAME”.


    Action      Description
    bootstrap   Install Puppet node on an existing Windows Azure VM
    create      Create Windows Azure VM
    delete      Delete Windows Azure node instances
    images      List Windows Azure images
    locations   List Windows Azure locations
    servers     List Windows Azure node instances
    shutdown    Shutdown Windows Azure node instances
    start       Start Windows Azure node instances

    Manage Virtual Networks

    An Azure virtual network enables you to create a logically isolated section of Azure and securely connect it to your on-premises data center or client machines using an IPsec connection. This allows you to remotely debug your applications more easily through a direct connection between your local development machine and virtual machines hosted in Azure. Using virtual networks, you will be able to troubleshoot and debug your applications with the same tools you would use for on-premises development work.

    In addition, this feature enables you to build distributed applications in a hybrid environment. For example, a web application hosted in Windows Azure can securely access an on-premises database server or authenticate users against an on-premises authentication server.

    To create a virtual network you would execute a command something like this:

    puppet azure_vnet set --management-certificate pem-or-pfx-file-path \
    --azure-subscription-id=your-subscription-id \
    --virtual-network-name vnetname \
    --affinity-group-name ag-name \
    --address-space ',' \
    --dns-servers 'dns1-1:,dns2:' \
    --subnets 'subnet-1:,subnet-2:'

    Other available actions are:

    Action            Description
    list              List virtual networks
    set               Configure the virtual network
    set_xml_schema    Configure the virtual network using an XML schema

    Manage SQL database server

    Many applications require a database server, so we are also providing commands to create and configure a SQL database using Puppet. To create a server, use a command such as:

    puppet azure_sqldb create --management-certificate pem-or-pfx-file-path \
    --azure-subscription-id=your-subscription-id \
    --management-endpoint=https://management.database.windows.net:8443/ \
    --login loginname \
    --password ComplexPassword \
    --location 'West Us'

    Manifest Files

    Manifest files are collections of definitions, references and commands that enable you to quickly and repeatably deploy virtual machines in a defined “desired state”. In addition to the CLI described above we are contributing manifest files that can be used by Puppet to configure Windows Azure services. These Manifests are available as part of the Windows Azure module in the Puppet Forge and can be further adapted to suit your specific needs. The manifests provided are:

    • bootstrap.pp – allows the creation of a new Puppet node
    • db.pp – create a new instance of SQL server
    • init.pp – defines a Windows Azure class that will allow easy deployment to the associated Windows Azure account
    • vm.pp – create a new virtual machine instance from a virtual machine image
    • vnet.pp – create a new virtual network

    What is next?

    MS Open Tech is pleased to enable Windows Azure provisioning using Puppet. This is an important component of our ongoing commitment to ensure that users of DevOps tools can leverage their skills within a Windows Azure environment.

  • Interoperability @ Microsoft

    Eclipse and Silverlight, another interoperability journey has begun


    Silverlight is a cross-platform browser plug-in that enables rich media experiences and .NET-based Rich Internet Applications (RIAs) within the browser. While Microsoft creates developer and designer tools, interoperability scenarios using other tools make sense simply because in many situations development teams work in heterogeneous environments. Searching for ways to assist these teams is how Eclipse Tools for Silverlight came to life!

    The Eclipse Tools for Silverlight project, aka eclipse4SL, is an Eclipse plug-in that enables developers to use the Eclipse IDE to create applications that run on the Microsoft Silverlight runtime platform. Announced in October of last year, the project is led by Soyatec, an IT solutions provider based in France & China, and also an Eclipse Foundation member (Yves Yang, Soyatec President). Microsoft provides funding and architectural guidance (in particular my colleagues Vijay Rajagopalan and Stève Sfartz).

    Since the release of a new beta version in December, additional technical content for Java developers has been published on the project site, giving guidance on a key interoperability scenario sought by developers: facilitating interoperability between Silverlight clients and REST and SOAP (JAX-WS/CXF) Java web services.

    Even though the V1 of the project is not yet complete, Soyatec has done a great job of building the early pieces of this bridge between Eclipse and Silverlight. The interoperability scenarios this project enables are very interesting, as it provides more choices to Java/Eclipse developers and opens up new opportunities for Silverlight adoption.

    So if you haven’t had a chance to see the Eclipse Tools for Silverlight in action, take a look at this demo from the www.Youtube.com/interopbydesign channel. It gives an overview of the developer experience of creating a basic Silverlight application in Eclipse, shows how collaborating with a designer could work, and finally demonstrates a sample Silverlight application talking to a Java web service:

    If you want to try it for yourself, it’s very easy: just follow the step-by-step installation guide on http://www.eclipse4sl.org/download/. The eclipse4SL plug-in can be installed directly from the Internet with the Eclipse software update wizard (see screenshot below):


    Then you can explore the Hello, world and DataGrid tutorials that my colleague Stève Sfartz has prepared for you. Also you might want to check this tutorial that has just been posted on Devx: Getting Started with Silverlight for Eclipse.

    I don’t write a lot of code these days, but from a developer point of view I think it is cool to deliver interoperability at this level, and to extend the Silverlight development experience to Eclipse developers. For a nascent project, eclipse4SL has been well received by the community and is currently in the top 10 “Top Rated” plug-ins on www.eclipseplugincentral.com (a portal that helps developers find Eclipse plug-ins):

    (Screenshot taken on 02/03/2009)


    Of course, if you have feedback, feel free to join the conversation.

    Jean-Christophe Cimetiere - Sr. Technical Evangelist

  • Interoperability @ Microsoft

    One step closer to full support for Redis on Windows, MS Open Tech releases 64-bit and Azure installer


    I’m happy to report new updates today for Redis (the open source, networked, in-memory, key-value data store) on Windows Azure. We’ve released a new 64-bit version that gives developers access to the full benefits of an extended address space. This was an important step in our journey toward full Windows support. You can download it from the Microsoft Open Technologies GitHub repository.

    Last April we announced the release of an important update for Redis on Windows: the ability to mimic the Linux Copy On Write feature, which enables your code to serve requests while simultaneously saving data on disk.
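    On Linux, Redis does this by forking: the child process serializes a point-in-time snapshot to disk while the parent keeps serving requests, with the kernel’s copy-on-write pages keeping the two views apart. A minimal, POSIX-only Python illustration of that idea (this is a conceptual sketch, not the Windows implementation, which had to mimic the behavior without fork):

```python
import os

def snapshot_via_fork(data):
    """Fork a child that serializes a point-in-time snapshot of `data`
    while the parent keeps mutating it; copy-on-write keeps them apart."""
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: sees a frozen copy of memory
        os.close(read_fd)
        payload = ",".join("%s=%s" % kv for kv in sorted(data.items()))
        os.write(write_fd, payload.encode())
        os._exit(0)
    os.close(write_fd)                # parent: keep "serving requests"
    data["written-after-fork"] = 1    # this mutation is invisible to the child
    chunks = []
    while True:
        chunk = os.read(read_fd, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    os.close(read_fd)
    os.waitpid(pid, 0)
    return b"".join(chunks).decode()

snap = snapshot_via_fork({"a": 1, "b": 2})
print(snap)                           # a=1,b=2  (no trace of the later write)
```

    The parent’s write after the fork never appears in the snapshot, which is exactly the property that lets a Redis server stay responsive while saving to disk.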

    Along with 64-bit support, we are also releasing a Windows Azure installer that enables deployment of Redis on Windows Azure as a PaaS solution using a single command line tool. Instructions on using the tool are available on this page and you can find a step-by-step tutorial here. This is another important milestone in making Redis work great on the Windows and Windows Azure platforms.

    We are now using the Microsoft Open Technologies public GitHub repository as our main SCM, so the community will be able to follow what is happening more closely and get involved in the project.

    We have already received some great feedback from developers interested in using Redis on Windows Azure, so we are committed to an open development process in collaboration with our more than 400 GitHub followers, which, among other benefits, will mean more frequent releases.

    Now our journey continues with two additional major steps:

    - Stress testing: Our test team has spent quite some time testing the code, but we need more extensive stress testing to exercise the new code’s reliability and to guarantee that Redis on Windows Azure can run under significant workload for an extended period of time before it can be relied on for production scenarios.

    - Redis 2.6: Our development team will be focused on bringing the code base up to 2.6, the latest version on Linux. UPDATED 01/22/2013: an alpha version of Redis 2.6 was released today. It has a few known issues, but we expect to have a stable version in a few days.

    In addition, we want to make it easier for developers to deploy Redis by adding support for NuGet and WebPI deployment. We will make these features available very soon.

    If you are interested in running Redis on Windows, the best thing you can do is use this release as much as you can, log bugs, and share your comments and suggestions. We also have a long list of features/changes/enhancements that we’re ready to make, so let us know if you’re interested in helping - we’re looking for a few more smart developers who want to join our dev team as contributors to the project on GitHub. Let us know if you want to join the virtual team!

    Claudio Caldato
    Principal Program Manager Lead
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    PHP and IE8 Web Slices


    Internet Explorer 8 (IE8) shipped with a new feature for web users called Web Slices. You can learn more about Web Slices here. Essentially, they let you add enhanced links to your Favorites bar that allow you to preview snippets of content from websites you frequently visit without having to open the page. They’re really useful for little tasks like checking your web-based inbox, the weather in cities you live in or visit, traffic status, stock tickers, headlines, or sports; the list goes on and on, and you can check the IE add-on gallery for more examples of useful Web Slices and for inspiration. [UPDATE: if you’re into sports, check out the Web Slices by Buzztap, which showcase a whole bunch to keep you up to date; also see the blog post by Jon Box]

    A web slice is content on a web page that a user can subscribe to. The content is then available from a button in the Internet Explorer 8 Favorites toolbar. When the content is updated, the button glows orange to alert the user that there is new content. When the user clicks the button, they see a drop-down window with the updated content of the web slice.

    To help you enable Web Slices on your PHP web site, we have created a project, Web Slices and Accelerators for PHP, that lets you get started quickly; the source is available on CodePlex too. The solution contains HTML and PHP samples to create web slices in WordPress, Wikimedia and Facebook, all popular PHP blogging, content and social platforms that you may want your code to interoperate with. Download the package and unzip the file to a directory on your machine.

    The code is based on HTML and XML and can be easily integrated into any other web site, framework or platform you may be working with. The markup is displayed in a client web browser, and IE8 will discover and update content when it parses the code. Any web server can be used, including IIS or Apache, on Linux or Windows.

    Three things are needed to mark content as a web slice, using specific CSS class names: first, a div marked with a class equal to ‘hslice’; second, a unique id on that div; third, a child element marked with a class equal to ‘entry-title’.


    The HTML tags you use to structure your web slice are immaterial; the important thing is to specify the right CSS class names. We can create a PHP function to output this HTML structure. The function will accept a unique name for the web slice, a title, and a string representing the content.
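    As an illustrative sketch of that structure (written in Python rather than PHP; the function and parameter names are hypothetical, not from the sample project):

```python
def render_web_slice(slice_id, title, content):
    """Emit the minimal markup IE8 recognizes as a web slice: a div with
    class "hslice" and a unique id, containing an "entry-title" child
    and the slice content."""
    return (
        '<div class="hslice" id="{0}">\n'
        '  <p class="entry-title">{1}</p>\n'
        '  <div class="entry-content">{2}</div>\n'
        '</div>'
    ).format(slice_id, title, content)

print(render_web_slice("weather", "Seattle Weather", "Cloudy, 54F"))
```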


    The code will render in your browser, and when you hover over it, you will notice a green box around the content, as well as a green button next to your Home page button, as illustrated below. You need only click one of these buttons to get the dialog box that adds this content to your Favorites bar.


    Another handy feature of web slices is a built-in reader which can display the first item of an RSS feed. The HTML is almost identical, but instead of specifying an element with a class equal to 'entry-content', you create an anchor tag pointing to the content source. Note that you must specify an attribute rel='feedurl' and point the link to the URL of the RSS feed you choose to use.


    We can modify our above PHP function to accept another parameter which specifies the feed URL. Note that this example also includes a parameter for content, which is displayed on the page advertising the web slice. The content from the feed, however, will be the actual content to which a user subscribes.
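    A sketch of this feed-enabled variant (again in Python with hypothetical names): when a feed URL is passed, the inline content is replaced by a rel="feedurl" anchor, so IE8 subscribes to the feed instead of the page content.

```python
def render_web_slice(slice_id, title, content, feed_url=None):
    """Web slice markup; with feed_url set, IE8 pulls the subscribed
    content from the RSS feed via the rel="feedurl" anchor."""
    if feed_url is not None:
        body = '<a rel="feedurl" href="{0}">{1}</a>'.format(feed_url, title)
    else:
        body = '<div class="entry-content">{0}</div>'.format(content)
    return (
        '<div class="hslice" id="{0}">\n'
        '  <p class="entry-title">{1}</p>\n'
        '  {2}\n'
        '</div>'
    ).format(slice_id, title, body)

print(render_web_slice("blog", "Interop Blog", "latest post",
                       feed_url="http://example.com/rss.xml"))
```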


    The following web slice was created using the above code with our Interoperability Blog RSS feed showing the first item on the list when this post was written.


    You may already have content and/or feeds on your website that could get a little spotlight today, and Web Slices will also be handy for your site’s users on Internet Explorer 8. We hope that you take a look at the sample, and please share your feedback!

    Happy Coding!

    Jas Sandhu
    Technical Evangelist, Interop Vendor Alliance Manager, Interoperability Strategy Team
    Twitter@jassand, FriendFeed@jassand

  • Interoperability @ Microsoft

    Viewing public government data with Windows Azure and PHP: a cloud interoperability scenario using REST



    This week Microsoft is participating in the first Gov 2.0 Summit produced by O'Reilly Media, Inc. and TechWeb in Washington D.C., to explore how technology can enable transparency, collaboration and efficiency in government. Today, we're pleased to present a cloud interoperability scenario which takes advantage of the recently announced Toolkit for PHP with ADO.NET Data Services to view public government data with Windows Azure and PHP.

    As you may recall, a few weeks ago Microsoft announced the Toolkit for PHP with ADO.NET Data Services, a new bridge enabling PHP developers to connect to .NET using a RESTful architecture. Today, we've published a cloud interoperability scenario showing how a Windows Azure application exposes data in a standard way (XML/Atom) and how you can simply “consume” this data from a PHP web application. This scenario takes advantage of the Open Government Data Initiative (OGDI), another piece of Microsoft's Open Government effort, built on the foundation of transparency, choice and interoperability.

    A few words about OGDI

    The Open Government Data Initiative (OGDI) is a project launched in May by our colleagues from the Microsoft Public Sector Developer Platform Evangelism team.

    In a nutshell, Open Government Data Initiative (OGDI) is a cloud-based collection of software assets that enables publicly available government data to be easily accessible. Using open standards and application programming interfaces (API), developers and government agencies can retrieve the data programmatically for use in new and innovative online applications, or mashups.

    Data and Platform Interoperability scenario in the cloud

    Publicly available government data sets have been loaded into Windows Azure Storage, and the OGDI team built a data service that exposes the data through REST web services, returning data by default in the Atom Publishing Protocol format. The OGDI application uses ADO.NET Data Services to expose the data. In the diagram below you can see the list of available data sets: http://ogdi.cloudapp.net/v1/dc.

    This list is then accessed by the data browser web application built in PHP. To build the PHP application, the Toolkit for PHP with ADO.NET Data Services was used to generate PHP proxy classes matching the data sets exposed through REST at this URI: http://ogdi.cloudapp.net/v1/dc.
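    Because ADO.NET Data Services returns Atom by default, any stack with an XML parser can consume OGDI data. A rough Python sketch of pulling entry titles from such a feed (the inline sample payload is illustrative, not real OGDI data):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Illustrative Atom payload in the shape an ADO.NET Data Service returns;
# the entry titles below are made up, not actual OGDI data sets.
feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Data sets</title>
  <entry><title>BuildingPermits</title></entry>
  <entry><title>CrimeIncidents</title></entry>
</feed>"""

def dataset_titles(xml_text):
    """Return the title of each <entry> in an Atom feed."""
    root = ET.fromstring(xml_text)
    return [e.findtext(ATOM + "title") for e in root.findall(ATOM + "entry")]

print(dataset_titles(feed_xml))    # ['BuildingPermits', 'CrimeIncidents']
```

    The same parsing works whether the feed comes from a .NET client, a PHP proxy class, or a plain HTTP GET, which is the point of exposing the data over REST.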


    Trying out the sample application

    The PHP data browser sample application is deployed on Windows Azure. Although this is not required (it could be deployed on any PHP-compatible hosting environment), this sample showcases a PHP application running on Azure. You can view or download the source of this sample from the demo site: http://ogdiphpsample.cloudapp.net/

    The OGDI service demonstrates some of the possibilities of the Azure platform, and you can try the OGDI interactive SDK at http://ogdisdk.cloudapp.net to understand how it works, as it features a similar data browser developed in .NET.

    Moving forward

    This sample application illustrates how you can easily create applications leveraging data and platform interoperability (PHP & .NET). The Toolkit for PHP with ADO.NET Data Services makes it easier for PHP developers to interoperate with .NET, including Windows Azure, which supports multiple Internet standards, including HTTP, REST, SOAP, and XML. This scenario is just one among many we are working on using RESTful architectures.
    Stay tuned, more to come soon!

    Finally, here’s a recap of related resources:

    For more information on Microsoft's Open Government efforts and participation at the Government 2.0 Summit, check out: FutureFed, the voice of Microsoft's Federal division.

    Jean-Christophe Cimetiere - Sr. Technical Evangelist

  • Interoperability @ Microsoft

    Welcome Drupal 7, a new version with greater interoperability with the Microsoft platform


    The new version of Drupal, Drupal 7, was released a couple of weeks ago, and now that people have finally recovered from the many Drupal release parties around the world (like the one in London), we at Microsoft want to formally welcome this new version. From our point of view, Drupal 7 marks an important milestone because it includes great improvements, some of which are the result of efforts by Microsoft and the Drupal community to bring users greater interoperability and more choices.

    Let’s review our favorite improvements:

    It shouldn’t be too surprising that our favorite addition is support for Microsoft SQL Server (version 2005 or later), which we announced last year at DrupalCon when we shipped the community technology preview (CTP) of the SQL Server Driver for PHP 2.0 with PDO support. The new driver was then released in August. Special thanks to Commerce Guys, who actually developed SQL Server support in Drupal and contributed the code.

    Bryan House - Sr. Director, Marketing, from Acquia commented: “The Drupal 7 release with enhancements for the Microsoft platform is a tremendous milestone giving Drupal developers the freedom to use their existing Microsoft resources to build extraordinary web experiences with Drupal. It expands the set of options Drupal developers have to choose from when building the best solutions for their customers and end-users. We’re also pleased to see Microsoft really participating in the community, providing valuable assistance, and taking a long term approach to supporting Drupal.”

    What I think is interesting about the SQL Server Driver for PHP 2.0 is that it enables PHP applications like Drupal 7 to use PDO in the usual “PHP style” and interoperate smoothly with Microsoft’s SQL Server database. This reduces the complexity of targeting multiple databases and makes it easier for PHP developers to take advantage of SQL Server’s business intelligence and reporting features (also included in the free SQL Server Express edition), as well as SQL Azure features like exposing OData feeds.
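    To make that concrete, here is a small sketch of the PDO pattern involved. This is illustrative only: the server, database, credentials, and query are placeholder values, and the connection requires the pdo_sqlsrv extension from the SQL Server Driver for PHP 2.0.

```php
<?php
// Hypothetical example: connecting to SQL Server through PDO,
// the same driver interface that Drupal 7's database layer targets.
// Server, database, and credential values below are placeholders.
function build_sqlsrv_dsn($server, $database)
{
    return "sqlsrv:Server=$server;Database=$database";
}

$dsn = build_sqlsrv_dsn('localhost\\SQLEXPRESS', 'drupal7');

// The connection only works where the pdo_sqlsrv extension is
// installed, so it is guarded here; the query style is plain PDO.
if (extension_loaded('pdo_sqlsrv')) {
    $db = new PDO($dsn, 'drupal_user', 'secret');
    $stmt = $db->prepare('SELECT nid, title FROM node WHERE type = ?');
    $stmt->execute(array('article'));
    foreach ($stmt as $row) {
        echo $row['title'], "\n";
    }
}
```

    The point is that, with the driver in place, SQL Server looks like any other PDO backend, which is exactly what reduces the complexity of targeting multiple databases.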

    Another neat improvement has to do with Drupal installation packages and modules – those that are current, as well as any that are newly submitted. Previously, they were only available as TGZ archives, but now they’re also available as ZIP archives, which removes a burden for Windows users trying to install Drupal. Along the same lines, the Drupal 7 Windows package now includes a “web.config” file designed specifically for Microsoft Internet Information Services (IIS), which is now listed among the supported web servers for Drupal 7. For more on the latest Drupal 7 developments, check out this video with Drupal expert Jim Taylor.

    You can get the latest Drupal 7 distribution directly from the community project site, or you can install one of its distributions built by Commerce Guys from the Microsoft Web Platform Installer (Web PI), which makes installing not only Drupal but our entire Web stack a breeze. And for developers who want to dive deep into Drupal 7 PHP code and start hacking around to customize it, we recommend taking a look at the newly released WebMatrix tool. In response to the WebMatrix announcement, Damien Tournoud, CTO of Commerce Guys, said that “Microsoft has become a citizen of the Drupal world, and the integration of Drupal 7 in WebMatrix is great news for the Drupal community.” Damien is a key contributor to Drupal 7 and the main developer of the Drupal 7/SQL Server integration.

    Of course we think these improvements are great, and we hope they attract even more developers to our platform. But there’s more on our to-do list and today we’re excited to announce four new generic modules developed by Schakra and MindTree that allow Drupal administrators/developers to provide users with new features:

    • Bing Maps Module: enables easy and flexible embedding of Bing Maps in Drupal content types (such as articles)
    • Silverlight Pivot Viewer Module: enables easy and flexible embedding of the Silverlight Pivot Viewer in Drupal content types, using a set of preconfigured data sources such as OData
    • Windows Live ID Module: allows Drupal users to associate their Drupal accounts with their Windows Live IDs, and then to log in to Drupal with them
    • OData Module: allows data sources based on OData to be included in Drupal content types (such as articles). The generic module includes a basic OData query builder and renders data in a simple HTML table. The package includes a sample module based on an Open Government Data Initiative (OGDI) OData source, showing how to build advanced rendering (with Bing Maps)

    To learn more about these modules, check out the Interoperability Hands-On, which shows off Drupal on Windows Azure using Bing Maps + Windows Live ID + OData + Silverlight Pivot Viewer.

    As for ongoing projects, work is also under way to create demonstrations of how to harness the benefits of the cloud with Windows Azure and PHP (see azurephp.interoperabilitybridges.com). Drupal is among the popular PHP applications we’ve demonstrated on Windows Azure, using the Windows Azure Companion. Now we’re working, for example, on bringing the full elasticity and scalability of the Windows Azure cloud to Drupal and other PHP applications.

    Microsoft supports the work of Commerce Guys, MindTree and Schakra, as well as that of the Open Source community, in improving the interoperability of Drupal with Microsoft’s platform. This work is representative of Microsoft’s broader commitment to openness by expanding choice and opportunity for customers, partners and developers. As always, we welcome any feedback, so feel free to leave a comment, or contact us.

    Jean Paoli, General Manager for Interoperability Strategy

  • Interoperability @ Microsoft

    HTML5 Spec Hits Last Call Status


    Late yesterday the W3C’s HTML Working Group announced that the HTML5 specification has reached Last Call status.

    Last Call is the point at which W3C thinks the group’s work has reached a point of reasonable stability. Last Call is also essentially a call for all communities to confirm the technical soundness of the specification, after which the group will shift focus to gathering implementation experience and building a comprehensive test suite.

    Microsoft staffers are among the many individuals - 194 participants from 54 organizations, including Adobe, Google, Mozilla, Apple and Opera Software - in the Working Group developing the specification for HTML5, the next version of the platform-neutral HyperText Markup Language standard used worldwide for rendering Web pages.

    HTML5 is the first new revision since HTML 4.01 was released in 1999, and will include built-in video and audio, a "canvas" element for two-dimensional graphics, new structural elements such as "article" to simplify markup, and a codified process for consistently interpreting the hodgepodge styles of real-world Web pages, even when improperly coded.

    In a press statement the W3C called for broad review of HTML5 and five related specifications published by the W3C HTML Working Group, which constitute the foundation of W3C's Open Web Platform. The W3C also reconfirmed that, as previously announced, these specifications are on track to become stable standards in 2014.

    While feedback is expected on how the current draft specification implements the HTML5 features, W3C expects that the specification is largely feature complete and that any additional features will be limited to those necessary to resolve issues raised during the Last Call period, which will be open for the next 10 weeks until August 3. After that, feedback will be taken only from implementers and through trials of the test suite.

    Tim Berners-Lee, the W3C Director, invited additional comment. "We invite new voices to let us know whether the specification addresses their needs. This process for resolving dependencies with other groups inside and outside W3C is a central part of our mission of ensuring the Web is available to all. W3C staff will provide the HTML Working Group Chairs the support they need to move forward, and to ensure that the specification meets W3C's commitments in areas including accessibility, internationalization, security, and privacy," he said.

    The Last Call milestone is all the more important given the difficult decision made by the W3C several years ago to undertake a collaboration with a wider group of invited experts to bring the HTML5 innovations into a formal Recommendation. This collaboration has had many challenges, but reaching last call shows that it is working.

    The W3C HTML Working Group also set an ambitious timeline almost a year ago, and this announcement of Last Call meets that timeline.

    Getting to this point has required compromise and good will from all participants, and we are very pleased to see the degree of consensus across several sub-communities that came to agreement.

    However, this does not mean that the HTML5 specs are “done,” just that the Working Group has found solutions that reached some level of consensus for the open issues. It is now time for a wider audience of stakeholders to review these documents and give their feedback.

    Rigorous testing of the specification against implementations in browsers and other products will help drive disciplined and technical discussions of issues that come up during the Last Call period.

    As Philippe Le Hégaret, the W3C manager responsible for HTML5, notes: “reaching agreements in this large a community is a tremendous achievement. There remain some important issues, but I am confident that the broader community will help us resolve them."

    Looking ahead, we are extremely hopeful that the final HTML5 Recommendation can be completed by 2014 as per the current timeline. But, to be clear, developers can already use HTML5 now and the W3C is encouraging them to do so.

    Because HTML5 anchors the Open Web Platform, the W3C has also started work on a comprehensive test suite to ensure the high levels of interoperability that diverse industries demand. Microsoft has already donated test cases to the current test suite. While it's the most comprehensive test suite of HTML5 so far, it is far from complete. But the test suite is an important step as it identifies differences in implementation and encourages implementers to fix deviations from the specification.

    The W3C has invited test suite contributions from the community and, earlier this year, dedicated new staff to drive development of an HTML5 test suite. Its first task is to expand the existing test framework by mid-2011, which will encourage browser vendors and the community to create test cases.

    Microsoft is pleased that this Last Call milestone has been reached. We regard it as a great step forward and look forward to continuing to work with the hundreds of other members of the HTML Working Group to advance the specification.


    Paul Cotton

    Co-Chair: HTML Working Group

  • Interoperability @ Microsoft

    July CTP of PHP SDK for Windows Azure Released, with Support in Zend Framework


    [Update: Maarten Balliauw has posted some samples showing how easy it is to use the SDK: PHP SDK for Windows Azure - Milestone 2 release]

    I am pleased to announce the availability of the July Technology Preview of the PHP SDK for Windows Azure. As part of Microsoft’s continued commitment to interoperability, we announced the open source PHP SDK for Windows Azure in May in collaboration with our development partner RealDolmen.

    There are two key activities that I am excited about in this release:

    • Submission of PHP SDK for Windows Azure to Zend Framework
    • Feature completion of Windows Azure Table Storage APIs in PHP

    We received good feedback in the past couple of months and have addressed a few defects in the blob storage as well.

    Submission of PHP SDK for Windows Azure to Zend Framework

    Microsoft & RealDolmen have decided to make the PHP SDK for Windows Azure available as part of Zend Framework. By extending support for Windows Azure through Zend Framework, the millions of PHP developers who use Zend Framework can build web applications seamlessly targeting Windows Azure. RealDolmen has formally submitted the July CTP repository to Zend Framework’s laboratories to begin the review and approval process. Upon approval, Zend Framework will publish a technology preview package of the SDK on the Zend Framework website. We will continue to work closely with Zend to ensure consistency across the standalone and Zend Framework versions of the PHP SDK for Windows Azure.

    I worked with Zend when we demonstrated information card interoperability in PHP-based web applications through the Zend Information Card component (read this to see it in action) and continue to enjoy the great working experience. I look forward to the release of PHP support for Windows Azure in Zend Framework.

    Support for Table Storage

    The Windows Azure Table service offers structured storage in the form of tables that contain sets of entities, each of which contains a set of named properties. A few highlights of Windows Azure Table storage are:

    • Compile-time type checking when using the ADO.NET Data Services client library.
    • A rich set of data types for property values.
    • Support for an unlimited number of tables and entities, with no limit on table size.
    • Strong consistency for single-entity transactions.
    • Optimistic concurrency for updates and deletes.


    The Table service exposes a REST API.  The PHP classes for the Table service provide developers with an abstraction upon the REST APIs for CRUD and Query operations. Some of the features supported in this milestone are:

    • SharedKey Lite authentication (for the local table storage service in the SDK)
    • Querying, creating, deleting, and updating tables - enumerating the tables in a storage account
    • Querying, creating, deleting, and updating entities - querying the data in a table
    • Batch transactions

    Detailed usage scenarios of Table storage can be found here.
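    To illustrate what an abstraction upon the REST API means in practice, here is a small sketch of the kind of query URL the Table service ultimately receives. The account, table, and filter values are hypothetical; the SDK’s PHP classes compose and authenticate such requests for you.

```php
<?php
// Hypothetical sketch: a Table service query is ultimately an
// HTTP GET against a URL of this shape (following the ADO.NET
// Data Services conventions); the PHP SDK builds and signs the
// request so you never assemble it by hand.
function table_query_url($account, $table, $filter)
{
    return 'http://' . $account . '.table.core.windows.net/'
         . $table . '()?$filter=' . rawurlencode($filter);
}

$url = table_query_url('myaccount', 'guestbook', "PartitionKey eq 'messages'");
echo $url, "\n";
```

    Entities are addressed by their PartitionKey and RowKey, which is why the filter above selects on PartitionKey.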

    Please note that you need to have the May CTP of Windows Azure to take advantage of the features in this release of PHP SDK for Windows Azure.

    - Vijay Rajagopalan

  • Interoperability @ Microsoft

    Getting started with PHP on Windows Azure Tools for Eclipse


    Recently I’ve been chatting with quite a few developers who are looking to get their PHP applications working in the cloud. Many are exploring different options and trying to navigate all the offerings available. One of your choices may be Windows Azure. Our team, all of whom actively blog here, is working very hard in partnership with the product team to make Windows Azure the most interoperable cloud platform yet.

    Microsoft has built Windows Azure as an open platform that offers choice for developers. You can use multiple languages, including .NET, Ruby, Python, Java, or, for the purposes of this post, PHP! You also have the option of using tools that simplify the life of the developer, such as Microsoft's Visual Studio or the open source Eclipse IDE. You can build applications that run on and consume any of the Windows Azure platform offerings, and even those from other clouds. You can also connect to servers that you run yourself, whether under your desk, in a nearby office, or in your datacenter, as part of the composite applications you build. Windows Azure is standards-based and interoperable, and it supports commonly used Internet protocols such as HTTP, XML, SOAP and REST. Through these popular protocols, we are committed to making data portability real for users and their information. The image below provides a glimpse of all the parts working together to help make this real.

    azure interop

    If you use Eclipse, you’re already most of the way there, and I am going to illustrate how you can get some simple PHP code up on the cloud using the Windows Azure Tools for Eclipse project, which was developed in partnership with Soyatec, an active contributor to the Eclipse community. This project is a feature-rich, open source PHP application development environment in Eclipse that enables development and deployment of PHP applications to Windows Azure. The windowsazure4e plug-in builds upon the PHP Development Toolkit (PDT) and integrates the Web Tools Platform (WTP) to provide a complete toolkit for Windows Azure Web application development.

    Simply put, the project does a few things that accelerate getting a cloud-based project up and running:

    • Project Creation & Migration: The New Project Wizard creates a new PHP Web application targeting Windows Azure. Existing PHP projects can be converted to Windows Azure projects (or vice versa) using the migration tool.
    • Azure Project Structure & Management: The windowsazure4e plug-in creates the project artifacts that Windows Azure expects, including a Windows Azure Service project and a Web-role project, as well as Windows Azure configuration and definition files. Project and Windows Azure settings are exposed via the properties window in Eclipse.
    • Storage Explorer: As part of the plug-in, a Windows Azure Storage Explorer is provided within the Eclipse environment. The Storage Explorer allows easy management of Windows Azure storage accounts. In addition, it provides a friendly user interface for performing Create, Read, Update, and Delete (CRUD) operations on Blobs, Queues, and Tables. The Storage Explorer is built using the Windows Azure SDK for Java.
    • Azure Project Deployment: Once the PHP application for Windows Azure has been developed and tested locally on the Windows Azure Development Fabric, the application can be packaged up for Windows Azure deployment by right-clicking the target project from within Eclipse.

    First, make sure you have the prerequisites detailed in this web page, which are all publicly available. I would recommend using the Web Platform Installer to get the free versions of SQL Server 2008 Express and Microsoft Visual Web Developer 2008 Express Edition with SP1. You can find these as choices in the Web Platform tab under Database and Tools in the installer. It’s quick, easy and, again, free!


    These versions will work with the Windows Azure Tools for Microsoft Visual Studio 1.1 (February 2010) and will provide the necessary hooks so that your Eclipse IDE can take advantage of the cloud. The same download link also has instructions for making sure you have the right settings for your development system, including turning on features that are usually off by default. There is also a handy MSDN page for getting started with the Windows Azure SDK if you are looking for more details and want to review the documentation. If you plan on deploying to the cloud, you will want to go to the Windows Azure Getting Started page and find the best option for you.

    Once you have these in place you can get started with your Eclipse setup. If you haven’t downloaded Eclipse already: since the IDE is built with Java, you will first need to get a current version of the Java Development Kit (JDK) or Java Runtime Environment (JRE), available at the Java download site. Anything v1.5 and above will suffice. Then you can go ahead and download Eclipse. I have had good success with the Galileo version, aka PDT 2.1 SR-1 All In Ones / Eclipse PHP Package, available at this link. Test the IDE to make sure it launches, and also make sure you have a connection to the Internet. If you can hit a web page, you’re pretty much good to go!

    Okay, let’s launch the Eclipse IDE, head to the Help menu and select “Install New Software”, as in the image here. Help-InstallNewSW

    In the Available Software dialog, click the Add... button. This will bring up a pop-up dialog; use http://www.windowsazure4e.org/update for the location, and enter something descriptive of your choice in the Name field. Windows Azure Tools for Eclipse is used here.


    Select All available sites. If the list of categories doesn't contain the entry Windows Azure, you need to restart Eclipse. Select Windows Azure PHP Development Toolkit and click the Next button. You may also want to select the other items if you are interested, or you may add them later from the same menus.

    Available Software 

    In the next dialog, check Windows Azure PHP Development Toolkit and click the Next button.

    Install Details

    Then read the license agreement carefully. If you accept all the conditions, select the I accept... option and click the Finish button. The IDE will then do some thinking and shortly start downloading and installing the required JAR packages. When installation finishes, it will pop up a dialog; click the Yes button to restart Eclipse for the changes to take effect.


    To check that the plug-in installation was successful, you can select Help -> About Eclipse. This window will verify that you have the right build of Eclipse for PHP Developers and the build version you installed. In the next dialog box, click Installation Details,

    About Eclipse

    select the Installed Software option, and you will see Windows Azure PHP Development Toolkit in the Name list, along with any other options you may have chosen before. See the screenshot below.

    Eclipse Installation Details

    Another way to verify this is that there will be a Windows Azure menu item, right next to Help, with some tools that help in working with Windows Azure in PHP.

    Windows Azure Menu Item

    Congratulations, you’ve just installed Windows Azure Tools for Eclipse! Now let’s write a PHP application and get it running on the cloud.

    If you haven’t already created or assigned a workspace folder for Eclipse, do so now. I have created a folder in my user dev hierarchy, but it can be anywhere on your machine where you have space for your projects.

    Workspace Launcher

    Now you will want to change the perspective to PHP Windows Azure by going to the Window menu bar item and then selecting Open Perspective and the Other sub-item. The PHP perspective is usually set as the default and is also available from the same menu item. For now, just click on Other….

    select perspective

    In the Open Perspective panel, select PHP Windows Azure.

    Open Perspective 

    In the PHP Windows Azure perspective, create a new PHP Windows Azure project by selecting File from the menu bar, then New, and then Windows Azure Web Project.

    New Windows Azure Project

    This will launch a new window titled PHP Azure Project, where you will be able to create and title your new project. Provide a Project Name; I’ve titled mine HelloPHPInfo, and as PHP developers you probably know where we’re going with this. You will also want to click the Create new project in workspace button if you haven’t already, and make sure the Data Storage Options button is set to None, as we’ll not be calling any storage for this example. Then click Finish; if you hit Next instead, don’t worry, you’ll get some additional info and can just click Finish or Back.

    New PHP Azure Web Project

    Before we go on, we’ll need to start the Development Fabric if it isn’t already running: right-click its icon in the system tray and select Start Development Fabric Service. To view the Development Fabric UI, right-click its icon in the system tray and select Show Development Fabric UI.

    lab0_StartDevFabric lab0_ShowDevFabricUI

    At this point there should be no web roles deployed within the Development Fabric.


    We will now go back to Eclipse and run the Web role in the local Development Fabric from the Eclipse menu bar. First go to the PHP Explorer tab and pick either of the projects so that it is selected (e.g. HelloPHPInfo or HelloPHPInfo_WebRole; I picked the latter), and then select the Windows Azure menu bar and the Run in Development Fabric menu item.

    Run Dev Fabric

    The service will then start, a Progress Information/Project in Progress window will show up with some information, and then it will launch your default web browser and present the default document, index.php, on the next available port. It will also open an explorer window for the project you created, in this case HelloPHPInfo. I went into the index.php file in the HelloPHPInfo_WebRole folder and modified the automatically generated file’s <H1> tag to include Hello, just to make sure that it’s my version; you can do something similar. The page also runs the phpinfo() command, as you would typically do when checking whether your PHP installation is configured properly; it shows information such as the version and the location of your php.ini, which, as you can see, is in your Eclipse workspace directory along with the other files such as the service definition, which we will get into a little later.

    Development Fabric Web Page
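    For reference, the relevant part of index.php after my edit is roughly the following; the heading text is my own addition, while the phpinfo() call is what the wizard generates.

```php
<?php
// A minimal index.php along the lines of the generated one.
// The "Hello" heading is the small manual edit described above.
$heading = '<h1>Hello, Windows Azure!</h1>';
echo $heading, "\n";

// phpinfo() reports the PHP version and the php.ini location,
// which is how you can confirm which configuration the Web role
// is actually using.
phpinfo();
```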

    If you open the Development Fabric UI from the Windows Azure icon in the system tray, you will find that the tree in the left panel has the Web role deployment instances set within ServiceConfiguration.cscfg, showing everything running in the local Development Fabric on your machine.

    Development Fabric 

    Okay, we now have our PHP application running in our local Development Fabric. Let’s deploy it to the cloud and run this PHP application and service remotely using a Windows Azure storage account. This is where the Windows Azure account you set up comes in. First you will need to create a Windows Azure service package. Again, go to your project and pick either the project or the web role (HelloPHPInfo or HelloPHPInfo_WebRole), then pick the Windows Azure menu and select the Publish Application to Windows Azure Portal menu item. This action will open a portal to Windows Azure.

    Publish to Portal

    We’ll still have the Development Fabric running, and a cleanup is necessary to create a Windows Azure service package; select the OK button to proceed.


    You will soon have a service package created with the build results, and a Windows Explorer window will open the HelloPHPInfo workspace folder, where you will see two new additions to the project. The first is a ServiceDefinition.csx folder and the other is a HelloPHPInfo.cspkg service package file.

    Package Folder

    The tool then opens your default web browser to Windows Live Sign-In in order to access your Windows Azure account. Sign in with the credentials you have created and it will direct you to the Windows Azure Portal.


    If you then click the Windows Azure link on the left navigation pane, it will expand to give you a + New Service link, which, when clicked, will allow you to create a service. Here you can set up a public service name (I picked HelloPHPInfo) and use Check Availability to see if the name is available. I also selected the first option, since I don’t have any hosted services or storage accounts for this project; you may want to choose otherwise if you have them and need things like custom domains. I also selected the Region to be Anywhere US; you can pick the one that is most applicable to you. All you then have to do is click the Create button at the bottom of the page.

    Create Hosted Service

    The section of the next page that you will be interested in is Hosted Service, where you will want to select Deploy to Staging. If you only see the Deploy button for Production, click on the middle separator bar with the arrow highlighted below.

    Hosted Service

    The next page, Staging Deployment, will ask you for the two files mentioned earlier. Pick the Upload a file from local storage option for the Application Package (in this example, HelloPHPInfo.cspkg) and for the Configuration Settings (ServiceConfiguration.cscfg); both can be picked from the workspace folder you set for the project using the Browse buttons. Then set the Service Deployment Name with a label; I have called it HelloPHPInfo. Click Deploy to start the process of copying your files up for deployment to the Windows Azure cloud.

    Application Package 

    Wait a few minutes for the deployment to complete; it may take you to a blank page with something like a button that states “Processing, Please Wait”. Click the Run button on the next page and the web role status will change from Stopped to Initializing to Busy.

    Staging Staging Initializing

    When the web role status is Busy, the Web Site URL becomes clickable; if clicked before that, it will give a web page that cannot be displayed. The service also gets a unique address for the URL and a Deployment ID. When moved to Production, you will find that the friendly name you selected earlier is used; in this example, the final production URL will be http://HelloPHPInfo.cloudapp.net.

    Staging Busy

    Clicking on the Web Site URL gives the following page, which renders exactly like the example we ran using the local Development Fabric. There are some noticeable differences, though, such as the path of php.ini compared to the local deployment.

    Deployed Web Page 

    Congratulations, you are cloud computing!

    Now you can deploy pretty much any PHP application to the Windows Azure cloud as in this tutorial: we created a simple PHP Windows Azure web project, built and ran it within the Development Fabric on our local machine, and then went on to deploy and run it in the Windows Azure cloud.

    I hope you found this helpful and we look forward to hearing from you on your experiences. Please send feedback!

    Jas Sandhu, @jassand

  • Interoperability @ Microsoft

    Solr and LucidWorks: enterprise search for Windows Azure


    Last week’s Windows Azure release delivered a host of new services for developers, ranging from hybrid cloud capabilities and Linux virtual machine support to OSS technologies delivered as a service from many vendors. Gianugo Rabellino covered the high-level view of all the exciting new offerings, and in this post I’d like to take a closer look at a service that’s likely to become very popular: LucidWorks Cloud for Windows Azure.

    Lucid Imagination, the leading experts in Lucene/Solr technology, has packaged its LucidWorks Enterprise search service in a cloud-friendly way that requires only four quick and simple steps: select a plan, sign up, log in, and start using it. LucidWorks Enterprise is based on Apache Solr, the open-source search platform from the Apache Lucene project, and it includes a variety of enhancements from the search experts at Lucid that make it easy to use Lucene/Solr functionality while preserving the purity of the open source code base and open APIs. There’s a comprehensive REST API for integrating it into your applications and services, and you get all of the functionality that has made Solr and LucidWorks Enterprise so popular: high-performance indexing for a wide range of data sources, flexible searching and faceting, and user-oriented features like auto-complete, spell-checking, and click scoring.

    As covered on the Lucid Imagination web site, there are four levels of service available for LucidWorks Cloud: Micro, Small, Medium and Large. Pick the level that meets your needs, sign up for the service, and you’re ready to start creating collections and searching your content. You can currently search content in web sites, Windows shares, Microsoft SharePoint sites, FTP, and other sources, with Windows Azure blob storage support coming soon. You can even index and search your data from Hadoop if desired. All index data is stored on Windows Azure drives, which offer high availability and reliability, and the Lucid dev operations engineering team can provide expert support for your LucidWorks Cloud environment.

    If you’re new to Solr, check out the free white paper available for download from the Lucid web site, which covers the basics of LucidWorks Enterprise and shows how to use the indexing and searching functionality through the LucidWorks dashboard. Most developers will want to study the API and integrate search tightly into their own software, but you can learn all of the key concepts through the dashboard UI without writing a single line of code.

    One concept worth pointing out here is that Solr isn’t just about searching web sites and HTTP documents. Sure, it does a great job of that, but it can also index content stored in database tables, local file systems, and other sources. There is also an XML-based Solr document format that you can use for importing data directly into the Solr engine, giving developers flexibility for indexing any type of content from any source.
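    As a sketch of that document format (the field names here are made up for illustration), a short PHP snippet can assemble the <add> envelope that you would then POST to Solr’s update handler:

```php
<?php
// Illustrative sketch: build a Solr XML update message, an <add>
// element wrapping a <doc> whose <field> children carry the data.
// Field names below are hypothetical examples, not a fixed schema.
function solr_add_xml(array $fields)
{
    $dom = new DOMDocument('1.0', 'UTF-8');
    $add = $dom->appendChild($dom->createElement('add'));
    $doc = $add->appendChild($dom->createElement('doc'));
    foreach ($fields as $name => $value) {
        $field = $doc->appendChild($dom->createElement('field'));
        $field->setAttribute('name', $name);
        $field->appendChild($dom->createTextNode($value));
    }
    return $dom->saveXML();
}

$xml = solr_add_xml(array(
    'id'    => 'doc-001',
    'title' => 'Enterprise search for Windows Azure',
));
echo $xml;
```

    The resulting XML is what gets posted to the Solr engine, regardless of whether the original content came from a web site, a database table, or a local file system.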

    This new service from Lucid Imagination is great for those who want to get up and running quickly, but there are also developers who will want to take responsibility for all of the details and host Solr or LucidWorks Enterprise themselves. You can download LucidWorks Enterprise and install it, or you can take advantage of the simple Solr installer for Windows Azure that helps you deploy your own Solr instances as Windows Azure cloud services.

    As you can see, there are many options for getting up and running with Solr and LucidWorks. For a simple overview of how easy it is to start using the new LucidWorks Cloud service, check out this Getting Started video that covers how to create a collection, index a web site, and then search that site using the LucidWorks Cloud dashboard. Lucid continues to evolve and invest in supporting the most popular Solr clients, so there will surely be more good news for Lucene/Solr users going forward.

    In a future blog post, we’ll be covering how to use LucidWorks Cloud with popular content management systems such as WordPress and Drupal.

  • Interoperability @ Microsoft

    New Media Capture Audio Prototype Released


    As we announced in April, we have been working hard on developing a prototype to cover the Media Capture API, a draft specification that defines HTML form enhancements to provide access to the audio, image and video capture capabilities of a device.

    I am delighted to announce that today we have the first release of the prototype. It includes audio capabilities only, but we plan to add image and video support over the next month or so.

    This first version of the Media Capture prototype implements the Audio portion of this W3C specification. We have also included a sample that demonstrates how to properly utilize the APIs that the IE9 plugin exposes.
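As a reminder of what the draft specifies, the form enhancements center on a `capture` attribute (paired with an `accept` type hint) on file inputs; a minimal sketch, assuming a page follows the draft's syntax (the form action and field name are hypothetical):

```html
<!-- Sketch of the draft's HTML form enhancement for audio capture -->
<form action="/upload" method="post" enctype="multipart/form-data">
  <!-- "capture" hints that the browser should record from the microphone
       rather than prompt for an existing file -->
  <input type="file" name="clip" accept="audio/*" capture="microphone">
  <input type="submit" value="Upload recording">
</form>
```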

    Once a user has connected their microphone and the drivers are properly installed, they can click on the microphone icon and the web page will capture sound until it either detects silence, is stopped (the captured sequence is preserved), or is cancelled (the captured sequence is discarded). When the Play button is pressed, the sounds just captured will play back.

    A screenshot of the Audio Capture Demo that lets you record and play back captured sounds

    Our next prototype will support Speech recognition and will implement the Microsoft proposal available on the W3C website here and here. It will also include two implementations of the sample apps that are described in sections 5.1 and 5.2 of the draft.

    Then, after that, we will deliver another update to the Media Capture prototype that will add video capabilities. We are very excited about the ability of these extensions to the existing IE9 capabilities to showcase how everybody will be able to interact in an ever more natural way with the Web going forward.

    Again, my thanks to you for helping Microsoft and the Internet Explorer team build a better and more interoperable Web, and I encourage you to continue participating in the appropriate standards bodies to help finalize the specifications.

    So stay tuned for all this goodness!


    Claudio Caldato,

    Principal Program Manager, Interoperability Strategy Team

  • Interoperability @ Microsoft

    MongoDB on Azure – Onsite in New York City!


    Excitement is in the air for the MongoDB community and Microsoft Open Technologies, Inc. this week as MongoDB’s inaugural global community event, MongoDB World, is now underway at the Sheraton in New York City. I’m here and I’ll be around for the whole event – I’m looking forward to meeting the global MongoDB community! Check out the full post on the MS Open Tech Blog

  • Interoperability @ Microsoft

    Symfony on Windows Azure, a powerful combination for PHP developers


    Symfony, the popular open source web application framework for PHP developers, is now even easier to use on Windows Azure thanks to Benjamin Eberlei’s Azure Distribution Bundle project. You can find the source code and documentation on the project’s GitHub repo.

    Symfony is a model-view-controller (MVC) framework that takes advantage of other open-source projects including Doctrine (ORM and database abstraction layer), PHP Data Objects (PDO), the PHPUnit unit testing framework, Twig template engine, and others. It eliminates common repetitive coding tasks so that PHP developers can build robust web apps quickly.

    Symfony and Windows Azure are a powerful combination for building highly scalable PHP applications and services, and the Azure Distribution Bundle is a free set of tools, code, and documentation that makes it very easy to work with Symfony on Windows Azure. It includes functionality for streamlining the development experience, as well as tools to simplify deployment to Windows Azure.

    Features that help streamline the Symfony development experience for Windows Azure include changes to allow use of the Symfony Sandbox on Windows Azure, functionality for distributed session management, and a REST API that gives Symfony developers access to Windows Azure services using the tools they already know best. On the deployment side, the Azure Distribution Bundle adds new Windows Azure-specific commands to Symfony’s PHP app/console, making it easier to deploy Symfony applications to Windows Azure:

    • windowsazure:init – initializes scaffolding for a Symfony application to be deployed on Windows Azure
    • windowsazure:package – packages the Symfony application for deployment on Windows Azure

    Benjamin Eberlei, lead developer on the project, has posted a quick-start video that shows how to install and work with the Azure Distribution Bundle. His video takes you through prerequisites, installation, and deployment of a simple sample application that takes advantage of the SQL Database Federations sharding capability built into the SQL Database feature of Windows Azure:

    Whether you’re a Symfony developer already, or a PHP developer looking to get started on Windows Azure, you’ll find the Azure Distribution Bundle to be easy to use and flexible enough for a wide variety of applications and architectures. Download the package today – it includes all of the documentation and scaffolding you’ll need to get started. If you have ideas for making Symfony development on Windows Azure even easier, you can join the project and make contributions to the source code, or you can provide feedback through the project site or right here.

    Symfony and Doctrine are often used in combination, as shown in the sample application mentioned above. For more information about working with Doctrine on Windows Azure, see the blog post Doctrine supports SQL Database Federations for massive scalability on Windows Azure.

    Symfony and Doctrine have a rich history in the open source and PHP communities, and we’re looking forward to continuing our work with these communities to make Windows Azure a big part of the Symfony/Doctrine story going forward!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Pointer Events Standardization Reaches Key Maturity Milestone at W3C, MS Open Tech Releases Second Drop of Pointer Events Prototype for WebKit


    Asir Vedamuthu Selvasingh, Principal Program Manager
    Microsoft Open Technologies, Inc.

    Adalberto Foresti, Principal Program Manager
    Microsoft Open Technologies, Inc.

    Developers can start building multi-input websites and apps with greater confidence that an emerging industry standard will enable building a single website targeting multiple devices and platforms.

    Only three months after its creation, the W3C Pointer Events Working Group has announced that Pointer Events has reached “Last Call Working Draft” status and is considered feature complete by the Working Group. The W3C Pointer Events Working Group has been hard at work over the last few months to standardize a single device input model – mouse, pen and touch – across multiple browsers. Congratulations to the W3C Pointer Events Working Group!

    Microsoft Open Technologies, Inc. (MS Open Tech), and the Microsoft Corp. Internet Explorer teams have been working with our colleagues across the industry, engaging developers to test and provide feedback on the specification, and incorporating all the received feedback into this Last Call Working Draft.

    “Last Call Working Draft” means that members of the Working Group, including representatives from Google, jQuery Foundation, KAIST, Microsoft, Mozilla, Nokia, Opera, Zynga, and others, consider that this specification has satisfied all the technical requirements outlined in the Working Group Charter. The working group intends to advance the specification to implementation after this Last Call review.

    Build now with Pointer Events

    What’s cool is that you can go build websites using Pointer Events today. The Working Group is using Microsoft’s member submission as a starting point for the specification, which is based on the APIs available today in IE10 on Windows 8 and Windows Phone 8.

    If you are building your apps using Pointer Events and testing these apps on various browsers, you should try out the hand.js polyfill developed by David Catuhe from Microsoft France. Check out a demo that uses hand.js - universal virtual joystick. We expect that native implementations for WebKit-based browsers will follow shortly.
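When targeting both IE10 and browsers covered by the hand.js polyfill, one practical step is feature-detecting which event name to bind. This is a sketch, assuming the draft's `navigator.pointerEnabled` flag and IE10's prefixed `msPointerEnabled`; the helper name is our own:

```javascript
// Pick the "pointer down" event name for the current environment.
// Unprefixed names come from the W3C draft (and hand.js); IE10 shipped
// the vendor-prefixed MSPointerDown variant.
function pointerDownEventName(win) {
  var nav = win.navigator || {};
  if (nav.pointerEnabled) return "pointerdown";      // W3C draft / polyfill
  if (nav.msPointerEnabled) return "MSPointerDown";  // IE10 prefixed form
  return "mousedown";                                // mouse-only fallback
}

// Usage sketch:
//   element.addEventListener(pointerDownEventName(window), onPointerDown);
```

Binding through a helper like this keeps the handler code itself identical across browsers, which is the whole point of the single input model.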

    To demonstrate cross-browser interoperability for Pointer Events, MS Open Tech developed a Pointer Events prototype for WebKit on HTML5 Labs and submitted the patch to the WebKit community. Today MS Open Tech posted an updated version of the patch on HTML5 Labs and on the WebKit issue tracker, incorporating community feedback received on the previous version. Working with the WebKit community, MS Open Tech will continue updating this prototype to implement the latest draft of the specification.

    Recently, MS Open Tech hosted an HTML5 Labs Test Jam event on Feb. 11 to share an early preview of the new prototype and collect feedback, and the browser community has been playing with the prototype as noted in a blog post by our friends at AppendTo. AppendTo shares Chromium builds for OS X and for Windows that integrate the Pointer Events patch by MS Open Tech.

    Learn more about Pointer Events

    If you are attending W3Conf this week in San Francisco, you don’t want to miss “Pointing Forward” at 3:00 pm on Thursday, February 21, presented by Jacob Rossi, program manager for Internet Explorer and co-editor of the W3C Pointer Events specification. You also can watch a live stream of his conference presentation on the W3Conf site and UStream, or later on video on demand.

    And, you can learn more by checking out the Pointer Events Primer on WebPlatform.org, developed by Rob Dolin, senior program manager at MS Open Tech. The primer provides guidance on how to use Pointer Events in ways similar to mouse events, and how to access and use additional attributes such as pointer type, button(s) pressed, touch size and pen tilt. The primer is a great resource if you are migrating your code from handling mouse, to consistently handling input from mouse, pen and touch.

    We’ve been happy to share the great progress for the W3C Pointer Events emerging standard. Stay tuned for more updates as we work together on this open standard that can further enable natural and simple computing interfaces on the Web.

  • Interoperability @ Microsoft

    Lowering the barrier of entry to the cloud: announcing the first release of Actor Framework from MS Open Tech (Act I)



    Erik Meijer, Partner Architect, Microsoft Corp.

    Claudio Caldato, Principal Program Manager Lead, Microsoft Open Technologies, Inc.


    There is much more to cloud computing than running isolated virtual machines, yet writing distributed systems is still too hard. Today we are making progress towards easier cloud computing as ActorFx joins the Microsoft Open Technologies Hub and announces its first, open source release. The goal for ActorFx is to provide a non-prescriptive, language-independent model of dynamic distributed objects, delivering a framework and infrastructure atop which highly available data structures and other logical entities can be implemented.

    ActorFx is based on the idea of the Actor Model developed by Carl Hewitt, and further contextualized to managing data in the cloud by Erik Meijer in his paper that is the basis for the ActorFx project − you can also watch Erik and Carl discussing the Actor model in this Channel9 video.

    What follows is a quick high-level overview of some of the basic ideas behind ActorFx. Follow our project on CodePlex to learn where we are heading and how it will help when writing the new generation of cloud applications.

    ActorFx high-level Architecture

    At a high level, an actor is simply a stateful service implemented via the IActor interface. That service maintains some durable state, and that state is accessible to actor logic via an IActorState interface, which is essentially a key-value store.



    There are a couple of unique advantages to this simple design:

    • Anything can be stored as a value, including delegates.  This allows us to blur the distinction between state and behavior – behavior is just state.  That means that actor behavior can be easily tweaked “on-the-fly” without recycling the service representing the actor, similar to dynamic languages such as JavaScript, Ruby, and Python.
    • By abstracting the IActorState interface to the durable store, ActorFx makes it possible to “mix and match” back ends while keeping the actor logic the same.  (We will show some actor logic examples later in this document.)

    ActorFx Basics

    The essence of the ActorFx model is captured in two interfaces: IActor and IActorState.

    IActorState is the interface through which actor logic accesses the persistent data associated with an actor; it is the interface implemented by the “this” pointer.

    public interface IActorState
    {
        void Set(string key, object value);
        object Get(string key);
        bool TryGet(string key, out object value);
        void Remove(string key);
        Task Flush(); // "Commit"
    }

    By design, the interface is an abstract key-value store.  The Set, Get, TryGet and Remove methods are all similar to what you might find in any Dictionary-type class, or a JavaScript object.  The Flush() method allows for transaction-like semantics in the actor logic; by convention, all side-effecting IActorState operations (i.e., Set and Remove) are stored in a local side-effect buffer until Flush() is called, at which time they are committed to the durable store (if the IActorState implementation is backed by one).

    The IActor interface

    An ActorFx actor can be thought of as a highly available service, and IActor serves as the computational interface for that service.  In its purest form, IActor would have a single “eval” method:

    public interface IActor
    {
        object Eval(Func<IActorState, object[], object> function,
                    object[] parameters);
    }

    That is, the caller requests that the actor evaluate a delegate, accompanied by caller-specified parameters represented as .NET objects, against an IActorState object representing a persistent data store.  The Eval call eventually returns an object representing the result of the evaluation.

    Those familiar with object-oriented programming should be able to see a parallel here.   In OOP, an instance method call is equivalent to a static method call into which you pass the “this” pointer.  In the C# sample below, for example, Method1 and Method2 are equivalent in terms of functionality:

    class SomeClass
    {
        int _someMemberField;
        public void Method1(int num) { _someMemberField += num; }
        public static void Method2(SomeClass thisPtr, int num) { thisPtr._someMemberField += num; }
    }

    Similarly, the function passed to the IActor.Eval method takes an IActorState argument that can conceptually be thought of as the “this” pointer for the actor.  So actor methods (described below) can be thought of as instance methods for the actor.

    Actor Methods

    In practice, passing delegates to actors can be tedious and error-prone.  Therefore, the IActor interface calls methods using reflection, and allows for transmitting assemblies to the actor:

    public interface IActor
    {
        string CallMethod(string methodName, string[] parameters);
        bool AddAssembly(string assemblyName, byte[] assemblyBytes);
    }

    Though the Eval method is still an integral part of the actor implementation, it is no longer part of the actor interface (at least for our initial release).  Instead, it has been replaced in the interface by two methods:

    • The CallMethod method allows the user to call an actor method; it is translated internally to an Eval() call that looks up the method in the actor’s state, calls it with the given parameters, and then returns the result.
    • The AddAssembly method allows the user to transport an assembly containing actor methods to the actor.

    There are two ways to define actor methods:

    (1)   Define the methods directly in the actor service, “baking them in” to the service.

    Func<IActorState, object[], object> sayHello =
        delegate(IActorState astate, object[] parameters)
        {
            return "Hello!";
        };

    (2)   Define the methods on the client side.

            [ActorMethod]
            public static object SayHello(IActorState state, object[] parameters)
            {
                return "Hello!";
            }


    You would then transport them to the actor “on-the-fly” via the actor’s AddAssembly call.

    All actor methods must have identical signatures (except for the method name):

    • They must return an object.
    • They must take two parameters:
      • An IActorState object to represent the “this” pointer for the actor, and
      • An object[] array representing the parameters passed into the method.

    Additionally, actor methods defined on the client side and transported to the actor via AddAssembly must be decorated with the “ActorMethod” attribute, and must be declared as public and static.

    Publication/Subscription Support

    We wanted to be able to provide subscription and publication support for actors, so we added these methods to the IActor interface:

    public interface IActor
    {
        string CallMethod(string clientId, int clientSequenceNumber,
                          string methodName, string[] parameters);
        bool AddAssembly(string assemblyName, byte[] assemblyBytes);
        void Subscribe(string eventType);
        void Unsubscribe(string eventType);
        void UnsubscribeAll();
    }

    As can be seen, event types are coded as strings.  An event type might be something like “Collection.ElementAdded” or “Service.Shutdown”.  Event notifications are received through the FabricActorClient.

    Each actor can define its own events, event names and event payload formats.  And the pub/sub feature is opt-in; it is perfectly fine for an actor to not support any events.

    A simple example: Counter

    If you wanted your actor to support counter semantics, you could implement an actor method as follows:

        public static object IncrementCounter(IActorState state, object[] parameters)
        {
            // Grab the parameter
            var amountToIncrement = (int)parameters[0];

            // Grab the current counter value
            int count = 0; // default on first call
            object temp;
            if (state.TryGet("_count", out temp))
                count = (int)temp;

            // Increment the counter
            count += amountToIncrement;

            // Store and return the new value
            state.Set("_count", count);
            return count;
        }

    Initially, the state for the actor would be empty.

    After an IncrementCounter call with parameters[0] set to 5, the actor’s state would look like this: