January, 2013

  • Interoperability @ Microsoft

    MS Open Tech publishes HTML5 Labs prototype of a Customizable, Ubiquitous Real Time Communication over the Web API proposal


    Prototype with interoperability between Chrome on a Mac and IE10 on Windows

    From:

    Martin Thomson
    Senior Architect, Skype, Microsoft Corp.

    Bernard Aboba
    Principal Architect, Lync, Microsoft Corp.

    Adalberto Foresti
    Principal Program Manager, Microsoft Open Technologies, Inc.

    The hard work continues at the W3C WebRTC Working Group, where we collectively aim to define a standard for customizable, ubiquitous Real Time Communication over the Web. In support of our earlier proposal, Microsoft Open Technologies, Inc. (MS Open Tech) is now publishing a working prototype implementation of the CU-RTC-Web proposal on HTML5Labs to demonstrate a real-world interoperability scenario – in this case, voice chat between Chrome on a Mac and IE10 on Windows via the API.

    By publishing this working prototype in HTML5 Labs, we hope to:

    • Clarify the CU-RTC-Web proposal with interoperable working code so others can understand exactly how the API could be used to solve real-world use cases.
    • Show what level of usability is possible for Web developers who don’t have deep knowledge of the underlying networking protocols and interface formats.
    • Encourage others to publish working example code that shows exactly how their proposals could be used by developers to solve use cases in an interoperable way.
    • Seek developer feedback on how the CU-RTC-Web addresses interoperability challenges in Real Time Communications.
    • Provide a source of ideas for resolving open issues with the current draft API, since the CU-RTC-Web proposal is cleaner and simpler.

    Our earlier CU-RTC-Web blog post described critical requirements that a successful, widely adoptable Web RTC browser API will need to meet:

    • Honoring key web tenets – The Web favors stateless interactions that do not saddle either party of a data exchange with the responsibility to remember what the other did or expects. Doing otherwise is a recipe for extreme brittleness in implementations; it also considerably raises development costs, which reduces the reach of the standard itself.
    • Customizable response to changing network quality – Real time media applications have to run on networks with a wide range of capabilities varying in terms of bandwidth, latency, and packet loss. Likewise, these characteristics can change while an application is running. Developers should be able to control how the user experience adapts to fluctuations in communication quality. For example, when communication quality degrades, the developer may prefer to favor the video channel, favor the audio channel, or suspend the app until acceptable quality is restored. An effective protocol and API should provide developers with the tools to tailor the application response to the exact needs of the moment.
    • Ubiquitous deployability on existing network infrastructure – Interoperability is critical if WebRTC users are to communicate with the rest of the world: with users on different browsers, VoIP phones, and mobile phones, from behind firewalls, and across routers and equipment that is unlikely to be upgraded to the current state of the art anytime soon.
    • Flexibility in its support of popular media formats and codecs as well as openness to future innovation – A successful standard cannot be tied to individual codecs, data formats, or scenarios; any of these may soon be supplanted by newer versions, which would make a tightly coupled standard obsolete just as quickly. The right approach is instead to support multiple media formats and to bring the bulk of the logic to the application layer, enabling developers to innovate.

    CU-RTC-Web extends the media APIs of the browser to the network. Media can be transported in real time to and from browsers using standard, interoperable protocols.

    [Diagram: CU-RTC-Web architecture, connecting the browser media APIs to the network through real-time transports]

    CU-RTC-Web starts with the network. The RealtimeTransportBuilder coordinates the creation of a RealtimeTransport. A RealtimeTransport connects a browser with a peer, providing a secured, low-latency path across the network.

    At the network layer, CU-RTC-Web demonstrates the benefits of a fully transparent API, providing applications with first class access to this layer. Applications can interact directly with transport objects to learn about availability and utilization, or to change transport characteristics.

    The CU-RTC-Web RealtimeMediaStream is the link between media and the network. RealtimeMediaStream provides a way to convert the browser’s internal MediaStreamTrack objects – an abstract representation of the media that might be produced by a camera or microphone – into real-time flows of packets that can traverse networks.

    Rather than using an opaque and indecipherable blob of Session Description Protocol (SDP, RFC 4566) text, CU-RTC-Web allows applications to choose how media is described to suit application needs. The relationship between streams of media and the network layer they traverse is not some arcane combination of SDP m= sections and a=mumble lines. Applications build a real-time transport and attach media to that transport.
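    To make that flow concrete, below is a brief sketch of how an application might wire these objects together. Only the object names (RealtimeTransportBuilder, RealtimeTransport, RealtimeMediaStream) come from the proposal; every member, event, and constructor shown is an illustrative assumption about how such a surface could look, not the proposal’s actual API.

    ```typescript
    // Hypothetical ambient declarations: the member names below are
    // illustrative assumptions, not the CU-RTC-Web proposal's actual surface.
    declare class RealtimeTransport {} // secured, low-latency path to a peer

    declare class RealtimeTransportBuilder {
      // Feed in remote candidate information as signaling delivers it.
      addRemoteCandidate(candidate: object): void;
      // Fires once a connectivity check succeeds and a transport is ready.
      onconnect: ((transport: RealtimeTransport) => void) | null;
      start(): void;
    }

    declare class RealtimeMediaStream {
      // Turns a browser MediaStreamTrack into real-time packets on a transport.
      constructor(track: MediaStreamTrack, transport: RealtimeTransport);
    }

    // 1. Start with the network: build a secured transport to the peer.
    const builder = new RealtimeTransportBuilder();
    builder.onconnect = async (transport) => {
      // 2. Attach media directly to the transport -- no SDP blob involved.
      const media = await navigator.mediaDevices.getUserMedia({ audio: true });
      const stream = new RealtimeMediaStream(media.getAudioTracks()[0], transport);
      // 3. How "stream" is described to the remote side is entirely up to the
      //    application, e.g. plain JSON over its own signaling channel.
      void stream;
    };
    builder.start();
    ```

    Even in this sketch the key design point is visible: the application, not an opaque SDP blob, owns the relationship between media and the transport it traverses.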

    Microsoft made this API proposal to the W3C WebRTC Working Group in August 2012, and revised it in October 2012 based on our experience implementing this prototype. The proposal generated both positive interest and healthy skepticism from working group members. One common concern was that it was too radically different from the existing approach, which many believed to be almost ready for formal standardization. It has since become clear, however, that the existing approach (the RTCWeb protocol and WebRTC API specifications) is far from complete and stable, and needs considerable refinement and clarification before formal standardization and before it can be used to build interoperable implementations.

    The approach proposed in CU-RTC-Web would also allow existing rich solutions to adopt and support the eventual WebRTC standard more easily. A good example is Microsoft Lync Server 2013, which already embraces Web technologies like REST and hypermedia with a new API called the Microsoft Unified Communications Web API (UCWA; see http://channel9.msdn.com/posts/Lync-Developer-Roundtable-UCWA-Overview). UCWA can be layered on the existing draft WebRTC API; however, it would interoperate more easily with WebRTC implementations if the adopted standard followed the cleaner CU-RTC-Web proposal.

    The prototype can be downloaded from HTML5Labs here. We look forward to receiving your feedback: please comment on this post or send us a message once you have played with the API, including the interop scenario between Chrome on a Mac and IE10 on Windows.

    We’re pleased to be part of the process and will continue to collaborate with the working group to close the gaps in the specification in the coming months, as we believe the CU-RTC-Web proposal can provide a simpler, and thus more easily interoperable, API design.

  • Interoperability @ Microsoft

    One step closer to full support for Redis on Windows: MS Open Tech releases 64-bit version and Windows Azure installer


    I’m happy to report new updates today for Redis on Windows Azure, the open-source, networked, in-memory, key-value data store. We’ve released a new 64-bit version that gives developers access to the full benefits of an extended address space. This was an important step in our journey toward full Windows support. You can download it from the Microsoft Open Technologies GitHub repository.

    Last April we announced the release of an important update for Redis on Windows: the ability to mimic the Linux Copy On Write feature, which enables your code to serve requests while simultaneously saving data on disk.

    Along with 64-bit support, we are also releasing a Windows Azure installer that enables deployment of Redis on Windows Azure as a PaaS solution using a single command line tool. Instructions on using the tool are available on this page and you can find a step-by-step tutorial here. This is another important milestone in making Redis work great on the Windows and Windows Azure platforms.
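    If you’d like to verify a running instance from code, any standard Redis client will work, since Redis on Windows speaks the standard Redis protocol. Here is a minimal smoke-test sketch using the “redis” npm package from Node.js; the connection URL is a placeholder for your local instance or Windows Azure endpoint, and the choice of client library is our assumption, not a requirement of this release.

    ```typescript
    // Minimal smoke test against a Redis instance, using the "redis" npm
    // package. The URL is a placeholder -- point it at your local 64-bit
    // Redis on Windows or at your Windows Azure deployment.
    import { createClient } from "redis";

    async function main() {
      const client = createClient({ url: "redis://localhost:6379" });
      client.on("error", (err) => console.error("Redis error:", err));
      await client.connect();

      await client.set("greeting", "hello from Redis on Windows");
      const value = await client.get("greeting");
      console.log(value); // -> "hello from Redis on Windows"

      await client.quit();
    }

    main().catch(console.error);
    ```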

    We are happy to share that we are now using the Microsoft Open Technologies public GitHub repository as our main source control home, so the community can follow development more closely and get involved in the project.

    We have already received some great feedback from developers interested in using Redis on Windows Azure, and we are committed to an open development process in collaboration with our more than 400 GitHub followers, which, among other benefits, will mean more frequent releases.

    Now our journey continues with two additional major steps:

    - Stress testing: Our test team has spent quite some time testing the code, but we need more extensive stress testing to exercise the new code’s reliability and to verify that Redis on Windows Azure can sustain significant workloads for extended periods before it can be relied on for production scenarios.

    - Redis 2.6: Our development team will be focused on bringing the code base up to the latest Linux version, 2.6. UPDATED 01/22/2013: an alpha version of Redis 2.6 was released today. It has a few known issues, but we expect to have a stable version in a few days.

    In addition, we want to make it easier for developers to deploy Redis by adding support for NuGet and WebPI deployment. We will make these features available very soon.

    If you are interested in running Redis on Windows, the best thing you can do is use this release as much as you can, log bugs, and share your comments and suggestions. We also have a long list of features, changes, and enhancements that we’re ready to make, so let us know if you’re interested in helping: we’re looking for a few more smart developers who want to join our dev team as contributors to the project on GitHub. Let us know if you want to join the virtual team!

    Claudio Caldato
    Principal Program Manager Lead
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Getting Started with VM Depot


    Do you need to deploy a popular OSS package on a Windows Azure virtual machine, but don’t know where to start? Or do you have a favorite OSS configuration that you’d like to make available for others to deploy easily? If so, the new VM Depot community portal from Microsoft Open Technologies is just what you need. VM Depot is a community-driven catalog of preconfigured operating systems, applications, and development stacks that can easily be deployed on Windows Azure.

    You can learn more about VM Depot in the announcement from Gianugo Rabellino over on Port 25 today. In this post, we’re going to cover the basics of how to use VM Depot, so that you can get started right away.

    Deploying an Image from VM Depot

    Deploying an image from VM Depot is quick and simple. As covered in the online documentation, VM Depot will auto-generate a deployment script for use with the Windows Azure command-line tool for Mac and Linux that you can use to deploy virtual machine instances from a selected image. You can use the command line tool on any system that supports Node.js – just install the latest version of Node and then download the tool from this page on WindowsAzure.com. For more information about how to use the command line tool, see the documentation page.
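    As a rough illustration of how little is involved: the auto-generated deployment script is a single command for the Windows Azure command-line tool. If you’d rather drive it from Node.js (the same runtime the tool itself runs on), a small hypothetical wrapper like the one below works; the deploy.txt file name and the wrapper itself are illustrative assumptions, and you should paste in the exact script VM Depot generates for your image rather than composing the flags by hand.

    ```typescript
    // Hypothetical wrapper: run the deployment command that VM Depot
    // auto-generated for your chosen image. Save the generated one-liner
    // into deploy.txt first; this code does not invent any CLI flags.
    import { readFileSync } from "fs";
    import { execSync } from "child_process";

    const command = readFileSync("deploy.txt", "utf8").trim();
    console.log("Running:", command);

    // Requires the Windows Azure command-line tool ("azure") on your PATH.
    execSync(command, { stdio: "inherit" });
    ```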

    Publishing an Image on VM Depot

    To publish an image on VM Depot, you’ll need to follow these steps:

    Step 1: create a custom virtual machine. There are two approaches you can take to creating your custom virtual machine. The quickest and simplest is to create a Linux virtual machine from the image gallery in Windows Azure and then customize it by installing or configuring open source software. Alternatively, if you’d like to build an image from scratch, you can create and upload a virtual hard disk that contains the Linux operating system and then customize the image as desired.

    Regardless of which approach you use to create your image, you’ll then need to save it to a public storage container in Windows Azure as a .VHD file. The easiest way to do this is to deploy your image to Azure as a virtual machine and then capture it to a .VHD file. Note that you’ll need to make the storage container for your .VHD file public (containers are private by default) in order to publish your image; you can do this through the Windows Azure management portal or with a tool such as CloudXplorer.

    Step 2: publish your image on VM Depot. Once your image is stored in a public storage container, the final step is to use the Publish option on the VM Depot portal to publish your image. If it’s your first time using VM Depot, you’ll need to use your Windows Live™ ID, Yahoo! ID, or Google ID to sign in and create a profile.

    See the Learn More section for more detailed information about the steps involved in publishing and deploying images with VM Depot.

    As you can see, VM Depot is a simple and powerful tool for efficiently deploying OSS-based virtual machines from images created by others, or for sharing your own creations with the developer community. Try it out, and let us know your thoughts on how we can make VM Depot even more useful!

    Doug Mahugh
    Lead Technical Evangelist
    Microsoft Open Technologies, Inc.

    Eduard Koller
    Senior Program Manager
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Using Drupal on Windows Azure to create an OData repository


    OData is an easy-to-use protocol that provides access to any data exposed as an OData service. Microsoft Open Technologies, Inc. is collaborating with several other organizations and individuals on development of the OData standard in the OASIS OData Technical Committee, and the growing OData ecosystem is enabling a variety of new scenarios to deliver open data for the open web via standardized URI query syntax and semantics. To learn more about OData, including the ecosystem, developer tools, and how you can get involved, see this blog post.

    In this post I’ll take you through the steps to set up Drupal on Windows Azure as an OData provider.  As you’ll see, this is a great way to get started using both Drupal and OData, as there is no coding required to set this up. 

    It also won’t cost you any money: currently you can sign up for a 90-day free trial of Windows Azure and install a free Web development tool (WebMatrix) and a free source control tool (Git) on your local machine to make this happen, and that’s all that’s required from a client point of view. We’ll also be using a free tier for the Drupal instance, so you may not need to pay even after the 90-day trial, depending on your needs for bandwidth or storage.

    So let’s get started!

    Set up a Drupal instance on Windows Azure using the Web Gallery. 

    The Windows Azure team has made setting up a Drupal instance incredibly easy and quick – in a few clicks and a few minutes your site will be up and running. Once you’ve signed up for Windows Azure and have your account set up, click on New > Quick Create > From Gallery, as shown here:

     

    [Screenshot: Windows Azure portal, New > Quick Create > From Gallery]

    Then click on the Drupal 7 instance, as shown here. The Web Gallery is where you’ll find images of the latest Web applications, preconfigured and ready to set up. Currently the gallery offers the Acquia distribution of Drupal 7:

    [Screenshot: Web Gallery, Drupal 7 (Acquia) image]

    Enter some basic information about your site, including the URL (.azurewebsites.net will be appended to what you choose), the type of database you want to work with (currently SQL Server and MySQL are supported for Drupal), and the region your app instance should be deployed in:

    [Screenshot: site settings (URL, database type, region)]

    Next, add a database name, username, and password for the database, and the region the database should be deployed in:

    [Screenshot: database settings]

    That’s it!  In a few minutes your Windows Azure Web Site dashboard will appear with options for monitoring and working with your new Drupal instance:

    [Screenshot: Windows Azure Web Site dashboard]

    Setting up the OData provider

    So far we have a Drupal instance but it’s not an OData provider yet.  To get Drupal set up as an OData provider, we’re going to have to add a few folders and files, and configure some Drupal modules. 

    Because good cloud systems protect your data by backing it up and providing seamless, invisible redundancy, working with files in the cloud can be tricky. But the Windows Azure team provides a free, easy-to-use tool for working with files on Windows Azure, called WebMatrix. WebMatrix lets you easily download your files, work with them locally, test your work, and publish changes back up to your site when you’re ready. It’s also a great development tool that supports most modern Web application development languages.

    Once you’ve downloaded and installed WebMatrix on your local machine, simply click on the WebMatrix icon at the bottom right under the dashboard, as shown in the image above. WebMatrix will confirm that you want to make a local copy of your Windows Azure Web Site and download the site:

    [Screenshot: WebMatrix download confirmation]

    WebMatrix will detect the type of Web site you’re working with, set up a local instance database, and start downloading the Web site to the instance:

    [Screenshot: WebMatrix downloading the site]

    When WebMatrix is done downloading your site, you’ll see a dashboard with options for working with your local site. For this example we’re only going to be working with files locally, so click the Files icon shown here:

    [Screenshot: WebMatrix Files workspace]

    We need to add some libraries and modules to our Drupal instance to turn the standard Windows Azure configuration of Drupal 7 into an OData provider. There are three sets of files to download and place in specific locations in our instance. You’ll need Git, or your favorite Git-compatible tool, installed on your local machine to retrieve some of these files:

    1) Download the OData Producer Library for PHP V1.2 to your local machine from https://github.com/MSOpenTech/odataphpprod/
    Under the sites > all folder, create a folder called libraries > odata (create the libraries folder if it doesn’t exist) and copy in the downloaded files.

    2) Download version 2 of the Drupal Libraries API to your local machine from http://drupal.org/project/libraries
    Under the sites > all folder, create a folder called modules > libraries (yes, there are two libraries directories in different places) and copy in the downloaded files.

    3) Download r2integrated's OData Server files  to your local machine from //git.drupal.org/sandbox/r2integrated/1561302.git
    Under the sites > all folder, create a folder called modules > odata_server and copy in the downloaded files.

     

    Here’s what the directories should look like when you’re done:

    sites/
      all/
        libraries/
          odata/           (OData Producer Library for PHP)
        modules/
          libraries/       (Drupal Libraries API)
          odata_server/    (OData Server module)

    Next, click on the Publish button to upload the new files to your Windows Azure Web Site via WebMatrix. After a few minutes your files should be uploaded and ready to use.

    OData Configuration in Drupal on Windows Azure

    Next, we will configure the files we just uploaded to provide data to OData clients. 

    From the top menu, go to the Drupal Modules page and navigate down to the “Other” section.

    Enable Libraries and OData Server, then click Save configuration. The modules should look like this when you’re done:

    [Screenshot: Drupal modules list with Libraries and OData Server enabled]

    Next, go to Site Configuration from the top menu and navigate down to the Development section. Under Development, click on OData Settings.

    Under Node, enable page and/or article (click to expose them to OData clients), then select the fields from each node type you want to return in an OData query. You can also expose Comments, Files, Taxonomy Terms, Taxonomy Vocabularies, and Users. All are off by default and have to be enabled to expose properties, fields, and references through the OData server:

    [Screenshot: OData settings, exposing node types and fields]

    Click Save Configuration and you’re ready to start using your Windows Azure Drupal Web site as an OData provider! 
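    To sanity-check the new endpoint, you can issue a plain HTTP GET using standard OData system query options such as $top, $filter, and $format. Here is a minimal sketch in Node.js; the site URL, the /odata.svc service path, and the Node entity set name are all assumptions for illustration, so adjust them to whatever your OData Server module actually exposes.

    ```typescript
    // Query the Drupal OData endpoint with standard OData query options.
    // The base URL, service path, and entity set name are placeholders.
    const base = "http://mysite.azurewebsites.net/odata.svc";

    // $top and $format are standard OData system query options.
    const url = `${base}/Node?$top=5&$format=json`;

    fetch(url)
      .then((res) => {
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return res.json();
      })
      .then((data) => console.log(JSON.stringify(data, null, 2)))
      .catch((err) => console.error("OData request failed:", err));
    ```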

    One last thing: unfortunately, the default data in Drupal consists of exactly one page, so query results are not too impressive. You’ll probably want to add some data to make the site useful as an OData provider. The best way to do that is via the Drupal Feeds module.

    Conclusion

    As promised at the beginning of this post, we’ve now created an OData provider based on Drupal to deliver open data for the open Web. From here any OData consumer can consume the feed without knowing anything about the underlying data source, or even that it’s Drupal on the back end; consumers simply see it as an OData service provider. Of course there’s more effort involved in importing your data, organizing it, and building OData clients to consume it, but this is a great start with minimal effort using existing, free tools.
  • Interoperability @ Microsoft

    Need to discover, access, analyze and visualize big and broad data? Try F#.


    Microsoft Research just released a new iteration of Try F#, a set of tools designed to make it easy for anyone – not just developers – to learn F# and take advantage of its big data, cross-platform capabilities.

    F# is the open-source, cross-platform programming language invented by Don Syme and his team at Microsoft Research to help reduce the time-to-deployment for analytical software components in the modern enterprise.

    Big data is definitely big these days, and we are excited about this new iteration of Try F#. Regardless of your favorite language, and whether you’re on a Mac, a Windows PC, Linux, or Android, if you need to deal with complex problems, you will want to take a look at F#!

    Kerry Godes from Microsoft’s Openness Initiative connected with Evelyne Viegas, Director of Semantic Computing at Microsoft Research, to find out more about how you can use “Try F# to seamlessly discover, access, analyze and visualize big and broad data.” For the complete interview, go to the Openness blog or check out www.tryfsharp.org to get started “writing simple code for complex problems”.

      

  • Interoperability @ Microsoft

    Congratulations on the latest development for OVF!


    Interoperability in the server and cloud space has found further evidence with the release announcement of the Open Virtualization Format (OVF) 2.0 standard. We congratulate the DMTF on this new milestone, further proof that customers and industry partners care deeply about interoperability, and we are proud to have participated in advancing this initiative.

    Browsing the OVF 2.0 specification, it is evident that the industry is aligning around common scenarios, and it is a pleasant surprise to see how some of those emerging scenarios mirror the thinking that has been driving our own direction for System Center.

    Microsoft has collaborated closely with the Distributed Management Task Force (DMTF) and our industry partners to ensure OVF provides improved capabilities for virtualization and cloud interoperability scenarios, to the benefit of customers.

    OVF 2.0 and DMTF are making progress on key emerging patterns for portability of virtual machines and systems, and it’s nice to see OVF being driven by the very same emerging use cases we have been analyzing with our System Center VMM customers, such as shared Hyper-V host clusters, encryption for credential management, and virtual machine boot order management (not to mention network virtualization, placement groups, and multi-hypervisor support).

    Portability in the cloud and interoperability of virtualization technologies across platforms using Linux and Windows virtual machines continue to be important to Microsoft and to our customers, and are increasingly becoming key industry trends. We continue to assess and improve interoperability for core scenarios using System Center 2012 VMM. We also believe moving in this direction will provide great benefit to our customer and partner ecosystem, as well as bring real-world experience to our participation in the DMTF’s work on OVF.

    See the overview for further details and other enhancements in System Center 2012 VMM.

    Mark Gayler
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.
