• Interoperability @ Microsoft

    Using LucidWorks on Windows Azure (Part 3 of a multi-part MS Open Tech series)

    • 0 Comments

    LucidWorks Search on Windows Azure delivers a high-performance search service based on Apache Lucene/Solr open source indexing and search technology. This service enables quick and easy provisioning of Lucene/Solr search functionality on Windows Azure without any need to manage and operate Lucene/Solr servers, and it supports pre-built connectors for various types of enterprise data, structured data, unstructured data and web sites.

In June we shared an overview of the LucidWorks Search service for Windows Azure, in the first post in this series we provided more detail on features and benefits, and in December we covered the main features of LucidWorks Search. Today, Microsoft Open Technologies, Inc. is happy to share a few new data sources that are available in LucidWorks Search on Windows Azure, along with a new, easier way to sign up for the LucidWorks Search service on Windows Azure.

    A new option for signing up

LucidWorks Search is still listed under applications in the Windows Azure Marketplace, and from there you can create an account via the LucidWorks Account Signup Page. But getting started is now even easier: we’ve integrated LucidWorks’ service with the Windows Azure Store, so you can now set up an instance on Windows Azure by clicking on the Store option in the Windows Azure Dashboard:

[Screenshot: the Store option in the Windows Azure Dashboard]

Next, you’ll be prompted to choose an add-on from a list. Select LucidWorks Search. The next screen invites you to personalize your new add-on:

[Screenshot: personalizing the new add-on]

At this point, all you have to do is enter a name for your LucidWorks Search add-on and select the region you want your instance to be located in.

Right now the only option for signup via the Windows Azure Store is the Micro level, which is great for getting started. Should you exceed the limits of the Micro level, you can also sign up for other enterprise-level accounts from the LucidWorks Dashboard using the LucidWorks account that is automatically created when you sign up via the Windows Azure Store.

    LucidWorks support for Windows Azure SQL Databases, Windows Azure Tables and Windows Azure Blobs

Along with the Windows Azure Store integration, we also released LucidWorks Search support for Windows Azure SQL Databases, Windows Azure Blobs, and Windows Azure Table storage. All are available via the LucidWorks Search Dashboard under Indexing > Data Sources:

[Screenshot: the Data Sources list in the LucidWorks Search Dashboard]

Windows Azure Blobs provide a way to store large amounts of unstructured binary data, such as video, audio, and images, including streaming content. There are two types of blob storage available: block blobs and page blobs. Block blobs are optimized for streaming and are referenced by unique block IDs. Page blobs are optimized for random access and are composed of pages referenced by offsets from the beginning of the blob. More information on Windows Azure blobs can be found here.
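
To make the offset-based addressing concrete, here is a small sketch (the helper below is our own, not part of any Azure SDK; it assumes the 512-byte page granularity that page blobs use):

```javascript
// Sketch: page blobs are addressed by byte offset, and reads/writes are
// aligned to fixed-size 512-byte pages. Given an arbitrary byte range,
// this computes the aligned page range that covers it.
var PAGE_SIZE = 512;

function pageRange(offset, length) {
  var start = Math.floor(offset / PAGE_SIZE) * PAGE_SIZE;
  var end = Math.ceil((offset + length) / PAGE_SIZE) * PAGE_SIZE - 1;
  return { start: start, end: end };
}

var range = pageRange(1000, 100); // covers bytes 1000..1099
```

The aligned range always starts and ends on a page boundary, which is why random access into the middle of a page blob stays cheap.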

Windows Azure Table storage is a collection of non-relational structured data. Unlike tables in a relational database, there is no schema that enforces a particular set of values on all the rows within a table. Windows Azure Storage tables are more like rows within a spreadsheet application such as Excel than rows within a database such as SQL Server. Each row can contain a different number of columns, of different data types, than the other rows in the same table. You can find more information on Windows Azure Table storage here.
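
The schema-free behavior described above can be sketched with plain objects (illustrative data; PartitionKey and RowKey are the properties the Table service requires on every entity):

```javascript
// Two entities in the same table with different column sets; Table storage
// imposes no shared schema beyond the PartitionKey/RowKey pair.
var entities = [
  { PartitionKey: "books", RowKey: "1", title: "Moby-Dick", pages: 635 },
  { PartitionKey: "books", RowKey: "2", title: "Dubliners", inPrint: true }
];

// Unlike rows in a SQL Server table, each entity exposes its own columns:
var columnSets = entities.map(function (e) {
  return Object.keys(e).sort().join(",");
});
```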

Windows Azure SQL Databases are similar to an on-premises instance of SQL Server, but not identical. Windows Azure SQL Databases expose a tabular data stream (TDS) interface for Transact-SQL-based database access, so they can be used in much the same way you use an on-premises SQL Server.

    However, there are some very important differences for administration. Windows Azure SQL Database abstracts the logical administration from the physical administration. That means that you continue to administer databases, logins, users, and roles, but Windows Azure manages the physical hardware and networking to ensure enterprise-class availability, scalability, security, and self-healing. More information on Windows Azure SQL Databases is available here.

    To set up a new Azure SQL Database as a Data source, select Database as your data source option under Indexing > Data Sources.

[Screenshot: selecting Database as the data source]

There are a few tips you need to know when setting up a Windows Azure SQL Database as a data source for LucidWorks. First, copy the URL for your database from the JDBC connection strings in your Windows Azure Dashboard, using this format:

    jdbc:sqlserver://<WindowsAzureSQLDBURL>:1433/<databaseName>

Next, select the SQL Server JDBC driver as the driver for your Windows Azure SQL Database. You also have to include at least one SQL SELECT statement that includes an id column in the result. The id column is used as the Document identifier in LucidWorks Search, and the remaining columns of each row returned by the SELECT statement become fields in that Document. Have a look at the first post in this series for more information on how LucidWorks works with Documents, Fields, and Collections to return search results.
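
To make the id mapping concrete, here is an illustrative sketch (rowToDocument is our own helper for explanation, not part of LucidWorks):

```javascript
// How a SQL result row with an "id" column maps to a search Document:
// the id becomes the Document identifier, every other column a Field.
function rowToDocument(row) {
  if (!("id" in row)) {
    throw new Error("SELECT must return an id column");
  }
  var doc = { id: row.id, fields: {} };
  Object.keys(row).forEach(function (col) {
    if (col !== "id") { doc.fields[col] = row[col]; }
  });
  return doc;
}

var doc = rowToDocument({ id: 42, title: "Annual report", author: "Contoso" });
```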

When done, your data source configuration should look something like the sample here:

[Screenshot: sample data source configuration]

Next, there are two additional options for setting up SELECT statements to work with your database. The Delta SQL Query uses the primary key to compare new records in the database with existing Documents in the LucidWorks search index, and indexes only the new or updated rows. Nested Queries let you set up one-to-many relationships in the source Windows Azure SQL Database to include multiple rows of data in a single LucidWorks index Document, based on the primary key. Full instructions for setting up these queries, as well as other options, can be found in the LucidWorks help documentation here.
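
What a delta pass accomplishes can be sketched as follows (a simplification: LucidWorks drives this with the Delta SQL Query you configure; here a version field stands in for the change detection):

```javascript
// Sketch of delta indexing: given rows already in the index and rows now
// in the database, keep only rows that are new or changed, matched on
// the primary key "id".
function deltaRows(indexed, current) {
  var known = {};
  indexed.forEach(function (r) { known[r.id] = r.version; });
  return current.filter(function (r) {
    return !(r.id in known) || known[r.id] !== r.version;
  });
}

var toIndex = deltaRows(
  [{ id: 1, version: 1 }, { id: 2, version: 1 }],
  [{ id: 1, version: 1 }, { id: 2, version: 2 }, { id: 3, version: 1 }]
);
// Row 1 is unchanged; row 2 was updated and row 3 is new.
```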

    Summary

    These are just the latest new features to help you easily and quickly set up LucidWorks Search service on Windows Azure, and there are more on the way. Get started with your own LucidWorks Search solution by signing up via the Windows Azure Store, and let us know what you think!

  • Interoperability @ Microsoft

    Update on Standardization of Next Version of HTTP/2.0

    • 0 Comments

    From:
    Gabriel Montenegro
    Principal Software Development Engineer, Microsoft Corporation

    Andrei Popov
    Senior Software Development Engineer, Microsoft Corporation

    Brian Raymor
    Senior Program Manager, Microsoft Open Technologies, Inc.

    Rob Trace
    Senior Program Manager Lead, Microsoft Corporation

    We wanted to give our readers an update on the standardization of the next version of the Hypertext Transfer Protocol, HTTP/2.0, based on our recent industry standards meeting.

    Representatives from Microsoft Corporation and Microsoft Open Technologies, Inc., recently attended the Internet Engineering Task Force 86 meetings in Orlando to make progress on the first in a series of experimental implementations of HTTP/2.0 (see our earlier blog for details).

    Much of this HTTPBIS Working Group meeting focused on presentations on header compression, which is one of the big open issues that must be resolved for the first experimental implementation of HTTP/2.0.

Martin Thomson (HTTP/2.0 co-editor) collected and presented a number of pending specification issues for discussion and rough consensus, described as “little things that I would like to change in HTTP/2.0 that I don’t feel I have the authority to change without working group feedback.”

    Gabriel Montenegro shared a presentation on Known startup state for a simpler and more robust HTTP 2.0 that reduces the complexity of HTTP/2.0 implementations by ensuring that the protocol starts in a known state for both the client and server.

    At the Transport Layer Security Working Group (TLS WG) meeting, this group reviewed proposals for application protocol negotiation requested by HTTPBIS for HTTP/2.0 negotiation. Andrei Popov presented the Application Layer Protocol Negotiation Extension (ALPN) – one of the proposals under consideration, co-authored with Stephan Friedl (Cisco). After much discussion and a straw poll, there was rough consensus to adopt ALPN. Eric Rescorla (TLS co-chair) sent a Confirming Consensus for ALPN message to the TLS mailing list to encourage additional discussion from IETF members who had not attended the meeting.

It was exciting to see the progress and tone of the discussions, which you can see reflected in the meeting transcriptions.

    Mark Nottingham (HTTPBIS chair) also suggested that HTTPBIS continue meeting on a frequent schedule to make progress on the first HTTP/2.0 experimental implementation with future interim meetings proposed before and after IETF 87 in Berlin:

    • June 12 or 13-14 in San Francisco Bay Area
    • IETF 87, July 28 - August 2 in Berlin
    • Early August in Northern Germany

    Representatives from Microsoft Corporation and Microsoft Open Technologies, Inc. plan on participating in these meetings and encourage the community to also attend and become more involved in defining the next generation of HTTP at the IETF.

  • Interoperability @ Microsoft

    jQuery Adds Support for Windows Store Apps, Creates New Opportunities for JavaScript Open Source Developers

    • 3 Comments

The popular open source JavaScript Web framework jQuery is adding full support for Windows Store applications in the upcoming v2.0 release, thanks to recent contributions from appendTo with technical support from Microsoft Open Technologies, Inc. (MS Open Tech). Considering the opportunity Windows Store apps represent for developers, this is great news for JavaScript developers, who can now develop apps for Windows 8 using what they already know, along with their existing JavaScript code, hopefully leading to a new wave of jQuery-based Windows Store applications.

The Windows 8 application platform introduced support for HTML5 and JavaScript development, leveraging the same standards-based HTML5 and JavaScript engines as Internet Explorer. As developers would expect, some popular open source JavaScript frameworks can already be used in the context of a Windows Store application, such as Backbone.js, Knockout.js, and YUI. You can learn more about how to build a Windows 8 app with YUI in this YUI blog post from Jeff Burtoft, HTML5 evangelist for Microsoft.

    Windows 8 provides access to all the WinRT APIs within the HTML5 development environment. Developers should be aware that there are some additional security features to consider when developing Windows 8 applications or HTML5-based cross platform applications for Windows. You can learn more about these features on MSDN.

    jQuery paves the way for open source JavaScript frameworks use in Windows Store applications

According to the builtwith.com site, jQuery is the most widely used JavaScript framework on the Web. This makes it even more exciting that jQuery 2.0 will fully support Windows Store applications, as this will benefit developers who already use jQuery and also demonstrates how other JavaScript frameworks can be integrated into the Windows 8 application model.

“The jQuery team is excited about the new environments where jQuery 2.0 can be used. HTML and JavaScript developers want to take their jQuery knowledge with them to streamline the development process wherever they work. jQuery 2.0 gives them the ability to do that in Windows 8 Store applications. We appreciate the help from appendTo for both its patches and testing of jQuery 2.0, and MS Open Tech for its technical support,” said Dave Methvin, president of the jQuery Foundation.

    appendTo, long-time JavaScript and Web development experts and jQuery contributors, extended its expertise to the Windows 8 application development, working with the jQuery community with technical support from MS Open Tech to enable jQuery support for the Windows 8 application model.

While jQuery meets the language criterion for Windows Store applications, Windows 8 exposes all the WinRT APIs within the HTML5 development environment, which comes with a new security model under which some code and common practices in jQuery were flagged as unsafe in the context of a Windows Store application. “appendTo reviewed and re-authored portions of jQuery core to bring it into alignment with the Windows security model, as well as identified key areas where alternative patterns would need to be substituted for commonly used conventions,” said Jonathan Sampson, director of support for appendTo.

    appendTo submitted code directly to the jQuery Core project, which will integrate this support, and the alternative patterns mentioned by Sampson were submitted to the net.tuts+ site to help jQuery developers understand the Windows 8 security model and easily build Windows 8 applications using jQuery. You can read appendTo’s blog post with more details on this work.

    Although these patterns apply to the jQuery framework, most of them transfer to all JavaScript frameworks and will definitely help you if you are planning to use your favorite open source JavaScript framework to build Windows 8 applications.
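
As a flavor of the kind of substitution involved (a generic sketch, not appendTo’s actual patches; a plain object stands in for a DOM element so the snippet runs anywhere):

```javascript
// General pattern: treat dynamic strings as text, never as markup.
// Assigning textContent (rather than innerHTML) stores the value
// verbatim, so markup like "<script>" stays inert text, which is the
// direction the Windows 8 security model pushes dynamic content handling.
function setDynamicContent(el, value) {
  el.textContent = String(value);
  return el;
}

var el = setDynamicContent({ textContent: "" }, "<script>alert(1)</script>");
```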

    Mobile cross platform development frameworks and tools

HTML5 is now supported on all modern mobile platforms, and open source tools such as Apache Cordova (aka PhoneGap) allow developers to publish applications built with HTML5 and JavaScript to multiple platforms with minimal effort and maximum code reuse. As in all HTML5/JavaScript development, developers love being able to use their favorite frameworks to help with their MVC model, database, UI, or simply their JavaScript code structure.

    Developers can already use some of these mobile cross-platform development frameworks and tools on Microsoft Devices as we mentioned in a previous post about Windows Phone 8 support added to popular open source tools and frameworks. MS Open Tech continuously engages with open source communities (contributing code, providing technical support, getting developers early access to future versions of the platforms, helping with testing devices, etc.), and we’ve found that developers are eager to publish their HTML5 apps to Windows 8 and Windows Phone 8 Stores.

"At HP IT, we use Enyo to build apps for conference attendees. Our Enyo-based conference apps deliver a first-class user experience on Windows 8 and Windows Phone 8 — not to mention iOS, Android and a host of other platforms. The ability to serve users across platforms and device types with a single app is a huge win for us." — Sharad Mathur, Sr. Director, Software Architecture & Business Intelligence, Printing & Personal Systems, HP IT

    Here are some recent notable developments in HTML5 mobile cross platform development:

If you are an HTML5 and JavaScript developer, you should definitely consider building Windows 8 applications leveraging not only your development experience and skills but also your existing JavaScript code and libraries. Take a look at the new jQuery patterns proposed by appendTo, and start coding for Windows — who knows, you might be sitting on the next Cut the Rope!

  • Interoperability @ Microsoft

Another milestone for the open source cloud programming model ActorFx: more stability, async actor communication, and more

    • 0 Comments

Our team is making steady progress on building a solid framework for ActorFx, our new open source computing model for the cloud, and we’re happy to report that we just released version 0.40 of the ActorFx project on CodePlex.

    ActorFx provides a non-prescriptive, language-independent model of dynamic distributed objects for highly available data structures and other logical entities via a standardized framework and infrastructure.

    Since Microsoft Open Technologies, Inc., announced that ActorFx joined the MS Open Tech Hub in December, we’ve been hard at work at adding new features in regular releases.

For this release we focused on adding some interesting features that enrich the framework and make it suitable for a wider range of scenarios.

    • We added a DictionaryActor, accompanied by a C# CloudDictionary<TKey,TValue> client. This is just another example of rich distributed data structures that can leverage the ActorFx infrastructure for high availability.
• We added support for asynchronous actor-to-actor method calls (details can be found in the "ActorFx Basics" doc on the CodePlex project).
    • We added support for actor methods written in languages other than C#. For now, we support actor methods written in JavaScript. Documentation is included in the "ActorFx Basics" doc.
    • We also spent some time to improve stability. We've added sensible handling for "transient" errors from the Actor Runtime (like NoWriteQuorum).
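
The asynchronous actor-to-actor calling style can be sketched generically (ActorFx itself is a .NET framework; this mailbox model only illustrates the idea, and all names below are invented for the example):

```javascript
// Minimal mailbox-style actor: callers post a message and receive the
// result via a reply callback instead of blocking; drain() plays the
// role of the actor's message loop.
function makeActor(handlers) {
  var mailbox = [];
  return {
    post: function (method, args, reply) {
      mailbox.push({ method: method, args: args, reply: reply });
    },
    drain: function () {
      while (mailbox.length > 0) {
        var msg = mailbox.shift();
        msg.reply(handlers[msg.method].apply(null, msg.args));
      }
    }
  };
}

var counter = makeActor({ add: function (a, b) { return a + b; } });
var result;
counter.post("add", [2, 3], function (sum) { result = sum; });
counter.drain(); // the "actor" processes its mailbox; result is now 5
```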

We are already working on the v0.50 release. Let us know if there are any features you would like added to the ActorFx project and, as always, comments and feedback are welcome.

    The ActorFx team:

    Claudio Caldato, Principal Program Manager Lead, Microsoft Open Technologies, Inc.

    Brian Grunkemeyer, Senior Software Engineer, Microsoft Open Technologies Hub

    Joe Hoag, Senior Software Engineer, Microsoft Open Technologies Hub

  • Interoperability @ Microsoft

    New CU-RTC-Web HTML5Labs Prototype from MS Open Tech Demonstrates Roaming between Cellular and Wi-Fi Connections

    • 3 Comments

    Demonstrating a faster mobility scenario that would be more difficult with the current WebRTC draft

    Adalberto Foresti
    Principal Program Manager, Microsoft Open Technologies, Inc.

Since we submitted the initial CU-RTC-Web proposal to the W3C WebRTC Working Group in August 2012, vibrant discussions over the proposed RTCWeb protocol draft and WebRTC API specifications have continued both online and at face-to-face W3C and IETF Working Group meetings. The amount of energy in the industry around this subject is remarkable, though the road to converging on a quality, implementable spec that properly addresses real-world use cases remains long.

Last month, our prototype of CU-RTC-Web demonstrated a real-world interoperability scenario: voice chatting between Chrome on a Mac and IE10 on Windows via the API.

Today, Microsoft Open Technologies, Inc. (MS Open Tech) is publishing an updated prototype implementation of CU-RTC-Web on HTML5Labs that demonstrates another important scenario, roaming between two different connections (e.g. Wi-Fi and 3G, or Wi-Fi and Ethernet), with negligible impact on the user experience.

The simple, flexible, expressive APIs underlying the CU-RTC-Web architecture allowed us to implement this important scenario just by writing the appropriate JavaScript code, without introducing any changes to the spec, because CU-RTC-Web is a lower-level API than the current proposed WebRTC API draft.

By comparison, the current high-level proposed WebRTC API draft would not allow JavaScript developers to implement this scenario: it would require modifications made ‘under the hood’ at the platform level, by developers changing the browser capability itself. There is a proposal for addressing mobility cases in the IETF, but standardization of these mechanisms and subsequent implementation in browsers takes time.

This example also illustrates that we should not assume everything that will ever be done with WebRTC is already known at the time the standard is developed. It is tempting to develop an opaque, high-level API that is optimized for some well-understood scenarios, but that requires development of new, probably non-interoperable extensions to cover new scenarios - or creating yet another standard to enable such applications. We believe that web developers would prefer to be empowered by a lower-level, general API that truly enables evolving, interoperable scenarios from day one. Our earlier CU-RTC-Web blog described critical requirements that a successful, widely adoptable WebRTC browser API will need to meet, particularly in the area of network transport. We mentioned how the RealtimeTransport class connects a browser with a peer, providing a secured, low-latency path across the network.

Rather than using an opaque and indecipherable blob of SDP (Session Description Protocol, RFC 4566) text, CU-RTC-Web allows applications to choose how media is described to suit application needs. The relationship between streams of media and the network layer they traverse is not some arcane combination of SDP m= sections and a= lines. Applications build a real-time transport and attach media to that transport.

If you want to learn more about the challenges that SDP brings, some very insightful comments have recently been shared by Robin Raymond of Open Peer on the RTCWEB IETF mailing list. See Robin’s well-crafted blog post on the issues, SDP the WebRTC Boat Anchor. As a community, it is important we continue to share these views, as inaction would constitute a self-defeating choice for which the industry would pay a high price for years to come.

    As with our previous release, we hope that publishing this latest working prototype in HTML5Labs provides guidance in the following areas:

    • Clarify the CU-RTC-Web proposal with interoperable working code so others can understand exactly how the API could be used to solve real-world use cases.
    • Encourage others to show working example code that shows exactly how their proposals could be used by developers to solve use cases in an interoperable way.
    • Seek developer feedback on how the CU-RTC-Web addresses interoperability challenges in Real Time Communications.
    • Provide a source of ideas for how to resolve open issues with the current draft API as the CU-RTC-Web proposal is cleaner and simpler.

    The prototype can be downloaded from HTML5Labs. We look forward to receiving your feedback: please comment on this post or send us a message once you have played with the API, and stay tuned for even more to come.

We are proud to be part of the process and will continue to collaborate with the working group to close the gaps in the specification in the coming months. We remain persuaded that the general principles that governed CU-RTC-Web are valid, and that a lower-level API such as CU-RTC-Web is preferable to the higher-level API in the current proposed WebRTC API draft. This would result in the most agile and robust standard, one that will empower web developers to create innovative experiences for years and decades to come.

  • Interoperability @ Microsoft

    MS Open Tech develops the open source Android SDK for Windows Azure Mobile Services

    • 0 Comments

Furthering the goal of bridging Microsoft and non-Microsoft technologies, Microsoft Open Technologies, Inc. developed the Android SDK for Windows Azure Mobile Services, which is being announced today by Scott Guthrie on his blog.

Windows Azure Mobile Services was created to make it easier for developers to build engaging and dynamic mobile apps that scale. By using Mobile Services, developers can not only connect their applications to a scalable and secure backend hosted in Windows Azure, but also store data in the cloud, authenticate users, and send push notifications.

    The Android SDK lets you connect your favorite Android phone or tablet (Android 2.2+) to a cloud backend and deliver push notifications via Google Cloud Messaging. It also allows you to authenticate your users via their Google, Facebook, Twitter, or Microsoft credentials. To enable this, the MS Open Tech engineering team delivered the following key features:

    • Data API: this API simplifies the communication between Android apps and the tables exposed through Windows Azure Mobile Services using a fluent API for queries and automatic JSON serialization/deserialization.
    • Identity API: this API allows leveraging Microsoft Account, Facebook, Twitter or Google authentication in an Android app.
    • Service Filters: these components allow the developer to intercept and customize the requests between the Mobile client and Windows Azure Mobile Services, providing a filter pipeline to handle the generated requests and responses.

    The SDK is available on GitHub under the Apache 2.0 license and community contributions are very welcome.

You can learn more about the new SDK by reading Scott’s blog and the getting started tutorial, and come back soon, as we are working on more samples, demos, and tutorials.


  • Interoperability @ Microsoft

    MS Open Tech Updates HTML5Labs HTTP/2.0 Prototype Delivering Internet Security in Open Source Encryption Libraries

    • 2 Comments

    Download prototype that provides support in OpenSSL for Application Layer Protocol Negotiation

    Adalberto Foresti
    Principal Program Manager,
    Microsoft Open Technologies, Inc.

    As part of the HTTP/2.0 effort, the industry is collaborating in the IETF Transport Layer Security Working Group (TLS WG) towards a safer and simpler Internet communication security approach. The conversation within the TLS WG on the best way to reinforce Internet communication security continues at a fast pace.

    At Microsoft Open Technologies, Inc. we have been participating in this industry collaboration and are now releasing a refreshed open source HTTP/2.0 prototype on HTML5Labs.com that introduces support in the OpenSSL open source encryption library for ALPN (Application Layer Protocol Negotiation).

Earlier in February, we published on HTML5Labs an updated version of our HTTP/2.0 prototype that introduced support for ALPN. Shortly thereafter, on Thursday 2/21, Stephan Friedl and Andrei Popov proposed an update to the ALPN spec draft that refines the protocol in a couple of important aspects:

    - “Application Layer Protocol Negotiation Extension” now defines ProtocolNameList and ProtocolName as variable-length arrays, as typically done in TLS. This increases payload size by 2 bytes, but allows the use of the normal TLS parsers.

    - “Protocol Selection” defines a new fatal alert no_application_protocol, to be used with ALPN extension only, instead of using a generic handshake_failure alert. This is done to help distinguish application protocol negotiation issues from other handshake failures.
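
To make the variable-length-array layout concrete, here is an illustrative encoder for a ProtocolNameList (our own sketch of the draft’s structure: a 2-byte overall list length, then each name prefixed by its own 1-byte length; the protocol identifiers are examples only):

```javascript
// Encode an ALPN ProtocolNameList as variable-length arrays:
// a 2-byte big-endian list length, then each protocol name prefixed
// with a 1-byte length. The 2-byte list header is the payload overhead
// the draft update mentions.
function encodeProtocolNameList(names) {
  var bodyParts = names.map(function (name) {
    var bytes = Buffer.from(name, "ascii");
    return Buffer.concat([Buffer.from([bytes.length]), bytes]);
  });
  var body = Buffer.concat(bodyParts);
  var header = Buffer.alloc(2);
  header.writeUInt16BE(body.length, 0);
  return Buffer.concat([header, body]);
}

var list = encodeProtocolNameList(["http/1.1", "spdy/3"]);
```

Because every field is length-prefixed in the usual TLS style, a standard TLS parser can walk the list without any protocol-specific knowledge.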

As we mentioned, the new prototype on HTML5Labs also leverages OpenSSL on Apache as a backend. We are making the associated patch available as open source to allow a hands-on, side-by-side comparison of TLS with ALPN builds against the alternative based on NPN. This should allow interested developers to verify the benefits of ALPN and its compliance with the established TLS design principles that we called out in our earlier prototype.

As always, we encourage you to try the code out and let us know your feedback. Go ahead and download the MS Open Tech HTTP/2.0 prototype using ALPN from HTML5 Labs!

     

  • Interoperability @ Microsoft

Apache Qpid Proton AMQP libraries now available for Windows

    • 0 Comments

Back in November, Microsoft Open Technologies, Inc. announced that Advanced Message Queuing Protocol (AMQP) 1.0 was approved as an OASIS Standard. AMQP 1.0 enables interoperability using wire-level messaging between compliant clients and brokers. Applications can achieve full-fidelity message exchange between components built in multiple languages and frameworks and running on different operating systems.

    Today we’re happy to share the news that the Apache Qpid Proton C AMQP library has been updated to support Windows. Proton C also includes bindings for several interpreted languages including PHP, Python, Ruby and Perl, all of which can now be used on Windows.

UPDATE 04/18/2013: The following paragraph was in error. The Windows Proton libraries do not work with the latest preview release of Service Bus due to lack of SSL support. We apologize for the error.

    These Proton clients can be used in conjunction with Windows Azure Service Bus, which introduced support for AMQP 1.0 as a preview feature last October, with GA planned later this year. Applications can use AMQP to access the queuing and publish/subscribe brokered messaging features.  Service Bus is a multi-protocol service, so in addition to AMQP, applications can also use REST/HTTP to access Service Bus from any platform.

For more information, check out the official OASIS site, this developer’s guide, and downloads (the 0.4 version supports Windows) for Qpid Proton. MS Open Tech was one of many contributors to this project, and we appreciate all the work that the community is doing to help developers take full advantage of AMQP across many different languages, frameworks, and platforms.

  • Interoperability @ Microsoft

    WS-Management adopted as ISO/IEC international standard

• 1 Comment

    DMTF (Distributed Management Task Force) announced today that the DMTF Web Services Management standard (WS-Man) version 1.1 has now been adopted by ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission) as an international standard, ISO/IEC 17963:2013. This is a great milestone on the industry’s journey toward broad adoption of interoperable, royalty-free, standards-based solutions for management of systems, applications, and devices.

WS-Man is designed to address the cost and complexity of IT management by providing a common way for systems to access and exchange management information across the entire IT infrastructure. It is used as a network access protocol by many CIM (Common Information Model) based management solutions, including the DMTF’s CIM-based DASH (Desktop and Mobile Architecture for Server Hardware) and SMASH (Systems Management Architecture for Server Hardware) solutions, as well as the DMTF’s Virtualization Management (VMAN) standards, which we use to manage Windows Hyper-V. WS-Man is also the primary protocol for management of Windows Server 2012, and has been supported by all versions of Windows since XP (both client and server) through Windows Remote Management (WinRM). For more information about WS-Man and how it is supported in Windows Server, System Center, and PowerShell, see Jeffrey Snover’s blog post on the Windows Server blog.

    Microsoft has a longstanding commitment to interoperability and standards in the management arena. In the early 1990s, Microsoft was one of the founding members of DMTF, and worked closely with industry partners on the development of CIM, a flexible standard that has been adopted for a wide variety of uses across computer systems, operating systems, networks, and storage devices. WS-Man and CIM are a powerful combination, with a rapidly growing ecosystem, and ISO/IEC adoption of WS-Man as an international standard will enable further adoption. Microsoft worked with the industry to standardize WS-Man CIM mappings for common management scenarios.

Microsoft also developed OMI (Open Management Infrastructure), a high-performance, small-footprint implementation of a CIM+WS-Man server, released last year by The Open Group as an open source project under the Apache 2 License. Written in portable C, OMI provides an enterprise-grade CIM and WS-Man implementation so that hardware and software vendors can focus their investments on providers and schemas within their domain expertise. OMI opens up management of hardware devices from any vendor in a datacenter using a “Datacenter Abstraction Layer” or DAL, enabling management of devices and servers that implement standard protocols and schemas from standards-compliant tools such as PowerShell.

    Through those and related initiatives, we are continuing to help the industry deliver on the promise of standards-based solutions that address the cost and complexity of systems management. For example, DMTF also announced today that the DMTF Platform Management standard, which provides a common architecture for communication between management subsystem components, was adopted by ANSI (American National Standards Institute) and INCITS (International Committee for Information Technology Standards) as a US national standard, INCITS 495-2012 Platform Management. As DMTF VP of Technology Hemal Shah noted in today's announcement, “Adoption and recognition of the Platform Management and Web Services Management standards by these organizations provide additional credibility, while increasing the accessibility of these solutions to IT managers.”

    These developments are further evidence of the global interest in interoperable, royalty-free, standards-based solutions to management of systems, applications, and devices. Congratulations to everyone who has worked to help achieve these important milestones!

    Colleen Evans
    Principal Program Manager
    Microsoft Open Technologies, Inc.

    Doug Mahugh
    Lead Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Come see Microsoft Open Technologies at ApacheCon next week!

    • 0 Comments

     

    Microsoft Open Technologies, Inc. will be at ApacheCon in Portland next week, and we hope to see you there. We’re sponsoring the Hackathon on Monday, and on Thursday you can see my session on options for implementing CouchDB on Windows Azure. Other than that, we’ll be around all week, so if you see one of us, stop and say hi!

    Brian Benz
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Pointer Events Standardization Reaches Key Maturity Milestone at W3C, MS Open Tech Releases Second Drop of Pointer Events Prototype for WebKit

    • 0 Comments

    From:
    Asir Vedamuthu Selvasingh, Principal Program Manager
    Microsoft Open Technologies, Inc.

    Adalberto Foresti, Principal Program Manager
    Microsoft Open Technologies, Inc.

    Developers can start building multi-input websites and apps with greater confidence that an emerging industry standard will enable a single website to target multiple devices and platforms.

    Only three months after its creation, the W3C Pointer Events Working Group has announced that Pointer Events has reached “Last Call Working Draft” status and is considered feature complete by the Working Group. The W3C Pointer Events Working Group has been hard at work over the last few months to standardize a single device input model – mouse, pen and touch – across multiple browsers. Congratulations to the W3C Pointer Events Working Group!

    Microsoft Open Technologies, Inc. (MS Open Tech), and the Microsoft Corp. Internet Explorer teams have been working with our colleagues across the industry, engaging developers to test and provide feedback on the specification, and incorporating all the received feedback into this Last Call Working Draft.

    “Last Call Working Draft” means that members of the Working Group, including representatives from Google, the jQuery Foundation, KAIST, Microsoft, Mozilla, Nokia, Opera, Zynga, and others, consider that this specification has satisfied all the technical requirements outlined in the Working Group charter. The working group intends to advance the specification to implementation after this Last Call review.

    Build now with Pointer Events

    What’s cool is that you can go build websites using Pointer Events today. The Working Group is using Microsoft’s member submission as a starting point for the specification, which is based on the APIs available today in IE10 on Windows 8 and Windows Phone 8.

    If you are building your apps using Pointer Events and testing these apps on various browsers, you should try out the hand.js polyfill developed by David Catuhe from Microsoft France. Check out a demo that uses hand.js: a universal virtual joystick. We expect that native implementations for WebKit-based browsers will follow shortly.

    To demonstrate cross-browser interoperability for Pointer Events, MS Open Tech developed a Pointer Events prototype for WebKit on HTML5 Labs and submitted the patch to the WebKit community. Today MS Open Tech posted an updated version of the patch on HTML5 Labs and on the WebKit issue tracker, incorporating community feedback received on the previous version. Working with the WebKit community, MS Open Tech will continue updating this prototype to implement the latest draft of the specification.

    Recently, MS Open Tech hosted an HTML5 Labs Test Jam event on Feb. 11 to share an early preview of the new prototype and collect feedback, and the browser community has been playing with the prototype as noted in a blog post by our friends at appendTo. appendTo shares Chromium builds for OS X and for Windows that integrate the Pointer Events patch by MS Open Tech.

    Learn more about Pointer Events

    If you are attending W3Conf this week in San Francisco, you don’t want to miss “Pointing Forward” at 3:00 pm on Thursday, February 21, presented by Jacob Rossi, program manager for Internet Explorer and co-editor of the W3C Pointer Events specification. You can also watch a live stream of his conference presentation on the W3Conf site and UStream, or later on video on demand.

    And you can learn more by checking out the Pointer Events Primer on WebPlatform.org, developed by Rob Dolin, senior program manager at MS Open Tech. The primer provides guidance on how to use Pointer Events in ways similar to mouse events, and how to access and use additional attributes such as pointer type, button(s) pressed, touch size, and pen tilt. The primer is a great resource if you are migrating your code from handling mouse input only to consistently handling input from mouse, pen, and touch.

    We’ve been happy to share the great progress for the W3C Pointer Events emerging standard. Stay tuned for more updates as we work together on this open standard that can further enable natural and simple computing interfaces on the Web.

  • Interoperability @ Microsoft

    IETF standards community reaches preliminary agreements on next generation of Internet protocol HTTP/2.0

    • 0 Comments

    From:
    Gabriel Montenegro
    Principal Software Development Engineer, Microsoft Corporation

    Brian Raymor
    Senior Program Manager, Microsoft Open Technologies, Inc.

    HTTP, the Hypertext Transfer Protocol, is one of the most important protocols for the Internet, and we’re pleased to report progress on the next generation HTTP/2.0 as we recently returned from our interim HTTPbis Working Group meeting in Tokyo (HTTPbis is the HTTP Working Group in the Internet Engineering Task Force).

    Our industry standards community reached preliminary agreements on the next steps for the first in a series of experimental implementations of HTTP/2.0, which will improve how every application and service on the Web communicates today.

    Progress on Negotiation and Flow Control

    In our previous post, Sharing proposals for negotiation and flow control for HTTP/2.0 at IETF 85, we shared our positions on Negotiation and Flow Control and outlined plans to make progress in these areas.

    After final review at the interim HTTP/2.0 meeting, we’re pleased to announce that HTTP 2.0 Negotiation, which Microsoft co-authored with Exceliance and Orange, and HTTP 2.0 Principles for Flow Control, which Microsoft co-authored with Ericsson and Google, were both incorporated into the latest HTTP/2.0 base draft.

    Implementation Draft Specification

    The most important outcome of the interim meeting in Tokyo was the recommendation to create an HTTP/2.0 “Implementation Draft Specification” based on the set of features that have achieved rough consensus in the HTTPBIS working group at this time. The attendees strongly agreed with this direction and committed to implementing the draft specification when it becomes available.

    The implementation draft is targeted for March, with another HTTP/2.0 interim meeting proposed for June through September, where interoperability testing can occur.

    The full proposal is available here. Many of these features are dependent on the rapid execution of their related action items.

    The proposed feature list includes significant changes to:

    • Upgrade
    • Header Compression
    • Flow Control
    • Framing
    • Server Push
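    To make the Flow Control item above concrete: the direction under discussion is credit-based windowing, where a sender may only transmit as many bytes as the receiver has granted, and the receiver replenishes credit as it consumes data. The sketch below is our own Python illustration of that general concept, not the draft’s actual frame format or wire protocol; all names are invented.

```python
class FlowControlWindow:
    """Credit-based flow control: a sender may transmit at most
    `available` bytes until the receiver grants more credit."""

    def __init__(self, initial_credit):
        self.available = initial_credit

    def consume(self, requested):
        """Try to send `requested` bytes; returns how many may actually
        be sent right now (possibly 0 if the window is exhausted)."""
        sendable = min(requested, self.available)
        self.available -= sendable
        return sendable

    def update(self, credit):
        """Receiver grants more credit (cf. a window-update message)."""
        self.available += credit


window = FlowControlWindow(initial_credit=10)
print(window.consume(6))   # 6 bytes may be sent; 4 credits remain
print(window.consume(8))   # only 4 bytes may be sent; window now empty
window.update(5)           # receiver frees buffer space, grants credit
print(window.consume(3))   # 3 bytes may be sent again
```

    The point of schemes like this is that a slow receiver can throttle a fast sender per stream, instead of relying on TCP back-pressure for the whole connection.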

    The intent is to converge on the details using the IETF HTTPBIS mailing list and then implement and validate the subsequent implementation draft. And then repeat the process based on our experience and new understanding – as Mark Nottingham (IETF HTTPBIS chair) has clarified:

    Note that we are NOT yet firmly choosing any particular path; rather, we're working on proposals in code as well as text, based upon discussion to date. As such, we're likely to have several such implementation drafts that progressively refine the approach we're taking. I.e., we don't have to agree that the above is what we want HTTP/2.0 to look like -- only that it's interesting to make these changes now, so that we can test them.

    Looking Ahead

    We are pleased with the direction of the HTTPBIS working group and are looking forward to interoperability testing with our HTML5 Labs HTTP/2.0 prototype.

    Based on the action items from the interim meeting in Tokyo, there is already active discussion on the IETF HTTPBIS mailing list as more detailed proposals are prepared and shared with the working group. We encourage the community to openly and actively contribute to the mailing list and strongly consider prototyping the implementation draft when available.

    We are looking forward to further discussions at the IETF 86 HTTPBIS meeting on March 15 in Orlando, where we continue our goal of helping to ensure, along with our IETF colleagues, that HTTP/2.0 meets the needs of the broader Internet community.

    Gabriel Montenegro and Brian Raymor

  • Interoperability @ Microsoft

    New MS Open Tech HTML5 Labs HTTP/2.0 prototype shows a safer and simpler Internet communication security approach

    • 0 Comments
    Cisco and Microsoft security experts make ALPN application protocol negotiation recommendation to the IETF TLS Working Group to help HTTP/2.0 effort

    As part of the HTTP/2.0 effort, the industry is collaborating to reinforce Internet communication security in the IETF Transport Layer Security Working Group (TLS WG). Two security experts from Cisco and Microsoft Corp. have submitted ALPN-01 (Application Layer Protocol Negotiation), a safer and simpler application protocol negotiation approach, backed by a new HTML5 Labs HTTP/2.0 prototype from Microsoft Open Technologies, Inc. that incorporates an initial implementation of ALPN-01.

    Stephan Friedl (Cisco) and Andrei Popov (Microsoft Corp.) co-authored the ALPN-01 Internet draft that is under discussion on the TLS WG mailing list. This is in response to discussions at the IETF 85 meeting in Atlanta, where the IETF TLS WG received a request from the HTTPBIS Working Group for “a mechanism that allows clients and servers to negotiate the particular application protocol to use once the session is established." Currently, there are two proposals: NPN (Next Protocol Negotiation) and ALPN.

    The new ALPN-01 (Application Layer Protocol Negotiation) Internet draft proposes a protocol negotiation in accordance with established TLS architecture with the following benefits:

    • ALPN places ownership of protocol selection on the server, not the client. This allows the server to select an appropriate certificate based on the application protocol, which is in line with existing TLS handshake extensions.
    • ALPN performs protocol negotiation in the clear by default: in general, there is no need for encrypted communication during the handshake. This permits servers to differentiate routing, QoS, and firewalling by protocol.
    • For use cases that can justify the tradeoff with additional latency, ALPN still retains support for confidential protocol negotiation through standard TLS renegotiation.

    Thanks to these benefits, and because of its stricter adherence to established TLS design principles, ALPN represents the best choice to address the requirements articulated by the HTTPBIS working group for HTTP/2.0 protocol negotiation.
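    To illustrate the server-driven selection described in the first bullet above, here is a small Python sketch. It models only the selection logic, not the TLS handshake itself; the function name and the server-preference policy are our own illustrative choices, not code from the draft.

```python
def select_protocol(client_offers, server_preferences):
    """Server-side ALPN-style selection: the client advertises the
    protocols it supports, and the server picks the first entry from
    its own preference list that the client also offered.
    Returns None if there is no overlap (in TLS, a handshake failure)."""
    offered = set(client_offers)
    for protocol in server_preferences:
        if protocol in offered:
            return protocol
    return None


# A client that speaks an HTTP/2.0 draft and HTTP/1.1, against a
# server that only speaks HTTP/1.1:
print(select_protocol(["HTTP/2.0", "http/1.1"], ["http/1.1"]))  # http/1.1
```

    Because the server makes the choice, it can also pick an appropriate certificate for the selected protocol before completing the handshake. Modern TLS stacks expose this negotiation directly, for example `ssl.SSLContext.set_alpn_protocols` in Python’s standard library.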

    Our HTML5 Labs prototype is the first implementation based on the ALPN-01 Internet draft. It is an evolution of earlier prototypes that couples a modified command-line C# client with a basic HTTP/2.0 server. We plan to develop it further in the coming weeks, and we look forward to your feedback both on the TLS WG mailing list and through HTML5 Labs. We will gladly apply changes to the draft as well whenever applicable.

    Go ahead and download the MS Open Tech HTTP/2.0 prototype using ALPN from HTML5 Labs! And please share your thoughts on this post below.

  • Interoperability @ Microsoft

    Happy Birthday XML!

    • 0 Comments

    XML was first published as a W3C Recommendation on 10 February 1998.

    I would never have dreamt, 15 years ago, that we would be so successful in our dream of exchanging information freely between different platforms, and now across devices and clouds. For me, this was the beginning of the Openness revolution. I truly believe that the strength of XML is its inherent, unique capability of representing documents and data homogeneously; those scenarios and capabilities will be even more important for the next 15 years.

    Vive XML, and here’s to its bright future!

    Jean Paoli
    President, Microsoft Open Technologies, Inc.
    Co-Creator, XML 1.0 @ W3C

  • Interoperability @ Microsoft

    Ready, set, go download the latest release of the Windows Azure Plugin for Eclipse with Java

    • 2 Comments

     

    It’s ready…the February 2013 Preview release of the Windows Azure Plugin for Eclipse with Java from our team at Microsoft Open Technologies, Inc.

    You’ve been asking for the ability to deploy JDKs, servers, and user-defined components from external sources instead of including them in the deployment package when deploying to the cloud, and that’s available in this release. There are also a few other minor updates for components, cloud publishing and Windows Azure properties. Have a look at the latest plugin documentation for a complete list of updates.

    Deploy JDKs, Servers and user-defined components from Blob Storage

    You can now deploy JDKs, application servers, and other components from public or private Windows Azure blob storage downloads instead of including them in the deployment package used to deploy to the cloud.

    Having the option of referring to an external object instead of including it in the deployment package gives you flexibility when building your deployment packages. It also means faster deployment times and smaller deployment packages.

    Here’s an example showing inclusion of a JDK. Note the new Deploy from download option:

    image

     

    Note the other tabs for server and applications – those options let you select a server (Tomcat, for example), or any component that you want to include in the install and setup but that you don’t want to include in the deployment package.

    Getting the Plugin

    Here are complete instructions for downloading and installing the Windows Azure Plugin for Eclipse with Java for the first time; the same steps also work for updates.

    Let us know how the process goes and how you like the new features!

  • Interoperability @ Microsoft

    Netflix: Solving Big Problems with Reactive Extensions (Rx)

    • 0 Comments

    More good news for Reactive Extensions (Rx).

    Just yesterday, we told you about improvements we’ve made to two Microsoft Open Technologies, Inc., releases: Rx and ActorFx, and mentioned that Netflix was already reaping the benefits of Rx.

    To top it off, on the same day, Netflix announced that a Java implementation of Rx, RxJava, is now available in the Netflix GitHub repository. That’s great news to hear, especially given how Ben Christensen and Jafar Husain outlined on the Netflix Tech Blog that their goal is to “stay close to the original Rx.NET implementation” and that “all contracts of Rx should be the same.”

    Netflix also contributed a great series of interactive exercises for learning Microsoft's Reactive Extensions (Rx) Library for JavaScript as well as some fundamentals for functional programming techniques.

    Rx as implemented in RxJava is part of the solution Netflix has developed for improving the processing of 2+ billion incoming requests a day for millions of customers around the world.

    To summarize, here’s a great quote from Ben Christensen on the Netflix Tech Blog about Rx:

    “Functional reactive programming with RxJava has enabled Netflix developers to leverage server-side concurrency without the typical thread-safety and synchronization concerns. The API service layer implementation has control over concurrency primitives, which enables us to pursue system performance improvements without fear of breaking client code.”

  • Interoperability @ Microsoft

    New releases from the MS Open Tech Hub: Rx 2.1 and ActorFx V0.2

    • 0 Comments

    From the Rx and ActorFx team:
    Claudio Caldato, Principal Program Manager Lead, MS Open Tech
    Erik Meijer, Partner Architect, Microsoft Corp.
    Brian Grunkemeyer, Senior Software Development Engineer, MS Open Tech Hub
    Joe Hoag, Senior Software Development Engineer, MS Open Tech Hub

    Today Microsoft Open Technologies, Inc., is releasing updates to improve two cloud programming projects from our MS Open Tech Hub: Rx and ActorFx .

    Reactive Extensions (Rx) is a programming model that allows developers to use a common interface for writing applications that interact with diverse data sources, such as stock quotes, tweets, computer events, and Web service requests. Since Rx was open-sourced by MS Open Tech in November 2012, it has become an important under-the-hood component of several high-availability multi-platform applications, including Netflix and GitHub.
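    Rx itself ships for .NET (with ports such as RxJava under way), but the core idea carries to any language: a push-based observable notifies its subscribers as values arrive, and operators compose over that stream. The following is a deliberately tiny Python sketch of that pattern, written for illustration only; it is not part of Rx, and the class and method names are ours.

```python
class Observable:
    """A minimal push-based observable: subscribing runs a function
    that pushes values to the subscriber's callbacks."""

    def __init__(self, subscribe_fn):
        self._subscribe = subscribe_fn

    def subscribe(self, on_next, on_completed=lambda: None):
        self._subscribe(on_next, on_completed)

    @staticmethod
    def from_iterable(items):
        """Wrap any iterable data source as an observable sequence."""
        def sub(on_next, on_completed):
            for item in items:
                on_next(item)
            on_completed()
        return Observable(sub)

    def map(self, fn):
        """Composition: transform each pushed value with fn."""
        def sub(on_next, on_completed):
            self.subscribe(lambda value: on_next(fn(value)), on_completed)
        return Observable(sub)


results = []
Observable.from_iterable([1, 2, 3]).map(lambda x: x * 10).subscribe(results.append)
print(results)  # [10, 20, 30]
```

    The real Rx adds the rest of the contract this sketch omits: error notifications, unsubscription, schedulers, and a large operator library, all over the same subscribe/on-next shape.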

    Rx 2.1 is available now via the Rx CodePlex project and includes support for Windows Phone 8, various bug fixes and contributions from the community.

    ActorFx provides a non-prescriptive, language-independent model of dynamic distributed objects for highly available data structures and other logical entities via a standardized framework and infrastructure. ActorFx is based on the mathematical Actor model, which was adapted by Microsoft’s Erik Meijer for cloud data management.

    ActorFx V0.2 is available now from the ActorFx CodePlex project, which was originally open-sourced in December 2012. The most significant new feature in this early prototype is Actor-to-Actor communication.

    The Hub engineering program has been a great place to collaborate on these projects, as these assignments give us the agility and resources to work with the community. Stay tuned for more updates soon!

  • Interoperability @ Microsoft

    Congratulations on the latest development for OVF!

    • 0 Comments

    Interoperability in the server and cloud space has found even more evidence with the release announcement of the Open Virtualization Format (OVF) 2.0 standard. We congratulate the DMTF on this new milestone, further proof that customers and industry partners care deeply about interoperability, and we are proud of our participation in advancing this initiative.

    Browsing the OVF 2.0 specification, it is evident that the industry is aligning around common scenarios, and it is a pleasant surprise to see how some of those emerging scenarios have been driving our own thinking about the direction for System Center.

    Microsoft has collaborated closely with Distributed Management Task Force (DMTF) and our industry partners to ensure OVF provides improved capabilities for virtualization and cloud interoperability scenarios to the benefit of customers.

    OVF 2.0 and DMTF are making progress on key emerging patterns for portability of virtual machines and systems, and it’s nice to see OVF being driven by the very same emerging use cases we have been analyzing with our System Center VMM customers such as shared Hyper-V host clusters, encryption for credential management and virtual machine boot order management (not to mention network virtualization, placement groups and multi-hypervisor support).

    Portability in the cloud and interoperability of virtualization technologies across platforms using Linux and Windows virtual machines continue to be important to Microsoft and to our customers, and are increasingly becoming key industry trends. We continue to assess and improve interoperability for core scenarios using SC 2012 VMM. We also believe moving in this direction will provide great benefit to our customer and partner ecosystem, as well as bring real-world experience to our participation with OVF in the DMTF.

    See the overview for further details and other enhancements in System Center 2012 VMM.

    Mark Gayler
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Need to discover, access, analyze and visualize big and broad data? Try F#.

    • 1 Comment

    Microsoft Research just released a new iteration of Try F#, a set of tools designed to make it easy for anyone – not just developers – to learn F# and take advantage of its big data, cross-platform capabilities.

    F# is the open-source, cross-platform programming language invented by Don Syme and his team at Microsoft Research to help reduce the time-to-deployment for analytical software components in the modern enterprise.

    Big data is definitely big these days, and we are excited about this new iteration of Try F#. Regardless of your favorite language, and whether you’re on a Mac, a Windows PC, Linux, or Android, if you need to deal with complex problems, you will want to take a look at F#!

    Kerry Godes from Microsoft’s Openness Initiative connected with Evelyne Viegas, Director of Semantic Computing at Microsoft Research, to find out more about how you can use “Try F# to seamlessly discover, access, analyze and visualize big and broad data.” For the complete interview, go to the Openness blog or check out www.tryfsharp.org to get started “writing simple code for complex problems”.

      

  • Interoperability @ Microsoft

    Using Drupal on Windows Azure to create an OData repository

    • 1 Comment

    OData is an easy-to-use protocol that provides access to any data exposed as an OData service. Microsoft Open Technologies, Inc., is collaborating with several other organizations and individuals on development of the OData standard in the OASIS OData Technical Committee, and the growing OData ecosystem is enabling a variety of new scenarios to deliver open data for the open web via standardized URI query syntax and semantics. To learn more about OData, including the ecosystem, developer tools, and how you can get involved, see this blog post.
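    Those standardized URI query semantics are what make the ecosystem work: any consumer can compose queries against any provider using system query options such as $filter, $orderby, and $top. As a quick illustration (the service URL below is hypothetical, not the Drupal endpoint built later in this post), here is how a client might build such a query in Python:

```python
from urllib.parse import urlencode

# Hypothetical OData service root; any OData provider would do.
SERVICE = "https://example.azurewebsites.net/odata.svc/Articles"

# OData "system query options" all start with $ and travel as
# ordinary key/value pairs in the query string.
options = [
    ("$filter", "Title eq 'Home'"),  # server-side filtering
    ("$orderby", "Created desc"),    # server-side sorting
    ("$top", "2"),                   # limit the result set
]

url = SERVICE + "?" + urlencode(options)
print(url)
```

    Because the query language is part of the protocol, the consumer needs no knowledge of the provider’s underlying storage to ask these questions.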

    In this post I’ll take you through the steps to set up Drupal on Windows Azure as an OData provider.  As you’ll see, this is a great way to get started using both Drupal and OData, as there is no coding required to set this up. 

    It also won’t cost you any money – currently you can sign up for a 90 day free trial of Windows Azure and install a free Web development tool (Web Matrix) and a free source control tool (Git) on your local machine to make this happen, but that’s all that’s required from a client point of view.  We’ll also be using a free tier for the Drupal instance, so you may not need to pay even after the 90 day trial, depending on your needs for bandwidth or storage.

    So let’s get started!

    Set up a Drupal instance on Windows Azure using the Web Gallery. 

    The Windows Azure team has made setting up a Drupal instance incredibly easy and quick – in a few clicks and a few minutes your site will be up and running.  Once you’ve signed up for Windows Azure and have your account set up, click on New > Quick Create > from Gallery, as shown here:

     

    clip_image002[13]

     

    Then click on the Drupal 7 instance, as shown here. The Web Gallery is where you’ll find images of the latest Web applications, preconfigured and ready to set up. Currently the gallery offers the Acquia version of Drupal 7:

    clip_image004[13]

    Enter some basic information about your site, including the URL (.azurewebsites.net will be appended to what you choose), the type of database you want to work with (currently SQL Server and MySQL are supported for Drupal), and the region where you want your app instance deployed:

    clip_image006[13]

     

    Next, add a database name, username, and password for the database, and the region where the database should be deployed:

    clip_image008[13]

    That’s it!  In a few minutes your Windows Azure Web Site dashboard will appear with options for monitoring and working with your new Drupal instance:

    clip_image010[13]

     

    Setting up the OData provider

    So far we have a Drupal instance but it’s not an OData provider yet.  To get Drupal set up as an OData provider, we’re going to have to add a few folders and files, and configure some Drupal modules. 

    Because good cloud systems protect your data by backing it up and providing seamless, invisible redundancy, working with files in the cloud can be tricky. But the Windows Azure team provides a free, easy-to-use tool for working with files on Windows Azure, called Web Matrix. Web Matrix lets you easily download your files, work with them locally, test your work, and publish changes back up to your site when you’re ready. It’s also a great development tool that supports most modern Web application development languages.

    Once you’ve downloaded and installed Web Matrix on your local machine, you simply click on the Web Matrix icon at the bottom right under the dashboard, as shown in the image above. Web Matrix will confirm that you want to make a local copy of your Windows Azure Web site and download the site:

    clip_image012[13]

    Web Matrix will detect the type of Web site you’re working with, set up a local database instance, and start downloading the Web site to that instance:

    clip_image014[13]

     

     

     

    When Web Matrix is done downloading your site you’ll see a dashboard showing you options for working with your local site.  For this example, we’re only going to be working with files locally, so click the files icon shown here:

    clip_image016[13]

    We need to add some libraries and modules to our Drupal instance to turn the standard Windows Azure configuration of Drupal 7 into an OData provider. There are three sets of files to download and place in specific locations in our instance. You’ll need Git, or your favorite Git-compatible tool, installed on your local machine to retrieve some of these files:

    1) Download the OData Producer Library for PHP V1.2 to your local machine from https://github.com/MSOpenTech/odataphpprod/
    Under the sites > all folder, create a folder called libraries > odata (create the libraries folder if it doesn’t exist) and copy in the downloaded files.

    2) Download version 2 of the Drupal Libraries API to your local machine from http://drupal.org/project/libraries
    Under the sites > all folder, create a folder called modules > libraries (yes, there are two libraries directories in different places) and copy in the downloaded files.

    3) Download r2integrated's OData Server files to your local machine from //git.drupal.org/sandbox/r2integrated/1561302.git
    Under the sites > all folder, create a folder called modules > odata_server and copy in the downloaded files.

     

    Here’s what the directories should look like when you’re done:

    clip_image018[13]

     

    Next, click on the Publish button to upload the new files to your Windows Azure Web site via Web Matrix. After a few minutes your files should be uploaded and ready to use.

    OData Configuration in Drupal on Windows Azure

    Next, we will configure the files we just uploaded to provide data to OData clients. 

    From the top menu, go to the Drupal modules page and navigate down to the “Other” section.

    Enable Libraries and OData Server, then save the configuration. The modules should look like this when you’re done:

    clip_image020[13]

    Next, go to Site Configuration from the top menu and navigate down to the Development section. Under Development, click on OData Settings.

    Under Node, enable page and/or article (click on “expose them to OData clients”), then select the fields from each node type that you want to return in an OData search. You can also return Comments, Files, Taxonomy Terms, Taxonomy Vocabularies, and Users. All are off by default and have to be enabled to expose properties, fields, and references through the OData server:

    clip_image022[15]

    Click Save Configuration and you’re ready to start using your Windows Azure Drupal Web site as an OData provider! 

    One last thing: unfortunately, the default data in Drupal consists of exactly one page, so search results are not too impressive. You’ll probably want to add some data to make the site useful as an OData provider. The best way to do that is via the Drupal Feeds module.

    Conclusion

    As promised at the beginning of this post, we’ve now created an OData provider based on Drupal to deliver open data for the open Web.  From here any OData consumer can consume the OData feed and doesn’t have to know anything about the underlying data source, or even that it’s Drupal on the back end.  The consumers simply see it as an OData service provider.  Of course there’s more effort involved in getting your data imported, organizing it and building OData clients to consume the data, but this is a great start with minimal effort using existing, free tools.
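    To round out the picture, here is what the consumer side can look like. The snippet below fakes a response body in the verbose OData JSON convention, where collections are wrapped in d.results; the field names and sample data are invented, and a real client would fetch the payload over HTTP from the Drupal endpoint instead of building it inline.

```python
import json

# A stand-in for the body an OData provider might return for a node
# query (verbose OData JSON wraps collections in d.results).
payload = json.dumps({
    "d": {
        "results": [
            {"title": "Welcome", "type": "page"},
            {"title": "First post", "type": "article"},
        ]
    }
})

def titles(body):
    """Pull the title of every entry out of an OData JSON response."""
    return [entry["title"] for entry in json.loads(body)["d"]["results"]]

print(titles(payload))  # ['Welcome', 'First post']
```

    Note that nothing in this code knows or cares that Drupal produced the feed; that back-end independence is exactly the point made in the conclusion above.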
  • Interoperability @ Microsoft

    MS Open Tech publishes HTML5 Labs prototype of a Customizable, Ubiquitous Real Time Communication over the Web API proposal

    • 7 Comments

    Prototype with interoperability between Chrome on a Mac and IE10 on Windows

    From:

    Martin Thomson
    Senior Architect, Skype, Microsoft Corp.

    Bernard Aboba
    Principal Architect, Lync, Microsoft Corp.

    Adalberto Foresti
    Principal Program Manager, Microsoft Open Technologies, Inc.

    The hard work continues at the W3C WebRTC Working Group, where we collectively aim to define a standard for customizable, ubiquitous Real Time Communication over the Web. In support of our earlier proposal, Microsoft Open Technologies, Inc. (MS Open Tech) is now publishing a working prototype implementation of the CU-RTC-Web proposal on HTML5 Labs to demonstrate a real-world interoperability scenario: in this case, voice chat between Chrome on a Mac and IE10 on Windows via the API.

    By publishing this working prototype in HTML5 Labs, we hope to:

    • Clarify the CU-RTC-Web proposal with interoperable working code so others can understand exactly how the API could be used to solve real-world use cases.
    • Show what level of usability is possible for Web developers who don’t have deep knowledge of the underlying networking protocols and interface formats.
    • Encourage others to show working example code that shows exactly how their proposals could be used by developers to solve use cases in an interoperable way.
    • Seek developer feedback on how the CU-RTC-Web addresses interoperability challenges in Real Time Communications.
    • Provide a source of ideas for how to resolve open issues with the current draft API, since the CU-RTC-Web proposal is cleaner and simpler.

    Our earlier CU-RTC-Web blog described critical requirements that a successful, widely adoptable Web RTC browser API will need to meet:

    • Honoring key web tenets – The Web favors stateless interactions that do not saddle either party of a data exchange with the responsibility to remember what the other did or expects. Doing otherwise is a recipe for extreme brittleness in implementations; it also considerably raises the development cost, which reduces the reach of the standard itself.
    • Customizable response to changing network quality – Real time media applications have to run on networks with a wide range of capabilities varying in terms of bandwidth, latency, and packet loss. Likewise, these characteristics can change while an application is running. Developers should be able to control how the user experience adapts to fluctuations in communication quality. For example, when communication quality degrades, the developer may prefer to favor the video channel, favor the audio channel, or suspend the app until acceptable quality is restored. An effective protocol and API should provide developers with the tools to tailor the application response to the exact needs of the moment.
    • Ubiquitous deployability on existing network infrastructure – Interoperability is critical if WebRTC users are to communicate with the rest of the world: with users on different browsers, VoIP phones, and mobile phones, from behind firewalls, and across routers and equipment that is unlikely to be upgraded to the current state of the art anytime soon.
    • Flexibility in its support of popular media formats and codecs as well as openness to future innovation – A successful standard cannot be tied to individual codecs, data formats or scenarios; any of them may soon be supplanted by newer versions, which would make such a tightly coupled standard obsolete just as quickly. The right approach is instead to support multiple media formats and to bring the bulk of the logic to the application layer, enabling developers to innovate.
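
    To illustrate the second requirement, an application-level adaptation policy might look like the sketch below. The thresholds and policy names are purely illustrative; the point is that the developer, not the browser, decides how the experience degrades:

```python
def adapt(bandwidth_kbps, policy="prefer_audio"):
    """Choose which media channels to keep as network quality changes.

    The thresholds and policy names here are illustrative; the point of
    CU-RTC-Web is that this decision belongs to the application, not to
    the browser or the protocol.
    """
    if bandwidth_kbps >= 500:
        return {"audio": True, "video": True}
    if bandwidth_kbps >= 64:
        # Degraded: keep the channel the developer says matters most.
        if policy == "prefer_video":
            return {"audio": False, "video": True}
        return {"audio": True, "video": False}
    # Below any usable threshold: suspend until quality recovers.
    return {"audio": False, "video": False}

print(adapt(800))                   # both channels stay up
print(adapt(200))                   # audio only, under the default policy
print(adapt(200, "prefer_video"))   # video only
print(adapt(20))                    # suspend the app
```
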

    CU-RTC-Web extends the media APIs of the browser to the network. Media can be transported in real time to and from browsers using standard, interoperable protocols.

    clip_image002

    CU-RTC-Web starts with the network. The RealtimeTransportBuilder coordinates the creation of a RealtimeTransport, which connects a browser with a peer, providing a secured, low-latency path across the network.

    At the network layer, CU-RTC-Web demonstrates the benefits of a fully transparent API, providing applications with first class access to this layer. Applications can interact directly with transport objects to learn about availability and utilization, or to change transport characteristics.

    The CU-RTC-Web RealtimeMediaStream is the link between media and the network. RealtimeMediaStream provides a way to convert the browser’s internal MediaStreamTrack objects – an abstract representation of the media that might be produced by a camera or microphone – into real-time flows of packets that can traverse networks.

    Rather than using an opaque, indecipherable blob of Session Description Protocol (SDP, RFC 4566) text, CU-RTC-Web allows applications to choose how media is described, to suit application needs. The relationship between streams of media and the network layer they traverse is not some arcane combination of SDP m= sections and a=mumble lines: applications build a real-time transport and attach media to that transport.
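
    The object flow described above can be sketched as follows. The class and method names mirror the proposal’s vocabulary, but this Python model is only an illustration of the relationships between the objects, not the actual JavaScript API:

```python
class RealtimeTransport:
    """Illustrative stand-in for the proposal's transport object:
    a secured, low-latency path between a browser and a peer."""
    def __init__(self, local, remote):
        self.local, self.remote = local, remote
        self.streams = []

class RealtimeTransportBuilder:
    """Coordinates the creation of a RealtimeTransport, as described above."""
    def __init__(self, local, remote):
        self.local, self.remote = local, remote
    def build(self):
        return RealtimeTransport(self.local, self.remote)

class RealtimeMediaStream:
    """Links a media track to the network: the application builds a
    transport and attaches media to it, with no SDP blob in between."""
    def __init__(self, track):
        self.track = track
    def attach(self, transport):
        transport.streams.append(self)

transport = RealtimeTransportBuilder("alice", "bob").build()
RealtimeMediaStream("microphone-track").attach(transport)
print(len(transport.streams))  # 1: one media flow rides this transport
```
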

    Microsoft made this API proposal to the W3C WebRTC Working Group in August 2012, and revised it in October 2012, based on our experience implementing this prototype. The proposal generated both positive interest and healthy skeptical concern from working group members. One common concern was that it was too radically different from the existing approach, which many believed to be almost ready for formal standardization. It has since become clear, however, that the existing approach (the RTCWeb protocol and WebRTC APIs specifications) is far from complete and stable, and needs considerable refinement and clarification before formal standardization and before it’s used to build interoperable implementations.

    The approach proposed in CU-RTC-Web would also allow existing rich solutions to more easily adopt and support the eventual WebRTC standard. A good example is Microsoft Lync Server 2013, which is already embracing Web technologies like REST and hypermedia with a new API called the Microsoft Unified Communications Web API (UCWA; see http://channel9.msdn.com/posts/Lync-Developer-Roundtable-UCWA-Overview). UCWA can be layered on the existing draft WebRTC API, but it would interoperate more easily with WebRTC implementations if the adopted standard followed the cleaner CU-RTC-Web proposal.

    The prototype can be downloaded from HTML5Labs here. We look forward to receiving your feedback: please comment on this post or send us a message once you have played with the API, including the interop scenario between Chrome on a Mac and IE10 on Windows.

    We’re pleased to be part of the process and will continue to collaborate with the working group to close the gaps in the specification in the coming months, as we believe the CU-RTC-Web proposal can provide a simpler and thus more easily interoperable API design.

  • Interoperability @ Microsoft

    One step closer to full support for Redis on Windows, MS Open Tech releases 64-bit and Azure installer

    • 3 Comments

    I’m happy to report new updates today for Redis on Windows Azure: the open-source, networked, in-memory, key-value data store. We’ve released a new 64-bit version that gives developers access to the full benefits of an extended address space. This was an important step in our journey toward full Windows support. You can download it from the Microsoft Open Technologies GitHub repository.

    Last April we announced the release of an important update for Redis on Windows: the ability to mimic the Linux Copy On Write feature, which enables your code to serve requests while simultaneously saving data on disk.
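
    The idea behind that feature can be sketched in a few lines of Python on a POSIX system, where fork() provides the copy-on-write semantics natively. This is an illustration of the concept only, not the Redis implementation (the Windows port mimics this behavior differently under the hood):

```python
import json
import os
import tempfile

def snapshot_save(data, path):
    """Fork-based snapshot in the spirit of a Redis background save:
    the child process writes the dataset to disk while the parent keeps
    serving requests. On POSIX systems, fork() gives the child a
    copy-on-write view of memory, so the save sees a consistent
    point-in-time state."""
    pid = os.fork()
    if pid == 0:  # child: persist the snapshot, then exit
        with open(path, "w") as f:
            json.dump(data, f)
        os._exit(0)
    return pid    # parent: continue serving immediately

store = {"greeting": "hello"}
path = tempfile.mkstemp()[1]
pid = snapshot_save(store, path)
store["greeting"] = "changed while saving"  # parent keeps mutating
os.waitpid(pid, 0)                          # wait for the save to finish
with open(path) as f:
    saved = json.load(f)
print(saved["greeting"])  # the snapshot reflects the state at fork time
```
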

    Along with 64-bit support, we are also releasing a Windows Azure installer that enables deployment of Redis on Windows Azure as a PaaS solution using a single command line tool. Instructions on using the tool are available on this page and you can find a step-by-step tutorial here. This is another important milestone in making Redis work great on the Windows and Windows Azure platforms.

    We are also happy to share that we are now using the Microsoft Open Technologies public GitHub repository as our main source code repository, so the community can follow what is happening more closely and get involved in the project.

    We have already received some great feedback from developers interested in using Redis on Windows Azure, and we are committed to an open development process in collaboration with our more than 400 GitHub followers, which, among other benefits, will mean more frequent releases.

    Now our journey continues with two additional major steps:

    - Stress Testing: Our test team has spent quite some time testing the code, but we need more extensive stress testing to exercise the new code’s reliability and to guarantee that Redis on Windows Azure can handle significant workloads over extended periods of time before it can be relied on for production scenarios.

    - Redis 2.6: Our development team is focused on bringing the code base up to the latest Linux version, 2.6. UPDATED 01/22/2013: an alpha version of Redis 2.6 was released today. It has a few known issues, but we expect to have a stable version in a few days.

    In addition, we want to make it easier for developers to deploy Redis by adding support for NuGet and WebPI deployment. We will make these features available very soon.

    If you are interested in running Redis on Windows, the best thing you can do is to use this release as much as you can, log bugs, and share your comments and suggestions. We also have a long list of features/changes/enhancements that we’re ready to make, so let us know if you’re interested in helping - we’re looking for a few more smart developers who want to join our dev team as contributors to the project on GitHub. Let us know if you want to join the virtual team!

    Claudio Caldato
    Principal Program Manager Lead
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Getting Started with VM Depot

    • 1 Comments

    Do you need to deploy a popular OSS package on a Windows Azure virtual machine, but don’t know where to start? Or do you have a favorite OSS configuration that you’d like to make available for others to deploy easily? If so, the new VM Depot community portal from Microsoft Open Technologies is just what you need. VM Depot is a community-driven catalog of preconfigured operating systems, applications, and development stacks that can easily be deployed on Windows Azure.

    You can learn more about VM Depot in the announcement from Gianugo Rabellino over on Port 25 today. In this post, we’re going to cover the basics of how to use VM Depot, so that you can get started right away.

    Deploying an Image from VM Depot

    Deploying an image from VM Depot is quick and simple. As covered in the online documentation, VM Depot will auto-generate a deployment script for use with the Windows Azure command-line tool for Mac and Linux that you can use to deploy virtual machine instances from a selected image. You can use the command line tool on any system that supports Node.js – just install the latest version of Node and then download the tool from this page on WindowsAzure.com. For more information about how to use the command line tool, see the documentation page.

    Publishing an Image on VM Depot

    To publish an image on VM Depot, you’ll need to follow these steps:

    Step 1: create a custom virtual machine. There are two approaches you can take to creating your custom virtual machine. The quickest and simplest is to create a Linux virtual machine from the image gallery in Windows Azure and then customize it by installing or configuring open source software on it. Alternatively, if you’d like to build an image from scratch, you can create and upload a virtual hard disk that contains the Linux operating system and then customize your image as desired.

    Regardless of which approach you use to create your image, you’ll then need to save it to a public storage container in Windows Azure as a .VHD file. The easiest way to do this is to deploy your image to Azure as a virtual machine and then capture it to a .VHD file. Note that you’ll need to make the storage container for your .VHD file public (containers are private by default) in order to publish your image – you can do this through the Windows Azure management portal or by using a tool such as CloudXplorer.

    Step 2: publish your image on VM Depot. Once your image is stored in a public storage container, the final step is to use the Publish option on the VM Depot portal to publish your image. If it’s your first time using VM Depot, you’ll need to use your Windows Live™ ID, Yahoo! ID, or Google ID to sign in and create a profile.

    See the Learn More section for more detailed information about the steps involved in publishing and deploying images with VM Depot.

    As you can see, VM Depot is a simple and powerful tool for efficiently deploying OSS-based virtual machines from images created by others, or for sharing your own creations with the developer community. Try it out, and let us know your thoughts on how we can make VM Depot even more useful!

    Doug Mahugh
    Lead Technical Evangelist
    Microsoft Open Technologies, Inc.

    Eduard Koller
    Senior Program Manager
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    How to develop for Windows Phone 8 on your Mac

    • 15 Comments

    UPDATE 01/07/13: added instructions to enable Hyper-V in Parallels Desktop VM

     

    Interested in developing apps for Windows Phone 8, but doing your development on a Mac? No problem: check out the guide below for a variety of options.

    First you should consider whether to build native WP8 applications or Web applications. Native applications run directly on the phone platform and deliver advanced performance and a fully integrated experience to the end user. Web applications developed with HTML5 and JavaScript take advantage of the Web standards support of Internet Explorer 10 and the cross-platform nature of HTML5 applications. There is a lot of debate about which way to go, native app or Web app with HTML5, and I would say that the answer is… it depends. In this post, I will present the main options to go one way or the other, based on the assumption that you have a Mac and want to stick with it.

    WP8 application development on a Mac

    To build applications for Windows Phone, you need Visual Studio 2012 and the WP8 SDK. There is a free version that bundles these two and that allows you to do pretty much all you need to build and publish an application to the Windows Phone store:

    • Write and debug code in an advanced code editor
    • Compile to app package
    • Test the application in an emulator leveraging advanced features
    • Connect and deploy to an actual device and do cross-debugging and performance analysis
    • … and these are only the basic features available, there are plenty more!

    Visual Studio 2012 runs on Windows 8 and Windows 7, but the Windows Phone emulator relies on Hyper-V, which comes only with 64-bit Windows 8. So you need a 64-bit Windows 8 install if you want to leverage the emulator, and you need a way to have Hyper-V enabled in that install.

    Using a recent Macintosh, you have a couple of options to run Windows 8:

    1. Run Windows 8 on your Mac natively using Boot Camp
    2. Run Windows 8 in a virtual environment using software like VMware Fusion 5 or Parallels Desktop 8 for your Mac

    There is plenty of documentation online on how to set up the environments for both options to get Windows to run on your Mac, and you can also find details on MSDN here.

    Boot Camp

    If you want to go the Boot Camp way, once you have set up Windows 8, you can go ahead and follow the default instructions to download and install the WP8 SDK.

    VMware Fusion 5 or Parallels Desktop 8

    If you want to use VMware Fusion or Parallels and still be able to use the WP8 emulator, here are the steps you need to follow:

    • Install VMware Fusion 5 or Parallels Desktop 8 if you don’t have it yet
    • Download the 64-bit Windows 8 ISO:
      • You can find the evaluation version on the evaluation center here.
      • If you want the retail version, it is a little trickier on a Mac, as there is no way to download the retail ISO directly. The trick is to install the evaluation version of Windows 8 in a VMware Fusion or Parallels VM following the instructions below, then, from within Windows 8, run the Windows 8 setup (a link is available in the first lines of the email you will receive after purchasing Windows 8), which offers the option of downloading the retail ISO after you enter your product key, as described in this article.
    • Create a new VM with the following settings:
      • On VMware Fusion 5:
        • Ensure that you have the following settings (be sure to check the “Enable hypervisor applications in this virtual machine” option):

    1

        • Important:
          • Hyper-V requires at least 2 cores to be present.
          • The Windows Phone Emulator will use 256MB or 512MB of virtual memory, so don’t be shy with the memory assigned to the VM and assign at least 2 GB.
          • In the advanced settings, ensure you have selected “Preferred virtualization engine: Intel VT-x with EPT” option
        • Modify the .vmx file to add or modify the following settings:
          • hypervisor.cpuid.v0 = "FALSE"
          • mce.enable = "TRUE"
          • vhv.enable = "TRUE"
      • On Parallels Desktop 8:
        • Ensure that you have the following settings for the new VM (go into VM Settings>General>CPUs):

    Parrallels

        • Still in settings, you need to enable virtualization pass-thru support in Options>Optimizations>Nested Virtualization

    Screen Shot 2013-01-04 at 3.58.43 PM

     

    • Install Windows 8 on your VMware Fusion or Parallels Desktop VM (you can find plenty of guides online on how to install a VM from an ISO)
    • Once Windows 8 is installed, download and install the WP8 SDK.

    4

    The SDK install will set up the Hyper-V environment and will set things up for you to be able to use the emulator within the VMware Fusion or Parallels Desktop image.

    on VMware Fusion… on Parallels Desktop…

    Screen Shot 2012-12-04 at 1.48.09 PM

    Screen Shot 2013-01-04 at 4.04.35 PM 

    You are now set to build, debug and test WP8 applications. You can start your development and debugging by leveraging the emulator and its tools, and you can then consider using an actual Windows Phone 8 device, plugging it into your Mac and setting things up so that the USB device shows up in the VM.

    You can find extensive information on how to use Visual Studio 2012 for Windows Phone 8 development, along with its emulator, and how to publish an application, get samples, as well as everything a developer needs here.

    WP8 Web applications development on a Mac

    Here we are talking about two different things:

    • Development for mobile websites that will render well in the Windows Phone 8 browser.
    • HTML5 application development using the Web Browser control hosted by a native application, a model used by frameworks and tools such as Apache Cordova (a.k.a. PhoneGap); these are also known as hybrid applications.

    Windows 8 offers a “native HTML5/JS” model that allows you to develop applications in HTML5 and JavaScript that execute directly on top of the application platform, but we will not discuss that model here, as Windows Phone 8 proposes a slightly different model for HTML5 and JS application development.

    On Windows Phone 8, in both cases mentioned above, the HTML5/JavaScript/CSS code is rendered and executed by the same Internet Explorer 10 engine. This means that whether you are writing a mobile website or a PhoneGap-style application, you can work in your usual tool or editor all the way down to the debugging and testing phases.

    While you can do a lot of debugging of your HTML5/JS code in a Web browser, you will need to do actual tests and debugging on the target platform (the WP8 emulator and/or an actual device). Even if you are using Web standards, you need to consider that the level of support might not be the same on all platforms. And if you are using third-party code, you also need to ensure that the code doesn’t contain platform-specific elements so that things will run correctly; for example, you need to get rid of any dependencies on WebKit specifics.

    Making sure your Web code is not platform specific

    When writing this code, you need to consider the various platforms on which your mobile Web application will be used. Obviously, the fewer platform specifics there are, the better for you as a developer! The good news is that HTML5 support is getting better and better across modern mobile browsers. IE10 on Windows Phone 8 is no exception and brings extended standards support, hardware acceleration and great rendering performance. You can take a look at the following site directly from your Windows Phone 8 device to check that out: www.atari.com/arcade

    5140_clip_image008_295E204E

    To learn more on how to make sure your mobile Web code will render well on Internet Explorer 10 on Windows Phone 8 as well as on other modern mobile browsers, you can read this article.

    Testing and debugging your Web application for WP8 on a Mac

    Once you have clean HTML5 code that runs and renders well in a Web browser, you will need to test it on IE10 on a Windows Phone 8 device or emulator.

    IE10 on the desktop includes powerful debugging tools (“F12”), which is not the case on Windows Phone 8. One recommended way to do advanced debugging is to leverage the “F12” debugging capabilities of desktop IE10, which cover most if not all of the debugging and testing cases for your mobile Web application for Windows Phone 8. On a Mac, you will need to look into the options for installing a Windows 8 virtual machine mentioned at the beginning of this article, and load your code in Internet Explorer 10 within Windows 8. Once IE is launched, press the “F12” key or go to the settings menu and select “F12 Developer tools.”

    1667_clip_image010_223EE3D6

    In the debugging tool at the bottom, you can then change the User agent setting and the resolution from the “Tools” menu to match what IE10 on Windows Phone 8 exposes.

    4314_clip_image012_490CFA16

    Once you have done these tests on Internet Explorer 10 desktop, you can deploy and test on an actual Windows Phone 8 device or on the emulator (see previous chapters on how to set things up to make the emulator work on a Mac).

    Now what?

    With these steps you should be set to start developing and deploying Windows Phone 8 applications from your Mac.

    But there are certainly other tips and tricks that you will figure out and you may already know. We would love to hear from you to make this post even more useful for developers wishing to expand their reach to the Windows Phone 8 platform. Do not hesitate to comment on this post with your suggestions, ideas, tips…

  • Interoperability @ Microsoft

    New MS Open Tech Prototype of the HTTP/2.0 initial draft in an Apache HTTP server module

    • 0 Comments

    We continue to see good momentum within the HTTP/2.0 Working Group (IETF 85 meeting) toward identifying suitable technical answers for the seven key areas of discussion, which we had identified back in August, including an update to the HTTP/2.0 Flow Control Principles draft, which Microsoft co-authored with Google and Ericsson.

    Through our continuing support of HTTP/2.0 standardization through code, we have made some updates to our prototypes and just posted them on HTML5 Labs. We have moved from the Node.js implementation used server-side by our earlier prototypes to a modified implementation of an existing Apache module, which we are making available as the associated patch.

    In this latest iteration, we have made three changes in particular to advance discussions on the HTTP/2.0 initial draft and thinking around interoperable implementations:

    Negotiation: we have improved upon our initial implementation of the protocol upgrade that we released last month, supporting the scenario where the server does not accept a protocol upgrade.
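
    Conceptually, the negotiation works like the sketch below. The header names follow the standard HTTP/1.1 Upgrade mechanism, while the decision logic and return values are illustrative, not the prototype’s actual code:

```python
def negotiate(request_headers, server_supports_http2):
    """Illustrative HTTP/1.1 Upgrade negotiation: if the server declines
    the upgrade, the exchange simply continues over HTTP/1.1. The token
    'HTTP/2.0' is how the early drafts advertised the new protocol."""
    offered = request_headers.get("Upgrade", "")
    if server_supports_http2 and "HTTP/2.0" in offered:
        # 101 Switching Protocols: the connection continues as HTTP/2.0.
        return ("101 Switching Protocols", "HTTP/2.0")
    # The server ignores the Upgrade header and answers normally.
    return ("200 OK", "HTTP/1.1")

req = {"Host": "example.org", "Connection": "Upgrade", "Upgrade": "HTTP/2.0"}
print(negotiate(req, True))   # upgrade accepted
print(negotiate(req, False))  # upgrade declined, plain HTTP/1.1 response
```
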

    Flow Control: our prototype uses an infinite Window Update size that is effectively the simplest possible implementation and can be expected to be chosen for many real-world deployments, e.g. by specialized devices for the “Internet of things.”
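
    The accounting involved can be sketched as follows. This models the general per-stream window mechanism, with the prototype’s infinite window as a degenerate case; it is illustrative code, not the Apache module’s implementation:

```python
class FlowControlWindow:
    """Per-stream flow-control accounting: a sender may transmit only
    while the window is open; WINDOW_UPDATE frames replenish it. An
    'infinite' window (the prototype's choice) never blocks the sender."""
    INFINITE = float("inf")

    def __init__(self, size):
        self.window = size

    def can_send(self, nbytes):
        return self.window >= nbytes

    def consume(self, nbytes):          # data frame sent
        if self.window != self.INFINITE:
            self.window -= nbytes

    def window_update(self, nbytes):    # WINDOW_UPDATE received
        if self.window != self.INFINITE:
            self.window += nbytes

finite = FlowControlWindow(1000)
finite.consume(800)
print(finite.can_send(800))   # False: must wait for a WINDOW_UPDATE
finite.window_update(600)
print(finite.can_send(800))   # True again

infinite = FlowControlWindow(FlowControlWindow.INFINITE)
infinite.consume(10**9)
print(infinite.can_send(10**9))  # always True: the sender is never blocked
```
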

    Server push: we have implemented a behavior on the client that resets connections upon receipt of unrequested data from the server. This is particularly important where push might be especially unwelcome on mobile/low bandwidth connections.
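
    In outline, the client-side policy is simply the following (the stream IDs and return values are illustrative):

```python
def handle_incoming_stream(stream_id, requested_stream_ids):
    """Client-side policy described above: data arriving on a stream
    the client never requested is answered with a connection reset."""
    if stream_id in requested_stream_ids:
        return "ACCEPT"
    return "RST_STREAM"  # refuse unrequested server push

requested = {1, 3}
print(handle_incoming_stream(3, requested))  # ACCEPT
print(handle_incoming_stream(4, requested))  # RST_STREAM: unrequested push
```
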

    This iteration continues to demonstrate our ongoing commitment to the HTTP/2.0 standardization process. Throughout this journey, we have honored the tenets that we stated in earlier updates:

    • Maintain existing HTTP semantics.
    • Maintain the integrity of the layered architecture.
    • Use existing standards when available to make it easy for the protocol to work with the current Web infrastructure.
    • Be broadly applicable and flexible by keeping the client in control of content.
    • Account for the needs of modern mobile clients, including power efficiency, support for HTTP-based applications, and connectivity through tariffed networks.

    These tenets will continue to inform the direction of both our proposals to the IETF and of our engineering efforts.

    Please try out the prototype, give us feedback and we’ll keep you posted on next steps in the working group. We will also follow up soon with test data resulting from our work on this code.

    As we have stated throughout this process, we’re excited for the Web to get faster and more capable. HTTP/2.0 is an important part of that progress and we look forward to improving on the HTTP/2.0 initial draft in collaboration with our fellow working group participants and the Web community at large as we aim for an HTTP/2.0 that meets the needs of the entire Web, including browsers, apps, and mobile devices.

    Adalberto Foresti
    Principal Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Microsoft Open Technologies releases Windows Azure support for Solr 4.0

    • 2 Comments

    Microsoft Open Technologies is pleased to share the latest update to the Windows Azure self-deployment option for Apache Solr 4.0.

    Solr 4.0 is the first release to use the shared 4.x branch for Lucene & Solr and includes support for SolrCloud functionality. SolrCloud allows you to scale a single index via replication across multiple Solr instances running multiple SolrCores, for massive scale and redundancy.

    To learn more about Solr 4.0, have a look at this 40 minute video covering Solr 4 Highlights, by Mark Miller of LucidWorks from Apache Lucene Eurocon 2011.

    To download and install Solr on Windows Azure visit our GitHub page to learn more and download the SDK.

    Another alternative for implementing the best of Lucene/Solr on Windows Azure is provided by our partner LucidWorks. LucidWorks Search on Windows Azure delivers a high-performance search solution that enables quick and easy provisioning of Lucene/Solr search functionality without any need to install, manage or operate Lucene/Solr servers, and it supports pre-built connectors for various types of enterprise data, structured data, unstructured data and web sites.

  • Interoperability @ Microsoft

    Open source release from MS Open Tech: Pointer Events initial prototype for WebKit

    • 0 Comments

    From:

    Adalberto Foresti, Principal Program Manager, Microsoft Open Technologies, Inc.
    Scott Blomquist, Senior Development Engineer, Microsoft Open Technologies, Inc.

    It’s great to see that the W3C Pointer Events Working Group has expanded its membership and published the first working draft last week in the process of standardizing a single input model across all types of devices. To further contribute to the technical discussions, today Microsoft Open Technologies, Inc., published an early open source HTML5 Labs Pointer Events prototype of the W3C Working Draft for WebKit. We want to work with the WebKit developer community to enhance this prototype. Over time, we want this prototype to implement all the features that will be defined by the W3C Working Group’s Pointer Events specification. The prototype will help with interoperability testing with Internet Explorer.

    The Web today is fragmented into sites designed for only one type of input. The goal of a Pointer Events standard is to help Web developers to only need to code to one pointer input model across all types of devices and to have that code work across multiple browsers. Google, Microsoft, Mozilla, Nokia and Zynga are among the industry members working to solve this problem in the W3C Pointer Events WG.
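
    The kind of unification the working group is after can be illustrated with a simple mapping from input-specific event names to the draft’s pointer events. The mapping below is purely illustrative; the real unification happens inside the browser:

```python
# Map input-specific DOM event names onto the single pointer model
# proposed by the working draft: one event vocabulary, with the input
# device surfaced as a pointerType value.
POINTER_EVENT = {
    "mousedown": ("pointerdown", "mouse"),
    "mouseup": ("pointerup", "mouse"),
    "mousemove": ("pointermove", "mouse"),
    "touchstart": ("pointerdown", "touch"),
    "touchend": ("pointerup", "touch"),
    "touchmove": ("pointermove", "touch"),
}

def normalize(event_name):
    """Return the unified (event, pointerType) pair for a legacy event."""
    return POINTER_EVENT[event_name]

print(normalize("mousedown"))   # ('pointerdown', 'mouse')
print(normalize("touchstart"))  # ('pointerdown', 'touch')
```

    With a pointer model, the page registers one handler for “pointerdown” and works with mouse, touch, and pen alike, instead of maintaining parallel mouse and touch code paths.
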

    Microsoft submitted the Pointer Events specification to the W3C just three months ago. The working group is using Microsoft’s Member submission as a starting point for the specification, which is based on the APIs available today in IE10 on Windows 8 and Windows Phone 8.

    Our team developed this Pointer Events prototype of the W3C Working Draft for WebKit as a starting point for testing interoperability between Internet Explorer and WebKit in this space. As we have done in the past on HTML5 Labs, the prototype is intended to inform discussions and provide information grounded in implementation experience. Please provide feedback on this initial implementation in the comments of this blog and in the WebKit mailing lists. We also would love to get some advice on how/when to submit this patch to the main WebKit trunk.

    Overall, we believe that we are on a solid path forward in this standardization process. In a short time, we have a productive working group, a first W3C Working Draft specification, and an early proof of concept for WebKit that should provide valuable insights. We’re looking forward to working closely with the community to develop this open source code in WebKit so we can start testing interoperability with Internet Explorer.

  • Interoperability @ Microsoft

    Breaking news: HTML 5.0 and Canvas 2D specification’s definition is complete!

    • 1 Comments

    Today marks an important milestone for Web development, as the W3C announced the publication of the Candidate Recommendation (CR) version of the HTML 5.0 and Canvas 2D specifications.

    This means that the specifications are feature complete: no new features will be added to the final HTML 5.0 or the Canvas2D Recommendations. A small number of features are marked “at risk,” but developers and businesses can now rely on all others being in the final HTML 5.0 and Canvas 2D Recommendations for implementation and planning purposes. Any new features will be rolled into HTML 5.1 or the next version of Canvas 2D.

    It feels like only yesterday that I published a previous post on HTML5 progress toward a standard, when HTML5 reached "Last Call" status in May 2011. The W3C set an ambitious timeline to finish HTML 5.0, and this transition shows that it is on track. That makes me highly confident that HTML 5.0 can reach Recommendation status in 2014.

    The real-world interoperability of many HTML 5.0 features today means that further testing can be much more focused and efficient. As a matter of fact, the Working Group will use the “public permissive” criteria to determine whether a feature that is implemented by multiple browsers in an interoperable way can be accepted as part of the standard without expensive testing to verify.

    Work in this “Candidate Recommendation” phase will focus on analyzing current HTML 5.0 implementations, establishing priorities for test development, and working with the community to develop those tests. The WG will also look into the features tagged as “at risk” that might be moved to HTML 5.1 or the next version of Canvas2D if they don’t exhibit a satisfactory level of interoperability by the end of the CR period.

    At the same time, work on HTML 5.1 and the next version of Canvas2D is underway, and the W3C announced first working drafts that include features such as media and graphics. This work is on a much faster track than HTML5 has been, and 5.1 Recommendations are expected in 2016. The HTML Working Group will consider several sources of suggested new features for HTML 5.1. Furthermore, HTML 5.1 could incorporate the results of various W3C Community Groups, such as the Responsive Images Community Group, or of the WHATWG. HTML 5.1 will use the successful approach that the CSS 3.0 family of specs has used to define modular specifications that extend HTML’s capabilities without requiring changes to the underlying standard. For example, the HTML WG already has work underway to standardize APIs for Encrypted Media Extensions, which would allow services such as Netflix to stream content to browsers without plugins, and Media Source Extensions to facilitate streaming content in a way that adapts to the characteristics of the network and device.

    Reaching Candidate Recommendation further indicates the high level of collaboration that exists in the HTML WG. I would especially like to thank the W3C Team and my co-chairs, Sam Ruby (IBM) and Maciej Stachowiak (Apple), for all their hard work in helping to get to CR. In addition, the HTML WG editorial team led by Robin Berjon deserves a lot of credit for finalizing the CR drafts and for their work on the HTML 5.1 drafts.

    /paulc

    Paul Cotton, Microsoft Canada
    W3C HTML Working Group co-chair

  • Interoperability @ Microsoft

    Using LucidWorks on Windows Azure (Part 2 of a multi-part MS Open Tech series)

    • 0 Comments

     

    LucidWorks Search on Windows Azure delivers a high-performance search service based on Apache Lucene/Solr open source indexing and search technology. This service enables quick and easy provisioning of Lucene/Solr search functionality on Windows Azure without any need to manage and operate Lucene/Solr servers, and it supports pre-built connectors for various types of enterprise data, structured data, unstructured data and web sites.

    In June, we shared an overview of the LucidWorks Search service for Windows Azure, and in our first post in this series we provided more detail on features and benefits. For this post, we’ll start with the main feature of LucidWorks: quickly creating a LucidWorks instance by selecting LucidWorks from the Azure Marketplace and adding it to an existing Azure instance. It takes a few clicks and a few minutes.

    Signing up

    LucidWorks Search is listed under applications in the Windows Azure Marketplace. To set up a new instance of LucidWorks on Windows Azure, just click on the Learn More button:

    image

    That takes you to the LucidWorks Account Signup Page. From here, you select a plan, based on the type of storage being used and the number of documents to index. There are currently four plans available: Micro, which has no monthly fee; Small and Medium, which have pre-set fees; and Large, for which pricing is negotiated directly with LucidWorks based on several parameters. All of the account levels have fees for overages, and the option to move to the next tier is always available via the account page.

    The plans are differentiated by document limits in indexes, the number of queries that can be performed per month, the frequency with which indexes are updated, and index targets. Index targets are the types of content that can be indexed: for Micro plans, only Web sites can be indexed; for Small and Medium plans, files, RDBMS, and XML content can also be indexed; and for Large plans, ODBC data drivers can additionally be used to make content available to indexes.

    image

    Once the plan is selected, enter your information, including Billing Information:

    image

    Once the payment is processed (or, in the case of Micro, immediately, since no payment is required), a new instance is generated and you’re redirected to an account page, where you’re invited to start building collections!

    Configuration

    image

    We’ll cover setting up collections in more detail in the next part of the series; for now, let’s cover the account settings and configuration. Here’s the main screen for collections:

    image

    The first thing you see is the Access URL options. You can access your collections via Solr or REST API, and here’s where you get the predefined URL for either. When you drill down into the collections you see a status screen first:

    image

    This shows you the index size and stats about modification, queries per second, and updates per second, displayable for the last hour, day, or week. This screen is also where you can see the most popular queries.

    Data Sources

    If you are managing external data sources, this is where you configure them, via the Manage Data Sources button.

    image

    From here you can select a new data source from the drop-down. The list in this drop-down is current as of this writing and may change over time – check here for more information on currently supported data sources.

    Indexing

    The Indexing Settings are the next thing to manage in your LucidWorks on Azure account. Here’s the Indexing UI:

    image

    Indexing Settings

    De-duplication manages how duplicate documents are handled. (As we discussed in our first post, any individual item that is indexed and/or searched is called a document.) Off ignores duplicates, Tag identifies duplicates with a unique tag, and Overwrite replaces duplicate documents with new documents when they are indexed. Remember that de-duplication only applies to the indexes of data, not the data itself – only the indexed reference to the document is de-duplicated – so duplicates will still exist in the source data even if data in the indexes has been de-duplicated. Duplicates are determined based on key fields that you set in the fields editing UI.

    Default Field Type is used for setting the type of data for fields whose type LucidWorks cannot determine using its built-in algorithms.

    Auto-commit and Auto-soft commit settings determine when the index will be updated. Max time is how long to wait before committing, and max docs is how many documents are collected before a commit. Soft commits are used for real time searching, while regular commits manage the disk-stored indexes.

    Activities manage the configuration of indexes, suggested autocomplete entries, and user result click logging.

    Full documentation of indexing settings can be found here.

    Field Settings

    Field Settings allow configuration of each field in the index. Fields displayed below are automatically defined by data extraction and have been indexed:

    image

    Field types defined by LucidWorks have been optimized for most types of content, and should not generally be changed. The other settings need to be configured once the index has run and defined your fields:

    image

    For example, a URL field would be a good candidate for de-duplication, and you may want to index it for autocomplete as well. You can also indicate on Field Settings whether you want to display URLs in search results. Here is full documentation of Field Settings.

    Other Indexing Settings

    Dynamic Fields are almost the same as fields, but are created or modified when the index is created – for example, adding a value before or after a field value, or combining one or more fields to form a single value.

    Field Types is where you add custom field types in addition to the default field types created by your LucidWorks installation.

    Schedules is where you add and view schedules for indexing.

    Querying

    Querying Settings is where you can edit the configuration for how queries are conducted:

    image

     

    The Default Sort sets results to be sorted by relevance, date, or random.

    There are four Query Parsers available out of the box for LucidWorks: a custom LucidWorks parser, as well as the standard Lucene, dismax, and extended dismax parsers. More information on the details of each parser is available here.

    Unsupervised feedback resubmits the query using the top 5 results of the initial query to improve results.

    This is also where you configure the rest of your more familiar query behavior, like where stop words will be used, auto complete, and other settings, the full details of which are here.

    Next up: Creating custom Web site Search using LucidWorks.

    In the next post in the series, we’ll demonstrate setting up a custom Web site that integrates LucidWorks Search, and the configuration settings we use to optimize search for that site. After that, in future posts we’ll discuss tips and tricks for working with specific types of data in LucidWorks.

    Brian Benz
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Lowering the barrier of entry to the cloud: announcing the first release of Actor Framework from MS Open Tech (Act I)

    • 0 Comments

    From:

    Erik Meijer, Partner Architect, Microsoft Corp.

    Claudio Caldato, Principal Program Manager Lead, Microsoft Open Technologies, Inc.

     

    There is much more to cloud computing than running isolated virtual machines, yet writing distributed systems is still too hard. Today we are making progress toward easier cloud computing as ActorFx joins the Microsoft Open Technologies Hub and announces its first open source release. The goal of ActorFx is to provide a non-prescriptive, language-independent model of dynamic distributed objects, delivering a framework and infrastructure atop which highly available data structures and other logical entities can be implemented.

    ActorFx is based on the idea of the Actor Model developed by Carl Hewitt, and further contextualized to managing data in the cloud by Erik Meijer in his paper that is the basis for the ActorFx project − you can also watch Erik and Carl discussing the Actor model in this Channel9 video.

    What follows is a quick high-level overview of some of the basic ideas behind ActorFx. Follow our project on CodePlex to learn where we are heading and how it will help when writing the new generation of cloud applications.

    ActorFx high-level Architecture

    At a high level, an actor is simply a stateful service implemented via the IActor interface. That service maintains some durable state, and that state is accessible to actor logic via an IActorState interface, which is essentially a key-value store.

    image

     

    There are a couple of unique advantages to this simple design:

    • Anything can be stored as a value, including delegates.  This allows us to blur the distinction between state and behavior – behavior is just state.  That means that actor behavior can be easily tweaked “on-the-fly” without recycling the service representing the actor, similar to dynamic languages such as JavaScript, Ruby, and Python.
    • By abstracting the IActorState interface to the durable store, ActorFx makes it possible to “mix and match” back ends while keeping the actor logic the same.  (We will show some actor logic examples later in this document.)

    ActorFx Basics

    The essence of the ActorFx model is captured in two interfaces: IActor and IActorState.

    IActorState is the interface through which actor logic accesses the persistent data associated with an actor; it is the interface implemented by the actor’s “this” pointer.

    public interface IActorState
        {
            void Set(string key, object value);
            object Get(string key);
            bool TryGet(string key, out object value);
            void Remove(string key);
            Task Flush(); // "Commit"
        }
    

    By design, the interface is an abstract key-value store.  The Set, Get, TryGet and Remove methods are all similar to what you might find in any Dictionary-type class, or a JavaScript object.  The Flush() method allows for transaction-like semantics in the actor logic; by convention, all side-effecting IActorState operations (i.e., Set and Remove) are stored in a local side-effect buffer until Flush() is called, at which time they are committed to the durable store (if the implementation of IActorState implements that).
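    The side-effect-buffer convention described above can be illustrated with a small in-memory sketch. Note that this is purely hypothetical code, not the actual ActorFx store: the InMemoryActorState class, its buffering details, and the assumption that actor logic sees its own uncommitted writes are all illustrative choices.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IActorState
{
    void Set(string key, object value);
    object Get(string key);
    bool TryGet(string key, out object value);
    void Remove(string key);
    Task Flush(); // "Commit"
}

// Hypothetical in-memory store: Set/Remove are buffered locally and only
// applied to the "durable" dictionary when Flush() is called.
public class InMemoryActorState : IActorState
{
    private readonly Dictionary<string, object> _durable =
        new Dictionary<string, object>();
    private readonly Dictionary<string, object> _pendingWrites =
        new Dictionary<string, object>();
    private readonly HashSet<string> _pendingRemovals = new HashSet<string>();

    public void Set(string key, object value)
    {
        _pendingRemovals.Remove(key);
        _pendingWrites[key] = value; // buffered until Flush()
    }

    public object Get(string key)
    {
        object value;
        if (!TryGet(key, out value)) throw new KeyNotFoundException(key);
        return value;
    }

    // Assumption of this sketch: actor logic sees its own uncommitted writes.
    public bool TryGet(string key, out object value)
    {
        if (_pendingRemovals.Contains(key)) { value = null; return false; }
        return _pendingWrites.TryGetValue(key, out value)
            || _durable.TryGetValue(key, out value);
    }

    public void Remove(string key)
    {
        _pendingWrites.Remove(key);
        _pendingRemovals.Add(key); // buffered until Flush()
    }

    public Task Flush() // "Commit": apply the side-effect buffer
    {
        foreach (var key in _pendingRemovals) _durable.Remove(key);
        foreach (var kv in _pendingWrites) _durable[kv.Key] = kv.Value;
        _pendingRemovals.Clear();
        _pendingWrites.Clear();
        return Task.FromResult<object>(null); // nothing to await in memory
    }
}
```

    A real implementation would commit the buffer to a durable back end in Flush(); the dictionary here simply stands in for that store.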

    The IActor interface

    An ActorFx actor can be thought of as a highly available service, and IActor serves as the computational interface for that service.  In its purest form, IActor would have a single “eval” method:

    public interface IActor
        {
            object Eval(Func<IActorState, object[], object> function,
                object[] parameters);
        }

    That is, the caller requests that the actor evaluate a delegate, accompanied by caller-specified parameters represented as .NET objects, against an IActorState object representing a persistent data store.  The Eval call eventually returns an object representing the result of the evaluation.

    Those familiar with object-oriented programming should be able to see a parallel here.   In OOP, an instance method call is equivalent to a static method call into which you pass the “this” pointer.  In the C# sample below, for example, Method1 and Method2 are equivalent in terms of functionality:

    class SomeClass
        {
            int _someMemberField;
    
            public void Method1(int num)
            {
                _someMemberField += num;
            }
    
            public static void Method2(SomeClass thisPtr, int num)
            {
                thisPtr._someMemberField += num;
            }
        }
    

    Similarly, the function passed to the IActor.Eval method takes an IActorState argument that can conceptually be thought of as the “this” pointer for the actor.  So actor methods (described below) can be thought of as instance methods for the actor.

    Actor Methods

    In practice, passing delegates to actors can be tedious and error-prone.  Therefore, the IActor interface supports calling methods by name (dispatched internally using reflection), and allows for transmitting assemblies to the actor:

    public interface IActor
        {
            string CallMethod(string methodName, string[] parameters);
            bool AddAssembly(string assemblyName, byte[] assemblyBytes);
        }
    

    Though the Eval method is still an integral part of the actor implementation, it is no longer part of the actor interface (at least for our initial release).  Instead, it has been replaced in the interface by two methods:

    • The CallMethod method allows the user to call an actor method; it is translated internally to an Eval() call that looks up the method in the actor’s state, calls it with the given parameters, and then returns the result.
    • The AddAssembly method allows the user to transport an assembly containing actor methods to the actor.

    There are two ways to define actor methods:

    (1)   Define the methods directly in the actor service, “baking them in” to the service.

    myStateProvider.Set("SayHello",
        (Func<IActorState, object[], object>)
        delegate(IActorState astate, object[] parameters)
        {
            return "Hello!";
        });

    (2)   Define the methods on the client side.

            [ActorMethod]
            public static object SayHello(IActorState state, object[] parameters)
            {
                return "Hello!";
            }
    

           

    You would then transport them to the actor “on-the-fly” via the actor’s AddAssembly call.

    All actor methods must have identical signatures (except for the method name):

    • They must return an object.
    • They must take two parameters:
      • An IActorState object to represent the “this” pointer for the actor, and
      • An object[] array representing the parameters passed into the method.

    Additionally, actor methods defined on the client side and transported to the actor via AddAssembly must be decorated with the “ActorMethod” attribute, and must be declared as public and static.
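    To make the reflection-based dispatch concrete, here is a hypothetical sketch of what a CallMethod lookup might do internally. Everything here – the ActorMethodAttribute definition, the ActorMethodDispatcher, and the SampleActorMethods class – is illustrative, not actual ActorFx code, and it uses object in place of IActorState so the sketch stays self-contained:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Stand-in for the attribute that marks actor methods.
[AttributeUsage(AttributeTargets.Method)]
public class ActorMethodAttribute : Attribute { }

public static class ActorMethodDispatcher
{
    // Find a public static method decorated with [ActorMethod] by name,
    // then invoke it with the actor's state and the caller's parameters.
    public static object CallMethod(object state, Assembly assembly,
        string methodName, object[] parameters)
    {
        var method = assembly.GetTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Static))
            .FirstOrDefault(m => m.Name == methodName &&
                m.GetCustomAttributes(typeof(ActorMethodAttribute), false).Any());
        if (method == null) throw new MissingMethodException(methodName);
        return method.Invoke(null, new object[] { state, parameters });
    }
}

// An actor method following the required signature shape.
public static class SampleActorMethods
{
    [ActorMethod]
    public static object SayHello(object state, object[] parameters)
    {
        return "Hello!";
    }
}
```

    Because the lookup is driven by name and attribute, an assembly shipped over AddAssembly only needs to follow the signature conventions above for its methods to become callable.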

    Publication/Subscription Support

    We wanted to be able to provide subscription and publication support for actors, so we added these methods to the IActor interface:

    public interface IActor
        {
            string CallMethod(string clientId, int clientSequenceNumber,
                string methodName, string[] parameters);
            bool AddAssembly(string assemblyName, byte[] assemblyBytes);
            void Subscribe(string eventType);
            void Unsubscribe(string eventType);
            void UnsubscribeAll();
        }

    As can be seen, event types are coded as strings.  An event type might be something like “Collection.ElementAdded” or “Service.Shutdown”.  Event notifications are received through the FabricActorClient.

    Each actor can define its own events, event names and event payload formats.  And the pub/sub feature is opt-in; it is perfectly fine for an actor to not support any events.

    A simple example: Counter

    If you wanted your actor to support counter semantics, you could implement an actor method as follows:

        [ActorMethod]
        public static object IncrementCounter(IActorState state, object[] parameters)
        {
            // Grab the parameter
            var amountToIncrement = (int)parameters[0];

            // Grab the current counter value
            int count = 0; // default on first call
            object temp;
            if (state.TryGet("_count", out temp)) count = (int)temp;

            // Increment the counter
            count += amountToIncrement;

            // Store and return the new value
            state.Set("_count", count);
            return count;
        }
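    To see this logic run outside the Actor Runtime, here is a self-contained sketch that exercises the same counter logic against a trivial in-memory stand-in for IActorState. The DictionaryState class is a simplifying assumption (it commits immediately, with no side-effect buffer), and the [ActorMethod] attribute is omitted to keep the sketch compilable on its own:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IActorState
{
    void Set(string key, object value);
    object Get(string key);
    bool TryGet(string key, out object value);
    void Remove(string key);
    Task Flush();
}

// Trivial stand-in for the durable store: commits immediately.
public class DictionaryState : IActorState
{
    private readonly Dictionary<string, object> _store =
        new Dictionary<string, object>();
    public void Set(string key, object value) { _store[key] = value; }
    public object Get(string key) { return _store[key]; }
    public bool TryGet(string key, out object value)
    {
        return _store.TryGetValue(key, out value);
    }
    public void Remove(string key) { _store.Remove(key); }
    public Task Flush() { return Task.FromResult<object>(null); }
}

public static class CounterActorMethods
{
    // Same logic as the IncrementCounter actor method above.
    public static object IncrementCounter(IActorState state, object[] parameters)
    {
        var amountToIncrement = (int)parameters[0];
        int count = 0; // default on first call
        object temp;
        if (state.TryGet("_count", out temp)) count = (int)temp;
        count += amountToIncrement;
        state.Set("_count", count);
        return count;
    }
}
```

    Calling IncrementCounter first with 5 and then with -2 against a fresh state returns 5 and then 3, matching the state tables that follow.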

    Initially, the state for the actor would be empty.

    After an IncrementCounter call with parameters[0] set to 5, the actor’s state would look like this:

    Key         Value
    “_count”    5

    After another IncrementCounter call with parameters[0] set to -2, the actor’s state would look like this:

    Key         Value
    “_count”    3

    Pretty simple, right? Let’s try something a little more complicated.

    Example: Stack

    For a slightly more complicated example, let’s consider how we would implement a stack in terms of actor methods.  The code would be as follows:

            [ActorMethod]
            public static object Push(IActorState state, object[] parameters)
            {
                // Grab the object to push
                var pushObj = parameters[0];
     
                // Grab the current size of the stack
                int stackSize = 0; // default on first call
                object temp;
                if (state.TryGet("_stackSize", out temp)) stackSize = (int)temp;
    
                // Store the newly pushed value
                var newKeyName = "_item" + stackSize;
                var newStackSize = stackSize + 1;
                state.Set(newKeyName, pushObj);
                state.Set("_stackSize", newStackSize);
    
                // Return the new stack size
                return newStackSize;
            }
    
            [ActorMethod]
            public static object Pop(IActorState state, object[] parameters)
            {
                // No parameters to grab
    
                // Grab the current size of the stack
                int stackSize = 0; // default on first call
                object temp;
                if (state.TryGet("_stackSize", out temp)) stackSize = (int)temp;
    
                // Throw on attempt to pop from empty stack
            if (stackSize == 0) throw new InvalidOperationException(
                "Attempted to pop from an empty stack");

            // Remove the popped value, update the stack size
            int newStackSize = stackSize - 1;
            var targetKeyName = "_item" + newStackSize;
            var retrievedObject = state.Get(targetKeyName);
            state.Remove(targetKeyName);
            state.Set("_stackSize", newStackSize);

            // Return the popped object
            return retrievedObject;
        }

        [ActorMethod]
        public static object Size(IActorState state, object[] parameters)
        {
            // Grab the current size of the stack, return it
            int stackSize = 0; // default on first call
            object temp;
            if (state.TryGet("_stackSize", out temp)) stackSize = (int)temp;
            return stackSize;
        }

    To summarize, the actor would contain the following items in its state:

    • The key “_stackSize” whose value is the current size of the stack.
    • One key “_itemXXX” corresponding to each value pushed onto the stack.

     

    After the items “foo”, “bar” and “spam” had been pushed onto the stack, in that order, the actor’s state would look like this:

    Key             Value
    “_stackSize”    3
    “_item0”        “foo”
    “_item1”        “bar”
    “_item2”        “spam”

    A pop operation would yield the string “spam”, and leave the actor’s state looking like this:

    Key             Value
    “_stackSize”    2
    “_item0”        “foo”
    “_item1”        “bar”

    The Actor Runtime Client

    Once you have actors up and running in the Actor Runtime, you can connect to those actors and manipulate them via the FabricActorClient.  This is the FabricActorClient’s interface:

    public class FabricActorClient
        {
            public FabricActorClient(Uri fabricUri, Uri actorUri, bool useGateway);
            public bool AddAssembly(string assemblyName, byte[] assemblyBytes,
                bool replaceAllVersions = true);
            public Object CallMethod(string methodName, object[] parameters);
            public IDisposable Subscribe(string eventType,
                IObserver<string> eventObserver);
        }

    When constructing a FabricActorClient, you need to provide three parameters:

    • fabricUri: This is the URI associated with the Actor Runtime cluster on which your actor is running.  When in a local development environment, this is typically “net.tcp://127.0.0.1:9000”. When in an Azure environment, this would be something like “net.tcp://<yourDeployment>.cloudapp.net:9000”.
    • actorUri: This is the URI, within the ActorRuntime, that is associated with your actor.  This would be something like “fabric:/actor/list/list1” or “fabric:/actor/adhoc/myFirstActor”.
    • useGateway: Set this to false when connecting to an actor in a local development environment, true when connecting to an Azure-hosted actor.

    The AddAssembly method allows you to transport an assembly to the actor.  Typically that assembly would contain actor methods, effectively adding behavior to, or changing the existing behavior of, the actor.  Note that the “replaceAllVersions” parameter is currently ignored.

    What’s next?

    This is only the beginning of a journey. The code we are releasing today is an initial basic framework that can be used to build a richer set of functionality that will make ActorFx a valuable solution for storing and processing data in the cloud. For now, we are starting with a playground for developers who want to explore how this new approach to data storage and management in the cloud can offer a new way to look at old problems. We will keep you posted on this blog, and you are of course more than welcome to follow our open source projects on our MSOpenTech CodePlex page. See you there!

  • Interoperability @ Microsoft

    Windows Azure Authentication module for Drupal using WS-Federation

    • 0 Comments

    At Microsoft Open Technologies, Inc., we’re happy to share the news that single sign-on for Drupal Web sites hosted on Windows Azure with Windows Live IDs and/or Google IDs is now available.  Users can now log in to your Drupal site using Windows Azure’s WS-Federation-based login system with their Windows Live or Google ID. Simple Web Tokens (SWT) are supported, and SAML 2.0 support is currently planned but not yet available.

    Setup and configuration are easy via your Windows Azure account administrator UI.  Setup details are available via the Drupal project sandbox here, and full setup documentation is here.

    Under the hood, WS-Federation is used to identify and authenticate users and identity providers.  WS-Federation extends WS-Trust to provide a flexible federated identity architecture with clean separation between trust mechanisms (in this case, Windows Live and Google), security token formats (in this case, SWT), and the protocol for obtaining tokens. 

    The Windows Azure Authentication module acts as a relying party application to authenticate users. When downloaded, configured and enabled on your Drupal Web site, the module:

    -Makes a request via the Drupal Web site for supported identity providers

    -Displays a list of supported identity providers with Authentication links

    -Provides a return URL for authentication, parsing and validating the returned SWT

    -Logs the user in or directs the user to register

  • Interoperability @ Microsoft

    MS Open Tech Contributes Support for Windows ETW and Perf Counters to Node.js

    • 0 Comments

    Here’s the latest about Node.js on Windows. Last week, working closely with the Node.js core team, we checked into the open source Node.js master branch the code to add support for ETW and Performance Counters on Windows. These new features will be included in v0.10 when it is released. You can download the source code now and build Node.js on your machine if you want to try out the new functionality right away.

    Developers need advanced debugging and performance monitoring tools. After working to ensure that Node.js can run on Windows, our focus has been to provide instrumentation features that developers can use to monitor the execution of Node applications on Windows. For Windows developers this means having the ability to collect Event Tracing for Windows® (ETW) data and use Performance Counters to monitor application behavior at runtime. ETW is a general-purpose, high-speed tracing facility provided by the Windows operating system. To learn more about ETW, see the MSDN article Improve Debugging And Performance Tuning With ETW.

    ETW

    With ETW, Node developers can monitor the execution of Node applications and collect data on key metrics to investigate performance and other issues. One typical scenario for ETW is profiling the execution of the application to determine which functions are most expensive (i.e., the functions where the application spends the most time). Those functions are the ones developers should focus on in order to improve the overall performance of the application.

    In Node.js we added the following ETW events, representing some of the most interesting metrics to determine the health of the application while it is running in production:

    • NODE_HTTP_SERVER_REQUEST: node.js received a new HTTP Request
    • NODE_HTTP_SERVER_RESPONSE: node.js responded to an HTTP Request
    • NODE_HTTP_CLIENT_REQUEST: node.js made an HTTP request to a remote server
    • NODE_HTTP_CLIENT_RESPONSE: node.js received the response from an HTTP Request it made
    • NODE_NET_SERVER_CONNECTION: TCP socket open
    • NODE_NET_STREAM_END: TCP Socket close
    • NODE_GC_START: V8 starts a new GC
    • NODE_GC_DONE: V8 finished a GC

    For Node.js ETW events we also added some additional information about the JavaScript stack trace at the time the ETW event was generated. This is important information that the developer can use to determine what code was executing when the event was generated.

    Flamegraphs

    Most Node developers are familiar with Flamegraphs, which are a simple graphical representation of where time is spent during application execution. The following is an example of a Flamegraph generated using ETW.

    clip_image002

    For Windows developers we built the ETWFlamegraph tool (based on Node.js) that can parse .etl files, the log files that Windows generates when ETW events are collected. The tool can convert the .etl file to a format that can be used with the Flamegraph tool that Brendan Gregg created.

    To generate a Flamegraph using Brendan’s tool, follow the instructions listed on the ETWFlamegraph project page on GitHub. Most of the steps involve processing the ETW files so that symbols and other information are aggregated into a single file that can be used with the Flamegraph tool.

    ETW relies on a set of tools that are not installed by default. You’ll either need to install Visual Studio (for instance, Visual Studio 2012 installs the ETW tools by default) or you need to install the latest version of the Windows SDK tools. For Windows 7 the SDK can be found here.

    To capture stack traces:

    1. xperf -on Latency -stackwalk profile
    2. <run the scenario you want to profile, ex node.exe myapp.js>
    3. xperf -d perf.etl
    4. SET _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/downloads/symbols
    5. xperf -i perf.etl -o perf.csv -symbols

    To extract the stacks for the node.exe process and fold them into perf.csv.fold (which includes the function names that will be shown in the Flamegraph), run:

    node etlfold.js perf.csv node.exe

    (etlfold.js is the file found in the ETWFlamegraph project on GitHub.)

    Then run the flamegraph script (requires perl) to generate the svg output:

    flamegraph.pl perf.csv.fold > perf.svg

    If the Node ETW events for JavaScript symbols are available, the procedure becomes the following:

    1. xperf -start symbols -on NodeJS-ETW-provider -f symbols.etl -BufferSize 128
    2. xperf -on Latency -stackwalk profile
    3. run the scenario you want to profile.
    4. xperf -d perf.etl
    5. xperf -stop symbols
    6. SET _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/downloads/symbols
    7. xperf -merge perf.etl symbols.etl perfsym.etl
    8. xperf -i perfsym.etl -o perf.csv -symbols

    The remaining steps are the same as in the previous example.

    Note: for more advanced scenarios where you may want to have stack traces that include the Node.js core code executed at the time the event is generated, you need to include node.pdb (the debugging information file) in the symbol path so the ETW tools can resolve the symbols and include them in the Flamegraph.

    PerfCounters

    In addition to ETW, we also added Performance Counters (PerfCounters). Like ETW, Performance counters can be used to monitor critical metrics at runtime, the main differences being that they provide aggregated data and Windows provides a great tool to display them. The easiest way to work with PerfCounters is to use the Performance monitor console but PerfCounters are also used by System Center and other data center management applications. With PerfCounters a Node application can be monitored by those management applications, which are widely used for instrumentation of large cloud and enterprise-based applications.

    In Node.js we added the following performance counters, which mimic very closely the ETW events:

    • HTTP server requests: number of incoming HTTP requests
    • HTTP server responses: number of responses
    • HTTP client requests: number of HTTP requests generated by node to a remote destination
    • HTTP client responses: number of HTTP responses for requests generated by node
    • Active server connections: number of active connections
    • Network bytes sent: total bytes sent
    • Network bytes received: total bytes received
    • %Time in GC: % V8 time spent in GC
    • Pipe bytes sent: total bytes sent over Named Pipes.
    • Pipe bytes received: total bytes received over Named Pipes.

    All Node.js performance counters are registered in the system so they show up in the Performance Monitor console.

    clip_image003

    While the application is running, it’s easy to see what is happening through the Performance Monitor console:

    clip_image004

    The Performance Monitor console can also display performance data in a tabular form:

    clip_image005

    Collecting live performance data at runtime is an important capability for any production environment. With these new features we have given Node.js developers the ability to use a wide range of tools that are commonly used in the Windows platform to ensure an easier transition from development to production.

    More on this topic very soon, stay tuned.

    Claudio Caldato
    Principal Program Manager Lead
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    System Center VMM working with OVF community to improve virtual machine interoperability

    • 0 Comments

    Portability and interoperability of virtualization technologies across platforms using Linux and Windows virtual machines are important to Microsoft and to our customers.

    To that end, System Center VMM continues to gain valuable interoperability and portability experience using Open Virtualization Format (OVF) with their OVF Export/Import tool and partners such as Citrix and VMware.

    For more information, see System Center's most recent post from Cheng Wei and that of Citrix's technical architect Shishir Pardikar.

    Monica Martin
    Senior Program Manager
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Using LucidWorks on Windows Azure (Part 1 of a multi-part MS Open Tech series)

    • 0 Comments

    LucidWorks Search on Windows Azure delivers a high-performance search service based on Apache Lucene/Solr open source indexing and search technology. This service enables quick and easy provisioning of Lucene/Solr search functionality on Windows Azure without any need to manage and operate Lucene/Solr servers, and it supports pre-built connectors for various types of enterprise data, structured data, unstructured data and web sites.

    In June, we shared an overview of the LucidWorks Search service for Windows Azure. For this post, the first in a series, we’ll cover a few of the concepts you need to know to get the most out of the LucidWorks search service on Windows Azure. In future posts we’ll show you how to set up a LucidWorks service on Windows Azure and demonstrate how to integrate search with Web sites, unstructured data and structured data.

    Options for Developers

    Developers can add search to their existing Web Sites, or create a new Windows Azure Web site with search as a central function.  For example, in future posts in this series, we’ll create a simple Windows Azure web site that will use the LucidWorks search service to index and search the contents of other Web sites.  Then we’ll enable search from the same demo Web site against a set of unstructured data and MySQL structured data in other locations.

    Overview:  Documents, Fields, and Collections

    LucidWorks creates an index of unstructured and structured data.  Any individual item that is indexed and/or searched is called a Document.  Documents can be a row in a structured data source or a file in an unstructured data source, or anything else that Solr/Lucene understands.

    An individual item in a Document is called a Field. The same concept applies: a field can be a column of data in a structured source or a word in an unstructured source, or anything in between. Fields are generally atomic; in other words, they cannot be broken down into smaller items.

    LucidWorks calls groups of Documents that can be managed and searched independently of each other Collections. Searching, by default, is on one collection at a time, but of course a developer can programmatically create search functionality that returns results for more than one Collection.
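    To make the Collection model concrete, here is a minimal sketch in Python of how a client might target one collection per query, and fan a query out across several collections in application code. The host name, URL layout, and parameter names are illustrative assumptions in the Solr style, not the documented LucidWorks endpoint.

```python
from urllib.parse import urlencode

# Hypothetical Solr-style endpoint layout; the actual host, path, and
# parameter names depend on your LucidWorks deployment.
BASE = "https://example-search.cloudapp.net/solr"

def search_url(collection, query, rows=10):
    """Build a query URL scoped to a single collection (the default scope)."""
    params = urlencode({"q": query, "rows": rows, "wt": "json"})
    return f"{BASE}/{collection}/select?{params}"

def multi_collection_urls(collections, query):
    """Searching several collections means issuing one query per collection
    and merging the results in application code."""
    return [search_url(c, query) for c in collections]

print(search_url("products", "solar panels"))
```

    A site that offers unified search over, say, a web-crawl collection and a database collection would issue both queries and interleave the results itself.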

    Security via Collections and Filters

    Collections are a great way to restrict access for a group of users, controlled through Windows Azure Web sites and by LucidWorks.  In addition, LucidWorks admins can create Filters inside a Collection.  User identity can be integrated with an existing LDAP directory, or managed programmatically via the API.

    Additional LucidWorks Features

    LucidWorks adds value to Solr/Lucene with some very useful UI enhancements that can be enabled without programming. 

    Persistent Queries and Alerts, Auto-Complete, Spell-Check, and Similar Terms

    Users can create their own persistent queries.  Search terms are automatically monitored and Alerts are delivered to a specified email address using the Name of the alert as the subject line. You can also specify how often the persistent query should check for new data and how often alerts are generated.

    Search term Typeahead can be enabled via LucidWorks’ auto-complete functionality. Auto-complete tracks the characters the user has already entered and displays terms that start with those characters.
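    The prefix-matching behavior described above can be sketched in a few lines of Python. This is a conceptual illustration of typeahead, not LucidWorks' implementation, which works against the index rather than an in-memory list.

```python
def autocomplete(prefix, indexed_terms, limit=5):
    """Return indexed terms that start with what the user has typed so far."""
    p = prefix.lower()
    matches = sorted(t for t in indexed_terms if t.lower().startswith(p))
    return matches[:limit]

terms = ["search", "searching", "server", "service", "solr"]
print(autocomplete("se", terms))  # ['search', 'searching', 'server', 'service']
```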

    When results are displayed, LucidWorks can spell-check queries and offer alternative terms based on similar spellings of words and synonyms in the query.

    Stopwords

    Search engines use Stopwords to remove common words such as “a”, “and”, or “for” from queries and query indexes, because these words add no value to searches.  LucidWorks has an editable list of Stopwords that is a great start toward increasing search relevance.
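    A minimal sketch of stopword removal, assuming a small hand-picked word list; real search engines apply this filtering at both index time and query time:

```python
# Tiny illustrative stopword list; LucidWorks ships a much longer, editable one.
STOPWORDS = {"a", "an", "and", "for", "the", "of"}

def remove_stopwords(query):
    """Drop common words that add no value before the query hits the index."""
    return [w for w in query.lower().split() if w not in STOPWORDS]

print(remove_stopwords("a guide for searching the web"))
# → ['guide', 'searching', 'web']
```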

    Increasing Relevance with Click Scoring

    Click scoring tracks common queries and which results users most often select for those query terms, then adjusts relevance scores based on that comparison.  Results with higher relevance are placed higher in search result rankings, based on user activity.
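    The idea can be sketched as blending a base text-match score with observed click counts. The 0.1 weight and the additive formula here are arbitrary illustrations, not LucidWorks' actual scoring model:

```python
from collections import Counter

def rank_with_clicks(base_scores, clicks):
    """Re-rank documents for a query by boosting base text-match scores
    with how often users clicked each document for that query.
    The 0.1 weight is an arbitrary illustration, not a documented formula."""
    counts = Counter(clicks)
    boosted = {doc: score + 0.1 * counts[doc] for doc, score in base_scores.items()}
    return sorted(boosted, key=boosted.get, reverse=True)

scores = {"doc1": 1.0, "doc2": 0.9}
clicks = ["doc2", "doc2", "doc2"]  # users keep picking doc2 for this query
print(rank_with_clicks(scores, clicks))  # doc2 now outranks doc1
```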

    LucidWorks on Windows Azure – Easy Deployment

    The best part of LucidWorks is how easily enterprise search can be added as a service.  In our next LucidWorks blog post we’ll cover how to quickly get up and running with enterprise search by adding a LucidWorks service to an existing Windows Azure Web site.

  • Interoperability @ Microsoft

    Sharing proposals for negotiation and flow control for HTTP/2.0 at IETF 85

    • 0 Comments

    From:

    Gabriel Montenegro
    Principal Software Development Engineer, Microsoft Corporation

    Brian Raymor
    Senior Program Manager, Microsoft Open Technologies, Inc.

    Rob Trace
    Senior Program Manager Lead, Microsoft Corporation

     

    We just returned from the IETF 85 meeting in Atlanta, where the HTTPbis working group held face to face meetings to begin work on HTTP/2.0. As outlined in our previous IETF 84 report, there are seven key technical areas where consensus has not yet emerged or the initial draft did not specify clear behavior for an interoperable implementation. The IETF 85 meeting focused on three of these areas: server push, negotiation, and flow control.

    Discussion on server push was deferred until more data is available.

    Negotiation

    As noted in the HTTPbis charter, the working group needs to explicitly consider:

    A negotiation mechanism that is capable of not only choosing between HTTP/1.x and HTTP/2.x, but also for bindings of HTTP URLs to other transports (for example).

    To move the discussion forward, Microsoft presented Upgrade-based Negotiation for HTTP/2.0 at the HTTPbis meeting. This presentation is based on our draft proposal which allows HTTP/2.0 to be negotiated either in the clear or over TLS. Further details on its design and MS Open Tech related HTML5 Labs prototype are available in More HTTP/2.0 Prototyping: a Suggested Approach to the Protocol Upgrade.

    The working group consensus was “to pursue this path” and gather more data on its success in real world deployments when the connection is not secure. Drafts for alternatives that enhance or bypass the Upgrade approach were also solicited.

    Flow Control

    There has been limited discussion in the HTTPbis working group on flow control. Microsoft presented Flow Control Principles for HTTP 2.0 to build consensus around the rules and guidelines for future Flow Control prototypes and experimentation. Based on the response to the presentation, Mark Nottingham, the HTTPbis chair, requested a draft proposal to be submitted which incorporated suggestions from other participants. Microsoft submitted the first version of HTTP 2.0 Principles for Flow Control with contributions from Ericsson. Further versions with additional contributors are expected.

    Conclusion

    We were very pleased with the progress of the discussions as reflected in the audio and the draft meeting minutes.

    As Lao Tzu wrote, “A journey of a thousand miles begins with a single step.” IETF 85 was the first step towards the proposed completion date of November 2014. Next steps are a potential interim face-to-face meeting in January or February 2013 and then IETF 86 in March 2013. We’re looking forward to contributing and participating in these sessions.

    Gabriel Montenegro, Brian Raymor, and Rob Trace

  • Interoperability @ Microsoft

    OData at Information on Demand 2012

    • 0 Comments

    I attended IBM’s Information on Demand conference two weeks ago, where I had the opportunity to talk to people about OData (Open Data Protocol). Microsoft and IBM are collaborating on the development of the OData standard in the OASIS OData Technical Committee, and for this conference we were demonstrating a simple OData feed on a DB2 database, consumed by a variety of client applications.

    Here’s a high-level view of the architecture of the demo app:

    OData-diagram

    For this demo, we deployed an OData service on Windows Azure that exposes a few entities from a DB2 database running on IBM’s cloud platform. By leveraging WCF Data Services in Visual Studio, we were able to create this OData feed in a matter of minutes.

    Here’s a screencast that shows the steps involved in creating the demo service and consuming it from various client devices and applications:

    For more information about using OData with DB2 or Informix, see “Use OData with IBM DB2 and Informix” on the IBM DeveloperWorks site.

    The growing OData ecosystem is enabling a variety of new scenarios to deliver open data for the open web, and it was great to have the opportunity to learn from so many perspectives this week! Standardizing a URI query syntax and semantics means that data providers and data consumers can focus on innovative ways to add value by combining disparate data sources, and assures interoperability between a wide variety of data producers and consumers. To learn more about OData, including the ecosystem, developer tools, and how you can get involved, see this blog post.
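    Standardizing the URI query syntax means any client can compose a query against any OData producer in the same way. Here is a small sketch that assembles a query URI from the protocol's $-prefixed system query options; the service root and entity set names are hypothetical:

```python
from urllib.parse import quote

def odata_query(service_root, entity_set, filter_expr=None, top=None):
    """Compose an OData query URI from standard system query options.
    $filter and $top are defined by the OData protocol; the host and
    entity set below are illustrative only."""
    opts = []
    if filter_expr:
        opts.append("$filter=" + quote(filter_expr))
    if top is not None:
        opts.append("$top=" + str(top))
    url = f"{service_root}/{entity_set}"
    if opts:
        url += "?" + "&".join(opts)
    return url

print(odata_query("https://example.cloudapp.net/odata.svc", "Customers",
                  filter_expr="Country eq 'US'", top=5))
```

    Because the syntax is uniform, the same construction works whether the feed is backed by DB2, SQL Server, or any other store behind an OData producer.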

    Special thanks to Susan Malaika, Brent Gross, and John Gera of IBM for all of their help with putting together the demo and their support at the booth throughout the conference. We’re looking forward to continued collaboration with our colleagues at IBM and the many other organizations involved in the ongoing standardization of OData!

  • Interoperability @ Microsoft

    MS Open Tech Open Sources Rx (Reactive Extensions) – a Cure for Asynchronous Data Streams in Cloud Programming

    • 9 Comments

    From:
    Erik Meijer, Architect, Microsoft Corp.
    Claudio Caldato, Lead Program Manager, Microsoft Open Technologies, Inc.

    Updated with a quote from Ferranti Computer Systems NV

    Updated: added quotes from Netflix and BlueMountain Capital Management

    If you are a developer who writes asynchronous code for composite applications in the cloud, you know what we are talking about. For everybody else: Reactive Extensions (Rx) is a set of libraries that makes asynchronous programming a lot easier. As Dave Sexton describes it, “If asynchronous spaghetti code were a disease, Rx is the cure.”

    Reactive Extensions (Rx) is a programming model that allows developers to glue together asynchronous data streams. This is particularly useful in cloud programming because it helps create a common interface for writing applications that consume data from diverse sources, e.g., stock quotes, Tweets, computer events, Web service requests.
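    To illustrate the push-based idea, here is a deliberately tiny Python sketch of an observable stream with composable operators. This is a conceptual illustration of what Rx enables, not the real Rx API (which is far richer, with schedulers, error channels, and completion semantics):

```python
class Observable:
    """A minimal push-based stream: observers subscribe, the source pushes.
    A conceptual sketch of the idea behind Rx, not its real API."""
    def __init__(self):
        self._observers = []

    def subscribe(self, on_next):
        self._observers.append(on_next)

    def push(self, value):
        for on_next in self._observers:
            on_next(value)

    def map(self, fn):
        out = Observable()
        self.subscribe(lambda v: out.push(fn(v)))
        return out

    def filter(self, pred):
        out = Observable()
        self.subscribe(lambda v: out.push(v) if pred(v) else None)
        return out

# The same compositional pipeline works regardless of where values come
# from: stock quotes, tweets, computer events, or web service responses.
quotes = Observable()
received = []
quotes.filter(lambda q: q > 100).map(lambda q: round(q * 0.9, 2)).subscribe(received.append)
for q in [95, 120, 101]:
    quotes.push(q)
print(received)  # [108.0, 90.9]
```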

    Today, Microsoft Open Technologies, Inc., is open sourcing Rx. Its source code is now hosted on CodePlex to increase the community of developers seeking a more consistent interface to program against, and one that works across several development languages. The goal is to expand the number of frameworks and applications that use Rx in order to achieve better interoperability across devices and the cloud.

    Rx was developed by Microsoft Corp. architect Erik Meijer and his team, and is currently used on products in various divisions at Microsoft. Microsoft decided to transfer the project to MS Open Tech in order to capitalize on MS Open Tech’s best practices with open development.

    There are applications that you probably touch every day that are using Rx under the hood. A great example is GitHub for Windows.

    According to Paul Betts at GitHub, "GitHub for Windows uses the Reactive Extensions for almost everything it does, including network requests, UI events, managing child processes (git.exe). Using Rx and ReactiveUI, we've written a fast, nearly 100% asynchronous, responsive application, while still having 100% deterministic, reliable unit tests. The desktop developers at GitHub loved Rx so much, that the Mac team created their own version of Rx and ReactiveUI, called ReactiveCocoa, and are now using it on the Mac to obtain similar benefits."

    And Scott Weinstein with Lab49 adds, “Rx has proved to be a key technology in many of our projects. Providing a universal data access interface makes it possible to use the same LINQ compositional transforms over all data whether it’s UI based mouse movements, historical trade data, or streaming market data send over a web socket. And time based LINQ operators, with an abstracted notion of time make it quite easy to code and unit test complex logic.”

    Netflix Senior Software Developer Jafar Husain explained why they like Rx. "Rx dramatically simplified our startup flow and introduced new opportunities for performance improvements. We were so impressed by its versatility and quality, we used it as the basis for our new data access platform. Today we're using both the Javascript and .NET versions of Rx in our clients and the technology is required learning for new members of the team."

    And Howard Mansell, Quantitative Strategist with BlueMountain Capital Management added, “We are very pleased that Microsoft are Open-Sourcing the Reactive Extensions for .NET. This will allow users to better reason about performance and optimize their particular use cases, which is critical for performance and latency sensitive applications such as real-time financial analysis.”

    From Belgium, Guido Van de Velde, Director of the MECOMS Product Organisation at Ferranti Computer Systems NV, explains how Rx is important for their global company: “Ferranti uses Rx in its vertical solution for the utility market, MECOMS™, to process and manage all data and events from the Smart Grid. Its architecture allows the setup of data processing pipelines which can scale and deliver excellent performance. Performance testing together with Microsoft showed that this architecture supports up to hundreds of millions of smart meters and other sensors, running on commodity hardware. Thanks to Rx we can focus on component functionalities and don’t have to worry about interfaces and connections between the different components, saving significant development time.”

    Part of the Rx development team will be on assignment with the MS Open Tech Hub engineering program to accelerate the open development of the Rx project and to collaborate with open source communities. Erik will continue to drive the strategic directions of the technology and leverage MS Open Tech Hub engineering resources to update and improve the Rx libraries. With the community contribution we want to see Rx be adopted by other platforms. Our goal is to build an open ecosystem of Rx-compliant libraries that will help developers tackle the complexity of asynchronous programming and improve interoperability.

    We are also happy to see that our decision is welcomed by open source developers.

    “Open sourcing Rx just makes sense. My hope is that we’ll see a couple of virtuous side-effects of this decision. Most likely will be faster releases for bug fixes and performance improvements, but the ability to understand the inner workings of the Rx code should encourage the creation of additional tools and Rx providers to remote data sources,” said Lab 49’s Scott Weinstein.

    According to Dave Sexton, http://davesexton.com/blog, “It’s a solid library built around core principles that hides much of the complexity of controlling and coordinating asynchrony within any kind of application. Opening it will help to lower the learning curve and increase the adoption rate of this amazing library, enabling developers to create complex asynchronous queries with relative ease and without any spaghetti code left over.”

    Starting today, the following libraries are available on CodePlex:

    • Reactive Extensions
      • Rx.NET: The Reactive Extensions (Rx) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators.
      • RxJS: The Reactive Extensions for JavaScript (RxJS) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators in JavaScript which can target both the browser and Node.js.
      • Rx++: The Reactive Extensions for Native (RxC) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators in both C and C++.
    • Interactive Extensions
      • Ix: The Interactive Extensions (Ix) is a .NET library which extends LINQ to Objects to provide many of the operators available in Rx but targeted for IEnumerable<T>.
      • IxJS: An implementation of LINQ to Objects and the Interactive Extensions (Ix) in JavaScript.
      • Ix++: An implementation of LINQ for Native Developers in C++.
    • Bindings
      • Tx: a set of code samples showing how to use LINQ to events, such as real-time standing queries and queries on past history from trace and log files, which targets ETW, Windows Event Logs and SQL Server Extended Events.
      • LINQ2Charts: an example for Rx bindings. Similar to existing APIs like LINQ to XML, it allows developers to use LINQ to create/change/update charts in an easy way. We would love to see more Rx bindings like this one.

    With these libraries we are giving developers open access to both push-based and pull-based data via LINQ in Microsoft’s three fundamental programming paradigms (native, JScript and Managed code).

    We look forward to seeing you guys use the library, share your thoughts and contribute to the evolution of this fantastic technology built for all you developers.

  • Interoperability @ Microsoft

    Advanced Message Queuing Protocol (AMQP) 1.0 approved as an OASIS Standard

    • 0 Comments

    We are very excited to share the news that AMQP 1.0 was approved as an OASIS Standard on October 29, 2012.

    AMQP 1.0 libraries are available for a variety of languages and platforms. The interest amongst users is growing. Support for AMQP 1.0 is anticipated in various message-oriented middleware implementations. AMQP 1.0 is the protocol of choice for open and interoperable messaging from the client all the way to the cloud!

    AMQP 1.0 as an open, interoperable, wire level messaging protocol enables interoperability between compliant clients and brokers. Applications can achieve full-fidelity message exchange between components built using different languages and frameworks and running on different operating systems. Further, as an inherently efficient application layer binary protocol, AMQP 1.0 enables new possibilities in messaging that scale from the client to the cloud.

    IIT Software GmbH, INETCO Systems Ltd., Microsoft, Red Hat and StormMQ have publicly posted statements about their use of AMQP 1.0 to the OASIS AMQP Technical Committee.

    Several AMQP 1.0 client libraries are currently available:

    1. AMQP 1.0 JMS library for Java from Apache Qpid

    2. AMQP 1.0 library for Java from SwiftMQ (IIT Software GmbH)

    3. Proton AMQP 1.0 library for C (including PHP and Python bindings) from Apache Qpid (Linux only today)

    Several other AMQP 1.0 client libraries are being developed. For example, the Apache Qpid community is porting the Proton AMQP 1.0 library to Windows. AMQP 1.0 client libraries for other languages, such as JavaScript and Ruby, are anticipated in the next several months.

    Windows Azure Toolkit for Eclipse, November 2012 Preview (version 1.8.0) now includes a new component “Package for Apache Qpid Client Libraries for JMS (by MS Open Tech)” which makes it easier for Java developers who use Eclipse to develop Java applications that use AMQP 1.0 for messaging.

    Stay tuned for more information as more libraries and implementations become available!

    Thanks,
    Ram Jeyaraman (Co-chair of OASIS AMQP Technical Committee and Senior Program Manager, Microsoft Open Technologies, Inc., a subsidiary of Microsoft Corporation)
    Doug Mahugh (Senior Technical Evangelist, Microsoft Open Technologies, Inc., a subsidiary of Microsoft Corporation)

    Additional Information

    AMQP Member Section Site: http://www.amqp.org

    OASIS AMQP Technical Committee: http://www.oasis-open.org/committees/amqp

  • Interoperability @ Microsoft

    Windows Azure Plugin for Eclipse with Java – November 2012 Preview

    • 0 Comments

    I’m pleased to announce the availability of a major update to our Eclipse tooling, the “Windows Azure Toolkit for Eclipse, November 2012 Preview (version 1.8.0)”. This release accompanies the release of the Windows Azure SDK v1.8, as well as the AMQP 1.0 messaging protocol support in Windows Azure Service Bus, and exposes a number of related features recently enabled by Windows Azure.

    The key highlights of this release include:

    a) The updated “Windows Azure Plugin for Eclipse with Java” supports using Windows Server 2012 as the target operating system in the cloud

    b) The plugin also now allows you to easily configure Windows Azure Caching, so you can use a memcached-compatible client for co-located, in-memory caching scenarios

    c) The toolkit includes a new component: “Package for Apache Qpid Client Libraries for JMS (by MS Open Tech)”, which is a distribution of the latest client libraries from Apache supporting AMQP 1.0-based messaging recently enabled by Windows Azure Service Bus

    d) Plus a number of additional customer-feedback driven enhancements and bug fixes

    To learn more, see our latest documentation.

    Martin Sawicki
    Principal Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    More HTTP/2.0 Prototyping: a Suggested Approach to the Protocol Upgrade

    • 0 Comments

    The activities of IETF’s HTTPbis working group continue next week at IETF 85 in Atlanta, marking another step in the path of HTTP/2.0 to Proposed Standard. Fruitful discussions are happening on many facets of the specification, filling the gaps wherever no obvious consensus had yet emerged or the initial draft did not clearly specify a given behavior that will be essential for a working, interoperable implementation.

    In an earlier blog post, we called out seven specific areas where the group will need to do additional work. Gabriel Montenegro and Willy Tarreau have now submitted a new proposal which describes a suggested approach for Negotiation in HTTP/2.0, in order to move the discussion forward on one of those key subjects. As it is, the proposal can already be used to negotiate HTTP 2.0 either in the clear or over TLS. Naturally, this proposal is a starting point and will undergo revisions going forward based on working group discussions (e.g., to further optimize the handshake).

    As outlined in the proposal itself, the mechanism is very simple. It leverages the Upgrade header defined in HTTP/1.1 and already in use in WebSocket. A client who is uncertain about whether the server supports HTTP/2.0 will initiate a request using HTTP/1.1 and include an upgrade header:

    GET /default.htm HTTP/1.1

    Host: server.example.com

    Connection: Upgrade

    Upgrade: HTTP/2.0

    At this point, if the server supports HTTP/1.1 only, it will just ignore the upgrade request and respond normally for an HTTP/1.1 connection:

    HTTP/1.1 200 OK

    Content-length: 243

    Content-type: text/html

    ...

    If instead the server does support HTTP/2.0, it will upgrade the connection and send the first HTTP/2.0 frame, with the important benefit of achieving that without any additional roundtrips.

           HTTP/1.1 101 Switching Protocols

           Connection: Upgrade

           Upgrade: HTTP/2.0

     

           [ HTTP/2.0 frame ]
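    The client-side decision can be sketched by checking the status line and headers of the server's reply: a 101 with the matching Connection and Upgrade headers means the connection switched to HTTP/2.0, while a normal response means the server ignored the upgrade and stayed on HTTP/1.1. This is a sketch of the negotiation logic only, not a full client:

```python
def upgraded(status_line, headers):
    """Return True if the server accepted the HTTP/2.0 upgrade
    (101 Switching Protocols with matching headers), False if it
    ignored the Upgrade header and answered as HTTP/1.1."""
    code = status_line.split()[1]
    return (code == "101"
            and headers.get("Connection", "").lower() == "upgrade"
            and headers.get("Upgrade", "") == "HTTP/2.0")

# A server that only speaks HTTP/1.1 just answers normally:
print(upgraded("HTTP/1.1 200 OK", {"Content-Type": "text/html"}))        # False
# A server that supports HTTP/2.0 switches protocols, with no extra roundtrip:
print(upgraded("HTTP/1.1 101 Switching Protocols",
               {"Connection": "Upgrade", "Upgrade": "HTTP/2.0"}))        # True
```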

    We have implemented this behavior and updated the prototype which we originally released back in May. Please download the latest version, check it out and let us know what you think: we look forward to hearing your feedback. And stay tuned for additional, completely redesigned prototypes coming soon!

    Adalberto Foresti
    Principal Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    //build/ today with open source frameworks on Windows Phone 8

    • 9 Comments

    Added support for Windows Phone 8 in Apache Cordova, Sencha Touch, Cocos2D, Ogre3D and other open source frameworks.

    The cool news for developers keeps on rolling at //build/ 2012. We’re thrilled to relay the announcements from a broad range of open source communities and other partners that their support for Windows Phone 8 goes live on “Day 1” of SDK availability. There are several open source frameworks to choose from today.

    The Windows Phone team and Microsoft Open Technologies, Inc. engaged early in the process with open source communities to enable Windows Phone 8 in these popular open source and cross platform frameworks. We provided technical support and information, gave early access to the tools and MS Open Tech contributed code to the Cocos 2D and Ogre3D projects.

    The market opportunity just got bigger and easier for all developers with this news. We believe it is important that developers have choices and can reuse their skills and code to build Windows Phone 8 applications.

    This added support for Windows Phone 8 in diverse open source and cross platform frameworks was made possible thanks to new features in Windows Phone 8: native C++ programming and Internet Explorer 10 expanded HTML5 support.

    Developers who have applications based on these frameworks can publish them to the Windows Phone Store in record time. And this applies to various domains, like gaming with C++ or C# frameworks such as Cocos 2D, Ogre 3D and SharpDX, or cross platform development with HTML5 and JavaScript leveraging Apache Cordova, Trigger.io, Sencha Touch or jQuery Mobile. Developers using popular open source tools and frameworks such as SQLite or GalaSoft MVVM toolkit will also be able to reuse their code and skills.

    “Nearly 50% of Sencha customers have expressed interest in building apps for Windows Phone 8 in the next 6-12 months. Supporting Windows Phone 8 is a natural choice for Sencha to enable our customers to build universal apps for mobile devices.” - Abraham Elias, CTO Sencha Inc.

    Jay Garcia, CTO at Modus Create, and his team are developing a mobile companion application for the game Diablo III:
    “Using Blizzard’s Diablo III web APIs in combination with PhoneGap and Sencha Touch, we were able to hugely increase the game’s fan base because we could build and publish our application to both iOS and Android with the same HTML5 and JavaScript code base. It literally took us a few days to get the same code to run on Windows Phone 8 thanks to this newly added support.”

    You can read more about Modus Create work to migrate their application to Windows Phone 8 on their blog post.

    Craig Walker, CTO at Xero, commented on the new support for Windows Phone 8 in Sencha Touch:
    “Using web standards-based technologies such as Sencha Touch and Apache Cordova for our mobile accounting software application Xero Touch helped us target a wide range of platforms so our customers could focus on their business, not the underlying technology. Support for these technologies in Windows Phone 8 tools made it an easy Xero Touch build for our dev team, and a smart addition for our customers who need flexibility managing their business on the go.”

    Microsoft Open Technologies, Inc., supported the jQuery Mobile and Sencha Touch communities to deliver themes that will allow developers to integrate their applications into the Windows Phone 8 user experience.

    As Craig Walker from Xero stresses, it is crucial for developers to be able to deliver a seamless consumer experience integrated into the platform. You can see below a video demonstrating the Sencha Touch theme for Windows Phone 8.

    Brett Nagy, Technical Director at Microgroove, and his team got a chance to try the Windows Phone 8 tools and the early Sencha Touch support for Windows Phone 8:
    “Our apps have been making companies more productive for well over a decade. Sencha Touch support for Windows Phone 8 has made our engineering team more productive by allowing us to easily re-use code from one mobile platform to another.
    Within a couple of hours, we had a basic Windows Phone 8 themed version of an existing app without requiring any changes to its JavaScript codebase. Now that producing builds that run on Windows Phone 8 is part of our regular workflow, the next step is to build out functionality that really takes advantage of that platform. Knowing that we can do that in HTML + JS allows us to extend our reach beyond iOS and Android with minimal change to our project timelines.”

    For developers using jQuery Mobile, Sergey Grebnov from Akvelon, who previously published a jQuery Mobile theme for Windows Phone 7.5, is releasing a new jQuery Mobile theme for Windows Phone 8. You can see below a short demo of how to apply the theme to a Windows Phone 8 application.

    This is the first time so many open source and cross platform frameworks are on board with Windows Phone on the first day of a new SDK version release. It is great to see how much communities are eager to work with Windows Phone.

    And today is just the beginning. We want to continue this effort to help open source developers enable their frameworks on Windows Phone 8. It’s important for developers to reuse their skills, expand the market opportunity to make money on our devices, and build the next generation of apps. Imagine the possibilities.

    Go check out the various frameworks and let us know if you think of other ones you would love to be able to use to build Windows Phone 8 applications.

  • Interoperability @ Microsoft

    Simplifying Big Data Interop – Apache Hadoop on Windows Server & Windows Azure

    • 0 Comments

    As a proud member of the Apache Software Foundation, it’s always great to see the growth and adoption of Apache community projects. The Apache Hadoop project is a prime example. Last year I blogged about how Microsoft was engaging with this vibrant community, Microsoft, Hadoop and Big Data. Today, I’m pleased to relay the news about increased interoperability capabilities for Apache Hadoop on the Windows Server and Windows Azure platforms and an expanded Microsoft partnership with Hortonworks.

    Microsoft Technical Fellow David Campbell announced today new previews of Windows Azure HDInsight Service and Microsoft HDInsight Server, the company’s Hadoop-based solutions for Windows Azure and Windows Server.

    Here’s what Dave had to say in the official news about how this partnership is simplifying big data in the enterprise.

    “Big Data should provide answers for business, not complexity for IT. Providing Hadoop compatibility on Windows Server and Azure dramatically lowers the barriers to setup and deployment and enables customers to pull insights from any data, any size, on-premises or in the cloud.”

    Dave also outlined how the Hortonworks partnership will give customers access to an enterprise-ready distribution of Hadoop with the newly released solutions.

    And here’s what Hortonworks CEO Rob Bearden said about this expanded Microsoft collaboration.

    “Hortonworks is the only provider of Apache Hadoop that ensures a 100% open source platform. Our expanded partnership with Microsoft empowers customers to build and deploy on platforms that are fully compatible with Apache Hadoop.”

    An interesting part of my open source community role at MS Open Tech is meeting with customers and trying to better understand their needs for interoperable solutions. Enhancing our products with new Interop capabilities helps reduce the cost and complexity of running mixed IT environments. Today’s news helps simplify deployment of Hadoop-based solutions and allows customers to use Microsoft business intelligence tools to extract insights from big data.

  • Interoperability @ Microsoft

    Interoperability Elements of a Cloud Platform: Technical Examples

    • 0 Comments

    Two years ago we shared our view on Interoperability Elements of a Cloud Platform. Back then we talked to customers and developers and came out with an overview of an open and interoperable cloud, based on four distinct elements: Data Portability, Standards, Ease of Migration and Deployment, and Developer Choice. Since then, we have been laser focused on the quest for an interoperable and flexible cloud platform that would enable heterogeneous workloads.

    Windows Azure is committed to openness across the entire application stack, with service APIs and service management APIs exposed as RESTful endpoints that can be used from any language or runtime, key services such as Caching, Service Bus, and Identity that can be hosted either on-premises or in the cloud, and open source SDKs for popular languages that give developers a choice of tools for building cloud-based applications and services.

    In this blog post I’ll recap some of the most important news of the last year in each of these areas. As I mentioned in a blog post earlier this year, when a journey reaches an important milestone it’s good to look back and think about the road so far. We’ve come even farther down that road now, and here are many technical examples of what has been accomplished.

    Data Portability

    When customers create data in an on-premises application, they have a high level of confidence that they have control over the data stored in that environment. Customers should have a comparable level of control over their data when they are using cloud platforms. Here are some examples of how Windows Azure supports Data Portability:

    Standards

    Cloud platforms should reuse existing and commonly used standards when it makes sense to do so. If existing standards are not sufficient, new standards may be created. Here are some of the ways we’re working to support standards for cloud computing:

    Ease of Migration and Deployment

    Cloud platforms should provide a secure migration path that preserves existing investments and enables co-existence between on-premises software and cloud services. Here are some examples of ease of migration and deployment on Windows Azure:

    Developer Choice

    Cloud platforms should enable developer choice in tools, languages and runtimes to facilitate the development of interoperable customer solutions. This approach will also broaden the community of developers that write for a given cloud platform and therefore enhance the quality of services that the platform will offer to customers. Here are some of the ways that Windows Azure is delivering on developer choice:

    It’s exciting to see how far we’ve come, and we still have much to do as well. The Interoperability Elements of a Cloud Platform originally came out of discussions with customers, partners, and developers about what they need from an interoperable cloud, and as those discussions continue, we will continue to deliver on these important elements!

    Gianugo Rabellino
    Senior Director, Open Source Communities
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    W3C’s Web Platform Docs – Your “Go To” for All Things Web Development

    • 0 Comments

    From:

    Jean Paoli, President, Microsoft Open Technologies, Inc.

    Michael Champion, Senior Program Manager, Microsoft Open Technologies, Inc.

     

    We are thrilled to share the news that the W3C announced the alpha release of Web Platform Docs. Adobe, Facebook, Google, HP, Microsoft, Mozilla, Nokia and Opera are among the stewards of the project. Together, we worked with the W3C on creating this wiki-styled site and contributed thousands of web documentation articles.

    W3C’s Web Platform Docs is a community site designed to be a comprehensive and authoritative resource for developers to help them build modern web applications that will work across browsers and devices, and share their own expertise, which will further the goal of web platform interoperability and same markup.

    Currently, developers need to do a lot of research about what technologies work on which platforms when building websites and applications with HTML5, CSS and other open web standards. It’s costly and inefficient for them to spend precious hours consulting multiple resources to understand how to employ web technologies in a way that functions across browsers, operating systems and devices. W3C’s Web Platform Docs addresses these issues by offering a single “go-to” source for web developer documentation, and providing a site that the community can continually edit and improve.

    Microsoft Open Technologies, Inc., represented by Michael Champion, and the Microsoft Internet Explorer team, represented by Eliot Graff, have been involved from the very inception of the project, as we strongly believe this community site is key in the journey to an interoperable web platform and same markup.

    As an initial contribution, Microsoft donated more than 3,200 topics from MSDN and will continue to add content moving forward. This is an open community – web developers can get an account at webplatform.org to make their own contribution – fill in gaps, correct errors, and flesh out the documentation with sample code to explain how to use the web platform to its full potential.

    So what does this mean for you, the developer?

    You will save time and resources, knowing you can consult with confidence a community-curated site to learn about standards, innovations and best practices including:

    • What technologies really interoperate across platforms and devices;
    • The standardization status of each technology specification;
    • The stability and implementation status of specific features in actual browsers.

    W3C’s Web Platform Docs is an open site where anyone can become a member and contribute. Microsoft and the other founding stewards helped boot up the wiki (and will continue to contribute new content), but YOU, the developer community, own the site. W3C convened the community and will administer webplatform.org in the future, but you don’t have to join W3C to participate in this effort.

    All materials on W3C’s Web Platform Docs are freely available and licensed to foster sharing and reuse.

    Begin simplifying your web development and check out W3C’s Web Platform Docs today. Better still, sign up for an account, find a topic of interest, and contribute your expertise!

  • Interoperability @ Microsoft

    New open source options for Windows Azure web sites: MediaWiki and phpBB

    • 1 Comments

    Need to set up a powerful wiki quickly? Looking for an open source bulletin board solution for your Windows Azure Web Site? Today, we are announcing the availability of MediaWiki and phpBB in the Windows Azure Web Applications gallery. MediaWiki is the open source software that powers Wikipedia and other large-scale wiki projects, and phpBB is the most widely used open source bulletin board system in the world.

    You can deploy a free Windows Azure Web Site running MediaWiki or phpBB with just a few mouse clicks. Sign up for the free trial if you don’t already have a Windows Azure subscription, and then select the option to create a new web site from the gallery.

    This will take you to a screen where you can select from a list of applications to be automatically installed by Windows Azure on the new web site you’re creating. You’ll see many popular open source packages there, including MediaWiki and phpBB. Select the option you’d like, and then you’ll be prompted for a few configuration details such as the URL for your web site and database settings for the application:

    Fill in the required fields, click the Next button, and you’ll soon have a running ready-to-use web site that is hosting your selected application.

    The Windows Web App Gallery also includes MediaWiki and phpBB, so you can deploy either of them on-premises as well. See the MediaWiki and phpBB entries in the gallery.

    The MediaWiki project now includes the Windows Azure Storage extensions that allow you to store media files on Windows Azure. You can use this functionality for MediaWiki sites deployed to Windows Azure Web Sites, or for other deployments as well. More information can be found on the MediaWiki wiki.

    A big thanks to everyone who helped to make MediaWiki and phpBB work so well on Windows Azure! Markus Glazer, volunteer developer at Wikimedia Foundation, submitted the MediaWiki package to the Windows Azure Web Sites Gallery and integrated MediaWiki with Windows Azure Storage. Nils Adermann from the phpBB community submitted the updated phpBB 3.0.11 package to the Windows Azure Web Sites Gallery with the necessary changes for integration with Windows Azure.

    The addition of phpBB and MediaWiki is a great example of Windows Azure’s support for open source software applications, frameworks, and tools. We’re continuing to work with these and other communities to make Windows Azure a great place to host open source applications. What other open source technologies would you like to be able to use on Windows Azure?

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Sublime Text, Vi, Emacs: TypeScript enabled!

    • 57 Comments

    TypeScript is a new open and interoperable language for application-scale JavaScript development created by Microsoft and released as open source on CodePlex. You can learn about this typed superset of JavaScript that compiles to plain JavaScript by reading Soma’s blog.
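    To give a quick sense of what “typed superset of JavaScript” means, the annotations in the sketch below are checked at compile time and then erased, leaving plain JavaScript. (This is our own minimal illustration, not a sample from the TypeScript release.)

    ```typescript
    // The type annotations below exist only at compile time; the compiler
    // emits plain JavaScript with the annotations erased.
    class Greeter {
        greeting: string;
        constructor(greeting: string) {
            this.greeting = greeting;
        }
        greet(name: string): string {
            return this.greeting + ", " + name + "!";
        }
    }

    const g: Greeter = new Greeter("Hello");
    console.log(g.greet("TypeScript")); // runs as ordinary JavaScript
    ```

    Because the emitted code is plain JavaScript, it runs anywhere JavaScript does, while the compiler catches type errors (for example, passing a number to greet) before the code ever ships.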

    At Microsoft Open Technologies, Inc. we are thrilled that the discussion on the language specification is now open with the community: you can play with the bits (or, even better, start developing with TypeScript), read the specification, and provide your feedback on the discussion forum. We also wanted to make it possible for developers to use their favorite editor to write TypeScript code, in addition to the TypeScript online playground and the Visual Studio plugin.

    Below you will find sample syntax files for Sublime Text, Vi and Emacs that will add syntax highlighting to files with a .ts extension. We want to hear from you on where you think we should post these files so that the community can optimize them and help us make TypeScript programming an even greater experience, so please comment on this post or send us a message.

     

    • TypeScript support for Sublime Text
    • TypeScript support for Emacs
    • TypeScript support for Vim

    Olivier Bloch
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Windows Azure Plugin for Eclipse with Java – September 2012 Preview

    • 2 Comments

    The Windows Azure Plugin for Eclipse with Java (by Microsoft Open Technologies) – September 2012 Preview has been released. This service update includes a number of additional bug fixes since the August 2012 Preview, as well as some feedback-driven usability enhancements in existing features:

    • Support for Windows 8 and Windows Server 2012 as the development OS, resolving issues that previously prevented the plugin from working properly on those operating systems
    • Improved support for specifying endpoint port ranges
    • Bug fixes related to file paths containing spaces
    • Role context menu improvements for faster access to role-specific configuration settings
    • Minor refinements in the “Publish to cloud” wizard and a number of additional bug fixes

    You can learn more about the plugin on the Windows Azure Dev Center.

    To learn how to install the plugin, go here.

    Martin Sawicki
    Principal Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Open Sourcing POSH-NPM, a set of PowerShell scripts to enable tab-completion for NPM commands

    • 5 Comments

    Two weeks ago we released a .NET library for NPM; today we are releasing a small utility that will make it easier for Windows developers to use NPM in PowerShell.

    Posh-npm is a set of PowerShell scripts that enables tab completion for all NPM commands in the PowerShell console. For instance, typing npm ins<tab> in PowerShell will complete the command by listing all available commands that start with ins.
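    The core of that tab-completion behavior is simple prefix matching against npm’s command list. Here is a minimal sketch of the idea (in TypeScript rather than PowerShell purely for illustration, and with the command list abbreviated; the real scripts cover all NPM commands):

    ```typescript
    // posh-npm's completion boils down to filtering npm's commands by the
    // prefix typed so far. Abbreviated, illustrative command list:
    const npmCommands: string[] = [
        "info", "init", "install", "link", "ls",
        "publish", "uninstall", "update"
    ];

    function complete(prefix: string): string[] {
        // Keep only the commands that begin with the typed prefix.
        return npmCommands.filter(cmd => cmd.indexOf(prefix) === 0);
    }

    console.log(complete("ins")); // the candidates offered for "npm ins<tab>"
    ```

    When only one candidate survives the filter, as with "ins" above, the shell can complete the command outright; otherwise it cycles through or lists the matches.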

    The WebMatrix team is working on adding console support for node.js and will be using the posh-npm library to provide tab completion in WebMatrix as well.

    Special thanks to Keith Dahlby’s posh-git project, which made our lives much easier.

    Claudio Caldato
    Principal Program Manager
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    MongoDB Seattle recap

    • 1 Comments

    Last Friday I had the opportunity to attend and participate in the MongoDB Seattle conference down along the Seattle waterfront at Bell Harbor Conference Center. It was a great way to end a week of MongoDB-related activities around Seattle and Redmond! I met many people who are working with MongoDB and considering their options for cloud deployment, and I co-presented with 10gen software engineer Sridhar Nanjundeswaran on “MongoDB and Windows Azure.”

    On the day before the conference, I caught up with Aaron Heckman, Node.js engineer at 10gen, and we recorded a video of a cool demo app he had built and deployed on Windows Azure that uses Node.js and MongoDB. Aaron knows Node.js and MongoDB very well but had never worked with Windows Azure before, so his experience is a great example of how quickly and easily Node+Mongo developers can deploy apps on Azure.

    Thanks to the team at 10gen for putting on a great event, and thanks to everyone who participated and helped make it so useful and fun! You can find links to additional information about deploying MongoDB on Windows Azure over on the 10gen blog, and also be sure to check out the Windows Azure section on MongoDB.org.

    For those on the US east coast, MongoDB Boston is coming up on October 24, and my colleague Jim O’Neil will be presenting on the details of running MongoDB on Windows Azure. To find other MongoDB events check out the events page on 10gen’s site, and for information on upcoming Windows Azure events see the Windows Azure Events page on WindowsAzure.com.

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Open sourcing npm.net, a .NET library for the Node.js package manager (npm)

    • 0 Comments

    Today I’m happy to announce the open source release of the npm.net library. This is the same library that the WebMatrix team used to implement the NPM package discovery feature as explained in Justin’s blog. The library gives developers using managed code access to NPM commands to, for instance, deploy or update node.js modules on a client machine.

    We are releasing the source code of the library today so that developers who are interested in building automation tools, or any other sort of integration between Node.js and .NET, can leverage some of the work we have done for the WebMatrix team.

    Claudio Caldato
    Principal Program Manager
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Windows Azure Plugin for Eclipse with Java - August 2012 Preview

    • 0 Comments

    Gearing up for back to school, the Microsoft Open Technologies Inc. team has been busy updating the Windows Azure Plugin for Eclipse with Java.

    This August 2012 Preview update includes some feedback-driven usability enhancements in existing features, along with a number of additional bug fixes since the July 2012 Preview. The principal enhancements are the following:

    • Inside the Windows Azure Access Control Service Filter:
      • Option to embed the signing certificate into your application’s WAR file to simplify cloud deployment
      • Option to create a new self-signed certificate right from the ACS filter wizard UI
    • Inside the Windows Azure Deployment Project wizard (and the role’s Server Configuration property page):
      • Automatic discovery of the JDK location on your computer (which you can override if necessary)
      • Automatic detection of the server type whose installation directory you select

    You can learn more about the plugin on the Windows Azure Dev Center.

    To find out how to install, go here.

    Martin Sawicki
    Principal Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Microsoft at DrupalCon Munich next week

    • 0 Comments

    Microsoft at DrupalCon is becoming a tradition. After having partnered closely with the Drupal community to make Drupal available on Windows, Microsoft teams are continuing this engagement and are eager to meet Drupal developers in Munich next week.

    If you are going to the event, don’t miss the various panels and sessions in which Microsoft attendees will participate:

    And of course, you should stop by the booth to say hi!

    If you’re not in Germany and can’t attend DrupalCon, we encourage you to follow @Gracefr and @Brian_Swan on Twitter. They’ll provide insights on the cool things happening there. Don’t miss Brian’s blog that offers great technical information on Drupal on Windows Azure and other related topics: a must read!

  • Interoperability @ Microsoft

    Microsoft Releases New Dev Tools Compiled With Open Source Code

    • 0 Comments

    Jason Zander blogged about new releases of Microsoft’s developer tools today – tools that include many contributions from the open source community via the MS Open Tech Hub on CodePlex.

    The OSS community helped build out the source code for ASP.NET MVC 4, Web API, Web Pages 2 and Entity Framework 5 – key components in the new releases of Visual Studio 2012, Team Foundation Server 2012, and .NET Framework 4.5. Through CodePlex, developers outside Microsoft submitted patches and code contributions that the MS Open Tech Hub development team reviewed for potential inclusion in these products. I described this process in more detail last month in More of Microsoft’s App Development Tools Goes Open Source.

    Today’s news had an additional cool factor. As Jason highlighted in his blog, “Developing great apps for Windows 8 is an important goal of this release. Therefore, in coordination with today’s developer tools releases, you’ll notice that the final version of Windows 8 has released to the web as well.”

    There are plenty of great resources on these tools that you can check out and download today. The ASP.NET website is a great place to start. I also recommend my friend Scott Hanselman’s new videos.

    Microsoft’s partner-centric approach has been with the company since the very beginning. Today’s milestone shows that all developers can contribute to and benefit from Microsoft’s open platforms in the future.

    Gianugo Rabellino
    Senior Director Open Source Communities
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Using the Cloudant Data Layer for Windows Azure

    • 0 Comments

    If you need a highly scalable data layer for your cloud service or application running on Windows Azure, the Cloudant Data Layer for Windows Azure may be a great fit. This service, which was announced in preview mode in June and is now in beta, delivers Cloudant’s “database as a service” offering on Windows Azure.

    From Cloudant’s data layer you’ll get rich support for data replication and synchronization scenarios such as online/offline data access for mobile device support, a RESTful Apache CouchDB-compatible API, and powerful features including full-text search, geo-location, federated analytics, schema-less document collections, and many others. And perhaps the greatest benefit of all is what you don’t get with Cloudant’s approach: you’ll have no responsibility for provisioning, deploying, or managing your data layer. The experts at Cloudant take care of those details, while you stay focused on building applications and cloud services that use the data layer.

    You can do your development in any of the many languages supported on Windows Azure, such as .NET, Node.js, Java, PHP, or Python. In addition, you’ll get the benefits of Windows Azure’s CDN (Content Delivery Network) for low-latency data access in diverse locations. Cloudant pushes your data to data centers all around the globe, keeping it close to the people and services that need to consume it.

    For a free trial of the Cloudant Data Layer for Windows Azure, create a new account on the signup page and select “Lagoon” as your data center location.

    For an example of how to use the Cloudant Data Layer, see the tutorial “Using the Cloudant Data Layer for Windows Azure,” which takes you through the steps needed to set up an account, create a database, configure access permissions, and develop a simple PHP-based photo album application that uses the database to store text and images:

    clip_image002

    The sample app uses the SAG for CouchDB library for simple data access. SAG works against any Apache CouchDB database, as well as Cloudant’s CouchDB-compatible API for the data layer.
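    Because the API is CouchDB-compatible, everything reduces to HTTP verbs on URLs with JSON document bodies, which is why generic CouchDB libraries like SAG work unchanged. The sketch below only builds the requests rather than sending them, and the account URL is a made-up placeholder (this is our own illustration, not Cloudant or SAG code):

    ```typescript
    // CouchDB-style addressing: PUT /{db} creates a database, and
    // PUT /{db}/{id} creates or updates a JSON document inside it.
    interface HttpRequest {
        method: string;
        url: string;
        body?: string;
    }

    const account = "https://example-account.cloudant.com"; // hypothetical account URL

    function createDatabase(db: string): HttpRequest {
        return { method: "PUT", url: account + "/" + db };
    }

    function putDocument(db: string, id: string, doc: object): HttpRequest {
        return {
            method: "PUT",
            url: account + "/" + db + "/" + id,
            body: JSON.stringify(doc) // documents are plain JSON
        };
    }

    const req = putDocument("albums", "photo1", { caption: "Seattle waterfront" });
    ```

    Any HTTP client can issue these requests, which is what makes it straightforward to point an existing CouchDB application at the Cloudant data layer by changing only the base URL.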

    My colleague Olivier Bloch has provided another great example of using existing CouchDB libraries to simplify development when using the Cloudant Data Layer. In this video, he demonstrates how to put a nice Windows 8 design front end on top of the photo album demo app:

    clip_image004

    This example takes advantage of the couch.js library available from the Apache CouchDB project, as well as the GridApp template that comes with Visual Studio 2012. Olivier shows how to quickly create the app running against a local CouchDB installation, and then, by simply changing the connection string, run it live against the Cloudant data layer on Windows Azure.

    The Cloudant data layer is a great example of the new types of capabilities – and developer opportunities – that have been created by Windows Azure’s support for Linux virtual machines. As Sam Bisbee noted in Cloudant’s announcement of the service, “The addition of Linux-based virtual machines made it possible for us to offer the Cloudant Data Layer service on Azure.”

    If you’re looking for a way to quickly build apps and services on top of a scalable high-performance data layer, check out what the Cloudant Data Layer for Windows Azure has to offer!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    MS Open Tech is hiring!

    • 1 Comments

    Do you have a passion for interoperability, open source, and open standards? If you’re an experienced developer, program manager, technical diplomat, or evangelist who can help our team build technical bridges between Microsoft and non-Microsoft technologies, check out the blog post by Gianugo Rabellino over on the Port 25 blog today. We’re hiring, with open positions you can apply to right now. We’d love to hear from you!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Customizable, Ubiquitous Real Time Communication over the Web (CU-RTC-Web)

    • 3 Comments

    UPDATE: See our latest W3C WebRTC Working Group blog post on 01-17-2013 http://aka.ms/WebRTCPrototypeBlog describing our new CU-RTC-Web prototype that you can download on HTML5 Labs.

    From:

    Matthew Kaufman - Inventor of RTMFP, the most widely used browser-to-browser RTC protocol on the web
    Principal Architect, Skype, Microsoft Corp.

    Martin Thomson
    Senior Architect, Skype, Microsoft Corp.

    Jonathan Rosenberg - Inventor of SIP and SDP offer/answer
    GM Research Product & Strategy, Skype, Microsoft Corp.

    Bernard Aboba
    Principal Architect, Lync, Microsoft Corp.

    Jean Paoli
    President, Microsoft Open Technologies, Inc.

    Adalberto Foresti
    Senior Program Manager, Microsoft Open Technologies, Inc.

     

     

    Today, we are pleased to announce Microsoft’s contribution of the CU-RTC-Web proposal to the W3C WebRTC working group.

    Thanks in no small part to the exponential improvements in broadband infrastructure over the last few years, it is now possible to leverage the digital backbone of the Internet to create experiences for which dedicated media and networks were necessary until not too long ago.

    Inexpensive, real time video conferencing is one such experience.

    The Internet Engineering Task Force and the World Wide Web Consortium created complementary working groups to bring these experiences to the most familiar and widespread application used to access the Internet: the web browser. The goal of this initiative is to add a new level of interactivity for web users with real-time communications (Web RTC) in the browser.

    While the overarching goal is simple to describe, there are several critical requirements that a successful, widely adoptable Web RTC browser API will need to meet:

    • Honoring key web tenets – The Web favors stateless interactions which do not saddle either party of a data exchange with the responsibility to remember what the other did or expects. Doing otherwise is a recipe for extreme brittleness in implementations; it also considerably raises the development cost, which reduces the reach of the standard itself.
    • Customizable response to changing network quality – Real time media applications have to run on networks with a wide range of capabilities varying in terms of bandwidth, latency, and packet loss.  Likewise these characteristics can change while an application is running. Developers should be able to control how the user experience adapts to fluctuations in communication quality.  For example, when communication quality degrades, the developer may prefer to favor the video channel, favor the audio channel, or suspend the app until acceptable quality is restored.  An effective protocol and API should provide developers with the tools to tailor the application response to the exact needs of the moment.
    • Ubiquitous deployability on existing network infrastructure – Interoperability is critical if WebRTC users are to communicate with the rest of the world: with users on different browsers, VoIP phones, and mobile phones, from behind firewalls, and across routers and equipment that is unlikely to be upgraded to the current state of the art anytime soon.
    • Flexibility in its support of popular media formats and codecs as well as openness to future innovation – A successful standard cannot be tied to individual codecs, data formats or scenarios. They may soon be supplanted by newer versions that would make such a tightly coupled standard obsolete just as quickly. The right approach is instead to support multiple media formats and to bring the bulk of the logic to the application layer, enabling developers to innovate.
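    To make the second requirement above concrete: the point is that the degradation policy belongs to the application, not the browser. Purely as an illustration of what developer-controlled adaptation means (this is not an API from any of the proposals, and the thresholds are arbitrary examples), such a policy might be expressed as:

    ```typescript
    // Illustrative degradation policy: the application decides what to
    // sacrifice as network quality drops, instead of the browser choosing.
    type Mode = "audio+video" | "audio-only" | "suspended";

    function chooseMode(bandwidthKbps: number, packetLossPct: number): Mode {
        if (bandwidthKbps < 30 || packetLossPct > 20) {
            return "suspended";   // quality too poor: pause until it recovers
        }
        if (bandwidthKbps < 300 || packetLossPct > 5) {
            return "audio-only";  // drop video, keep the conversation going
        }
        return "audio+video";
    }
    ```

    One application might prefer this audio-first policy, another might favor video or suspend entirely; the requirement is that the API leave that choice to the developer.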

    While a useful start at realizing the Web RTC vision, we feel that the existing proposal falls short of meeting these requirements. In particular:

    • No ubiquitous deployability: it shows no signs of offering real-world interoperability with existing VoIP phones and mobile phones, from behind firewalls and across routers, and instead focuses on video communication between web browsers under ideal conditions. It does not allow an application to control how media is transmitted on the network. Moreover, implementing innovative, real-world applications like security consoles, audio streaming services or baby monitoring through this API would be unwieldy, assuming it could be made to work at all. A Web RTC standard must equip developers with the ability to implement all scenarios, even those we haven’t thought of.
    • No fit with key web tenets: it is inherently not stateless, as it takes a significant dependency on the legacy of SIP technology, which is a suboptimal choice for use in Web APIs. In particular, the negotiation model of the API relies on the SDP offer/answer model, which forces applications to parse and generate SDP in order to effect a change in browser behavior. An application is forced to only perform certain changes when the browser is in specific states, which further constrains options and increases complexity. Furthermore, the set of permitted transformations to SDP are constrained in non-obvious and undiscoverable ways, forcing applications to resort to trial-and-error and/or browser-specific code. All of this added complexity is an unnecessary burden on applications with little or no benefit in return.

     

    The Microsoft Proposal for Customizable, Ubiquitous Real Time Communication over the Web

    For these reasons, Microsoft has contributed the CU-RTC-Web proposal that we believe does address the four key requirements above.

    • This proposal adds a real-time, peer-to-peer transport layer that empowers web developers by having greater flexibility and transparency, putting developers directly in control over the experience they provide to their users.
    • It dispenses with the constraints imposed by unnecessary state machines and complex SDP and provides simple, transparent objects.
    • It elegantly builds on and integrates with the existing W3C getUserMedia API, making it possible for an application to connect a microphone or a camera in one browser to the speaker or screen of another browser. getUserMedia is an increasingly popular API that Microsoft has been prototyping and that is applicable to a broad set of applications with an HTML5 client, including video authoring and voice commands.

    The following diagram shows how our proposal empowers developers to create applications that take advantage of the tremendous benefits offered by real-time media in a clear, straightforward fashion.

    image

    We are looking forward to continued work in the IETF and the W3C, with an open and fruitful conversation that converges on a standard that is both future-proof and an answer to today’s communication needs on the web. We would love to get community feedback on the details of our CU-RTC-Web proposal document and we invite you to stay tuned for additional content that we will soon publish on http://html5labs.com in support of our proposal.

  • Interoperability @ Microsoft

    HTTP/2.0 makes a great step forward in Vancouver, but this is just the beginning!

    • 0 Comments

    From:

    Henrik Frystyk Nielsen
    Principal Architect, Microsoft Open Technologies, Inc.

    Rob Trace
    Senior Program Manager Lead, Microsoft Corporation

    Gabriel Montenegro
    Principal Software Development Engineer, Microsoft Corporation

     

     

    We just came back from the IETF meeting in Vancouver, where the HTTP working group was meeting to decide on the way forward for HTTP/2.0. We are very happy with the discussions and overall outcomes as reflected in the meeting minutes and as summarized by the Chair, Mark Nottingham. At the meeting, the working group clarified the direction for HTTP/2.0 and began to draft a new charter. The group agreed that seven key areas need deep, data-driven discussion as part of the HTTP/2.0 specification process, and the resulting standard will not be backward compatible with any existing proposals (SPDY, HTTP Speed+Mobility, and Network-Friendly HTTP Upgrade). The charter calls for a proposed completion date for the standard of November 2014. In other words, while we are excited about where we are, it is clear that we are just at the beginning of the process toward HTTP 2.0.

    Seven Key areas under discussion

    The meeting clearly outlined the need for discussion and consensus on seven key technical areas, such as Compression, Mandatory TLS, and Client Pull/Server Push. This list of issues is aligned with the position that Microsoft’s Henrik Frystyk Nielsen outlined in an earlier message to the HTTP discussion list (see excerpts below). Overall, we believe there need to be robust discussions about how we bring together the best elements of the current SPDY, HTTP Speed+Mobility, and Network-Friendly HTTP Upgrade proposals.

      Area                          Opinion that seems to prevail
      1. Compression                SPDY or Friendly
      2. Multiplexing               SPDY
      3. Mandatory TLS              Speed+Mobility
      4. Negotiation                Friendly or Speed+Mobility
      5. Client Pull/Server Push    Speed+Mobility
      6. Flow Control               SPDY
      7. WebSockets                 Speed+Mobility

    HTTP/2.0 specification must be data-driven

    We are particularly gratified to see this language in the proposed charter:

    It is expected that HTTP/2.0 will:
    * Substantially and measurably improve end-user perceived latency in most cases, over HTTP/1.1 using TCP.

    This supports Microsoft’s position that the HTTP update must be data-driven to ensure that it provides the desired benefits for users. The SPDY proposal has done a good job of raising awareness of the opportunities to improve Web performance.

    Almost equal performance between SPDY and HTTP 1.1

    To compare the performance of SPDY with HTTP 1.1 we have run tests comparing download times of several public web sites in a controlled testbed study. The test uses publicly available software run with mostly default configurations while applying all the currently available optimizations to HTTP 1.1. You can find a preliminary report on the test results here: http://research.microsoft.com/apps/pubs/?id=170059. The results mirror other data (http://www.guypo.com/technical/not-as-spdy-as-you-thought) indicating mixed results with SPDY performance.

    Our results indicate almost equal performance between SPDY and HTTP 1.1 when one applies all the known optimizations to HTTP 1.1. SPDY's performance improvements are neither consistent nor significant. We will continue our testing, and we welcome others to publish their results so that HTTP/2.0 can choose the best changes and deliver the best possible performance and scalability improvements compared to HTTP 1.1.

    We discussed those results in Vancouver and it was great to see the interest that this research received from the community on the IETF mailing list and on Twitter.

    Existing proposals will change a lot – No backward compatibility

    In light of the discussions and the proposed charter, HTTP/2.0 will undoubtedly not be backward compatible with any of the current proposals (SPDY, Speed+Mobility, Friendly); in fact, we expect that it might differ in substantial ways from each of these proposals. Consequently, we caution implementers against embracing unstable versions of the specification too eagerly. The proposed charter calls for an IETF standard by November 2014.

    We are happy that the working group decided, for practical reasons, to use the text from http://datatracker.ietf.org/doc/draft-mbelshe-httpbis-spdy/ as a starting point. The discussions around the previously cited seven design elements will deeply modify this text. As the Chair wrote, “It’s important to understand that SPDY isn’t being adopted as HTTP/2.0”. This is in line with the Microsoft approach: our HTTP Speed+Mobility proposal starts from both the Google SPDY protocol (a separate submission to the IETF for this discussion) and the work the industry has done around WebSockets, and its main departures from SPDY address the needs of mobile devices and applications.

    Looking ahead

    We’re excited for the web to get faster, more stable, and more capable. HTTP/2.0 is an important part of that progress, and we look forward to an HTTP/2.0 that meets the needs of the entire web, including browsers, apps, and mobile devices.

    Henrik Frystyk Nielsen, Gabriel Montenegro and Rob Trace

    Message to the IETF mailing list from Henrik

    Dear All,

    We remain committed to the HTTP/2.0 standards process and look forward to seeing many of you this week at the IETF meeting in Vancouver to continue the discussion.  In the spirit of open discussion, we wanted to share some observations in advance of the meeting and share the latest progress from prototyping and testing.

    There are currently three different proposals that the group is working through:

       * SPDY (http://tools.ietf.org/html/draft-mbelshe-httpbis-spdy),
       * HTTP Speed+Mobility (http://tools.ietf.org/html/draft-montenegro-httpbis-speed-mobility),
       * Network-Friendly HTTP Upgrade (http://tools.ietf.org/html/draft-tarreau-httpbis-network-friendly).

    The good news is that everyone involved wants to make the Web faster, more scalable, more secure, and more mobile-friendly, and each proposal has benefits in different areas that the discussion can choose from.

    --- A Genuinely Faster Web ---

    The SPDY proposal has been great for raising awareness of Web performance. It takes a "clean slate" approach to improving HTTP.

    To compare the performance of SPDY with HTTP/1.1 we have run tests comparing download times of several public web sites in a controlled testbed study. The test uses publicly available software run with mostly default configurations while applying all the currently available optimizations to HTTP/1.1. You can find a preliminary report on the test results here: http://research.microsoft.com/apps/pubs/?id=170059. The results mirror other data (http://www.guypo.com/technical/not-as-spdy-as-you-thought) indicating mixed results with SPDY performance.

    Our results indicate almost equal performance between SPDY and HTTP/1.1 when one applies all the known optimizations to HTTP/1.1. SPDY's performance improvements are neither consistent nor significant. We will continue our testing, and we welcome others to publish their results so that HTTP/2.0 can choose the best changes and deliver the best possible performance and scalability improvements compared to HTTP/1.1.

    --- Taking the Best from Each ---

    Speed is one of several areas of improvement. Currently, there's no clear consensus that any one of the proposals is the obvious choice or even starting point for HTTP/2.0 (based on our reading of the Expressions of Interest and discussions on this mailing list). A good example of this is the vigorous discussion around mandating TLS encryption (http://tools.ietf.org/html/rfc5246) for HTTP/2.0.

    We think a good approach for HTTP/2.0 is to take the best solution for each of these areas from each of the proposals.  This approach helps us focus the discussion for each area of the protocol. Of course, this approach would still allow the standard to benefit from the extensive knowledge gained from implementing existing proposals.

    We believe that the group can converge on consensus in the following areas, based on our reading of the Expressions of Interest, by starting from the different proposals.

    ------------------|------------------
    Area              | Opinion that
                      | seems to prevail
    ------------------|------------------
    1. Compression    | SPDY or Friendly
    ------------------|------------------
    2. Multiplexing   | SPDY
    ------------------|------------------
    3. Mandatory TLS  | Speed+Mobility
    ------------------|------------------
    4. Negotiation    | Friendly or
                      |   Speed+Mobility
    ------------------|------------------
    5. Client Pull/   | Speed+Mobility
          Server Push |
    ------------------|------------------
    6. Flow Control   | SPDY
    ------------------|------------------
    7. WebSockets     | Speed+Mobility
    ------------------|------------------

    Below, we discuss each HTTP/2.0 element and the current consensus that appears to be forming within the Working Group.

    1. Compression

    Compression is simple to conceptualize and implement, and it is important. Proxies and other boxes in the middle on today's Web often face problems with it. The HTTP/2.0 discussion has been rich but with little consensus.

    Though some studies suggest that SPDY's header compression approach shows promise, other studies show this compression to be prohibitively onerous for intermediary devices. More information here would help us make sure we're making the Web faster and better.

    Also, an entire segment of implementers are not interested in compression as defined in SPDY.  That's a challenge because the latest strawman for the working group charter (http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/0784.html) states that the "resulting specification(s) are expected to be meet these goals for common existing deployments of HTTP; in particular, ... intermediation (by proxies, Corporate firewalls, 'reverse' proxies and Content Delivery Networks)."

    We think either the SPDY or Friendly proposal is a good starting point for progress.

    2. Multiplexing

    All three proposals define similar multiplexing models. We haven't had substantial discussion on the differences. This lack of discussion suggests that there is rough consensus around the SPDY framing for multiplexing.

    We think that the SPDY proposal is a good starting point here and best captures the current consensus.

    3. Mandating Always On TLS

    There is definitely no consensus to mandate TLS for all Web communication, but some major implementers have stated they will not adopt HTTP/2.0 unless the working group supports a "TLS is mandatory" position. A very preliminary note from the chair (http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/0601.html) states that there is a lack of consensus for mandating TLS.

    We think the Speed+Mobility proposal is a good starting point here as it provides options to turn TLS on (or not).

    4. Negotiation

    Only two of the proposals actually discuss how different endpoints agree to use HTTP/2.0.

    (The SPDY proposal does not specify a negotiation method. Current prototype implementations use the TLS-NPN (http://tools.ietf.org/html/draft-agl-tls-nextprotoneg) extension.  While the other proposals use HTTP Upgrade to negotiate HTTP/2.0, some parties have expressed non-support for this method as well.)

    We think either of the Friendly or Speed+Mobility proposals is a good starting point because they are the only ones that have any language in this respect.

    5. Client Pull and Server Push

    There are tradeoffs between a server push model and a client pull model. The main question is how to improve performance while respecting bandwidth and client caches.

    Server Push has not had the same level of implementation and experimentation as the other features in SPDY. More information here would help us make sure we're making the Web faster and better.

    We think the Speed+Mobility proposal is a good starting point here, suggesting that this issue may be better served in a separate document rather than tied to the core HTTP/2.0 protocol.

    6. Flow Control

    There has only been limited discussion in the HTTPbis working group on flow control. Flow Control offers a lot of opportunity to make the Web faster as well as to break it; for example, implementations need to figure out how to optimize for opposing goals (like throughput and responsiveness) at the same time.

    The current version of the SPDY proposal specifies a flow control message with many settings that are not well defined. The Speed+Mobility proposal has a simplified flow control model based on certain assumptions. More experimentation and information here would help us make sure we're making the Web faster and better.

    We think that the SPDY proposal is a good starting point here.

    7. WebSockets

    We see support for aligning HTTP/2.0 with a future version of WebSockets, as suggested in the introduction of the Speed+Mobility proposal.

    --- Moving forward ---

    We're excited for the Web to get faster, more stable, and more capable, and HTTP/2.0 is an important part of that.

    We believe that bringing together the best elements of the current SPDY, HTTP Speed+Mobility, and Network-Friendly HTTP Upgrade proposals is the best approach to make that happen.

    Based on the discussions on the HTTPbis mailing list, we've suggested which proposals make the most sense to start from for each of the areas that HTTP/2.0 is addressing. Each of these areas needs more prototyping and experimentation and data. We're looking forward to the discussion this week.

    Sincerely,

    Henrik Frystyk Nielsen

    Principal Architect, Microsoft Open Technologies, Inc.

    Gabriel Montenegro

    Principal Software Development Engineer, Microsoft Corporation

    Rob Trace

    Senior Program Manager Lead, Microsoft Corporation

    Adalberto Foresti

    Senior Program Manager, Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Windows Azure Storage plugin for WordPress

    • 2 Comments

    The Windows Azure Storage Plugin for WordPress was updated today to use the new Windows Azure SDK for PHP. The plugin comes with a comprehensive user guide, but for a quick overview of what it does and how to get started, see Brian Swan’s blog post. Cory Fowler also has some good information on how to contribute to the plugin, which is an MS Open Tech open-source project hosted on the SVN repo of the WordPress Plugin Directory.

    This plugin allows you to use Windows Azure Storage Service to host the media files for a WordPress blog. I use WordPress on my personal blog where I write mostly about photography and sled dogs, so I installed the plugin today to check it out. The installation is quick and simple (like all WordPress plugins, you just need to copy the files into a folder under your wp-content/plugins folder), and the only setup required is to point it at a storage account in your Windows Azure subscription. Brian’s post has all the details.

    The plugin uses the BlobRestProxy class exposed by the PHP SDK to store your media files in Windows Azure blob storage:

    Once the plugin is installed, you don’t need to think about it – it does everything behind the scenes, while you stay focused on the content you’re creating. If you’re writing a blog post in the WordPress web interface, you’ll see a new button for Windows Azure Storage, which you can use to upload and insert images into your post:

    Brian’s post covers the details of how to upload media files through the plugin’s UI under the new button.

    If you click on the Add Media icon (clip_image001) instead, you can add images from the Media Library, which is also stored in your Windows Azure storage account under the default container (which you can select when configuring the plugin).

    If you use Windows Live Writer (as I do), you don’t need to do anything special at all to take advantage of the plugin. When you publish from Live Writer the media files will automatically be uploaded to the default container of your storage account, and the links within your post will point to the blobs in that container as appropriate.

    To the right is a blog post I created that takes advantage of the plugin. I just posted it from Live Writer as I usually do, and the images are stored in the wordpressmedia container of my dmahughwordpress storage account, with URLs like this one:

    http://dmahughwordpress.blob.core.windows.net/wordpressmedia/2012/08/DSC_7914.jpg

    Check it out, and let us know if you have any questions. If you don’t have an Azure subscription, you can sign up for a free trial here.

  • Interoperability @ Microsoft

    OSCON photos are here! Thanks, Julian!

    • 0 Comments

    Hey OSCON friends, drum roll please … our Microsoft-sponsored photographer and joyful open source geek extraordinaire Julian Cash has posted your photos from our booth, along with a fun video, on his JC Event Photo OSCON Event page and his Facebook page. Find the photo you love from among his shots of you, give it a right-click, “save picture as…” in your favorite format, and it’s all yours. Copy the photo to your favorite social media sites and send a copy to Mom – I did!

    On behalf of the MS Open Tech evangelism team, thanks to everyone who spent time with us at OSCON. I blogged earlier about the new friends we made and the cool conversations we all had about emerging technologies, but I think it’s time to simply say that a picture is worth a thousand words …

    DSC_2938-Edit

    Clockwise from top right: Gianugo Rabellino, “Grazie!”— Olivier Bloch, “Merci!” — Doug Mahugh and Robin Bender Ginn, “Thank you! Thank you!”

  • Interoperability @ Microsoft

    Node.js script for releasing a Windows Azure blob lease

    • 1 Comment

    This post covers a workaround for an issue that may affect you if you’re deploying Windows Azure virtual machines from VHDs stored in Windows Azure blob storage. The issue doesn’t always occur (in fact, our team hasn’t been able to repro it), and it will be fixed soon. If you run into the issue, you can use any one of several workarounds covered below.

    Blob leases are a mechanism provided by Windows Azure for ensuring that only one process has write access to a blob. As Steve Marx notes in his blog post on the topic, “A lease is the distributed equivalent of a lock. Locks are rarely, if ever, used in distributed systems, because when components or networks fail in a distributed system, it’s easy to leave the entire system in a deadlock situation. Leases alleviate that problem, since there’s a built-in timeout, after which resources will be accessible again.”
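The distinction Steve Marx draws — a lease is a lock with a built-in timeout — can be sketched in a few lines of plain JavaScript. This is purely illustrative; the class and field names below are invented for the example and are not part of any Azure SDK:

```javascript
// Illustrative in-memory lease: a lock that expires automatically,
// so a crashed holder cannot deadlock the system forever.
class Lease {
  constructor(durationMs) {
    this.durationMs = durationMs; // how long each acquisition lasts
    this.holder = null;           // current holder, or null if free
    this.expiresAt = 0;           // timestamp when the lease lapses
  }

  // Try to acquire the lease; succeeds if it is free or has expired.
  acquire(who, now = Date.now()) {
    if (this.holder === null || now >= this.expiresAt) {
      this.holder = who;
      this.expiresAt = now + this.durationMs;
      return true;
    }
    return false;
  }

  // Explicitly release the lease (the well-behaved path).
  release(who) {
    if (this.holder === who) this.holder = null;
  }
}
```

If the holder dies without calling `release`, a second party simply waits out the timeout and acquires the lease anyway — which is exactly the property that makes leases safer than plain locks in a distributed system.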

    In the case of VHD images stored as blobs, Windows Azure uses a lease to ensure that only one virtual machine at a time has the VHD mounted in a read/write configuration. In certain cases, however, we’ve found that the lease may not expire correctly after deleting the virtual machine and deleting the disk or OS image associated with the VHD. This can cause a lease conflict error message to occur when you try to delete the VHD or re-use it later in a different virtual machine.

    If you’re affected by this issue, you can explicitly break the lease that has not expired, or you can make a copy of the VHD and use that copy for provisioning a new virtual machine. Craig Landis has posted instructions on the Windows Azure community forum for how to do this from Windows machines; he also covers the same techniques in a separate post addressing a variation on the issue.

    For those who are managing Windows Azure virtual machines from Linux or Mac desktops, our team has developed a Node.js script that can be used to break a lease if needed. Here are the steps to follow for installing and running the script:

    1. Verify through the Windows Azure management portal that the VHD is not actually in use. Craig’s forum post provides guidance on how to do this.

    2. If you don’t have the Windows Azure command line tool for Mac and Linux installed, you can get it by installing the Windows Azure SDK for Node.js. SDK installation instructions for Windows, Mac, and Linux can be found on the Windows Azure Node.js Developer Center.

    3. Download and import your Windows Azure publish settings file, as covered under “Manage your account information and publish settings” in the command line tool documentation.

    4. Copy the breakLease.js file (available here) to the node_modules/azure-cli subfolder under your Node.js global modules folder. You can find your global modules folder with the npm ls -g command. For example, on my Windows machine that command returns c:\Users\dmahugh\AppData\Roaming\npm, so I need to copy the script to c:\Users\dmahugh\AppData\Roaming\npm\node_modules\azure-cli.

    After you’ve completed those setup steps, you can break a blob lease by running the script with a single parameter, the URL of the blob:

    > node breakLease.js <absolute-url-to-blob>

    The script prints out information about the steps it takes to break the lease:

    image

    That’s all there is to it. As I mentioned earlier, this workaround is only needed in certain cases until the underlying cause has been fixed. Please let us know if you run into any issues using this script.

  • Interoperability @ Microsoft

    OSCON 2012

    • 0 Comments

    It was great to see everyone at OSCON last week! The MS Open Tech team had a fun and productive week meeting new people, reconnecting with old friends, learning about the latest OSS trends, and playing with the amazing 82" Perceptive Pixel touch screen at our booth. Julian Cash took over 4000 photos of visitors to the booth, and if you were one of the lucky people who spent time making creative photos with him, stay tuned. We'll post an update shortly when all of the photos have been uploaded to his web site.

    If you weren't able to attend OSCON this year, you can find speaker slides and videos on the OSCON web site. Those videos are also a great resource for those who attended the conference -- for example, I've just finished watching Laurie Petrycki's interview with Alex Payne about Scala's interesting combination of functional and object-oriented programming language constructs.

    There are two interviews with our team's leader Gianugo Rabellino that are available on YouTube and well worth watching to better understand the work we're doing with open source communities:

    Thanks to all the hard-working event organizers, exhibitors, sponsors, and attendees who made OSCON such a well-run and successful show!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    More of Microsoft’s App Development Tools Goes Open Source

    • 1 Comment

    Today marks a milestone since we launched Microsoft Open Technologies, Inc. (MS Open Tech) as we undertake some important open source projects. We’re excited to share the news that MS Open Tech will be open sourcing the Entity Framework (EF), a database mapping tool useful for application development in the .NET Framework. EF will join the other open source components of Microsoft’s dev tools – MVC, Web API, and Web Pages with Razor Syntax – on CodePlex to help increase the development transparency of this project.

    MS Open Tech will serve as an accelerator for these projects by working with the open source communities through our new MS Open Tech CodePlex landing page. Together, we will help build out the source code of these projects until the next product versions ship.

    This will enable everyone in the community to monitor and provide feedback on code check-ins, bug-fixes, new feature development, and build and test the products on a daily basis using the most up-to-date version of the source code.

    The newly opened EF will, for the first time, allow developers outside Microsoft to submit patches and code contributions that the MS Open Tech development team will review for potential inclusion in the products.

    We were happy to see the welcoming response when Scott Guthrie announced a similar open development approach with ASP.NET MVC4 and Web API in March. He said they have found it to be a great way to build an even tighter feedback loop with developers – and ultimately deliver even better products as a result. Check out what Scott has to say about this new EF news on his blog today.

    Together, this news further demonstrates how we want to enable our growing community of developers to build great applications. Take a look at the projects you’ll find on CodePlex:

    • Entity Framework – The ADO.NET Entity Framework is a widely adopted Object/Relational Mapping (ORM) framework that enables developers to work with relational data as domain-specific objects, eliminating the need for most of the data access plumbing code that developers usually need to write.
    • ASP.NET MVC 4 – the newest release of the ASP.NET MVC (Model-View-Controller) framework, a web framework that applies the MVC pattern to build web sites that separate data, presentation, and actions.
    • Web API – a framework that augments ASP.NET MVC to easily expose XML and JSON APIs consumable by websites or mobile devices. You can view it as a special model that returns JSON or XML (data) instead of HTML (views).
    • Web Pages/Razor version 2 – a view engine for MVC. It is a way to mix HTML and server code so that you can bind HTML pages to code and data.

    We are proud to have created an engineering culture for open development through the people who work at MS Open Tech. We’ve grown into an innovative hub where engineers assemble to build, accept, and contribute to open source projects. Today we profiled our new MS Open Tech Hub, where engineering teams across Microsoft may be temporarily assigned to MS Open Tech to collaborate with the community, work with MS Open Tech full-time employees to contribute to MS Open Tech projects, and create open source engineering best practices. Read more about our Hub on our Port 25 blog and meet the team working on the Entity Framework, MVC, Web API, and Web Pages with Razor Syntax projects at MS Open Tech. We’re nimble and we have a lot of fun in the process.

    Gianugo Rabellino
    Senior Director Open Source Communities
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    MongoDB Installer for Windows Azure

    • 0 Comments

    Do you need to build a high-availability web application or service? One that can scale out quickly in response to fluctuating demand? Need to do complex queries against schema-free collections of rich objects? If you answer yes to any of those questions, MongoDB on Windows Azure is an approach you’ll want to look at closely.

    People have been using MongoDB on Windows Azure for some time (for example), but recently the setup, deployment, and development experience has been streamlined by the release of the MongoDB Installer for Windows Azure. It’s now easier than ever to get started with MongoDB on Windows Azure!

    MongoDB

    MongoDB is a very popular NoSQL database that stores data in collections of BSON (binary JSON) objects. It is very easy to learn if you have JavaScript (or Node.js) experience, featuring a JavaScript interpreter shell for administering databases, JSON syntax for data updates and queries, and JavaScript-based map/reduce operations on the server. It is also known for a simple but flexible replication architecture based on replica sets, as well as sharding capabilities for load balancing and high availability. MongoDB is used in many high-volume web sites including Craigslist, FourSquare, Shutterfly, The New York Times, MTV, and others.

    If you’re new to MongoDB, the best way to get started is to jump right in and start playing with it. Follow the instructions for your operating system from the list of Quickstart guides on MongoDB.org, and within a couple of minutes you’ll have a live MongoDB installation ready to use on your local machine. Then you can go through the MongoDB.org tutorial to learn the basics of creating databases and collections, inserting and updating documents, querying your data, and other common operations.
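As a taste of the JavaScript-centric style described above, the snippet below mimics, in plain JavaScript, the kind of server-side map/reduce aggregation MongoDB offers (in the real mongo shell you would hand `map` and `reduce` functions to the server; the sample documents and the tiny driver here are invented for illustration and runnable anywhere):

```javascript
// Documents as they might live in a MongoDB collection.
var posts = [
  { author: "ann", comments: 3 },
  { author: "bob", comments: 1 },
  { author: "ann", comments: 2 }
];

// map: emit one key/value pair per document.
function map(doc, emit) {
  emit(doc.author, doc.comments);
}

// reduce: fold all values emitted for one key into a single value.
function reduce(key, values) {
  return values.reduce(function (a, b) { return a + b; }, 0);
}

// A tiny driver that applies map then reduce, as the server would.
function mapReduce(docs, map, reduce) {
  var buckets = {};
  docs.forEach(function (doc) {
    map(doc, function (key, value) {
      (buckets[key] = buckets[key] || []).push(value);
    });
  });
  var out = {};
  Object.keys(buckets).forEach(function (key) {
    out[key] = reduce(key, buckets[key]);
  });
  return out;
}

// mapReduce(posts, map, reduce) totals comments per author:
// { ann: 5, bob: 1 }
```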

    MongoDB Installer for Windows Azure

    The MongoDB Installer for Windows Azure is a command-line tool (Windows PowerShell script) that automates the provisioning and deployment of MongoDB replica sets on Windows Azure virtual machines. You just need to specify a few options such as the number of nodes and the DNS prefix, and the installer will provision virtual machines, deploy MongoDB to them, and configure a replica set.

    Once you have a replica set deployed, you’re ready to build your application or service. The tutorial How to deploy a PHP application using MongoDB on Windows Azure takes you through the steps involved for a simple demo app, including the details of configuring and deploying your application as a cloud service in Windows Azure. If you’re a PHP developer who is new to MongoDB, you may want to also check out the MongoDB tutorial on php.net.

    Developer Choice

    MongoDB is also supported by a wide array of programming languages, as you can see on the Drivers page of MongoDB.org. The example above is PHP-based, but if you’re a Node.js developer you can find the tutorial Node.js Web Application with Storage on MongoDB over on the Developer Center, and if you’re a .NET developer looking to take advantage of MongoDB (either on Windows Azure or Windows), be sure to register for the free July 19 webinar that will cover the latest features of the MongoDB .NET driver in detail.

    The team here at Microsoft Open Technologies is looking forward to working closely with 10gen to continue to improve the MongoDB developer experience on Windows Azure going forward. We’ll keep you updated here as that collaboration continues!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Check out the updated HTTP Speed+Mobility Open Source Prototype

    • 0 Comments

    Microsoft Open Technologies, Inc. has just released an update to the open source HTTP Speed+Mobility Prototype that it first announced in early May to the developer community. This update implements the latest changes made by Microsoft to the HTTP Speed+Mobility proposal to the IETF httpbis working group on June 15, 2012.

    As Jean Paoli and Sandeep Singhal had articulated in their blog post back in March, the HTTPbis working group in the Internet Engineering Task Force (IETF) has approved a new charter to define HTTP “2.0” to address performance limitations with HTTP. The original HTTP Speed+Mobility proposal was the first contribution made by Microsoft toward that goal.

    The updated proposal reaffirms the guiding principles of HTTP Speed+Mobility. Specifically, in our view any successful update to the HTTP protocol will have to:

    • Maintain existing HTTP semantics.
    • Maintain the integrity of the layered architecture.
    • Use existing standards when available to make it easy for the protocol to work with the current web infrastructure.
    • Be broadly applicable and flexible, by keeping the client in control of content.
    • And, last but not least, account for the needs of modern mobile clients, including power efficiency, support for HTTP-based applications, and connectivity through costed networks.

    We would like to thank the community for your interest in our proposal and for providing valuable feedback on the initial prototype implementation. We made several notable enhancements to the proposal, which the new version of the prototype now implements:

    • We implemented an updated Session Layer to more clearly define the separation with the other layers. The Session Layer is now formally defined as a WebSocket extension.
    • The Streams Layer was simplified to take advantage of the WebSockets integration. We removed all of the redundancy with the WebSocket framing. For example, HTTP Speed+Mobility frames no longer have a dedicated length field, as the length of the payload is already specified in the underlying WebSocket frame.
    • Finally, a new flow control logic was implemented. The prototype implements a simple receive buffer management scheme based on the credit control mechanism now specified in the proposal. We believe that it provides a good balance between throughput and flow control, while adhering to our stated tenet that the Client is in control of the Content.

    Collectively, these changes make the HTTP Speed+Mobility protocol both better integrated with the existing RFCs it builds upon, and at the same time, simpler to implement and debug.
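The credit control idea behind the new flow control logic can be pictured as a sender that may only transmit as many bytes as the receiver has granted. The sketch below is a simplified illustration of that principle, not the prototype's actual code (the function and field names are invented):

```javascript
// Simplified credit-based flow control: the receiver grants credit
// (bytes it is willing to buffer) and the sender never exceeds it.
function createStream(initialCredit) {
  var credit = initialCredit;
  return {
    // Sender side: returns how many bytes were actually sent,
    // which may be fewer than requested if credit has run out.
    send: function (bytes) {
      var sent = Math.min(bytes, credit);
      credit -= sent;
      return sent;
    },
    // Receiver side: draining its buffer frees up credit,
    // letting the sender resume.
    grant: function (bytes) {
      credit += bytes;
    },
    credit: function () { return credit; }
  };
}
```

Because the receiver decides when to extend more credit, it is the client that paces the transfer — consistent with the stated tenet that the client stays in control of the content.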

    As always, we encourage you to download the prototype, try it out, inspect the source code, and give us your feedback. We look forward to your contributions, as well as to constructive discussions about the next version of HTTP at the upcoming IETF meetings!

    Adalberto Foresti
    Senior Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Doctrine supports SQL Database Federations for massive scalability on Windows Azure

    • 0 Comments

    Symfony and Doctrine are a popular combination for PHP developers, and now you can take full advantage of these open source frameworks on Windows Azure. We covered in a separate post the basics of getting started with Symfony on Windows Azure, and in this post we’ll take a look at Doctrine’s support for sharding via SQL Database Federations, which is the result of ongoing collaboration between Microsoft Open Technologies and members of the Symfony/Doctrine community.

    SQL Database Federations

    My colleague Ram Jeyaraman covered in a blog post last December the availability of the SQL Database Federations specification.  This specification covers a set of commands for managing federations as objects in a database. Just as you can use SQL commands to create a table or a stored procedure within a database, the SQL Database Federations spec covers how to create, use, or alter federations with simple commands such as CREATE FEDERATION, USE FEDERATION, or ALTER FEDERATION.
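    For example, the lifecycle of a hypothetical federation (the federation name and distribution key below are made up purely to illustrate the command syntax the specification defines) looks like this:

    ```sql
    -- Create a federation distributed on a BIGINT range key
    CREATE FEDERATION Orders_Federation (CustID BIGINT RANGE)

    -- Route subsequent statements to the member containing CustID = 100
    USE FEDERATION Orders_Federation (CustID = 100) WITH RESET, FILTERING = OFF

    -- Split a member at CustID = 1000 to add capacity
    ALTER FEDERATION Orders_Federation SPLIT AT (CustID = 1000)
    ```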

    If you’ve never worked with federations before, the concept is actually quite simple. Your database is partitioned into a set of federation members, each of which contains a set of related data (typically grouped by a range of values for a specified federation distribution key):

     

    This architecture can provide for massive scalability in the data tier of an application, because each federation member only handles a subset of the traffic and new federation members can be added at any time to increase capacity. And with the approach used by SQL Database Federations, developers don’t need to keep track of how the database is partitioned (sharded) across the federation members – the developer just needs to do a USE FEDERATION command and the data layer handles those details without any need to complicate the application code with sharding logic.
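    The routing idea behind USE FEDERATION can be sketched in a few lines (an illustrative model only, not the actual SQL Database implementation):

    ```python
    import bisect

    # Toy model of a range-partitioned federation: member 0 owns keys below the
    # first split point, member i owns [split_points[i-1], split_points[i]), and
    # the last member owns everything at or above the final split point.
    class Federation:
        def __init__(self, split_points):
            self.split_points = sorted(split_points)

        def member_for(self, key):
            # The moral equivalent of USE FEDERATION ... (key = value): find the
            # member whose range contains the distribution key value.
            return bisect.bisect_right(self.split_points, key)

    fed = Federation(split_points=[100, 200])  # three federation members
    assert fed.member_for(42) == 0    # below the first split point
    assert fed.member_for(100) == 1   # a split value belongs to the higher member
    assert fed.member_for(250) == 2
    ```

    The point of SQL Database Federations is that this lookup lives in the data layer, not in your application code.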

    You can find a detailed explanation of sharding in the SQL Database Federations specification, which is a free download covered by the Microsoft Open Specification Promise. Questions or feedback on the specification are welcome on the MSDN forum for SQL Database.

    Doctrine support for SQL Database Federations

    The Doctrine Project is a set of open-source libraries that help ease database development and persistence logic for PHP developers. Doctrine includes a database abstraction layer (DBAL), object relational mapping (ORM) layer, and related services and APIs.

    As of version 2.3 the Doctrine DBAL includes support for sharding, including a custom implementation of SQL Database Federations that’s ready to use with SQL Databases in Windows Azure. Instead of having to create Federations and schema separately, Doctrine does it all in one step. Furthermore, the combination of Symfony and Doctrine gives PHP developers seamless access to blob storage, Windows Azure Tables, Windows Azure queues, and other Windows Azure services.

    The online documentation on the Doctrine site shows how easy it is to obtain a ShardManager instance (the Doctrine API for sharding functionality) for a SQL Database:
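    A sketch of what that looks like, based on the Doctrine DBAL 2.3 documentation (the federation name, distribution key, and connection options here are hypothetical; check the Doctrine docs for the exact option names in your version):

    ```php
    <?php
    use Doctrine\DBAL\DriverManager;
    use Doctrine\DBAL\Sharding\SQLAzure\SQLAzureShardManager;

    // Connection options including the 'sharding' block are illustrative.
    $conn = DriverManager::getConnection(array(
        'driver'   => 'pdo_sqlsrv',
        'dbname'   => 'my_database',
        'host'     => 'my_server.database.windows.net',
        'sharding' => array(
            'federationName'   => 'Orders_Federation',
            'distributionKey'  => 'CustID',
            'distributionType' => 'integer',
        ),
    ));

    $shardManager = new SQLAzureShardManager($conn);

    // Route queries to the federation member containing CustID = 100 ...
    $shardManager->selectShard(100);

    // ... or back to the federation root.
    $shardManager->selectGlobal();
    ```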

    The Doctrine site also has an end-to-end tutorial on how to do Doctrine sharding on Windows Azure, which covers creation of a federation, inserting data, repartitioning the federation members, and querying the data.

    Doctrine’s sharding support gives PHP developers a simple option for building massively scalable applications and services on Windows Azure. You get the ease and flexibility of Doctrine Query Language (DQL) combined with the performance and durability of SQL Databases on Windows Azure, as well as access to Windows Azure services such as blob storage, table storage, queues, and others.

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Symfony on Windows Azure, a powerful combination for PHP developers

    • 1 Comments

    Symfony, the popular open source web application framework for PHP developers, is now even easier to use on Windows Azure thanks to Benjamin Eberlei’s Azure Distribution Bundle project. You can find the source code and documentation on the project’s GitHub repo.

    Symfony is a model-view-controller (MVC) framework that takes advantage of other open-source projects including Doctrine (ORM and database abstraction layer), PHP Data Objects (PDO), the PHPUnit unit testing framework, Twig template engine, and others. It eliminates common repetitive coding tasks so that PHP developers can build robust web apps quickly.

    Symfony and Windows Azure are a powerful combination for building highly scalable PHP applications and services, and the Azure Distribution Bundle is a free set of tools, code, and documentation that makes it very easy to work with Symfony on Windows Azure. It includes functionality for streamlining the development experience, as well as tools to simplify deployment to Windows Azure.

    Features that help streamline the Symfony development experience for Windows Azure include changes to allow use of the Symfony Sandbox on Windows Azure, functionality for distributed session management, and a REST API that gives Symfony developers access to Windows Azure services using the tools they already know best. On the deployment side, the Azure Distribution Bundle adds some new commands that are specific to Windows Azure to Symfony’s PHP app/console that make it easier to deploy Symfony applications to Windows Azure:

    • windowsazure:init – initializes scaffolding for a Symfony application to be deployed on Windows Azure
    • windowsazure:package – packages the Symfony application for deployment on Windows Azure
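    For example, run from the root of a Symfony 2 project (using Symfony 2's standard app/console path):

    ```shell
    # Generate the Windows Azure scaffolding for the project
    php app/console windowsazure:init

    # Build a package ready for deployment to Windows Azure
    php app/console windowsazure:package
    ```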

    Benjamin Eberlei, lead developer on the project, has posted a quick-start video that shows how to install and work with the Azure Distribution Bundle. His video takes you through prerequisites, installation, and deployment of a simple sample application that takes advantage of the SQL Database Federations sharding capability built into the SQL Database feature of Windows Azure:

    Whether you’re a Symfony developer already, or a PHP developer looking to get started on Windows Azure, you’ll find the Azure Distribution Bundle to be easy to use and flexible enough for a wide variety of applications and architectures. Download the package today – it includes all of the documentation and scaffolding you’ll need to get started. If you have ideas for making Symfony development on Windows Azure even easier, you can join the project and make contributions to the source code, or you can provide feedback through the project site or right here.

    Symfony and Doctrine are often used in combination, as shown in the sample application mentioned above. For more information about working with Doctrine on Windows Azure, see the blog post Doctrine supports SQL Database Federations for massive scalability on Windows Azure.

    Symfony and Doctrine have a rich history in the open source and PHP communities, and we’re looking forward to continuing our work with these communities to make Windows Azure a big part of the Symfony/Doctrine story going forward!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Windows Azure SDK for PHP available, including support for Service Bus

    • 0 Comments

    Good news for all you PHP developers out there: I am happy to share with you the availability of Windows Azure SDK for PHP, which provides PHP-based access to the functionality exposed via the REST API in Windows Azure Service Bus. The SDK is available as open source and you can download it here.

    This is an early step as we continue to make Windows Azure a great cloud platform for many languages, including .NET, Java, and PHP. If you’re using Windows Azure Service Bus from PHP, please let us know how this SDK is working for you and how we can improve it. Your feedback is very important to us!

    You may refer to the Windows Azure PHP Developer Center for related information.

    Openness and interoperability are important to Microsoft, our customers, partners, and developers. We believe this SDK will enable PHP applications to more easily connect to Windows Azure, making it easier for applications written on any platform to interoperate with one another through Windows Azure.

    Thanks,
    Ram Jeyaraman
    Senior Program Manager
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    How to make your site faster: use web standards!

    • 0 Comments

    Justin Garret from the IE team published a post on the Exploring IE blog late last week that explains how web standards not only help reduce the cost of development and the complexity of testing across browsers and devices, but also deliver noticeable performance benefits.

    Justin’s post describes the results of performance tests run on sites before and after upgrading them to web standards, demonstrating an average of 30% better page load time in IE10. The post also includes recommendations on how developers can upgrade their sites.

    Microsoft’s commitment to web standards involves not just implementing the stable specifications edited by the W3C, but also actively participating in the definition of those standards: co-chairing the W3C HTML Working Group, proposing specs, sharing early implementations of spec proposals on HTML5 Labs, and demonstrating early implementations of upcoming product features on the IE Test Drive site. These features are then delivered in released products after feedback from the developers and users of HTML5 Labs and IE Test Drive.

    We at Microsoft Open Technologies, Inc. actively participate in the open standards process, are deeply engaged with the industry in the W3C, are developing and publishing prototypes in collaboration with the IE team, and engage with the community to gather input and feedback.

    Check out Justin's post on the Exploring IE blog to learn more about how to make your site faster using web standards.

     

  • Interoperability @ Microsoft

    Interoperability Goodness at TechEd 2012

    • 0 Comments

    I know there was a flurry of news at the recent TechEd 2012 conference in Orlando, so I wanted to point you to a piece of interoperability goodness that might have gone unnoticed in the mix: the release of the System Center 2012 – Virtual Machine Manager (VMM) OVF Export/Import tool.

    I recently chatted with my colleague Monica Martin, who is involved in the DMTF work around OVF for MS Open Tech. She gave me a lot of insight into the tool, which uses the Distributed Management Task Force (DMTF)’s Open Virtualization Format (OVF 1.1) standard to enable interoperability between System Center 2012 Virtual Machine Manager (VMM) and VMware vCenter and Citrix XenServer.

    The tool allows Microsoft’s System Center 2012 VMM users to import and export virtual machines in OVF 1.1 format to and from VMware’s vCenter and Citrix’s XenServer.

    The OVF Import/Export tool is a set of cmdlets for use with VMM. Use of OVF promotes portability and interoperability of a virtual machine across Microsoft, VMware, and Citrix hypervisors. We’ve gained valuable implementation experience with Citrix and VMware using OVF and successfully tested with vCenter and XenServer.

    Adding OVF support via the OVF Import/Export tool to the array of advanced infrastructure, configuration, and service management capabilities in System Center 2012 is another milestone in Microsoft’s plans to deliver ongoing value to our customers and partners.

    The Open Virtualization Format (OVF) is an open standard, developed in the Distributed Management Task Force (DMTF), Inc., for packaging and distributing virtual appliances to run in virtual machines. Microsoft and other industry partners are focused on the development of OVF, and Microsoft has been involved in OVF development from the outset.

    OVF 1.1 is an international standard important to customers and partners, who are looking for strategies to effectively enable and speed their on-ramp of virtualization technologies in an interoperable way.

    This is another example of how Microsoft is committed to interoperability and openness in the products and services we provide, including our multi-hypervisor and standard-based storage management features in SC 2012.

    We have now taken this even further with the release of the System Center 2012 – Virtual Machine Manager (VMM) OVF Export/Import tool, which can be downloaded from the Microsoft Download Center. More information can be found on TechNet.

    For more information about this tool and other System Center products and solutions, please visit the System Center website.

  • Interoperability @ Microsoft

    Solr and LucidWorks: enterprise search for Windows Azure

    • 1 Comments

    Last week’s Windows Azure release delivered a host of new services for developers, ranging from hybrid cloud capabilities and Linux virtual machine support to OSS technologies delivered as a service from many vendors. Gianugo Rabellino covered the high-level view of all the exciting new offerings, and in this post I’d like to take a closer look at a service that’s likely to become very popular: LucidWorks Cloud for Windows Azure.

    Lucid Imagination, the leading expert in Lucene/Solr technology, has packaged its LucidWorks Enterprise search service in a cloud-friendly way that requires only four quick and simple steps: select a plan, sign up, log in, and start using it. LucidWorks Enterprise is based on Apache Solr, the open-source search platform from the Apache Lucene project, and it includes a variety of enhancements from the search experts at Lucid that make it easy to use Lucene/Solr functionality while preserving the purity of the open source code base and open APIs. There’s a comprehensive REST API for integrating it into your applications and services, and you get all of the functionality that has made Solr and LucidWorks Enterprise so popular: high-performance indexing for a wide range of data sources, flexible searching and faceting, and user-oriented features like auto-complete, spell-checking, and click scoring.

    As covered on the Lucid Imagination web site, there are four levels of service available for LucidWorks Cloud: Micro, Small, Medium and Large. Pick the level that meets your needs, sign up for the service, and you’re ready to start creating collections and searching your content. You can currently search content in web sites, Windows shares, Microsoft SharePoint sites, FTP, and other sources, with Windows Azure blob storage support coming soon. You can even index and search your data from Hadoop if desired. All index data is stored on Windows Azure drives, which offer high availability and reliability, and the Lucid dev operations engineering team can provide expert support for your LucidWorks Cloud environment.

     If you’re new to Solr, check out the free white paper available for download from the Lucid web site, which covers the basics of LucidWorks Enterprise and shows how to use the indexing and searching functionality through the LucidWorks dashboard. Most developers will want to study the API and integrate search tightly into their own software, but you can learn all of the key concepts through the dashboard UI without writing a single line of code.

    One concept worth pointing out here is that Solr isn’t just about searching web sites and HTTP documents. Sure, it does a great job of that, but it can also index content stored in database tables, local file systems, and other sources. There is also an XML-based Solr document format that you can use for importing data directly into the Solr engine, giving developers flexibility for indexing any type of content from any source.
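    As an illustration, a minimal document in Solr's XML update format looks like this (the field names are placeholders and must match fields defined in your Solr schema):

    ```xml
    <add>
      <doc>
        <field name="id">doc-001</field>
        <field name="title">An example document</field>
        <field name="body">Content from any source can be indexed this way.</field>
      </doc>
    </add>
    ```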

    This new service from Lucid Imagination is great for those who want to get up and running quickly, but there are also developers who will want to take responsibility for all of the details and host Solr or LucidWorks Enterprise themselves. You can download LucidWorks Enterprise and install it, or you can take advantage of the simple Solr installer for Windows Azure that helps you deploy your own Solr instances as Windows Azure cloud services.

    As you can see, there are many options for getting up and running with Solr and LucidWorks. For a simple overview of how easy it is to start using the new LucidWorks Cloud service, check out this Getting Started video that covers how to create a collection, index a web site, and then search that site using the LucidWorks Cloud dashboard. Lucid continues to evolve and invest in supporting the most popular Solr clients, so there will surely be more good news for Lucene/Solr users going forward.

    In a future blog post, we’ll be covering how to use LucidWorks Cloud with popular content management systems such as WordPress and Drupal.

  • Interoperability @ Microsoft

    Windows Azure Command-Line Tool for Mac and Linux

    • 0 Comments

    Yesterday, Bill Laing of the Windows Azure team announced support for virtual machines running the Windows Server operating system as well as Linux distros such as Ubuntu, CentOS, and OpenSUSE. Now you can run existing Linux payloads on Windows Azure virtual machines, with no need to change any of your code. This capability makes Windows Azure a great platform for IaaS deployment of applications that run on Windows or Linux servers. You can find more information about what's new in Windows Azure on Scott Guthrie's blog post today and the MeetWindowsAzure event that he'll be kicking off this afternoon.

    There are two ways to work with the new virtual machine and web site capabilities of Windows Azure: through the management portal, or at the command line. This article covers the concepts behind the command-line tool, but those who prefer to use a GUI can also provision web sites and deploy Windows or Linux virtual machines from the Windows Azure portal. An easy-to-use GUI takes you through every step of the process.

    Many developers prefer the power and flexibility of command-line tools, however, which can be automated via a scripting language. If you’re working exclusively on Windows machines, the Windows PowerShell cmdlets are your best option, but for mixed environments, the Windows Azure command-line tool for Mac and Linux provides a consistent experience across Linux, Mac OS, and Windows desktops.

    Installation of the command line tool is very simple. If you’re working on a Mac OS X machine, you can use the Mac installer, and for Windows or Linux you’ll just need to install the latest version of Node.js and then type this command:

    npm install azure --global

    That will install the Windows Azure SDK for Node.js, which includes the command-line tool. Alternatively, you can download the command line tools or the Windows PowerShell cmdlets from this download page.

    To verify that you have the tool installed and ready to use, type the command azure --help and you’ll see the output shown to the right. This screen tells you which version of the tool you’re using, and how to get information about each of the commands.

    The first thing to understand is the basic structure of the commands. In general terms, you type azure followed by a topic (what you’re working with), a verb (what to do), and various optional parameters to provide additional information. Here’s a diagram that provides a general framework for understanding the command-line syntax.

    Some commands have other required command-line parameters in addition to what’s shown here. For more information about specific command syntax, see the reference documentation.
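    A few concrete examples of the pattern, using commands mentioned in this post:

    ```shell
    azure service list                   # topic: service, verb: list
    azure vm location list               # topic: vm, sub-topic: location, verb: list
    azure account affinity-group list    # topic: account, sub-topic: affinity-group, verb: list
    ```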

    The command-line tool allows you to provision new web sites and virtual machines, and that activity needs to be associated with a Windows Azure subscription. So before you start using the tool, you’ll need to download a publish settings file from the Windows Azure portal and then import it as a local configuration setting. For more information about how to do this, see the how-to guide How to use the Windows Azure Command-Line Tools for Mac and Linux, which also covers the basics of deploying web sites and virtual machines.
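    The sequence looks like this (the file name is whatever the portal gives you):

    ```shell
    # Open a browser to download the .publishsettings file for your subscription
    azure account download

    # Import the downloaded file so the tool can act on your subscription
    azure account import <path-to-file>.publishsettings
    ```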

    Let’s take a look at some of the other things you can do with the command-line tool …

     Locations and affinity groups. When you deploy a virtual machine, you must tell Windows Azure the location where you’d like for your virtual machine to be deployed – North Central US, for example. The azure vm location list command provides a list of available locations that you can use.

    You can also use an affinity group to specify the location. You can create your own affinity groups (here’s how) and then use an affinity group instead of a location when you deploy a virtual machine, cloud service, or storage account. The use of an affinity group tells Windows Azure “please host these services as close together as possible,” with a goal of reducing network latency. The azure account affinity-group list command lists your available affinity groups.

    Cloning a customized virtual machine. After you’ve deployed a virtual machine and customized it by installing and configuring software via SSH or other means, you may want to deploy additional instances of that virtual machine that will include your customizations. To do this, stop the virtual machine and use the vm capture <vm-name> <target-image-name> command to capture a cloned copy of it. Then you can deploy new instances of your customized virtual machine through the vm create command.
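    Putting those steps together (the VM, image, and user names below are placeholders; run azure help vm for the exact options in your version of the tool):

    ```shell
    # Stop the customized virtual machine
    azure vm shutdown myvm

    # Capture it as a reusable image
    azure vm capture myvm myvm-image --delete

    # Deploy a new instance from the captured image
    azure vm create mynewvm myvm-image azureuser --location "West US"
    ```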

    Virtual machine data disks. When you deploy a virtual machine, you may want to attach a separate data disk, which is a .vhd file in Windows Azure blob storage that provides additional storage for a virtual machine. The azure vm disk command provides options for creating data disks and attaching them to virtual machines. Use the azure help vm disk command to list the available options.

    Virtual machine endpoints. When you deploy multiple instances of a virtual machine, you need to set up port mapping between the virtual machines and the load balancer. The load balancer uses an internal IP address to route traffic to each virtual machine, and these mappings are defined through the azure vm endpoint create command. In a blog post later this month, we’ll take a hands-on look at the details of configuring multiple virtual machines behind a load balancer.
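    For example, to map public port 80 on the load balancer to port 8080 on a virtual machine (names and ports are placeholders):

    ```shell
    azure vm endpoint create myvm 80 8080
    ```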

     Windows Azure cloud services. Although the main focus of the command-line tool is working with virtual machines and web sites (IaaS scenarios), it can also be used to view the cloud services that you have deployed through web roles and worker roles. The azure service list command lists your cloud services, and the azure service delete command deletes a cloud service.

    Working with Linux virtual machines. The command-line tool supports both Linux and Windows operating systems for deployment on virtual machines, and for most of the commands there is no difference between working with Windows and working with Linux. Some differences are inherent in the operating system itself, however. For example, Windows uses RDP whereas Linux uses SSH. The article An Introduction to Linux on Windows Azure provides an overview of what you need to know to take full advantage of Linux virtual machines on Windows Azure.

    Write custom service management tools and workflows in Node.js. You can provision and manage virtual machines from your own code, through the new iaasClient module that provides access to the service management API from Node.js. For more information, see the reference documentation.

    As you can see, the Windows Azure command-line tool for Mac and Linux opens a whole new world of possibilities for developers. Working from a Linux or Mac desktop, you can now deploy and manage virtual machines and web sites on Windows Azure. You can also migrate an existing Linux application to Windows Azure without changing a line of code, and then begin taking advantage of powerful Windows Azure services at any time. It’s all about developer choice: your choice of client operating system, server operating system, programming language, frameworks, and tools – all supported by Windows Azure!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Eclipse Plugin for Java Developers on Windows Azure

    • 1 Comments

    I’m pleased to announce the availability of the Windows Azure Plugin for Eclipse with Java (by Microsoft Open Technologies), June 2012 CTP.

    This has been the most ambitious and technically complex update we’ve had, focusing on improving the ease of creating projects, deploying apps to the cloud, and simplifying developers’ programmatic access to various services provided by Windows Azure. This update also includes a set of other enhancements driven by user feedback.

    These are the main additions:

    - New Windows Azure Deployment Project wizard – enables you to select your JDK, Java server, and Java apps right in the improved wizard UI. The list of out-of-the-box server configurations to choose from now includes Tomcat 6, Tomcat 7, GlassFish OSE 3, Jetty 7, Jetty 8, JBoss 6, and JBoss 7 (stand-alone), and it is user-customizable. (This UI improvement is an alternative to dragging and dropping compressed files and copying over startup scripts, which was previously the main approach. That method still works fine but will likely be preferred only for more advanced scenarios now.)

    - Server Configuration role property page – enables you to easily switch the servers and applications associated with your deployment after you create the project, as part of the Role Properties dialog box.

    - “Publish to cloud” wizard – an easy way to deploy your project to the Windows Azure cloud directly from Eclipse, automating all the heavy lifting of fetching credentials, signing in, uploading, and so on. (This is a contribution from our Java partner GigaSpaces Technologies Ltd.)

    - Windows Azure Toolbar – provides easy access to several commonly used actions: Run in emulator, Reset emulator, Create cloud package, New Windows Azure Project, Publish to Windows Azure cloud, Unpublish.

    - Components property page – makes it easier for advanced users to set up project dependencies between individual Windows Azure roles in the project and other external resources such as Java application projects, as well as to describe their deployment logic.

    - Package for Windows Azure Libraries for Java (by Microsoft Open Technologies, Inc) – consists of all the JAR files needed for programming the Windows Azure APIs, including the Windows Azure Libraries for Java. It is installed by default when you install the main plugin. You add a reference to just this one Eclipse library from your Java project. You can now also easily embed the entire library in your WAR file at the same time with just a single check box (no need to configure the deployment assembly separately). This package is for users who do not use Maven and would rather not have to download all the JAR files on their own.

    - Instance input endpoint configuration UI – helps enable remote debugging and JMX diagnostics for specific compute instances running in the cloud in scenarios with multi-instance deployments. Users can do this by configuring this new type of Windows Azure endpoint. (Previously, remote debugging could be made to work reliably only for single-instance deployments.)

    - Windows Azure Access Control Services Filter (by Microsoft Open Technologies, Inc) – enables your Java application to seamlessly take advantage of Windows Azure Active Directory Access Control (ACS) authentication using various identity providers (such as Google, Live.com, and Yahoo). You don’t have to write authentication logic yourself, just configure a few options and let the filter do the heavy lifting of enabling users to sign in using ACS. Then just write the code that gives users access to resources based on their identity, as returned to your app by the filter inside the Request object.

    Martin Sawicki

    Principal Program Manager

    Microsoft Open Technologies, Inc.

    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    GigaSpaces Working with MS Open Tech on Java Tools for Windows Azure

    • 0 Comments

    We’re pleased to announce that GigaSpaces Technologies Ltd, an established leader in helping enterprises move their Java applications to the cloud, has joined Microsoft Open Technologies, Inc. and Persistent Systems Ltd. in the development work behind the latest version of the Windows Azure Plugin for Eclipse with Java (by Microsoft Open Technologies) - June 2012 CTP.

    GigaSpaces has contributed the “Publish to cloud” wizard to the plugin, enabling Java developers to easily deploy their projects to the Windows Azure cloud directly from within Eclipse, thus eliminating the need for manual uploads via the Windows Azure portal. GigaSpaces has also contributed other new capabilities for Java developers, including:

    • the ability to view the progress of the deployment in a Windows Azure Activity Log view in Eclipse
    • the ability to reconfigure remote desktop access as part of the deployment process
    • the ability to delete previously published deployments

    You can read more about the latest plugin update here, and you can learn how to use the “Publish to cloud” feature here.

    Known for its industry-leading scalable application platforms, GigaSpaces Technologies is the creator of Cloudify, an innovative Open PaaS stack solution that enables on-boarding of mission-critical and big-data applications to the cloud without any code or architectural changes. Cloudify's recipe-based approach provides the flexibility and control required to manage the deployment, scaling, management and high availability of all the tiers of your application. Hundreds of tier-1 organizations worldwide use GigaSpaces technology to enhance IT efficiency and performance, among which are Fortune Global 500 enterprises and ISVs, from many industries spanning financial services, e-commerce, Telco, healthcare, and more.

    Thanks to the team at GigaSpaces for all they’ve done to help streamline and improve the Windows Azure development and deployment experience for Java developers! We look forward to continued collaboration with GigaSpaces in the future.

    Martin Sawicki

    Principal Program Manager

    Microsoft Open Technologies, Inc.

    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Migrating HTML5-based applications to Windows Phone overnight with Apache Cordova and jQuery Mobile

    • 4 Comments

    Microsoft Open Technologies, Inc., the Apache Cordova team and the jQuery Mobile team recently met with over 20 top PhoneGap/Apache Cordova developers for a Hackathon in San Francisco to gather feedback on building HTML5 applications running on Windows Phone with Apache Cordova and the jQuery Mobile theme for Windows Phone (Metro style). The Windows Phone team also joined the party, asking developers about their HTML5 and JavaScript experiences on top of the Windows Phone Web Browser control.

    While some of the developers had backgrounds in start-ups and others were independent developers, every attendee had one thing in common: all of them had PhoneGap/Cordova applications published on Android and/or iPhone platforms. During the event, using Apache Cordova and jQuery Mobile, we helped attendees migrate their HTML5-based applications to Windows Phone. For many of the attendees, this was their first time working with Windows Phone. The energy at the event was amazing as developers got to experience first-hand the ease of integrating Apache Cordova and jQuery Mobile with Windows Phone.

    You can read the report on the event from Jesse and Steve of the Apache Cordova team here.

    After a few hours of learning Visual Studio Express for Phone, coding and eating pizza, the first demos of HTML5 applications running on Windows Phone started to pop up. Developers saw their applications running on their new Windows Phone devices, which they received as part of the event along with AppHub tokens for them to publish applications on the Windows Phone marketplace.

    Developers from Learnzapp, with no previous experience in Windows Phone development, migrated their Cordova/jQuery Mobile Law School Admission Test application to Windows Phone and applied the jQuery Mobile theme for Windows Phone (Metro style) to their HTML5 controls in only a few hours. Those developers plan to submit the application to the Windows Phone marketplace in the next few days. You can read their own report on the event on their blog. Below is a screenshot of the LSAT application running on an Android device, a Windows Phone and an iPhone.

    [Screenshot: the LSAT application running on an Android device, a Windows Phone and an iPhone]

    Developers from Tiggzi, who deliver a cloud-based builder for HTML5, jQuery Mobile and Apache Cordova applications, kicked off the addition of Windows Phone to the list of platforms their tool targets. They announced the added support for Windows Phone earlier this month, only 3 weeks after the event.

    The event was a success not only because developers left the Hackathon with functional HTML5-based Windows Phone applications after migrating them in a single night, but also because these experienced developers helped us identify key aspects of the migration process that will help us make HTML5 and JavaScript development for Windows Phone even better.

    We want to thank everyone who attended the event, and look forward to further engagement with this community. Be sure to take a look at the video below to see a short demo of an HTML5 application development with Apache Cordova, jQuery Mobile and the new jQuery Mobile theme for Windows Phone (Metro style).

    To learn more about HTML5 and JavaScript development for Windows Phone, visit this page where you will find related resources, articles and tutorials.

    Abu Obeida Bakhach
    Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    MongoDB Experts video series

    • 0 Comments

    MS Open Tech is pleased to announce a new series of videos on Channel 9 that covers MongoDB topics for developers working on Windows Azure and Windows. Each video in the series features insights from one of the MongoDB experts at 10gen, the leader in MongoDB development, support, training and consulting.

    The first three videos in the series have been posted, and more are coming soon. Here’s what has been covered in the first videos in the series …

    MongoDB Overview with Jared Rosoff provides a high-level overview of the approach that MongoDB takes for delivering highly scalable read and write operations. If you’re entirely new to MongoDB, this is the place to start. MongoDB is one of many database platforms that are often grouped together as “NoSQL databases,” but each NoSQL database has its own unique philosophy and personality. In this video, you’ll get a feel for MongoDB’s personality.

    MongoDB Replica Sets with Sridhar Nanjundeswaran covers the key concept at the heart of MongoDB scalability: replica sets, which are groups of MongoDB servers that can provide high availability and performance even in the face of failures at the network and hardware level. MongoDB replica sets are easy to set up and deploy, and Sridhar sets up a simple replica set from scratch and then shows how it gracefully handles various failover scenarios.
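To make the replica-set idea concrete, here is a small sketch in mongo-shell-style JavaScript that builds the kind of configuration document a replica set is initiated with. The host names and ports are illustrative assumptions, not taken from the video:

```javascript
// Build a replica set configuration object of the shape expected by
// the mongo shell's rs.initiate(). Host names/ports are examples only.
function makeReplicaSetConfig(name, hosts) {
  return {
    _id: name,
    members: hosts.map(function (host, i) {
      return { _id: i, host: host };
    })
  };
}

var config = makeReplicaSetConfig("rs0", [
  "localhost:27017",
  "localhost:27018",
  "localhost:27019"
]);
// In the mongo shell you would pass this object to rs.initiate(config);
// the servers then elect a primary and begin replicating.
console.log(JSON.stringify(config));
```

With three members, the set can survive the loss of any one server and still elect a primary, which is the failover behavior Sridhar demonstrates.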

    MongoDB C#/.NET Driver with Robert Stam is a hands-on look at how to do common database operations in C# through use of the C#/.NET driver from 10gen. Robert is the developer of the driver, and in this video he shows how to create, read, update and delete documents in MongoDB collections.

    MS Open Tech has been working closely with 10gen to improve the MongoDB experience on Windows Azure, and we’re working together on a variety of new initiatives to continue on that path. Future videos will cover the results of that work, as well as advanced topics related to the current videos (for example, LINQ support in the C#/.NET driver) and other topics of interest to developers who are working with MongoDB on Windows Azure.

    Stay tuned, and if there are MongoDB/Azure topics you’d be interested in seeing covered in this series please let us know!

  • Interoperability @ Microsoft

    OData submitted to OASIS for standardization

    • 2 Comments

    Citrix, IBM, Microsoft, Progress Software, SAP AG, and WSO2 have submitted a proposal to OASIS to begin the formal standardization process for OData. You can find all the details here, and OData architect Pablo Castro also provides some context for this announcement over on the OData.org blog. It’s an exciting time for the OData community!

    OData is a REST-based web protocol for querying and updating data, and it’s built on standardized technologies such as HTTP, Atom/XML, and JSON. If you’re not already familiar with OData, the OData.org web site is the best place to learn more.

    Many organizations are already working with OData, and it has proven to be a useful and flexible technology for enabling interoperability between disparate data sources, applications, services, and clients. Chris Woodruff has a blog post this week that lists many OData implementations, and as he explained in a post last week, “By having data that is easy to consume and understand organizations can allow their customers and partners (via the developers that build the solutions using one or more of the available OData libraries) to leverage the value of curated data that the organization owns.” Many organizations are already pursuing that vision – as Ralf Handl of SAP AG told us at a recent OData meetup, “my job is relatively simple: I want to put OData into all of our products.”

    We support OData in many Microsoft products and services, and the list is growing longer all the time. This includes OData consumers such as Microsoft Excel (via the free PowerPivot add-in) as well as OData producers such as Microsoft SharePoint, Microsoft SQL Server Reporting Services, and Microsoft Dynamics CRM. Windows Server supports OData, and Windows Azure provides OData support in many areas, including Windows Azure Storage Table Service, Windows Azure Marketplace, and ACS Management Service. We’re also making many Microsoft data sources available in OData format. For example:

    • The OData feed from Microsoft Research can be used to query against publications, projects, events, and other entities.
    • The Windows Azure Marketplace DataMarket offers OData feeds for Business, Government, Health Science, Retail, and many other categories.

    A variety of OSS technologies can benefit from OData support, and our team has delivered tools to make it easy for OSS developers to expose data as OData from many platforms. Earlier this year we announced Open Source OData Tools for MySQL and PHP Developers, including the OData Producer Library for PHP and the OData Connector for MySQL. We’re continuing to work closely with various OSS communities on OData support, and we’ll be releasing information soon on new ways to provide OData feeds from popular OSS frameworks and applications.

    OData’s query syntax is straightforward from a developer’s perspective. For example, here’s a query that you can use in any browser to return the count of the number of products in the sample Northwind database OData feed on OData.org:

            http://services.odata.org/Northwind/Northwind.svc/Products/$count

    In a typical application, that query would be generated behind the scenes, and the returned result would be rendered in a nicely formatted manner as appropriate for the particular application.
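A sketch of how such queries might be generated behind the scenes: the helper below assembles OData URLs from a service root, an entity set, and optional system query options. The Northwind service root comes from the example above; the helper itself is an illustrative assumption, not part of any OData library:

```javascript
// Assemble an OData query URL from its parts. System query options
// ($top, $filter, $format, ...) are appended as a query string.
function odataUrl(serviceRoot, entitySet, options) {
  var url = serviceRoot.replace(/\/$/, "") + "/" + entitySet;
  var pairs = [];
  for (var key in (options || {})) {
    pairs.push(key + "=" + encodeURIComponent(options[key]));
  }
  return pairs.length ? url + "?" + pairs.join("&") : url;
}

var root = "http://services.odata.org/Northwind/Northwind.svc";

// The $count example from the text:
var countUrl = odataUrl(root, "Products/$count");

// The first 5 products, returned as JSON:
var topUrl = odataUrl(root, "Products", { "$top": "5", "$format": "json" });
```

Here `countUrl` reproduces the query shown above, and `topUrl` shows how additional system query options compose onto the same entity set.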

    To enable those sorts of scenarios, developers need OData support for the languages, frameworks, and tools that they’re already using. Many developer tools already offer OData support. Here are a few examples:

    • Microsoft Visual Studio offers comprehensive OData support through WCF Data Services.
    • OData support is provided by OSS SDKs for iPhone, Android, and other frameworks.
    • Telerik has developed a variety of developer tools and services for creating OData consumers and producers.
    • ComponentOne offers OData support in their BarChart and LineChart controls.
    • Validation is a critical step in creating robust OData services, and the Outercurve Foundation provides an OData Service Validation Tool that can be used to test implementations against the OData spec.
    • The OData4j project is an open-source toolkit to help Java developers add OData support to their applications and services.

    As you can see, the OData ecosystem is growing, and awareness of OData is growing with it. At the OData meetup earlier this year, we heard from many people who are finding innovative ways to use OData in their organizations to improve customer service, enable new scenarios, and increase efficiency. Anant Jhingran of APIgee stated in his presentation at the meetup that “if data isn’t your core business, then you should give it away.” It was a provocative statement, and for those who share that philosophy, OData is a great tool for making it easier to share data.

    If you’re interested in implementing OData or contributing to the OData standard, now’s the time to get involved. You can work with the odata.org community to help drive awareness and share implementation experiences, or join the OASIS OData technical committee (OData TC) to contribute to the standard.  The OData TC will be a vibrant and diverse group of people – just like the community who got us here today – working together to open up data sources in a standardized way. As Pablo stated in his blog post, the main value of OData is not any particular design choice, but the fact that enough people agree to the same pattern, thus removing friction from sharing data across independent producers and consumers. The first TC call will be in late July, so there’s still plenty of time to get involved if you’d like to be part of the team that will be helping OData evolve.

    Congratulations to everyone who has worked so hard to get OData to this important step on the journey to standardization! We’re looking forward to working with the community to develop OData into a formal standard through OASIS.

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    News from MS Open Tech: Initial HTTP Speed+Mobility Open Source Prototype Now Available for Download

    • 3 Comments

    Microsoft Open Technologies, Inc. has just published an initial open source prototype implementation of HTTP Speed+Mobility. The prototype is available for download on html5labs.com, where you will also find pointers to the source code.

    The IETF HTTPbis workgroup met in Paris at the end of March to discuss how to approach HTTP 2.0 in order to meet the needs of an ever larger and more diverse web. It would be hard to overstate the importance of this work: it will impact how billions of devices communicate over the internet for years to come, from low-powered sensors, to mobile phones, to tablets, to PCs, to network switches, to the largest datacenters on the planet.

    Prior to that IETF meeting, Jean Paoli and Sandeep Singhal announced in their post to the Microsoft Interoperability blog that Microsoft had contributed the HTTP Speed+Mobility proposal as input to that conversation.

    The prototype implements the WebSocket-based session layer described in the proposal, as well as parts of the multiplexing logic incorporated from Google’s SPDY proposal. The code does not support header compression yet, but it will in upcoming refreshes.
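The idea behind that multiplexing logic can be illustrated conceptually: frames on a single connection each carry a stream identifier, so the responses for several resources can be interleaved in flight and reassembled by the client. The sketch below shows only the concept; it is not the Speed+Mobility wire format, and the frame shape is a made-up assumption:

```javascript
// Conceptual demultiplexer: group interleaved frames back into
// per-stream payloads using each frame's stream id.
function demultiplex(frames) {
  var streams = {};
  frames.forEach(function (frame) {
    if (!streams[frame.streamId]) streams[frame.streamId] = "";
    streams[frame.streamId] += frame.data;
  });
  return streams;
}

// Frames for a page (stream 1) and a stylesheet (stream 3),
// interleaved on one connection:
var frames = [
  { streamId: 1, data: "<html>" },
  { streamId: 3, data: "body{}" },
  { streamId: 1, data: "</html>" },
  { streamId: 3, data: "h1{}" }
];

var streams = demultiplex(frames);
// streams["1"] === "<html></html>", streams["3"] === "body{}h1{}"
```

Interleaving like this is what lets a single connection fetch a page's HTML, scripts, and images concurrently instead of serially.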

    The open source software comprises a client implemented in C# and a server implemented in Node.js running on Windows Azure. The client is a command line tool that establishes a connection to the server and can download a set of web pages that include html files, scripts, and images. We have made available on the server some static versions of popular web pages like http://www.microsoft.com and http://www.ietf.org, as well as a handful of simpler test pages.

    We invite you to inspect the open source code directly in order to familiarize yourself with how everything works; we have also made available a readme file at this location describing the various options available, as well as the meaning of the output returned to the console.

    So, please download the prototype, try it out, and let us know what you think: every developer is a stakeholder in the HTTP 2.0 standardization process. We look forward to hearing your feedback, and to applying it to upcoming iterations of the prototype code.

    Adalberto Foresti
    Senior Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Here’s to the first release from MS Open Tech: Redis on Windows

    • 16 Comments

    The past few weeks have been very busy in our offices as we announced the creation of Microsoft Open Technologies, Inc. Now that the dust has settled it’s time for us to resume our regular cadence in releasing code, and we are happy to share with you the very first deliverable from our new company: a new and significant iteration of our work on Redis on Windows, the open-source, networked, in-memory, key-value data store.

    The major improvements in this latest version involve the process of saving data to disk. Redis on Linux uses an OS feature called fork/copy-on-write. This feature is not available on Windows, so we had to find a way to mimic the same behavior without completely changing the save-to-disk process, so as to avoid future integration issues with the Redis code.

    The version we released today implements the copy-on-write process at the application level: instead of relying on the OS, we added code to Redis so that some data structures are duplicated in such a way that Redis can still serve requests from clients while saving data to disk (thus achieving the same effect that fork/copy-on-write provides automatically on Linux).
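A toy illustration of application-level copy-on-write: when a save begins, the current state is duplicated, the saver reads only the duplicate, and client writes continue against the live structure. This mimics the idea described above in a few lines; it is not the actual Redis on Windows implementation:

```javascript
// Minimal copy-on-write-style store: a snapshot taken at the start of
// a save stays consistent even while clients keep writing.
function Store() {
  this.live = {};     // structure that clients read and write
  this.snapshot = null; // frozen copy the saver reads from
}
Store.prototype.set = function (key, value) {
  this.live[key] = value; // writes land in live data, never the snapshot
};
Store.prototype.beginSave = function () {
  this.snapshot = Object.assign({}, this.live); // duplicate current state
};
Store.prototype.endSave = function () {
  var saved = this.snapshot;
  this.snapshot = null;
  return saved; // what would have been written to disk
};

var store = new Store();
store.set("a", 1);
store.beginSave();
store.set("a", 2);            // write arrives mid-save...
var saved = store.endSave();  // ...but the snapshot still holds a: 1
```

Real Redis duplicates structures lazily and selectively rather than copying everything up front, but the consistency property is the same: the data being written to disk is isolated from concurrent client writes.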

    You can find the code for this new version on the new MS Open Tech repository in GitHub, which is currently the place to work on the Windows version of Redis as per guidance from Salvatore Sanfilippo, the original author of the project. We will also continue working with the community to create a solid Windows port.

    We don’t consider this production-ready code, but rather a solid code base to share with the community to solicit feedback. As such, while we pursue stabilization, we are keeping the older version as the default/stable branch on the GitHub repository. To try out the new code, please go to the bksavecow branch.

    In the next few weeks we plan to extensively test the code so that developers can use it for more serious testing. In the meantime, we will keep looking at the ‘save on disk’ process to find out if there are other opportunities to make the code perform even better. We will promote the bksavecow branch to master as soon as we (and you!) are confident the code is stable.

    Please send your feedback, file suggestions and issues to our GitHub repository. We look forward to further iterations and to working with the Redis community at large to make the Windows experience even better.

    Claudio Caldato

    Principal Program Manager

    Microsoft Open Technologies, Inc.

    A subsidiary of Microsoft Corporation.

  • Interoperability @ Microsoft

    More news from MS Open Tech: announcing the open source Metro style theme for jQuery Mobile

    • 25 Comments

    Starting today, the Metro style theme for jQuery Mobile, the popular open source mobile user interface framework, is available for download on GitHub and can be used as a NuGet package in Visual Studio.

    The theme enables HTML5 pages to adapt automatically to the Metro design style when rendered on Windows Phone 7.5. The Metro style theme is open source and available for download here. This new Metro style theme’s development was sponsored by Microsoft Open Technologies, Inc. working closely with Sergei Grebnov, an Apache Cordova committer and jQuery Mobile developer.

    The theme looks just gorgeous, doesn’t it?

    [Screenshots: the Metro style theme for jQuery Mobile rendered on Windows Phone]

    The CSS and JavaScript theme adapts to the current theme used on Windows Phone and applies the right styling to the jQuery Mobile controls. This allows mobile HTML5 web sites and hybrid applications to integrate naturally into the Windows Phone Metro style experience, and gives developers the option not only to rapidly integrate the theme into their existing applications, but also to contribute to this open source project through GitHub.
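As a rough sketch of what "adapting to the current theme" can mean, the function below maps the phone's background setting to a CSS class that would be applied to the page. Both the function and the class names are hypothetical illustrations, not the actual names used by the Metro style theme:

```javascript
// Hypothetical helper: choose a styling class from the phone's
// current background theme ("dark" or "light").
function metroThemeClass(background) {
  return background === "light" ? "ui-metro-light" : "ui-metro-dark";
}

// e.g. document.body.className = metroThemeClass(currentBackground);
// so controls pick up light-on-dark or dark-on-light styling to match
// the user's system-wide theme choice.
```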

    You can see an extensive demo of the theme on this page, and you can learn more on this site where we are publishing new articles, references and source code samples for developing with Apache Cordova and the Metro style theme for jQuery Mobile.

    This is another milestone in our continuous engagement with the community. Our team has been working closely with the Windows Phone division to support the mobile HTML5 and JavaScript open source communities over the last year to bring popular open source projects to Windows Phone:

    • A few months ago, we sponsored the development of full Windows Phone support for PhoneGap (now Apache Cordova), the open source framework that lets applications be built for iOS, Android, Windows Phone and other mobile platforms using HTML5, CSS and JavaScript.
    • At the same time significant improvements were brought to jQuery Mobile (read more about this in our previous blog post): feedback from the community has been great and was partly responsible for our decision to expand our engagement with jQuery Mobile and sponsor this work.

    We believe it is important for developers to have choices when targeting Windows Phone, and we also want them to be able to deliver a good experience to the users of their applications, especially when making the choice of using Web standards (HTML5, CSS and JavaScript) to target multiple mobile platforms by picking solutions such as Apache Cordova.

    To that end, developers already enjoy a selection of Apache Cordova plugins that give their applications a Windows Phone touch, such as Social Share, Bing Map launcher and Live Tile. Now developers can use the new open source Metro style theme for jQuery Mobile to give their mobile apps and websites the Metro style look and feel, and offer end users an experience similar to the one they get with native applications.

    As usual we are very interested in hearing from developers and gathering feedback about the experience of developing HTML5-based applications and websites on Windows Phone. Let us know what other features, tools and frameworks you’d like to see.

    Abu Obeida Bakhach
    Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Announcing one more way Microsoft will engage with the open source and standards communities

    • 42 Comments

    I am really excited to be able to share with you today that Microsoft has announced a new wholly owned subsidiary known as Microsoft Open Technologies, Inc., to advance the company’s investment in openness – including interoperability, open standards and open source.

    My existing Interoperability Strategy team will form the nucleus of this new subsidiary, and I will serve as President of Microsoft Open Technologies, Inc.

    The team has worked closely with many business groups on numerous standards initiatives across Microsoft, including the W3C’s HTML5, IETF’s HTTP 2.0, cloud standards in DMTF and OASIS, and in many open source environments such as Node.js, MongoDB and Phonegap/Cordova.

    We help provide open source building blocks for interoperable cloud services and collaborate on cloud standards in DMTF and OASIS; support developer choice of programming languages to enable Node.js, PHP and Java in addition to .NET in Windows Azure; and work with the PhoneGap/Cordova and jQuery Mobile and other open source communities to support Windows Phone.

    It is important to note that Microsoft and our business groups will continue to engage with the open source and standards communities in a variety of ways, including working with many open source foundations such as Outercurve Foundation, the Apache Software Foundation and many standards organizations. Microsoft Open Technologies is further demonstration of Microsoft’s long-term commitment to interoperability, greater openness, and to working with open source communities.

    Today, Microsoft supports thousands of open standards, and many open source environments, including Linux, Hadoop, MongoDB, Drupal, Joomla and others, run on our platform.

    The subsidiary provides a new way of engaging in a more clearly defined manner. This new structure will help facilitate the interaction between Microsoft’s proprietary development processes and the company’s open innovation efforts and relationships with open source and open standards communities.

    This structure will make it easier and faster to iterate and release open source software, participate in existing open source efforts, and accept contributions from the community. Over time the community will see greater interaction with the open standards and open source worlds.

    As a result of these efforts, customers will have even greater choice and opportunity to bridge Microsoft and non-Microsoft technologies together in heterogeneous environments.

    I look forward to sharing more on all this in the months ahead, as well as to working not only with the existing open source developers and standards bodies we work with now, but with a range of new ones.

    Thanks,

    Jean

  • Interoperability @ Microsoft

    BuildNewGames.com to help developers write cross-browser code

    • 0 Comments

    BuildNewGames.com, a new site to make building web games easier for developers using HTML5, CSS3 and JavaScript, is now live!

    Along with a new partnership with Bocoup, Microsoft announced the launch of this new site at JSConf.

    You can read the post from Justin Garret, Senior Product Manager in the IE team, announcing the partnership and the new site launch.

    Over the next few months, the site will feature 50 tutorials ranging from the coding basics of games all the way to how to make money across a range of platforms.  Follow @buildnewgames or @IE for the latest.

    Developers want to be able to write code that works reliably in all modern browsers – including IE10 and IE9, Chrome and Firefox, along with mobile browsers – but targeting them all today means a complex test matrix and higher development costs. Through standards-body leadership and practical learning, Microsoft wants to make it easier for web developers to target various browsers at once, allowing them to concentrate on innovating and delivering an outstanding web and gaming experience to end users.

    BuildNewGames.com already features technical articles on Animation, Compositing, Graphics, Mobile, SVG, Sprites, Tools, and WebSockets.

    Developing games is becoming lots of fun again!

  • Interoperability @ Microsoft

    Speed and Mobility: An Approach for HTTP 2.0 to Make Mobile Apps and the Web Faster

    • 14 Comments

    This week, face-to-face meetings begin at the IETF on how to approach HTTP 2.0 and improve the Internet. How the industry moves forward together on the next version of HTTP – the protocol through which every application and service on the web communicates today – can positively impact user experience, operational and environmental costs, and even the battery life of the devices you carry around.

    As part of this discussion of HTTP 2.0, Microsoft will submit to the IETF a proposal for “HTTP Speed+Mobility.” The approach we propose focuses on all the web’s end users – emphasizing performance improvements and security while at the same time accounting for the important needs of mobile devices and applications.

    Why HTTP 2.0?

    Today’s HTTP has historical limitations based on what used to be good enough for the web. Because of this, the HTTPbis working group in the Internet Engineering Task Force (IETF) has approved a new charter to define HTTP “2.0” to address performance limitations with HTTP. The working group’s explicit goal is to keep compatibility with existing applications and scenarios, specifically to preserve the existing semantics of HTTP.

    Why this approach?

    Improving HTTP starts with speed. There is already broad consensus about the need to make web browsing much faster.

    We think that apps—not just browsers—should get faster too. More and more, apps, in addition to the browser, are how people access web services.

    Improving HTTP should also make mobile better. For example, people want their mobile devices to have better battery life. HTTP 2.0 can help decrease the power consumption of network access. Mobile devices also give people a choice of networks with different costs and bandwidth limits. Embedded sensors and clients face similar issues. HTTP 2.0 can make this better.

    This approach includes keeping people and their apps in control of network access. Specifically, the client remains in control over the content that it receives from the web. This extends a key attribute of the existing HTTP protocol that has served the Web well. The app or browser is in the best position to assess what the user is currently doing and what data is already locally available. This approach enables apps and browsers to innovate more freely, delivering the most relevant content to the user based on the user’s actual needs.

    We think that rapid adoption of HTTP 2.0 is important. To make that happen, HTTP 2.0 needs to retain as much compatibility as possible with the existing Web infrastructure. Awareness of HTTP is built into nearly every switch, router, proxy, load balancer, and security system in use today. If the new protocol is “HTTP” in name only, upgrading all of this infrastructure would take too long. By building on existing web standards, the community can set HTTP 2.0 up for rapid adoption throughout the web.

    Done right, HTTP 2.0 can help people connect their devices and applications to the Internet fast, reliably, and securely over a number of diverse networks, with great battery life and low cost.

    How?

    The HTTP Speed+Mobility proposal starts from both the Google SPDY protocol (a separate submission to the IETF for this discussion) and the work the industry has done around WebSockets.

    SPDY has done a great job raising awareness of web performance and taking a “clean slate” approach to improving HTTP to make the Web faster. The main departures from SPDY are to address the needs of mobile devices and applications.

    Looking ahead

    We are looking forward to a vigorous, open discussion within the IETF around the design of HTTP 2.0. We are excited by the promise of an HTTP 2.0 that will serve the Internet for decades to come. As the effort progresses, we will continue to provide updates on this blog. Consistent with our other web standards engagements, we will also provide early implementations of the HTTP 2.0 specification on the HTML5 Labs site.

    - Sandeep Singhal, Group Program Manager, Windows Core Networking

    - Jean Paoli, General Manager, Interoperability Strategy

  • Interoperability @ Microsoft

    New Interoperability Solutions for SQL Server 2012

    • 0 Comments

    I am excited to share some great news about how we are opening up the SQL Server data platform even further with expanded interoperability support through new tools that allow customers to modernize their infrastructure while maximizing existing investments and extending virtually any data anywhere.

    The SQL Server team today introduced several tools that enable interoperability with SQL Server 2012.

    These tools help developers to build secure, highly available and high performance applications for SQL Server in .NET, C/C++, Java and PHP, on-premises and in the cloud.

    These new tools include a Microsoft SQL Server 2012 Native Client, a SQL Server ODBC Driver for Linux, backward compatibility with ADO.Net and the Microsoft JDBC Driver 4.0 and PHP Driver 3.0.

    You can find more information on all this goodness on the SQL Server blog here.

  • Interoperability @ Microsoft

    SAG Awards Drupal Website Moves to Windows Azure

    • 0 Comments

    The success of the recent Screen Actors Guild (SAG) Awards ceremony was buoyed by the move of its Drupal-based website from internal Linux servers to Windows Azure.

    The SAG Awards site is a highly visible, high-traffic website running on Drupal. Hosting it on Azure provides a scalable, public cloud environment for the SAG team. They can scale compute and storage up or down according to expected website loads, thereby getting a more scalable, manageable and cost-effective solution for running their site.

    SAG also gets the benefits of PaaS – no need to manage operating system patches, virtual machine images, network topology, etc. This is particularly useful for SAG, as the site has stable traffic for nine months of the year but spikes during the three months from when award nominations open to the night of the event itself.

    The SAG Awards site was previously hosted on internal Linux boxes. In previous years, the site suffered outages and slow performance during peak-usage days, with SAG having to repeatedly upgrade its hardware to meet demand on those days. That upgraded hardware was then not optimally used during the rest of the year.

    The usage pattern for the SAG Awards site fluctuates, spiking between November and February: the site is used for SAG award nominations from early November through the announcement of nominations in mid-December. Peak usage comes on the night of the awards ceremony, when multiple uploads of pictures, news articles, and site visits happen.

    What is even more impressive is that both visits and page views almost doubled on the night of the event. In 2011, some 222,816 people visited the site and 434,743 pages were viewed, while this year there were some 325,303 site visits and 789,310 page views, reflecting the stability and performance of the site on Windows Azure.

    Microsoft started working with the SAG Awards team in May 2011, when their CIO Erin Griffin joined the Interoperability Executive Council (IEC) – founded by Microsoft in 2006 with the goal of identifying the industry’s greatest areas of need and working together to create solutions – and attended a council meeting.

    In September Mike Story, SAG’s chief architect, attended an IEC work stream meeting and asked for Microsoft’s support in porting the site to Azure. The Business Platform Division’s Customer Experience (CAT) team, the Interoperability group and Windows Azure all started working with SAG in early October and, on December 20, 2011, the site went live on Windows Azure.

    “We moved to Windows Azure after looking at the services it offered,” said Erin Griffin, CIO at SAG. “Understanding the best usage scenario for us took time and effort, but with help from Microsoft, we successfully moved our site to Windows Azure and the biggest traffic day for us went off with flying colors.”

    This is just one real world outcome from the IEC, which has counseled Microsoft on many interoperability topics and introduced a number of real world scenarios for discussion. The IEC, working together with Microsoft, has developed a number of solutions for these scenarios, with this one for the SAG Awards being the latest.

    Curt Peterson, Microsoft’s Principal Group Program Manager, BPD Customer Experience, notes that the success of Sunday’s SAG Awards ceremony underscores how Windows Azure is a scalable, open Cloud platform ready for production use. “We are committed to making it easier for all our customers to use cloud computing on their terms with Windows Azure,” he says.

  • Interoperability @ Microsoft

    Open Source OData Tools for MySQL and PHP Developers

    • 4 Comments

    To enable more interoperability scenarios, Microsoft today released two open source tools that provide support for the Open Data Protocol (OData) for PHP and MySQL developers working on any platform.

    The growing popularity of OData is creating new opportunities for developers working with a wide variety of platforms and languages. An ever increasing number of data sources are being exposed as OData producers, and a variety of OData consumers can be used to query these data sources via OData’s simple REST API.

    In this post, we’ll take a look at the latest releases of two open source tools that help PHP developers implement OData producer support quickly and easily on Windows and Linux platforms:

    • The OData Producer Library for PHP, an open source server library that helps PHP developers expose data sources for querying via OData. (This is essentially a PHP port of certain aspects of the OData functionality found in System.Data.Services.)
    • The OData Connector for MySQL, an open source command-line tool that generates an implementation of the OData Producer Library for PHP from a specified MySQL database.

    These tools are written in platform-agnostic PHP, with no dependencies on .NET.

    OData Producer Library for PHP

    figure1

    Last September, my colleague Claudio Caldato announced the first release of the OData Producer Library for PHP, an open-source cross-platform PHP library available on Codeplex. This library has evolved in response to community feedback, and the latest build (Version 1.1) includes performance optimizations, finer-grained control of data query behavior, and comprehensive documentation.

    OData can be used with any data source described by an Entity Data Model (EDM). The structure of relational databases, XML files, spreadsheets, and many other data sources can be mapped to an EDM, and that mapping takes the form of a set of metadata describing the entities, associations and properties of the data source. The details of EDM are beyond the scope of this blog, but if you’re curious, here’s a simple example of how EDM can be used to build a conceptual model of a data source.
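    To give a feel for that kind of metadata, here is a rough, hypothetical sketch (expressed in TypeScript for brevity; the names and shapes below are invented for illustration and are not the library’s actual API) of EDM-style metadata describing a single entity:

```typescript
// Hypothetical, simplified EDM-style metadata for one entity.
// Real EDM models are richer (associations, complex types, navigation
// properties); this sketch only shows the idea of describing a data
// source's shape as entities and typed properties.
interface PropertyDef {
  name: string;
  type: "Edm.Int64" | "Edm.String" | "Edm.DateTime";
  nullable: boolean;
}

interface EntityType {
  name: string;
  key: string[]; // names of the key properties
  properties: PropertyDef[];
}

const customerEntity: EntityType = {
  name: "Customer",
  key: ["CustomerID"],
  properties: [
    { name: "CustomerID", type: "Edm.Int64", nullable: false },
    { name: "Name", type: "Edm.String", nullable: false },
    { name: "JoinedOn", type: "Edm.DateTime", nullable: true },
  ],
};
```

    A relational table, a spreadsheet range, or an XML document could each be mapped onto a description like this one.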

    The OData Producer Library for PHP is essentially an open source reference implementation of OData-relevant parts of the .NET framework’s System.Data.Services namespace, allowing developers on non-.NET platforms to more easily build OData providers. To use it, you define your data source through the IDataServiceMetadataProvider (IDSMP) interface, and then you can define an associated implementation of the IDataServiceQueryProvider (IDSQP) interface to retrieve data for OData queries. If your data source contains binary objects, you can also implement the optional IDataServiceStreamProvider interface to handle streaming of blobs such as media files.

    Once you’ve deployed your implementation, the flow of processing an OData client request is as follows:

    1. The OData server receives the submitted request, which includes the URI to the target resource and may also include $filter, $orderby, $expand and $skiptoken clauses to be applied to the target resource.
    2. The OData server parses and validates the headers associated with the request.
    3. The OData server parses the URI to resource, parses the query options to check their syntax, and verifies that the current service configuration allows access to the specified resource.
    4. Once all of the above steps are completed, the OData Producer for PHP library code is ready to process the request via your custom IDataServiceQueryProvider and return the results to the client.
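    To make steps 1–3 above concrete, here is a minimal, hypothetical sketch (in TypeScript, not the library’s actual PHP parser) of splitting an OData request URI into its resource path and system query options:

```typescript
// Illustrative only: split an OData request URI into the target resource
// path and its system query options ($filter, $orderby, $top, $skiptoken, ...).
function parseODataUri(uri: string): {
  resource: string;
  options: Map<string, string>;
} {
  const [path, query = ""] = uri.split("?");
  const options = new Map<string, string>();
  for (const pair of query.split("&").filter(Boolean)) {
    const [key, value] = pair.split("=");
    // System query options are prefixed with "$".
    if (key.startsWith("$")) {
      options.set(key, decodeURIComponent(value ?? ""));
    }
  }
  return { resource: path, options };
}

const parsed = parseODataUri(
  "/odata.svc/Customers?$filter=Name%20eq%20'Contoso'&$orderby=Name&$top=10"
);
// parsed.resource is "/odata.svc/Customers";
// parsed.options.get("$filter") is "Name eq 'Contoso'"
```

    The real library then validates each option’s syntax and checks the service configuration before any data is fetched.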

    These processing steps are the same in .NET as they are in the OData Producer Library for PHP, but in the .NET implementation a LINQ query is generated from the parsed request. PHP has no LINQ support, so the producer library instead provides hooks for generating a query expression in the target language from the parsed expression tree. In the case of a MySQL data source, for example, a MySQL query expression would be generated.
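    As an illustrative sketch of that hook mechanism (in TypeScript, with invented node shapes for the parsed $filter tree; this is not the library’s real API), walking the tree to emit a MySQL-style expression might look like:

```typescript
// Hypothetical parsed $filter expression tree and a walker that emits a
// MySQL-flavored WHERE expression. The real producer library exposes hooks
// where custom providers plug in this kind of translation.
type FilterNode =
  | { kind: "binary"; op: "eq" | "gt" | "and"; left: FilterNode; right: FilterNode }
  | { kind: "property"; name: string }
  | { kind: "literal"; value: string | number };

const sqlOps: Record<string, string> = { eq: "=", gt: ">", and: "AND" };

function toMySqlExpr(node: FilterNode): string {
  switch (node.kind) {
    case "binary":
      return `(${toMySqlExpr(node.left)} ${sqlOps[node.op]} ${toMySqlExpr(node.right)})`;
    case "property":
      return node.name;
    case "literal":
      return typeof node.value === "number" ? String(node.value) : `'${node.value}'`;
  }
}

// Tree for: $filter=Price gt 10 and Category eq 'Books'
const tree: FilterNode = {
  kind: "binary", op: "and",
  left: {
    kind: "binary", op: "gt",
    left: { kind: "property", name: "Price" },
    right: { kind: "literal", value: 10 },
  },
  right: {
    kind: "binary", op: "eq",
    left: { kind: "property", name: "Category" },
    right: { kind: "literal", value: "Books" },
  },
};
// toMySqlExpr(tree) → "((Price > 10) AND (Category = 'Books'))"
```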

    The net result is that PHP developers can offer the same querying functionality on Linux and other platforms as a .NET developer can offer through System.Data.Services. Here are a few other details worth noting:

    • In C#/.NET, the System.Linq.Expressions namespace contains classes for building expression trees, and the OData Producer Library for PHP has its own classes for this purpose.
    • The IDSQP interface in the OData Producer Library for PHP differs slightly from .NET’s IDSQP interface (due to the lack of support for LINQ in PHP).
    • System.Data.Services uses WCF to host the OData provider service, whereas the OData Producer Library for PHP uses a web server (IIS or Apache) and urlrewrite to host the service.
    • The design of Writer (to serialize the returned query results) is the same for both .NET and PHP, allowing serialization of either .NET objects or PHP objects as Atom/JSON.

    For a deeper look at some of the technical details, check out Anu Chandy’s blog post on the OData Producer Library for PHP or see the OData Producer for PHP documentation available on Codeplex.

    OData Connector for MySQL

    The OData Producer for PHP can be used to expose any type of data source via OData, and one of the most popular data sources for PHP developers is MySQL. A new code generator tool, the open source OData Connector for MySQL, is now available to help PHP developers implement OData producer support for MySQL databases quickly and simply.

    The OData Connector for MySQL generates code to implement the interfaces necessary to create an OData feed for a MySQL database. The syntax for using the connector is simple and straightforward:

    php MySQLConnector.php /db=mysqldb_name /srvc=odata_service_name /u=db_user_name /pw=db_password /h=db_host_name

    figure2

    The MySQLConnector generates an EDMX file containing metadata that describes the data source, and then prompts you either to continue with code generation or to stop and edit the metadata manually before the code-generation step.

    EDMX is the Entity Data Model XML format, and an EDMX file contains a conceptual model, a storage model, and the mapping between those models. In order to generate an EDMX from a MySQL database, the OData Connector for MySQL needs to be able to do database schema introspection, and it does this through the Doctrine DBAL (Database Abstraction Layer). You don’t need to understand the details of EDMX in order to use the OData Connector for MySQL, but if you’re curious, see the .edmx File Overview article on MSDN.

    If you’re familiar with EDMX and wish to have very fine-grained control of the exposed OData feeds, you can edit the metadata as shown in the diagram, but this step is not necessary. You can also set access rights for specific entities in the DataService::InitializeService method after the code has been generated, as described below.

    If you stopped the process to edit the EDMX, one additional command is needed to complete the generation of code for the interfaces used by the OData Producer Library for PHP:

    php MySQLConnector.php /srvc=odata_service_name

    Note that the generated code will expose all of the tables in the MySQL database as OData feeds. In a typical production scenario, however, you would probably want to fine-tune the interface code to remove entities that aren’t appropriate for OData feeds. The simplest way to do this is to use the DataServiceConfiguration object in the DataService::InitializeService method to set the access rights to NONE for any entities that should not be exposed. For example, you may be creating an OData provider for a CMS, and you don’t want to allow OData queries against the table of users, or tables that are only used for internal purposes within your CMS.
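    As a conceptual sketch of that access-rights idea (in TypeScript, with invented names; the real mechanism is the DataServiceConfiguration object in the generated PHP code), hiding internal tables could be modeled as:

```typescript
// Conceptual model only: the generated code exposes every table by
// default, and you opt specific entity sets out by setting their access
// rights to NONE. Names here are illustrative, not the library's API.
enum EntitySetRights { NONE, ALL }

function exposedEntitySets(
  allTables: string[],
  rights: Record<string, EntitySetRights>
): string[] {
  // Default to ALL (mirroring the "expose everything" generated code),
  // then drop anything explicitly marked NONE.
  return allTables.filter(
    (t) => (rights[t] ?? EntitySetRights.ALL) !== EntitySetRights.NONE
  );
}

const visible = exposedEntitySets(
  ["posts", "comments", "users", "cms_internal"],
  { users: EntitySetRights.NONE, cms_internal: EntitySetRights.NONE }
);
// visible → ["posts", "comments"]
```

    In the CMS scenario above, the users table and internal bookkeeping tables would be the ones marked NONE.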

    For more detailed information about working with the OData Connector for MySQL, refer to the user guide available on the project site on Codeplex.

    These tools are open-source (BSD license), so you can download them and start using them immediately at no cost, on Linux, Windows, or any PHP platform. Our team will continue to work to enable more OData scenarios, and we’re always interested in your thoughts. What other tools would you like to see available for working with OData?

  • Interoperability @ Microsoft

    Beta of Windows Phone Toolkit for Amazon Web Services released

    • 6 Comments

    I am pleased to announce the beta release of the Windows Phone Toolkit for Amazon Web Services (AWS). Built by Microsoft as an open source project, this toolkit provides developers with a speed dial that lets them quickly connect and integrate Windows Phone applications with AWS (S3, SimpleDB, and SQS Cloud Services).

    To create cloud-connected mobile applications, developers want to have choice and be able to reuse their assets and skills. For developers familiar with AWS, whether they’ve been developing for Android, iOS or any other technology, this toolkit will allow them to comfortably port their applications to the Windows Phone Platform.

    Terry Wise, Director of Business Development for Amazon Web Services, welcomes the release of the Windows Phone Toolkit for Amazon Web Services to the developer community.

    “Our approach with AWS is to provide developers with choice and flexibility to build applications the way they want and give them unlimited storage, bandwidth and computing resources, while paying only for what they use. We welcome Windows Phone developers to the AWS community and look forward to providing customers with new ways to build and deploy Windows Phone applications,” he says.

    Jean Paoli, General Manager of Interoperability Strategy at Microsoft, adds that Windows Phone was engineered from the get-go to be a Cloud-friendly phone.

    “The release of the Windows Phone Toolkit for AWS Beta proves that Microsoft’s goal of building a Cloud-friendly phone is true across vendor boundaries. It literally takes minutes to create a Cloud-ready application in C# with this toolkit. We look forward to this toolkit eventually resulting in many more great apps in the rapidly growing Windows Phone marketplace,” he said.

    Developers can download the toolkit, along with the complete source code under the Apache license. A Getting Started guide can be found on the Windows Phone Interoperability Bridges site along with other resources.

    And as always your feedback on how to improve this beta is welcome!

  • Interoperability @ Microsoft

    Microsoft at Node Summit

    • 2 Comments

    We are excited to be attending and participating at Node Summit in San Francisco this week.

    Among those Microsoft staffers on site are Server & Tools Corporate Vice President Scott Guthrie - who participated in a panel about Platform as a Service this morning and also gave a keynote address - and Gianugo Rabellino, the Senior Director for Open Source Communities, who was on a panel discussing the importance of cross-platform development.

    You can read more about Scott's keynote on the Windows Azure blog here.

    As you may know, in December Microsoft announced that it was adding support for Node.js to the Windows Azure platform, which allows developers to easily take advantage of the powerful capabilities of Windows Azure with simple tools and a new open source SDK.

    As this work continues inside Microsoft as well as with the Node.js community and our partner ecosystem, new and exciting capabilities are becoming available that give Node.js developers great experiences on the Windows platform.

    Today, during his keynote, Scott Guthrie demonstrated how easy it is to get up and running with Node.js on Windows and Windows Azure, while our partners at Cloud9 showcased new tooling experiences that provide even greater flexibility to Node.js developers who want to build for Windows Azure.

    Microsoft has been closely partnering with Joyent for some time now to port Node.js to Windows. We have built an IO abstraction library with them that can be used to make the code run on both Linux and Windows.

    We also recently released the Windows Azure SDK for Node.js as open source, available on Github. These libraries are the perfect complement to our recently announced contributions to Node.js and provide a better Node.js experience on Windows Azure. The Windows Azure Developer Center provides documentation, tutorials, samples and how-to guides to get started with Node.js on Windows Azure.

    The Joyent team also recently updated the Node Package Manager (NPM) code to allow use of NPM on Windows. NPM is an essential tool for Node.js developers, so supporting it brings a much better development experience to Windows.

    We are also working with the Joyent team to improve the development experience by leveraging the power of Microsoft development tools and documentation, making it easier for developers to use Node.js APIs.

    And, relatedly, we have also been working closely with 10Gen and the MongoDB community in the past few months, and MongoDB already runs on Windows Azure. If you’re using the popular combination of Node.js and MongoDB, a simple straightforward install process will get you started on Windows Azure. You can learn more here.

    Our interest in, and support for Node.js is just one of the ways in which Windows Azure is continuing on its roadmap of embracing Open Source Software tools developers know and love, by working collaboratively with the open source community to build together a better cloud that supports all developers and their need for interoperable solutions based on developer choice.

    As Microsoft continues to provide incremental improvements to Windows Azure, we remain committed to working with developer communities.

    We also clearly understand that there are many different technologies that developers may want to use to build applications in the cloud: they want to use the tools that best fit their experience, skills, and application requirements, and our goal is to enable that choice.

    All of this delivers on our ongoing commitment to provide an experience where developers can build applications on Windows Azure using the languages and frameworks they already know, to enable greater customer flexibility for managing and scaling databases, and to make it easier for customers to get started and use cloud computing on their terms with Windows Azure.

  • Interoperability @ Microsoft

    Windows Azure Libraries for Java Available, including support for Service Bus

    • 0 Comments

    Good news for all you Java developers out there: I am happy to share with you the availability of Windows Azure libraries for Java that provide Java-based access to the functionality exposed via the REST API in Windows Azure Service Bus.

    You can download the Windows Azure libraries for Java from GitHub.

    This is an early step as we continue to make Windows Azure a great cloud platform for many languages, including .NET and Java.  If you’re using Windows Azure Service Bus from Java, please let us know your feedback on how these libraries are working for you and how we can improve them. Your feedback is very important to us!

    You may refer to Windows Azure Java Developer Center for related information.

    Openness and interoperability are important to Microsoft, our customers, partners, and developers, and we believe these libraries will enable Java applications to more easily connect to Windows Azure, in particular the Service Bus, making it easier for applications written on any platform to interoperate with one another through Windows Azure.

    Thanks,

    Ram Jeyaraman

    Senior Program Manager, Microsoft’s Interoperability Group

  • Interoperability @ Microsoft

    Open Source OData Library for Objective-C Project Moves to Outercurve Foundation

    • 0 Comments

    As Microsoft continues to deliver on its commitment to Interoperability, I have good news on the Open Source Software front: today, the OData Library for Objective-C project was submitted to the Outercurve Foundation’s Data, Languages, and Systems Interoperability gallery.

    This means that OData4ObjC, the OData client for iOS, is now a full, community-supported Open Source project.

    The Open Data Protocol (OData) is a web protocol for communications between client devices and RESTful web services, simplifying the building of queries and interpreting the responses from the server. It specifies how a web service can state its semantics such that a generic library can express those semantics to an application, meaning that applications do not need to be custom-written for a single source.

    The Outercurve Foundation already hosts 19 OSS projects and, as Gallery Manager Spyros Sakellariadis notes in his blog post, this is the gallery’s second OData project, the first being the OData Validation project contributed last August.

    “With this new assignment, we expect to involve open source community developers even more in the enhancement of seminal OData libraries,” he said.

    Microsoft Senior Program Manager for OData Arlo Belshee notes in his blog post that the open sourcing of the OData client library for Objective-C will enable first-class support of this important platform. “Combined with existing support for Android (OData4j, OSS) and Windows Phone (in the odata-sdk by Microsoft), this release provides strong, uniform support for all major phones,” he said.

    In assigning ownership of the code to the Outercurve Foundation, the project leads are opening it up for community contributions and support. “They firmly believe that the direction and quality of the project are best managed by users in the community, and are eager to develop a broad base of contributors and followers,” Belshee said.

    As Microsoft continues to build and provide Interoperability solutions, Sakellariadis thanked the Open Source communities for their continued support, noting that together “we can all contribute to achieving a goal of device and cloud interoperability, of true openness.”

  • Interoperability @ Microsoft

    Full Support for PhoneGap on Windows Phone is Now Complete!

    • 1 Comments

    Congratulations to all the people involved in the PhoneGap community for the recent release of version 1.3 of their HTML5 open source mobile framework.

    This release includes many new features, and you can find more details here. You may remember that we announced back in September that Microsoft was helping to bring Windows Phone support to PhoneGap: I am happy to say we can now check this box!

    We’re also pleased to note that all features in PhoneGap 1.3 are now supported for Windows Phone, as you can see on their site here.

    Also, beyond the core PhoneGap features, developers can enjoy a selection of PhoneGap plugins that support social networks - including Facebook, LinkedIn, Windows Live and Twitter - and a solid integration into Visual Studio Express for Windows Phone.

    We have also developed further plugins to give HTML5 developers a feel for Windows Phone’s unique features like Live Tile Update and Bing Maps Search.

    Please check out the blog post by Jesse MacFadyen, PhoneGap’s dev lead, on his experiences developing PhoneGap on Windows Phone.

    For more technical details on using the framework, see Glen and Jesse’s technical walkthrough blog posts. For a quick spin of what PhoneGap and Visual Studio allow you to do, see this WP7 and Android camera app created in 3 minutes! Bits are located here; plugins are here.

    Looking ahead:

    As mentioned in PhoneGap’s announcement blog post, the next PhoneGap 1.4 release will be from the Cordova incubation project at Apache.  We at Microsoft are proud to be members of this project and to offer technical resources.  We welcome the involvement of Adobe, IBM and RIM and look forward to collaboratively growing PhoneGap at its new home in Apache while helping evolve an open web for any device.

    Microsoft’s commitment to HTML5 in IE9 has been instrumental in achieving this level of support. We are also building on our HTML5 investment through initiatives like bringing jQuery Mobile support, as we outlined a few weeks ago. Partnering with open source communities to bring this level of openness continues to be an important goal here at Microsoft.

    So, stay tuned for more news on our support for popular mobile open source frameworks on WP7.5!

    Abu Obeida Bakhach

    Interoperability Strategy Program Manager

  • Interoperability @ Microsoft

    Azure + Java = Cloud Interop: New Channel 9 Video with GigaSpaces Posted

    • 1 Comments

    Today Microsoft is hosting the Learn Windows Azure broadcast event to demonstrate how easy it is for developers to get started with Windows Azure. Senior Microsoft executives, including Scott Guthrie, Dave Campbell and Mark Russinovich, will show how easy it is to build scalable cloud applications using Visual Studio. The event is being broadcast live and will also be available on-demand.

    For Java developers interested in using Windows Azure, one particularly interesting segment of the day is a new Channel 9 video with GigaSpaces. Their Cloudify offering helps Java developers easily move their applications, without any code or architecture changes, to Windows Azure.

    This broadcast follows yesterday’s updates to Windows Azure around an improved developer experience, Interoperability, and scalability. A significant part of that was an update on a wide range of Open Source developments on Windows Azure, which are the latest incremental improvements that deliver on our commitment to working with developer communities so that they can build applications on Windows Azure using the languages and frameworks they already know.

    We understand that developers want to use the tools that best fit their experience, skills, and application requirements, and our goal is to enable that choice. In keeping with that, we are extremely happy to be delivering new and improved experiences for popular OSS technologies such as Node.js, MongoDB, Hadoop, Solr and Memcached on Windows Azure.

    You can find all the details on the full Windows Azure news here, and more information on the Open Source updates here.

  • Interoperability @ Microsoft

    Openness Update for Windows Azure

    • 0 Comments

    As Microsoft’s Senior Director of Open Source Communities, I couldn’t be happier to share with you today an update on a wide range of Open Source developments on Windows Azure.

    As we continue to provide incremental improvements to Windows Azure, we remain committed to working with developer communities. We’ve spent a lot of time listening, and we have heard you loud and clear.

    We understand that there are many different technologies that developers may want to use to build applications in the cloud. Developers want to use the tools that best fit their experience, skills, and application requirements, and our goal is to enable that choice.

    In keeping with that goal, we are extremely happy to be delivering new and improved experiences for Node.js, MongoDB, Hadoop, Solr and Memcached on Windows Azure.

    This delivers on our ongoing commitment to provide an experience where developers can build applications on Windows Azure using the languages and frameworks they already know, to enable greater customer flexibility for managing and scaling databases, and to make it easier for customers to get started and use cloud computing on their terms with Windows Azure.

    Here are the highlights of today’s announcements:

    • We are releasing the Windows Azure SDK for Node.js as open source, available immediately on Github. These libraries are the perfect complement to our recently announced contributions to Node.js and provide a better Node.js experience on Windows Azure. Head to the Windows Azure Developer Center for documentation, tutorial, samples and how-to guides to get you started with Node.js on Windows Azure.
    • We will also be delivering the Node Package Manager (npm) code updates that allow use of npm on Windows for simpler and faster Node.js configuration and development. Windows developers can now use npm to install Node modules and take advantage of its automated handling of module dependencies and other details.
    • To build on our recent announcement about Apache Hadoop, we are making available a limited preview of the Apache Hadoop based distribution service on Windows Azure.  This enables Hadoop apps to be deployed in hours instead of days, and includes Hadoop Javascript libraries and powerful insights on data through the ODBC driver and Excel plugin for Hive. Read more about this on the Windows Azure team blog. If you are interested in trying this preview, please complete the form here with details of your Big Data scenario.  Microsoft will issue an access code to select customers based on usage scenarios.
    • For all of you NoSQL fans, we have been working closely with 10Gen and the MongoDB community in the past few months, and if you were at MongoSV last week you have already seen MongoDB running on Windows Azure. Head out to the 10Gen website to find downloads, documentation and other document-oriented goodies. If you’re using the popular combination of Node.js and MongoDB, a simple straightforward install process will get you started on Windows Azure. Learn more here.
    • For Java developers, take a look at the updated Java support, including a new and revamped Eclipse plugin. The new features are too many to list for this post, but you can count on a much better experience thanks to new and exciting functionality such as support for sticky sessions and configuration of remote Java debugging. Head over to the Windows Azure Developer Center to learn more.
    • Does your application need advanced search capabilities? If so, the chances are you either use or are evaluating Solr, and so the good news for you is that we just released a set of code tools and configuration guidelines to get the most out of Solr running on Windows Azure. We invite developers to try out the tools, configuration and sample code for Solr tuned for searching commercial and publisher sites. The published guidance showcases how to configure and host Solr/Lucene in Windows Azure using multi-instance replication for index-serving and single-instance for index generation with a persistent index mounted in Windows Azure storage.
    • Another great example of OSS on Windows Azure is the use of Memcached server, the popular open-source caching technology, to improve the performance of dynamic web applications. Maarten Balliauw recently blogged about his MemcacheScaffolder, which simplifies management of Memcached servers on the Windows Azure platform. That blog post is only focused on PHP, but the same approach can be used by other languages supported by Memcached as well.
    • Scaling data in the Cloud is very important. Today, the SQL Azure team made SQL Azure Federation available. This new feature provides built-in support for data sharding (horizontal partitioning of data) to elastically scale out data in the cloud. I am thrilled to announce that, concurrent with the release of this new feature, we have released a new specification called SQL Database Federations under the Microsoft Open Specification Promise, describing additional SQL capabilities that enable data sharding for scalability in the cloud. With those additional SQL capabilities, the database tier can provide built-in support for data sharding to elastically scale out data in the cloud, as covered in Ram Jeyaraman’s post on this blog.

    In addition to all this great news, the Windows Azure experience has also been significantly improved and streamlined. This includes simplified subscription management and billing, a guaranteed free 90-day trial with quick sign-up process, reduced prices, improved database scale and management, and more. Please see the Windows Azure team blog post for insight on all the great news.

    As we enter the holiday season, I’m happy to see Windows Azure continuing on its roadmap of embracing OSS tools developers know and love, by working collaboratively with the open source community to build together a better cloud that supports all developers and their need for interoperable solutions based on developer choice.

    In conclusion, I just want to stress that we intend to keep listening, so please send us your feedback. Rest assured we’ll take note!

  • Interoperability @ Microsoft

    SQL Database Federations: Enhancing SQL to enable Data Sharding for Scalability in the Cloud

    • 0 Comments

    I am thrilled to announce the availability of a new specification called SQL Database Federations, which describes additional SQL capabilities that enable data sharding (horizontal partitioning of data) for scalability in the cloud.

    The specification has been released under the Microsoft Open Specification Promise. With these additional SQL capabilities, the database tier can provide built-in support for data sharding to elastically scale-out the data. This is yet another milestone in our Openness and Interoperability journey.

    As you may know, multi-tier applications scale out their front and middle tiers elastically. With this model, as the demand on the application varies, administrators add and remove instances of the front-end and middle-tier nodes to handle the workload.

    However, the database tier in general does not yet provide built-in support for such an elastic scale-out model, so applications have had to build their own data-tier scale-out solutions. Using the additional SQL capabilities for data sharding described in the SQL Database Federations specification, the database tier can now provide built-in support to elastically scale out the data tier, much like the middle and front tiers of applications. Applications and middle-tier frameworks can also more easily use data sharding and delegate data-tier scale-out to database platforms.

    Openness and interoperability are important to Microsoft, our customers, partners, and developers, and so the publication of the SQL Database Federations specification under the Microsoft Open Specification Promise will enable applications and middle-tier frameworks to more easily use data sharding, and also enable database platforms to provide built-in support for data sharding in order to elastically scale out the data.

    Also of note: The additional SQL capabilities for data sharding described in the SQL Database Federations specification are now supported in Microsoft SQL Azure via the SQL Azure Federation feature.

    Here is an example that uses Microsoft SQL Azure to illustrate the use of the additional SQL capabilities for data sharding described in the SQL Database Federations specification.

    -- Assume the existence of a user database called sales_db. Connect to sales_db and create a federation called orders_federation to scale out the tables: customers and orders. This creates the federation represented as an object in the sales_db database (root database for this federation) and also creates the first federation member of the federation.

    CREATE FEDERATION orders_federation(c_id BIGINT RANGE)
    GO

    -- Deploy schema to root, create tables in the root database (sales_db)

    CREATE TABLE application_configuration(…)
    GO

    -- Connect to the federation member and deploy schema to the federation member

    USE FEDERATION orders_federation(c_id=0) …
    GO

    -- Create federated tables: customers and orders

    CREATE TABLE customers (customer_id BIGINT PRIMARY KEY, …) FEDERATED ON (c_id = customer_id)
    GO

    CREATE TABLE orders (…, customer_id BIGINT NOT NULL) FEDERATED ON (c_id = customer_id)
    GO

    -- To scale out customer orders, SPLIT the federation data into two federation members

    USE FEDERATION ROOT …
    GO

    ALTER FEDERATION orders_federation SPLIT AT(c_id=100)
    GO

    -- Connect to the federation member that contains the value ‘55’

    USE FEDERATION orders_federation(c_id=55) …
    GO

    -- Query the federation member that contains the value ‘55’

    UPDATE orders SET last_order_date=getutcdate()…
    GO

    I am confident that you will find the additional SQL capabilities for data sharding described in the SQL Database Federations specification very useful as you consider scaling out the data tier of your applications. We welcome your feedback on the specification.

    Thanks,

    Ram Jeyaraman

    Senior Program Manager, Microsoft’s Interoperability Group

  • Interoperability @ Microsoft

    HTML5 Labs Prototype Update for W3C Media Capture API

    • 1 Comment

    Today, the Internet Explorer blog posted an interesting update of an HTML5Labs prototype of the W3C Media Capture API.

    A usable and standardized API for media capture means Web sites and apps will be able to access these features in a common way across all browsers in the future.

    You can read the full post on the IE blog.

  • Interoperability @ Microsoft

    Preview Release of the SQL Server ODBC Driver for Linux Hits the Streets

    • 0 Comments

    Microsoft's SQL Server team yesterday announced the availability of a preview release of the SQL Server ODBC Driver for Linux, which allows native developers to access Microsoft SQL Server from Linux operating systems.

    For customers with native applications on multiple platforms, the existing, reliable, enterprise-class ODBC driver for Windows (a.k.a. SQL Server Native Client, or SNAC) has been ported to the Linux platform.

    You can download the driver here.

    "In this release, the SQL Server ODBC Driver for Linux will be a 64-bit driver for Red Hat Enterprise Linux 5. We will support SQL Server 2008 R2 and SQL Server 2012 with this release of the driver. Notable driver features (in addition to what you would expect in an ODBC driver) include support for the Kerberos authentication protocol, SSL and client-side UTF-8 encoding. This release also brings proven and effective tools and the BCP and SQLCMD utilities to the Linux world,"said Shekhar Joshi, a Senior Program Manager on the Microsoft SQL Server ODBC Driver For Linux team.

    This is another example of Microsoft and the SQL team's commitment to interoperability. You can read Shekhar's full blog post here, while additional information on the first release of Microsoft ODBC Driver for Linux can be found here.

  • Interoperability @ Microsoft

    Prototypes of JavaScript Globalization & Math, String, and Number extensions

    • 0 Comments

    As the HTML5 platform becomes more fully featured, web applications become richer, and scenarios that require server-side interaction for trivial tasks become more tedious. This brings deficits in the capabilities of JavaScript as a runtime into focus.

    Microsoft is committed to advancing the JavaScript standard. Through active participation in the Ecma TC39 working group, we have endorsed and pushed for the completion of proposed standards which provide extensions to the intrinsic Math, Number, and String libraries and introduce support for Globalization. We shared the first version of prototypes for the libraries at the standards meeting on the Microsoft campus in July, and shared our Globalization implementation at the standards meeting last week at Apple’s Cupertino campus. In addition, we are also releasing these reference implementations so that the JavaScript community can provide feedback on their use in practice.

    What’s in this drop

    This drop includes extensions to the Math, Number, and String built-in libraries:

    Math: cosh, sinh, tanh, acosh, asinh, atanh, log1p, log2, log10, sign, trunc

    String: startsWith, endsWith, contains, repeat, toArray, reverse

    Number: isFinite, isNaN, isInteger, toInteger

    To illustrate, a simple code sample using some of these functions is included below:


    var aStr = "24-";
    var aStrR = aStr.reverse();
    var num = aStrR * 1;
    if (Number.isInteger(num)) {
        console.log("The sign of " + num + " is " + Math.sign(num));
    }
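    The other extensions in the drop can be exercised the same way. Several of them (startsWith, endsWith, repeat, and the Math and Number additions) were later standardized in ECMAScript, so a sketch like the following also runs as-is in modern engines:

    ```javascript
    // String extensions: prefix/suffix tests and repetition.
    var file = "report.csv";
    console.log(file.startsWith("report")); // true
    console.log(file.endsWith(".csv"));     // true
    console.log("ab".repeat(3));            // "ababab"

    // Math and Number extensions.
    console.log(Math.trunc(-4.7));          // -4 (truncates toward zero)
    console.log(Math.log2(8));              // 3
    console.log(Number.isFinite(1 / 0));    // false
    ```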

    This drop also includes an implementation of the evolving Globalization specification. Globalization is the software discipline of making sure that applications deal correctly with differences in number and date formats, among other things; it is part of localizing an application to run in a local language. With this library, you can display dates and numbers in a specified locale and specify collation properties for the purposes of sorting and searching in other languages. You can also use standard date and number formats with alternate calendars, such as the Islamic calendar, or show currency as Chinese Yuan. Again, a code sample illustrates below:

    var nf = new Globalization.NumberFormat(localeList, {
        style: "currency",
        currency: "CNY",
        currencyDisplay: "symbol",
        maximumFractionDigits: 2
    });

    nf.format(100); // "¥100.00"


    var dtf = new Globalization.DateTimeFormat(
        new Globalization.LocaleList(["ar-SA-u-ca-islamic-nu-latin"]), {
            weekday: "long"
        });


    dtf.format(); // today's date
    dtf.format(new Date("11/15/2011")); // "الثلاثاء, ١٢ ١٩ ٣٢"
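    The collation properties mentioned above can be sketched as well. In the final ECMA-402 standard this API shipped under the Intl namespace rather than Globalization, so the example below uses that standardized form, which runs in modern engines:

    ```javascript
    // Locale-aware sorting: German collation groups "ä" with "a",
    // whereas default code-point order would sort it after "z".
    var collator = new Intl.Collator("de");
    var words = ["zebra", "Äpfel", "apfel"];
    words.sort(collator.compare);
    console.log(words); // ["apfel", "Äpfel", "zebra"]
    ```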

    How to get the bits

    The prototypes should install automatically if you view the Intrinsics Extensions demo and the Globalization demo. Or to install the prototype, run the MSIs found here.

    Note that as with all previous releases of HTML5 labs, this is an unsupported component with an indefinite lifetime. This should be used for evaluation purposes only and should not be used for production level applications.

    Providing Feedback

    We’ve created a couple of sample applications so you can see what this functionality enables.  Once you’ve installed the bits, view the Intrinsics Extensions demo and the Globalization demo to see the APIs in action. 

    As usual, we encourage you to play with the sample apps, download the prototype, and develop your own app to see how it feels. Once you’ve tried it out, let us know if you have any feedback or suggestions. We look forward to improving JavaScript and making it ever easier to build great web applications using standard APIs.

    Thanks for your interest!

    Claudio Caldato, Adalberto Foresti – Interoperability Strategy Team


  • Interoperability @ Microsoft

    jQuery Mobile Open Source Framework Support for Windows Phone

    • 0 Comments

    Hello web and mobile developers!

    As you probably noticed, jQuery Mobile version 1.0 was announced this week. We are pleased to use this exciting occasion to reinforce our commitment to supporting popular open source mobile frameworks.

    Among the most recent activities, I want to highlight the work done to support PhoneGap by adding support for Windows Phone 7.5 (Mango); now we are moving up the stack to improve support for jQuery Mobile on Windows Phone 7.5.

    As you probably know, the jQuery Mobile framework is a JavaScript, HTML5-based user interface system for mobile device platforms, built on the jQuery and jQuery UI foundation.

    While today’s version 1 and the recent RC releases contain many features, we wanted to take a minute and highlight the collaboration we started with the jQuery Mobile team. In the last few weeks we have focused our attention on supporting Kin Blas and others in the community to improve performance on Windows Phone 7.5.

    In particular, as the RC3 blog published earlier this week outlines, Windows Phone performance has improved quite dramatically as shown by the two showcase apps:

    • 226% improvement in rendering the form gallery, bringing it down from 5 seconds to 2.2 seconds
    • 20x improvement in rendering the complex 400-item listview, from 60 seconds to 3 seconds

    The jQuery team’s change log also includes additional performance optimization tips for Windows Phone that save further rendering time in certain scenarios.

    We are encouraged by this progress, and will continue working with the community to bring higher levels of performance and support for jQuery features to Windows Phone. Stay tuned, and congratulations again to the jQuery Mobile team!

    Abu Obeida Bakhach

    Interoperability Strategy Program Manager

  • Interoperability @ Microsoft

    First Stable Build of Node.js on Windows Released

    • 0 Comments

    Great news for all Node.js developers wanting to use Windows: today we reached an important milestone - v0.6.0 – which is the first official stable build that includes Windows support.

    This comes some four months after our June 23rd announcement that Microsoft was working with Joyent to port Node.js to Windows. Since then we’ve been heads down writing code.

    Those developers who have been following our progress on GitHub know that there have been Node.js builds with Windows support for a while, but today we reached the all-important v0.6.0 milestone.

    This accomplishment is the result of a great collaboration with Joyent and its team of developers. With the dedicated team of Igor Zinkovsky, Bert Belder and Ben Noordhuis under the leadership of Ryan Dahl, we were able to implement all the features that let Node.js run natively on Windows.

    And, while we were busy making the core Node.js runtime run on Windows, the Azure team was working on iisnode to enable Node.js to be hosted in IIS. Among other significant benefits, native Windows support gave Node.js significant performance improvements, as reported by Ryan on the nodejs.org blog.

    Node.js developers on Windows will also be able to rely on NPM to install the modules they need for their applications. Isaac Schlueter from the Joyent team is currently working on porting NPM to Windows, and an early experimental version is already available on GitHub. The good news is that soon we’ll have a stable build integrated into the Node.js installer for Windows.

    So stay tuned for more news on this front.

    Claudio Caldato,

    Principal Program Manager, Interoperability Strategy Team


  • Interoperability @ Microsoft

    Windows Gets Eclipse Platform Improvements

    • 0 Comments

    Today, David Green at Tasktop posted a blog entry about the latest Eclipse platform improvements for Windows. As part of Tasktop’s ongoing partnership with Microsoft, they’ve been working hard to bring two more Eclipse platform improvements to Windows this year: Desktop Search and Glass.

    You can read more about both of these improvements here.

    We look forward to continuing to work with both Tasktop and the Eclipse community going forward, and would love to hear from you about new features you would like to see in the future. Feel free to let David know about these at david.green@tasktop.com.

    Thanks!

    Martin Sawicki

    Principal Program Manager: Interoperability

  • Interoperability @ Microsoft

    W3Conf: Get up to Speed on the Modern Open Web Platform

    • 0 Comments

    Are you a Web developer, designer or just interested in the space? Well, if you are, you really don’t want to miss W3Conf.

    W3Conf is the W3C's first ever 2-day conference for developers and designers, and is uniquely focused on cutting edge technologies that work today across browsers.

    It is being held in Redmond, Washington, November 15-16, 2011, and it’s packed with top-notch presentations by leading experts in the Web industry on HTML5, CSS3, graphics, accessibility, multimedia, APIs and more.

    I’m participating in the “Browsers and Standards: Where the Rubber Hits the Road” panel discussion, along with Tantek Çelik from Mozilla, Google’s Chris Wilson and Divya Manian from Opera.

    Microsoft is proud to be the host sponsor of the event, joined by AT&T, Adobe, and Nokia.

    There are several ways to experience this conference: you can register to attend in person; videos of the presentations (with English captioning) will be streamed live over the Web; and recordings will be archived and made freely available for future reference. Note that the Early Bird conference registration and hotel discounts expire on October 31.

    You can find all the details on the schedule, speakers, and the technologies themselves on the conference web site, which demonstrates the features enabled in modern browsers and authoring tools to make attractive, interactive, and accessible websites using emerging standards from W3C and other bodies. In other words, the site itself “eats the open web dog food.”

    Along with my colleagues on the Interoperability Team, I believe this will be a great event, and encourage you to attend virtually or in person.

    I look forward to seeing you there!

    Paul Cotton

    Co-Chair: HTML Working Group
