August, 2012

  • Interoperability @ Microsoft

    Windows Azure Plugin for Eclipse with Java - August 2012 Preview

    • 0 Comments

    Gearing up for back to school, the Microsoft Open Technologies, Inc. team has been busy updating the Windows Azure Plugin for Eclipse with Java.

    This August 2012 Preview update includes feedback-driven usability enhancements to existing features, along with a number of additional bug fixes since the July 2012 Preview. The principal enhancements are the following:

    • Inside the Windows Azure Access Control Service Filter:
      • Option to embed the signing certificate into your application’s WAR file to simplify cloud deployment
      • Option to create a new self-signed certificate right from the ACS filter wizard UI
    • Inside the Windows Azure Deployment Project wizard (and the role’s Server Configuration property page):
      • Automatic discovery of the JDK location on your computer (which you can override if necessary)
      • Automatic detection of the server type whose installation directory you select

    You can learn more about the plugin on the Windows Azure Dev Center.

    To find out how to install, go here.

    Martin Sawicki
    Principal Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Microsoft at DrupalCon Munich next week

    • 0 Comments

    Microsoft at DrupalCon is becoming a tradition. After having partnered closely with the Drupal community to make Drupal available on Windows, Microsoft teams are continuing this engagement and are eager to meet Drupal developers in Munich next week.

    If you are going to the event, don’t miss the various panels and sessions in which Microsoft attendees will be participating.

    And of course, you should stop by the booth to say hi!

    If you’re not in Germany and can’t attend DrupalCon, we encourage you to follow @Gracefr and @Brian_Swan on Twitter. They’ll provide insights on the cool things happening there. And don’t miss Brian’s blog, which offers great technical information on Drupal on Windows Azure and other related topics: a must-read!

  • Interoperability @ Microsoft

    Microsoft Releases New Dev Tools Compiled With Open Source Code

    • 0 Comments

    Jason Zander blogged about new releases of Microsoft’s developer tools today – tools that include many contributions from the open source community, made through the MS Open Tech Hub on CodePlex.

    The OSS community helped build out the source code for ASP.NET MVC 4, Web API, Web Pages 2 and Entity Framework 5 – key components in the new releases of Visual Studio 2012, Team Foundation Server 2012, and .NET Framework 4.5. Through CodePlex, developers outside Microsoft submitted patches and code contributions that the MS Open Tech Hub development team reviewed for potential inclusion in these products. I described this process in more detail last month in More of Microsoft’s App Development Tools Goes Open Source.

    Today’s news had an additional cool factor. As Jason highlighted in his blog, “Developing great apps for Windows 8 is an important goal of this release. Therefore, in coordination with today’s developer tools releases, you’ll notice that the final version of Windows 8 has released to the web as well.”

    There are tons of great resources on these tools that you can check out and download today. The ASP.NET website is a great place to start. I also recommend my friend Scott Hanselman’s new videos.

    Microsoft’s partner-centric approach has been part of the company since the very beginning. Today’s milestone shows that all developers can contribute to and benefit from Microsoft’s open platforms in the future.

    Gianugo Rabellino
    Senior Director Open Source Communities
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Using the Cloudant Data Layer for Windows Azure

    • 0 Comments

    If you need a highly scalable data layer for your cloud service or application running on Windows Azure, the Cloudant Data Layer for Windows Azure may be a great fit. This service, which was announced in preview mode in June and is now in beta, delivers Cloudant’s “database as a service” offering on Windows Azure.

    From Cloudant’s data layer you’ll get rich support for data replication and synchronization scenarios such as online/offline data access for mobile device support, a RESTful Apache CouchDB-compatible API, and powerful features including full-text search, geo-location, federated analytics, schema-less document collections, and many others. And perhaps the greatest benefit of all is what you don’t get with Cloudant’s approach: you’ll have no responsibility for provisioning, deploying, or managing your data layer. The experts at Cloudant take care of those details, while you stay focused on building applications and cloud services that use the data layer.

    You can do your development in any of the many languages supported on Windows Azure, such as .NET, Node.js, Java, PHP, or Python. In addition, you’ll get the benefits of Windows Azure’s CDN (Content Delivery Network) for low-latency data access in diverse locations. Cloudant pushes your data to data centers all around the globe, keeping it close to the people and services that need to consume it.
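    To get a feel for what the CouchDB-compatible REST API looks like from code, here’s a minimal PHP sketch that stores a JSON document and reads it back over HTTPS. The account hostname, database name, document ID, and credentials are all placeholders, not real values:

        <?php
        // Minimal sketch: store and fetch a JSON document through the
        // CouchDB-compatible REST API. All names and credentials are placeholders.
        $account  = 'myaccount.cloudant.com';   // hypothetical account host
        $database = 'photoalbum';               // hypothetical database name
        $auth     = 'myuser:mypassword';        // hypothetical credentials

        // Create (or update) a document with PUT /<database>/<docid>
        $doc = json_encode(array('caption' => 'Sled dogs at sunrise', 'tags' => array('dogs', 'winter')));
        $ch = curl_init("https://$account/$database/photo-001");
        curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
        curl_setopt($ch, CURLOPT_POSTFIELDS, $doc);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
        curl_setopt($ch, CURLOPT_USERPWD, $auth);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        echo curl_exec($ch) . "\n";   // the server responds with the document id and new revision
        curl_close($ch);

        // Read the document back with GET /<database>/<docid>
        $ch = curl_init("https://$account/$database/photo-001");
        curl_setopt($ch, CURLOPT_USERPWD, $auth);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        echo curl_exec($ch) . "\n";
        curl_close($ch);

    Because the API is CouchDB-compatible, the same two requests work unchanged against a local Apache CouchDB instance; only the hostname and credentials differ.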

    For a free trial of the Cloudant Data Layer for Windows Azure, create a new account on the signup page and select “Lagoon” as your data center location.

    For an example of how to use the Cloudant Data Layer, see the tutorial “Using the Cloudant Data Layer for Windows Azure,” which takes you through the steps needed to set up an account, create a database, configure access permissions, and develop a simple PHP-based photo album application that uses the database to store text and images:

    [Screenshot: the PHP photo album sample application from the tutorial]

    The sample app uses the SAG for CouchDB library for simple data access. SAG works against any Apache CouchDB database, as well as Cloudant’s CouchDB-compatible API for the data layer.
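    As a rough sketch of what data access looks like through SAG (this is illustrative rather than code from the tutorial; the host, database, and document names are placeholders, and connection options such as SSL and port vary by library version, so check the SAG README for the exact setup):

        <?php
        // Illustrative sketch of SAG usage; names and credentials are placeholders.
        require_once 'Sag.php';   // adjust the path to wherever the SAG library lives

        $sag = new Sag('myaccount.cloudant.com');   // Cloudant account host (placeholder)
        // Depending on your SAG version you may also need to enable SSL and set the port here.
        $sag->login('myuser', 'mypassword');        // placeholder credentials
        $sag->setDatabase('photoalbum');            // placeholder database name

        // Store a document, then read it back.
        $sag->put('photo-001', array('caption' => 'Sled dogs at sunrise'));
        $photo = $sag->get('/photo-001')->body;
        echo $photo->caption . "\n";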

    My colleague Olivier Bloch has provided another great example of using existing CouchDB libraries to simplify development when using the Cloudant Data Layer. In this video, he demonstrates how to put a nice Windows 8 design front end on top of the photo album demo app:

    [Video: a Windows 8 front end on the photo album demo app]

    This example takes advantage of the couch.js library available from the Apache CouchDB project, as well as the GridApp template that comes with Visual Studio 2012. Olivier shows how to quickly create the app against a local CouchDB installation; then, by simply changing the connection string, he has the same app running live against the Cloudant data layer on Windows Azure.

    The Cloudant data layer is a great example of the new types of capabilities – and developer opportunities – that have been created by Windows Azure’s support for Linux virtual machines. As Sam Bisbee noted in Cloudant’s announcement of the service, “The addition of Linux-based virtual machines made it possible for us to offer the Cloudant Data Layer service on Azure.”

    If you’re looking for a way to quickly build apps and services on top of a scalable high-performance data layer, check out what the Cloudant Data Layer for Windows Azure has to offer!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    MS Open Tech is hiring!

    • 1 Comments

    Do you have a passion for interoperability, open source, and open standards? If you’re an experienced developer, program manager, technical diplomat, or evangelist who can help our team build technical bridges between Microsoft and non-Microsoft technologies, check out the blog post by Gianugo Rabellino over on the Port 25 blog today. We’re hiring, with open positions you can apply to right now. We’d love to hear from you!

    Doug Mahugh
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Customizable, Ubiquitous Real Time Communication over the Web (CU-RTC-Web)

    • 3 Comments

    UPDATE: See our latest W3C WebRTC Working Group blog post from January 17, 2013 (http://aka.ms/WebRTCPrototypeBlog), describing our new CU-RTC-Web prototype, which you can download from HTML5 Labs.

    From:

    Matthew Kaufman - Inventor of RTMFP, the most widely used browser-to-browser RTC protocol on the web
    Principal Architect, Skype, Microsoft Corp.

    Martin Thomson
    Senior Architect, Skype, Microsoft Corp.

    Jonathan Rosenberg - Inventor of SIP and SDP offer/answer
    GM Research Product & Strategy, Skype, Microsoft Corp.

    Bernard Aboba
    Principal Architect, Lync, Microsoft Corp.

    Jean Paoli
    President, Microsoft Open Technologies, Inc.

    Adalberto Foresti
    Senior Program Manager, Microsoft Open Technologies, Inc.

     

     

    Today, we are pleased to announce Microsoft’s contribution of the CU-RTC-Web proposal to the W3C WebRTC working group.

    Thanks in no small part to the exponential improvements in broadband infrastructure over the last few years, it is now possible to leverage the digital backbone of the Internet to create experiences for which dedicated media and networks were necessary until not too long ago.

    Inexpensive, real time video conferencing is one such experience.

    The Internet Engineering Task Force and the World Wide Web Consortium created complementary working groups to bring these experiences to the most familiar and widespread application used to access the Internet: the web browser. The goal of this initiative is to add a new level of interactivity for web users with real-time communications (Web RTC) in the browser.

    While the overarching goal is simple to describe, there are several critical requirements that a successful, widely adoptable Web RTC browser API will need to meet:

    • Honoring key web tenets – The Web favors stateless interactions which do not saddle either party of a data exchange with the responsibility to remember what the other did or expects. Doing otherwise is a recipe for extreme brittleness in implementations; it also considerably raises development costs, which reduces the reach of the standard itself.
    • Customizable response to changing network quality – Real time media applications have to run on networks with a wide range of capabilities varying in terms of bandwidth, latency, and packet loss.  Likewise these characteristics can change while an application is running. Developers should be able to control how the user experience adapts to fluctuations in communication quality.  For example, when communication quality degrades, the developer may prefer to favor the video channel, favor the audio channel, or suspend the app until acceptable quality is restored.  An effective protocol and API should provide developers with the tools to tailor the application response to the exact needs of the moment.
    • Ubiquitous deployability on existing network infrastructure – Interoperability is critical if WebRTC users are to communicate with the rest of the world: with users on different browsers, on VoIP phones, and on mobile phones, from behind firewalls, and across routers and equipment that is unlikely to be upgraded to the current state of the art anytime soon.
    • Flexibility in its support of popular media formats and codecs as well as openness to future innovation – A successful standard cannot be tied to individual codecs, data formats, or scenarios; any of these may soon be supplanted by newer versions, which would make such a tightly coupled standard obsolete just as quickly. The right approach is instead to support multiple media formats and to bring the bulk of the logic to the application layer, enabling developers to innovate.

    While a useful start at realizing the Web RTC vision, we feel that the existing proposal falls short of meeting these requirements. In particular:

    • No ubiquitous deployability: it shows no signs of offering real-world interoperability with existing VoIP phones and mobile phones, from behind firewalls and across routers, and instead focuses on video communication between web browsers under ideal conditions. It does not allow an application to control how media is transmitted on the network. Moreover, implementing innovative, real-world applications like security consoles, audio streaming services, or baby monitors through this API would be unwieldy, assuming it could be made to work at all. A Web RTC standard must equip developers with the ability to implement all scenarios, even those we haven’t thought of.
    • No fit with key web tenets: it is inherently not stateless, as it takes a significant dependency on the legacy of SIP technology, which is a suboptimal choice for use in Web APIs. In particular, the negotiation model of the API relies on the SDP offer/answer model, which forces applications to parse and generate SDP in order to effect a change in browser behavior. An application is forced to only perform certain changes when the browser is in specific states, which further constrains options and increases complexity. Furthermore, the set of permitted transformations to SDP is constrained in non-obvious and undiscoverable ways, forcing applications to resort to trial-and-error and/or browser-specific code. All of this added complexity is an unnecessary burden on applications with little or no benefit in return.

     

    The Microsoft Proposal for Customizable, Ubiquitous Real Time Communication over the Web

    For these reasons, Microsoft has contributed the CU-RTC-Web proposal, which we believe addresses the four key requirements above.

    • The proposal adds a real-time, peer-to-peer transport layer that empowers web developers through greater flexibility and transparency, putting them directly in control of the experience they provide to their users.
    • It dispenses with the constraints imposed by unnecessary state machines and complex SDP and provides simple, transparent objects.
    • It elegantly builds on and integrates with the existing W3C getUserMedia API, making it possible for an application to connect a microphone or a camera in one browser to the speaker or screen of another browser. getUserMedia is an increasingly popular API that Microsoft has been prototyping and that is applicable to a broad set of applications with an HTML5 client, including video authoring and voice commands.

    The following diagram shows how our proposal empowers developers to create applications that take advantage of the tremendous benefits offered by real-time media in a clear, straightforward fashion.

    [Diagram: overview of the CU-RTC-Web proposal]

    We are looking forward to continued work in the IETF and the W3C, with an open and fruitful conversation that converges on a standard that is both future-proof and an answer to today’s communication needs on the web. We would love to get community feedback on the details of our CU-RTC-Web proposal document and we invite you to stay tuned for additional content that we will soon publish on http://html5labs.com in support of our proposal.

  • Interoperability @ Microsoft

    HTTP/2.0 makes a great step forward in Vancouver, but this is just the beginning!

    • 0 Comments

    From:

    Henrik Frystyk Nielsen
    Principal Architect, Microsoft Open Technologies, Inc.

    Rob Trace
    Senior Program Manager Lead, Microsoft Corporation

    Gabriel Montenegro
    Principal Software Development Engineer, Microsoft Corporation

     

     

    We just came back from the IETF meeting in Vancouver, where the HTTP working group was meeting to decide on the way forward for HTTP/2.0. We are very happy with the discussions and overall outcomes as reflected in the meeting minutes and as summarized by the Chair, Mark Nottingham. At the meeting, the working group clarified the direction for HTTP/2.0 and began to draft a new charter. The group agreed that seven key areas need deep, data-driven discussion as part of the HTTP/2.0 specification process, and the resulting standard will not be backward compatible with any existing proposals (SPDY, HTTP Speed+Mobility, and Network-Friendly HTTP Upgrade). The charter calls for a proposed completion date for the standard of November 2014. In other words, while we are excited about where we are, it is clear that we are just at the beginning of the process toward HTTP 2.0.

    Seven Key areas under discussion

    The meeting clearly outlined the need for discussion and consensus on seven key technical areas, such as Compression, Mandatory TLS, and Client Pull/Server Push. This list of issues is aligned with the position that Microsoft’s Henrik Frystyk Nielsen outlined in an earlier message to the HTTP discussion list (see excerpts below). Overall, we believe there need to be robust discussions about how we bring together the best elements of the current SPDY, HTTP Speed+Mobility, and Network-Friendly HTTP Upgrade proposals.

      Area                          Opinion that seems to prevail
      1. Compression                SPDY or Friendly
      2. Multiplexing               SPDY
      3. Mandatory TLS              Speed+Mobility
      4. Negotiation                Friendly or Speed+Mobility
      5. Client Pull/Server Push    Speed+Mobility
      6. Flow Control               SPDY
      7. WebSockets                 Speed+Mobility

    HTTP/2.0 specification must be data-driven

    We are particularly gratified to see this language in the proposed charter:

    It is expected that HTTP/2.0 will:
    * Substantially and measurably improve end-user perceived latency in most cases, over HTTP/1.1 using TCP.

    This supports Microsoft’s position that the HTTP update must be data-driven to ensure that it provides the desired benefits for users. The SPDY proposal has done a good job of raising awareness of the opportunities to improve Web performance.

    Almost equal performance between SPDY and HTTP 1.1

    To compare the performance of SPDY with HTTP 1.1, we have run tests comparing download times of several public web sites in a controlled study. The tests use publicly available software run with mostly default configurations, while applying all the currently available optimizations to HTTP 1.1. You can find a preliminary report on the test results here: http://research.microsoft.com/apps/pubs/?id=170059. The results mirror other data (http://www.guypo.com/technical/not-as-spdy-as-you-thought) that indicate mixed results for SPDY performance.

    Our results indicate almost equal performance between SPDY and HTTP 1.1 when one applies all the known optimizations to HTTP 1.1; SPDY's performance improvements are neither consistent nor significant. We will continue our testing, and we welcome others to publish their results so that HTTP/2.0 can choose the best changes and deliver the best possible performance and scalability improvements compared to HTTP 1.1.

    We discussed those results in Vancouver and it was great to see the interest that this research received from the community on the IETF mailing list and on Twitter.

    Existing proposals will change a lot – No backward compatibility

    In light of the discussions and the proposed charter, HTTP/2.0 will undoubtedly not be backward compatible with any of the current proposals (SPDY, Speed+Mobility, Friendly); in fact, we expect that it might differ in substantial ways from each of these proposals. Consequently, we caution implementers against embracing unstable versions of the specification too eagerly. The proposed charter calls for an IETF standard by November 2014.

    We are happy that the working group decided, for practical reasons, to use the text from http://datatracker.ietf.org/doc/draft-mbelshe-httpbis-spdy/ as a starting point. The discussions around the previously cited seven design elements will deeply modify this text. As the Chair wrote, “It’s important to understand that SPDY isn’t being adopted as HTTP/2.0.” This is in line with the Microsoft approach: our HTTP Speed+Mobility proposal starts from both the Google SPDY protocol (a separate submission to the IETF for this discussion) and the work the industry has done around WebSockets, and its main departures from SPDY are to address the needs of mobile devices and applications.

    Looking ahead

    We’re excited for the web to get faster, more stable, and more capable. HTTP/2.0 is an important part of that progress, and we look forward to an HTTP/2.0 that meets the needs of the entire web, including browsers, apps, and mobile devices.

    Henrik Frystyk Nielsen, Gabriel Montenegro and Rob Trace

    Message to the IETF mailing list from Henrik

    Dear All,

    We remain committed to the HTTP/2.0 standards process and look forward to seeing many of you this week at the IETF meeting in Vancouver to continue the discussion.  In the spirit of open discussion, we wanted to share some observations in advance of the meeting and share the latest progress from prototyping and testing.

    There are currently three different proposals that the group is working through:

       * SPDY (http://tools.ietf.org/html/draft-mbelshe-httpbis-spdy),
       * HTTP Speed+Mobility (http://tools.ietf.org/html/draft-montenegro-httpbis-speed-mobility),
       * Network-Friendly HTTP Upgrade (http://tools.ietf.org/html/draft-tarreau-httpbis-network-friendly).

    The good news is that everyone involved wants to make the Web faster, more scalable, more secure, and more mobile-friendly, and each proposal has benefits in different areas that the discussion can choose from.

    --- A Genuinely Faster Web ---

    The SPDY proposal has been great for raising awareness of Web performance. It takes a "clean slate" approach to improving HTTP.

    To compare the performance of SPDY with HTTP/1.1 we have run tests comparing download times of several public web sites in a controlled study. The tests use publicly available software run with mostly default configurations while applying all the currently available optimizations to HTTP/1.1. You can find a preliminary report on the test results here: http://research.microsoft.com/apps/pubs/?id=170059. The results mirror other data (http://www.guypo.com/technical/not-as-spdy-as-you-thought) that indicate mixed results for SPDY performance.

    Our results indicate almost equal performance between SPDY and HTTP/1.1 when one applies all the known optimizations to HTTP/1.1; SPDY's performance improvements are neither consistent nor significant. We will continue our testing, and we welcome others to publish their results so that HTTP/2.0 can choose the best changes and deliver the best possible performance and scalability improvements compared to HTTP/1.1.

    --- Taking the Best from Each ---

    Speed is one of several areas of improvement. Currently, there's no clear consensus that any one of the proposals is the clear choice or even the starting point for HTTP/2.0 (based on our reading of the Expressions of Interest and discussions on this mailing list). A good example of this is the vigorous discussion around mandating TLS encryption (http://tools.ietf.org/html/rfc5246) for HTTP/2.0.

    We think a good approach for HTTP/2.0 is to take the best solution for each of these areas from each of the proposals.  This approach helps us focus the discussion for each area of the protocol. Of course, this approach would still allow the standard to benefit from the extensive knowledge gained from implementing existing proposals.

    We believe that the group can converge on consensus in the following areas, based on our reading of the Expressions of Interest, by starting from the different proposals.

    ------------------|------------------
    Area              | Opinion that
                      | seems to prevail
    ------------------|------------------
    1. Compression    | SPDY or Friendly
    ------------------|------------------
    2. Multiplexing   | SPDY
    ------------------|------------------
    3. Mandatory TLS  | Speed+Mobility
    ------------------|------------------
    4. Negotiation    | Friendly or
                      |   Speed+Mobility
    ------------------|------------------
    5. Client Pull/   | Speed+Mobility
          Server Push |
    ------------------|------------------
    6. Flow Control   | SPDY
    ------------------|------------------
    7. WebSockets     | Speed+Mobility
    ------------------|------------------

    Below, we discuss each HTTP/2.0 element and the current consensus that appears to be forming within the Working Group.

    1. Compression

    Compression is simple to conceptualize and implement, and it is important. Proxies and other boxes in the middle on today's Web often face problems with it. The HTTP/2.0 discussion has been rich but with little consensus.

    Though some studies suggest that SPDY's header compression approach shows promise, other studies show this compression to be prohibitively onerous for intermediary devices. More information here would help us make sure we're making the Web faster and better.

    Also, an entire segment of implementers are not interested in compression as defined in SPDY.  That's a challenge because the latest strawman for the working group charter (http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/0784.html) states that the "resulting specification(s) are expected to meet these goals for common existing deployments of HTTP; in particular, ... intermediation (by proxies, Corporate firewalls, 'reverse' proxies and Content Delivery Networks)."

    We think either the SPDY or the Friendly proposal is a good starting point for progress.

    2. Multiplexing

    All three proposals define similar multiplexing models. We haven't had substantial discussion on the differences. This lack of discussion suggests that there is rough consensus around the SPDY framing for multiplexing.

    We think that the SPDY proposal is a good starting point here and best captures the current consensus.

    3. Mandating Always On TLS

    There is definitely no consensus to mandate TLS for all Web communication, but some major implementers have stated they will not adopt HTTP/2.0 unless the working group supports a "TLS is mandatory" position. A very preliminary note from the chair (http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/0601.html) states that there is a lack of consensus for mandating TLS.

    We think the Speed+Mobility proposal is a good starting point here as it provides options to turn TLS on (or not).

    4. Negotiation

    Only two of the proposals actually discuss how different endpoints agree to use HTTP/2.0.

    (The SPDY proposal does not specify a negotiation method. Current prototype implementations use the TLS-NPN (http://tools.ietf.org/html/draft-agl-tls-nextprotoneg) extension.  While the other proposals use HTTP Upgrade to negotiate HTTP/2.0, some parties have expressed non-support for this method as well.)

    We think either the Friendly or the Speed+Mobility proposal is a good starting point, because they are the only ones that have any language in this respect.

    5. Client Pull and Server Push

    There are tradeoffs between a server push model and a client pull model. The main question is how to improve performance while respecting bandwidth and client caches.

    Server Push has not had the same level of implementation and experimentation as the other features in SPDY. More information here would help us make sure we're making the Web faster and better.

    We think the Speed+Mobility proposal is a good starting point here, suggesting that this issue may be better served in a separate document rather than tied to the core HTTP/2.0 protocol.

    6. Flow Control

    There has only been limited discussion in the HTTPbis working group on flow control. Flow control offers a lot of opportunity to make the Web faster as well as to break it; for example, implementations need to figure out how to optimize for opposing goals (like throughput and responsiveness) at the same time.

    The current version of the SPDY proposal specifies a flow control message with many settings that are not well defined. The Speed+Mobility proposal has a simplified flow control model based on certain assumptions. More experimentation and information here would help us make sure we're making the Web faster and better.

    We think that the SPDY proposal is a good starting point here.

    7. WebSockets

    We see support  for aligning HTTP/2.0 with a future version of WebSockets, as suggested in the introduction of the Speed+Mobility proposal.

    --- Moving forward ---

    We're excited for the Web to get faster, more stable, and more capable, and HTTP/2.0 is an important part of that.

    We believe that bringing together the best elements of the current SPDY, HTTP Speed+Mobility, and Network-Friendly HTTP Upgrade proposals is the best approach to make that happen.

    Based on the discussions on the HTTPbis mailing list, we've suggested which proposals make the most sense to start from for each of the areas that HTTP/2.0 is addressing. Each of these areas needs more prototyping and experimentation and data. We're looking forward to the discussion this week.

    Sincerely,

    Henrik Frystyk Nielsen

    Principal Architect, Microsoft Open Technologies, Inc.

    Gabriel Montenegro

    Principal Software Development Engineer, Microsoft Corporation

    Rob Trace

    Senior Program Manager Lead, Microsoft Corporation

    Adalberto Foresti

    Senior Program Manager, Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Windows Azure Storage plugin for WordPress

    • 2 Comments

    The Windows Azure Storage Plugin for WordPress was updated today to use the new Windows Azure SDK for PHP. The plugin comes with a comprehensive user guide, but for a quick overview of what it does and how to get started, see Brian Swan’s blog post. Cory Fowler also has some good information on how to contribute to the plugin, which is an MS Open Tech open-source project hosted on the SVN repo of the WordPress Plugin Directory.

    This plugin allows you to use Windows Azure Storage Service to host the media files for a WordPress blog. I use WordPress on my personal blog where I write mostly about photography and sled dogs, so I installed the plugin today to check it out. The installation is quick and simple (like all WordPress plugins, you just need to copy the files into a folder under your wp-content/plugins folder), and the only setup required is to point it at a storage account in your Windows Azure subscription. Brian’s post has all the details.

    The plugin uses the BlobRestProxy class exposed by the PHP SDK to store your media files in Windows Azure blob storage:
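    Here’s a rough, illustrative sketch of what that call looks like with the Windows Azure SDK for PHP (this is not the plugin’s actual source; the connection string values and file names are placeholders):

        <?php
        // Illustrative sketch, not the plugin's actual code. The account name,
        // account key, container, and file names below are placeholders.
        require_once 'WindowsAzure/WindowsAzure.php';

        use WindowsAzure\Common\ServicesBuilder;

        $connectionString = 'DefaultEndpointsProtocol=https;'
                          . 'AccountName=mystorageaccount;'
                          . 'AccountKey=<your-account-key>';

        // BlobRestProxy instance created by the SDK's ServicesBuilder.
        $blobRestProxy = ServicesBuilder::getInstance()->createBlobService($connectionString);

        // Upload a media file as a block blob into the plugin's container.
        $content = fopen('DSC_7914.jpg', 'r');
        $blobRestProxy->createBlockBlob('wordpressmedia', '2012/08/DSC_7914.jpg', $content);

    The uploaded blob then shows up in the container at a URL like the one shown later in this post.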

    Once the plugin is installed, you don’t need to think about it – it does everything behind the scenes, while you stay focused on the content you’re creating. If you’re writing a blog post in the WordPress web interface, you’ll see a new button for Windows Azure Storage, which you can use to upload and insert images into your post.

    Brian’s post covers the details of how to upload media files through the plugin’s UI under the new button.

    If you click on the Add Media icon instead, you can add images from the Media Library, which is also stored in your Windows Azure storage account under the default container (which you can select when configuring the plugin).

    If you use Windows Live Writer (as I do), you don’t need to do anything special at all to take advantage of the plugin. When you publish from Live Writer the media files will automatically be uploaded to the default container of your storage account, and the links within your post will point to the blobs in that container as appropriate.

    As an example, I created a blog post that takes advantage of the plugin. I just posted it from Live Writer as I usually do, and the images are stored in the wordpressmedia container of my dmahughwordpress storage account, with URLs like this one:

    http://dmahughwordpress.blob.core.windows.net/wordpressmedia/2012/08/DSC_7914.jpg

    Check it out, and let us know if you have any questions. If you don’t have an Azure subscription, you can sign up for a free trial here.

  • Interoperability @ Microsoft

    OSCON photos are here! Thanks, Julian!

    • 0 Comments

    Hey OSCON friends, drum roll please … our Microsoft-sponsored photographer and joyful open source geek extraordinaire Julian Cash has posted your photos from our booth, along with a fun video, on his JC Event Photo OSCON Event page and his Facebook page. Find the photo you love from among his shots of you, give it a right-click, “save picture as…” in your favorite format, and it’s all yours. Copy the photo to your favorite social media sites and send a copy to Mom – I did!

    On behalf of the MS Open Tech evangelism team, thanks to everyone who spent time with us at OSCON. I blogged earlier about the new friends we made and the cool conversations we all had about emerging technologies, but I think it’s time to simply say that a picture is worth a thousand words …

    [Photo]

    Clockwise from top right: Gianugo Rabellino, “Grazie!” — Olivier Bloch, “Merci!” — Doug Mahugh and Robin Bender Ginn, “Thank you! Thank you!”

  • Interoperability @ Microsoft

    Node.js script for releasing a Windows Azure blob lease

    • 1 Comments

    This post covers a workaround for an issue that may affect you if you’re deploying Windows Azure virtual machines from VHDs stored in Windows Azure blob storage. The issue doesn’t always occur (in fact, our team hasn’t been able to repro it), and it will be fixed soon. If you run into the issue, you can use any one of several workarounds covered below.

    Blob leases are a mechanism provided by Windows Azure for ensuring that only one process has write access to a blob. As Steve Marx notes in his blog post on the topic, “A lease is the distributed equivalent of a lock. Locks are rarely, if ever, used in distributed systems, because when components or networks fail in a distributed system, it’s easy to leave the entire system in a deadlock situation. Leases alleviate that problem, since there’s a built-in timeout, after which resources will be accessible again.”

    In the case of VHD images stored as blobs, Windows Azure uses a lease to ensure that only one virtual machine at a time has the VHD mounted in a read/write configuration. In certain cases, however, we’ve found that the lease may not expire correctly after deleting the virtual machine and deleting the disk or OS image associated with the VHD. This can cause a lease conflict error message to occur when you try to delete the VHD or re-use it later in a different virtual machine.

    If you’re affected by this issue, you can explicitly break the lease that has not expired, or you can make a copy of the VHD and use that copy for provisioning a new virtual machine. Craig Landis has posted instructions on the Windows Azure community forum for how to do this from Windows machines; he also covers the same techniques in a separate post addressing a variation on the issue.

    For those who are managing Windows Azure virtual machines from Linux or Mac desktops, our team has developed a Node.js script that can be used to break a lease if needed. Here are the steps to follow for installing and running the script:

    1. Verify through the Windows Azure management portal that the VHD is not actually in use. Craig’s forum post provides guidance on how to do this.

    2. If you don’t have the Windows Azure command line tool for Mac and Linux installed, you can get it by installing the Windows Azure SDK for Node.js. SDK installation instructions for Windows, Mac, and Linux can be found on the Windows Azure Node.js Developer Center.

    3. Download and import your Windows Azure publish settings file, as covered under “Manage your account information and publish settings” in the command line tool documentation.

    4. Copy the breakLease.js file (available here) to the node_modules/azure-cli subfolder under your Node.js global modules folder. You can find your global modules folder with the npm ls -g command. For example, on my Windows machine that command returns c:\Users\dmahugh\AppData\Roaming\npm, so I need to copy the script to c:\Users\dmahugh\AppData\Roaming\npm\node_modules\azure-cli.

    After you’ve completed those setup steps, you can break a blob lease by running the script with a single parameter, the URL of the blob:

    > node breakLease.js <absolute-url-to-blob>

    The script prints out information about the steps it takes to break the lease:

    [Screenshot: breakLease.js console output]

    That’s all there is to it. As I mentioned earlier, this workaround is only needed in certain cases until the underlying cause has been fixed. Please let us know if you run into any issues using this script.
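    As a side note for PHP developers: the Windows Azure SDK for PHP discussed elsewhere on this blog exposes lease operations on the same BlobRestProxy class used for blob uploads, so a similar workaround should be possible from PHP. The sketch below assumes a breakLease method patterned after the SDK’s other lease calls (verify the exact method name and signature against the SDK version you have installed); the connection string, container, and blob names are placeholders:

        <?php
        // Hedged sketch: break a blob lease using the Windows Azure SDK for PHP.
        // This assumes BlobRestProxy exposes a breakLease() method; check the SDK
        // reference for the exact signature before relying on it.
        require_once 'WindowsAzure/WindowsAzure.php';

        use WindowsAzure\Common\ServicesBuilder;

        $connectionString = 'DefaultEndpointsProtocol=https;'
                          . 'AccountName=mystorageaccount;'   // placeholder
                          . 'AccountKey=<your-account-key>';  // placeholder

        $blobRestProxy = ServicesBuilder::getInstance()->createBlobService($connectionString);

        // Container and blob name of the stuck VHD (placeholders).
        $blobRestProxy->breakLease('vhds', 'myvm-osdisk.vhd');

        echo "Break requested; the lease frees up once the break period expires.\n";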
