December, 2012

  • Interoperability @ Microsoft

    How to develop for Windows Phone 8 on your Mac


    UPDATE 01/07/13: added instructions to enable Hyper-V in a Parallels Desktop VM

    Interested in developing apps for Windows Phone 8, but doing your development on a Mac? No problem: check out the guide below for a variety of options.

    First you should consider whether to build native WP8 applications or Web applications. Native applications run directly on the phone platform and deliver advanced performance and a fully integrated experience to the end user. Web applications developed with HTML5 and JavaScript take advantage of the Web standards support of Internet Explorer 10 and the cross-platform nature of HTML5. There is a lot of debate about which way to go, native app or Web app with HTML5, and I would say that the answer is… it depends. In this post, I will present the main options for going one way or the other, based on the assumption that you have a Mac and want to stick to it.

    WP8 application development on a Mac

    To build applications for Windows Phone, you need Visual Studio 2012 and the WP8 SDK. There is a free version that bundles these two and that allows you to do pretty much all you need to build and publish an application to the Windows Phone store:

    • Write and debug code in an advanced code editor
    • Compile to app package
    • Test the application in an emulator leveraging advanced features
    • Connect and deploy to an actual device for cross-debugging and performance analysis
    • … and these are only the basic features; there are plenty more!

    Visual Studio 2012 runs on Windows 8 and Windows 7, but the Windows Phone emulator relies on Hyper-V, which is available only in 64-bit Windows 8. So basically, if you want to leverage the emulator, you need a 64-bit Windows 8 install and a way to enable Hyper-V in it.

    Using a recent Macintosh, you have a couple of options to run Windows 8:

    1. Run Windows 8 on your Mac natively using Boot Camp
    2. Run Windows 8 in a virtual environment using software like VMware Fusion 5 or Parallels Desktop 8 for your Mac

    There is plenty of documentation online on how to set up the environments for both options to get Windows to run on your Mac, and you can also find details on MSDN here.

    Boot Camp

    If you want to go the Boot Camp way, once you have set up Windows 8, you can go ahead and follow the default instructions to download and install the WP8 SDK.

    VMware Fusion 5 or Parallels Desktop 8

    If you want to use VMware Fusion or Parallels and still be able to use the WP8 emulator, here are the steps you need to follow:

    • Install VMware Fusion 5 or Parallels Desktop 8 if you don’t have it yet
    • Download the Windows 8 64-bit ISO:
      • You can find the evaluation version on the evaluation center here.
      • If you want the retail version, things are a little tricky on a Mac, as there is no way to download the retail ISO directly. The trick consists of installing the evaluation version of Windows 8 in a VMware Fusion or Parallels VM following the instructions below, then, from Windows 8, running the Windows 8 setup (a link is available in the first lines of the email you will receive after purchasing Windows 8), which will offer the option of downloading the retail ISO after you enter your product key, as described in this article.
    • Create a new VM with the following parameters:
      • On VMware Fusion 5:
        • Ensure that you have the following settings (be sure to check the “Enable hypervisor applications in this Virtual machine” option):


        • Important:
          • Hyper-V requires at least 2 cores to be present.
          • The Windows Phone Emulator will use 256MB or 512MB of virtual memory; therefore, don’t be shy with the memory assigned to the VM and assign at least 2 GB.
          • In the advanced settings, ensure you have selected the “Preferred virtualization engine: Intel VT-x with EPT” option
        • Modify the .vmx file to add or modify the following settings:
          • hypervisor.cpuid.v0 = "FALSE"
          • mce.enable = "TRUE"
          • vhv.enable = "TRUE"
      • On Parallels Desktop 8:
        • Ensure that you have the following settings for the new VM (go into VM Settings>General>CPUs):


        • Still in settings, you need to enable virtualization pass-thru support in Options>Optimizations>Nested Virtualization


    • Install Windows 8 on your VMware Fusion or Parallels Desktop VM (you can find plenty of guides online on how to install a VM from an ISO)
    • Once Windows 8 is installed, download and install the WP8 SDK.


    The SDK install will set up the Hyper-V environment so that you can use the emulator within the VMware Fusion or Parallels Desktop image.

    (Screenshots: the Windows Phone emulator running on VMware Fusion and on Parallels Desktop.)

    You are now set to build, debug and test WP8 applications. You can start your development and debugging by leveraging the emulator and its tools, and you can consider using an actual Windows Phone 8 device, plugging it into your Mac and setting things up so that the USB device shows up in the VM.

    You can find extensive information on how to use Visual Studio 2012 and its emulator for Windows Phone 8 development, how to publish an application, samples, and everything else a developer needs here.

    WP8 Web applications development on a Mac

    Here we are talking about two different things:

    • Development for mobile websites that will render well in the Windows Phone 8 browser.
    • HTML5 application development using the Web Browser control hosted by a native application, a model used by frameworks and tools such as Apache Cordova (a.k.a. PhoneGap); these are also known as hybrid applications.

    Windows 8 offers a “native HTML5/JS” model that allows you to develop applications in HTML5 and JavaScript that execute directly on top of the application platform, but we will not discuss this model here, as Windows Phone 8 proposes a slightly different model for HTML5 and JS application development.

    In both cases mentioned above, the HTML5/JavaScript/CSS code will be rendered and executed by the same Internet Explorer 10 engine on Windows Phone 8. This means that whether you are writing a mobile website or a PhoneGap-type application, you can work in your usual tool or editor all the way down to the debugging and testing phases.

    While you can do a lot of debugging of your HTML5/JS code in a Web browser, you will need to do actual tests and debugging on the real platform (the WP8 emulator and/or an actual device). Even if you are using Web standards, you need to consider that the level of support might not be the same on all platforms. And if you are using third-party code, you also need to ensure that the code doesn’t contain platform-specific elements so that things will run correctly. For example, you need to get rid of any dependencies on WebKit specifics.

    Making sure your Web code is not platform specific

    When writing this code, you need to consider the various platforms your mobile Web application will be used on. Obviously, the fewer platform specifics there are, the better for you as a developer! The good news is that HTML5 support is getting better and better across modern mobile browsers. IE10 in Windows Phone 8 is no exception, bringing extended standards support, hardware acceleration and great rendering performance. You can take a look at the following site directly from your Windows Phone 8 device to check that out: www.atari.com/arcade


    To learn more on how to make sure your mobile Web code will render well on Internet Explorer 10 on Windows Phone 8 as well as on other modern mobile browsers, you can read this article.

    Testing and debugging your Web application for WP8 on a Mac

    Once you have clean HTML5 code that runs and renders well in a Web browser, you will need to test it on IE10 on a Windows Phone 8 device or emulator.

    IE10 on the desktop includes powerful debugging tools (“F12”), which is not the case on Windows Phone 8. One of the recommended approaches is to leverage the “F12” debugging capabilities of desktop IE10 to cover most, if not all, of the debugging and testing cases for your mobile Web application for Windows Phone 8. On a Mac, you will need to look into the options for installing a Windows 8 virtual machine mentioned at the beginning of this article, and load your code in Internet Explorer 10 within Windows 8. Once IE is launched, press the “F12” key or go to the settings menu and select “F12 Developer tools.”


    In the debugging tool at the bottom, you can then change the User agent setting and the resolution from the “Tools” menu to match what IE10 on Windows Phone 8 exposes.


    Once you have done these tests on Internet Explorer 10 desktop, you can deploy and test on an actual Windows Phone 8 device or on the emulator (see previous chapters on how to set things up to make the emulator work on a Mac).

    Now what?

    With these steps you should be set to start developing and deploying Windows Phone 8 applications from your Mac.

    But there are certainly other tips and tricks that you will figure out and you may already know. We would love to hear from you to make this post even more useful for developers wishing to expand their reach to the Windows Phone 8 platform. Do not hesitate to comment on this post with your suggestions, ideas, tips…

  • Interoperability @ Microsoft

    MS Open Tech Contributes Support for Windows ETW and Perf Counters to Node.js


    Here’s the latest about Node.js on Windows. Last week, working closely with the Node.js core team, we checked code into the open source Node.js master branch to add support for ETW and Performance Counters on Windows. These new features will be included in v0.10 when it is released. You can download the source code now and build Node.js on your machine if you want to try out the new functionality right away.

    Developers need advanced debugging and performance monitoring tools. After working to ensure that Node.js runs well on Windows, our focus has been to provide instrumentation features that developers can use to monitor the execution of Node applications on Windows. For Windows developers this means having the ability to collect Event Tracing for Windows (ETW) data and use Performance Counters to monitor application behavior at runtime. ETW is a general-purpose, high-speed tracing facility provided by the Windows operating system. To learn more about ETW, see the MSDN article Improve Debugging And Performance Tuning With ETW.

    ETW

    With ETW, Node developers can monitor the execution of Node applications and collect data on key metrics to investigate performance and other issues. One typical scenario for ETW is profiling the execution of the application to determine which functions are most expensive (i.e. the functions where the application spends the most time). Those functions are the ones developers should focus on in order to improve the overall performance of the application.

    In Node.js we added the following ETW events, representing some of the most interesting metrics to determine the health of the application while it is running in production:

    • NODE_HTTP_SERVER_REQUEST: node.js received a new HTTP Request
    • NODE_HTTP_SERVER_RESPONSE: node.js responded to an HTTP Request
    • NODE_HTTP_CLIENT_REQUEST: node.js made an HTTP request to a remote server
    • NODE_HTTP_CLIENT_RESPONSE: node.js received the response from an HTTP Request it made
    • NODE_NET_SERVER_CONNECTION: TCP socket open
    • NODE_NET_STREAM_END: TCP Socket close
    • NODE_GC_START: V8 starts a new GC
    • NODE_GC_DONE: V8 finished a GC

    For Node.js ETW events we also added information about the JavaScript stack trace at the time the ETW event was generated. This is important information that the developer can use to determine what code was executing when the event was generated.
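
    If you prefer to consume these events programmatically rather than through the xperf tooling described below, one option (our suggestion here, not part of the Node.js work itself) is Microsoft's TraceEvent library. The following is a minimal sketch that dumps the Node.js events from a captured .etl file; the provider name matches the one used in the capture steps later in this post:

    using System;
    using Microsoft.Diagnostics.Tracing;
    using Microsoft.Diagnostics.Tracing.Parsers;

    class EtwDump
    {
        static void Main()
        {
            // Read an .etl file captured with xperf (see the steps below).
            using (var source = new ETWTraceEventSource("perf.etl"))
            {
                var parser = new DynamicTraceEventParser(source);
                parser.All += delegate(TraceEvent data)
                {
                    if (data.ProviderName == "NodeJS-ETW-provider")
                        Console.WriteLine("{0:o} {1}", data.TimeStamp, data.EventName);
                };
                source.Process(); // replays the file, invoking the callback for each event
            }
        }
    }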

    Flamegraphs

    Most Node developers are familiar with Flamegraphs, which are a simple graphical representation of where time is spent during application execution. The following is an example of a Flamegraph generated using ETW.


    For Windows developers we built the ETWFlamegraph tool (based on Node.js), which can parse .etl files, the log files that Windows generates when ETW events are collected. The tool can convert the .etl file to a format that can be used with the Flamegraph tool that Brendan Gregg created.

    To generate a Flamegraph using Brendan’s tool, you need to follow the simple instructions listed on the ETWFlamegraph project page on GitHub. Most of the steps involve processing the ETW files so that symbols and other information are aggregated into a single file that can be used with the Flamegraph tool.

    ETW relies on a set of tools that are not installed by default. You’ll either need to install Visual Studio (for instance, Visual Studio 2012 installs the ETW tools by default) or you need to install the latest version of the Windows SDK tools. For Windows 7 the SDK can be found here.

    To capture stack traces:

    1. xperf -on Latency -stackwalk profile
    2. <run the scenario you want to profile, ex node.exe myapp.js>
    3. xperf -d perf.etl
    4. SET _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/downloads/symbols
    5. xperf -i perf.etl -o perf.csv -symbols

    To extract the stacks for the node.exe process and fold them into perf.csv.fold (which includes the function names that will be shown in the Flamegraph), run:

    node etlfold.js perf.csv node.exe

    (etlfold.js is found in the ETWFlamegraph project on GitHub.)

    Then run the flamegraph script (requires perl) to generate the svg output:

    flamegraph.pl perf.csv.fold > perf.svg

    If the Node ETW events for JavaScript symbols are available, the procedure becomes the following:

    1. xperf -start symbols -on NodeJS-ETW-provider -f symbols.etl -BufferSize 128
    2. xperf -on Latency -stackwalk profile
    3. run the scenario you want to profile.
    4. xperf -d perf.etl
    5. xperf -stop symbols
    6. SET _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/downloads/symbols
    7. xperf -merge perf.etl symbols.etl perfsym.etl
    8. xperf -i perfsym.etl -o perf.csv -symbols

    The remaining steps are the same as in the previous example.

    Note: for more advanced scenarios where you may want stack traces that include the Node.js core code executed at the time the event is generated, you need to include node.pdb (the debugging information file) in the symbol path so the ETW tools can resolve those symbols and include them in the Flamegraph.

    PerfCounters

    In addition to ETW, we also added Performance Counters (PerfCounters). Like ETW, Performance counters can be used to monitor critical metrics at runtime, the main differences being that they provide aggregated data and Windows provides a great tool to display them. The easiest way to work with PerfCounters is to use the Performance monitor console but PerfCounters are also used by System Center and other data center management applications. With PerfCounters a Node application can be monitored by those management applications, which are widely used for instrumentation of large cloud and enterprise-based applications.

    In Node.js we added the following performance counters, which mimic very closely the ETW events:

    • HTTP server requests: number of incoming HTTP requests
    • HTTP server responses: number of responses
    • HTTP client requests: number of HTTP requests generated by node to a remote destination
    • HTTP client responses: number of HTTP responses for requests generated by node
    • Active server connections: number of active connections
    • Network bytes sent: total bytes sent
    • Network bytes received: total bytes received
    • %Time in GC: % V8 time spent in GC
    • Pipe bytes sent: total bytes sent over Named Pipes.
    • Pipe bytes received: total bytes received over Named Pipes.

    All Node.js performance counters are registered in the system so they show up in the Performance Monitor console.
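
    Because these are standard Windows performance counters, they can also be read from code using System.Diagnostics. Here is a small sketch; the category and counter names are illustrative guesses on our part, so check the Performance Monitor console for the exact names registered on your machine:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class NodeCounterSample
    {
        static void Main()
        {
            // Category/counter names are illustrative; verify them in Performance Monitor.
            using (var requests = new PerformanceCounter("Node.js", "HTTP server requests", readOnly: true))
            {
                for (int i = 0; i < 10; i++)
                {
                    Console.WriteLine("HTTP server requests: {0}", requests.NextValue());
                    Thread.Sleep(1000); // sample once per second
                }
            }
        }
    }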


    While the application is running, it’s easy to see what is happening through the Performance Monitor console.

    The Performance Monitor console can also display performance data in tabular form.

    Collecting live performance data at runtime is an important capability for any production environment. With these new features we have given Node.js developers the ability to use a wide range of tools that are commonly used in the Windows platform to ensure an easier transition from development to production.

    More on this topic very soon; stay tuned.

    Claudio Caldato
    Principal Program Manager Lead
    Microsoft Open Technologies, Inc.

  • Interoperability @ Microsoft

    Open source release from MS Open Tech: Pointer Events initial prototype for WebKit


    From:

    Adalberto Foresti, Principal Program Manager, Microsoft Open Technologies, Inc.
    Scott Blomquist, Senior Development Engineer, Microsoft Open Technologies, Inc.

    It’s great to see that the W3C Pointer Events Working Group has expanded its membership and published the first Working Draft last week as part of the process to standardize a single input model across all types of devices. To further contribute to the technical discussions, today Microsoft Open Technologies, Inc., published an early open source HTML5 Labs Pointer Events prototype of the W3C Working Draft for WebKit. We want to work with the WebKit developer community to enhance this prototype. Over time, we want this prototype to implement all the features that will be defined by the W3C Working Group’s Pointer Events specification. The prototype will also help with interoperability testing with Internet Explorer.

    The Web today is fragmented into sites designed for only one type of input. The goal of a Pointer Events standard is to let Web developers code to one pointer input model across all types of devices and have that code work across multiple browsers. Google, Microsoft, Mozilla, Nokia and Zynga are among the industry members working to solve this problem in the W3C Pointer Events WG.

    Microsoft submitted the Pointer Events specification to the W3C just three months ago. The working group is using Microsoft’s Member submission as a starting point for the specification, which is based on the APIs available today in IE10 on Windows 8 and Windows Phone 8.

    Our team developed this Pointer Events prototype of the W3C Working Draft for WebKit as a starting point for testing interoperability between Internet Explorer and WebKit in this space. As we have done in the past on HTML5 Labs, the prototype is intended to inform discussions and provide information grounded in implementation experience. Please provide feedback on this initial implementation in the comments of this blog and in the WebKit mailing lists. We would also love to get some advice on how and when to submit this patch to the main WebKit trunk.

    Overall, we believe that we are on a solid path forward in this standardization process. In a short time, we have a productive working group, a first W3C Working Draft specification, and an early proof of concept for WebKit that should provide valuable insights. We’re looking forward to working closely with the community to develop this open source code in WebKit so we can start testing interoperability with Internet Explorer.

  • Interoperability @ Microsoft

    Breaking news: HTML 5.0 and Canvas 2D specification’s definition is complete!


    Today marks an important milestone for Web development, as the W3C announced the publication of the Candidate Recommendation (CR) versions of the HTML 5.0 and Canvas 2D specifications.

    This means that the specifications are feature complete: no new features will be added to the final HTML 5.0 or the Canvas2D Recommendations. A small number of features are marked “at risk,” but developers and businesses can now rely on all others being in the final HTML 5.0 and Canvas 2D Recommendations for implementation and planning purposes. Any new features will be rolled into HTML 5.1 or the next version of Canvas 2D.

    It feels like only yesterday that I published a previous post on HTML5’s progress toward a standard, when HTML5 reached “Last Call” status in May 2011. The W3C set an ambitious timeline to finish HTML 5.0, and this transition shows that it is on track. That makes me highly confident that HTML 5.0 can reach Recommendation status in 2014.

    The real-world interoperability of many HTML 5.0 features today means that further testing can be much more focused and efficient. As a matter of fact, the Working Group will use the “public permissive” criteria to determine whether a feature that is implemented by multiple browsers in an interoperable way can be accepted as part of the standard without expensive testing to verify.

    Work in this “Candidate Recommendation” phase will focus on analyzing current HTML 5.0 implementations, establishing priorities for test development, and working with the community to develop those tests. The WG will also look into the features tagged as “at risk” that might be moved to HTML 5.1 or the next version of Canvas2D if they don’t exhibit a satisfactory level of interoperability by the end of the CR period.

    At the same time, work on HTML 5.1 and the next version of Canvas2D is underway, and the W3C announced first working drafts that include features such as media and graphics. This work is on a much faster track than HTML5 has been, and 5.1 Recommendations are expected in 2016. The HTML Working Group will consider several sources of suggested new features for HTML 5.1. Furthermore, HTML 5.1 could incorporate the results of various W3C Community Groups such as the Responsive Images Community Group or the WHATWG. HTML 5.1 will use the successful approach that the CSS 3.0 family of specs has used to define modular specifications that extend HTML’s capabilities without requiring changes to the underlying standard. For example, the HTML WG already has work underway to standardize APIs for Encrypted Media Extensions, which would allow services such as Netflix to stream content to browsers without plugins, and Media Source Extensions to facilitate streaming content in a way that adapts to the characteristics of the network and device.

    Reaching Candidate Recommendation further indicates the high level of collaboration that exists in the HTML WG. I would especially like to thank the W3C Team and my co-chairs, Sam Ruby (IBM) and Maciej Stachowiak (Apple), for all their hard work in helping to get to CR. In addition, the HTML WG editorial team led by Robin Berjon deserves a lot of credit for finalizing the CR drafts and for their work on the HTML 5.1 drafts.

    /paulc

    Paul Cotton, Microsoft Canada
    W3C HTML Working Group co-chair

  • Interoperability @ Microsoft

    Lowering the barrier of entry to the cloud: announcing the first release of Actor Framework from MS Open Tech (Act I)


    From:

    Erik Meijer, Partner Architect, Microsoft Corp.

    Claudio Caldato, Principal Program Manager Lead, Microsoft Open Technologies, Inc.

     

    There is much more to cloud computing than running isolated virtual machines, yet writing distributed systems is still too hard. Today we are making progress towards easier cloud computing as ActorFx joins the Microsoft Open Technologies Hub and announces its first, open source release. The goal for ActorFx is to provide a non-prescriptive, language-independent model of dynamic distributed objects, delivering a framework and infrastructure atop which highly available data structures and other logical entities can be implemented.

    ActorFx is based on the idea of the Actor Model developed by Carl Hewitt, and further contextualized to managing data in the cloud by Erik Meijer in the paper that is the basis for the ActorFx project; you can also watch Erik and Carl discussing the Actor Model in this Channel9 video.

    What follows is a quick high-level overview of some of the basic ideas behind ActorFx. Follow our project on CodePlex to learn where we are heading and how it will help when writing the new generation of cloud applications.

    ActorFx high-level Architecture

    At a high level, an actor is simply a stateful service implemented via the IActor interface. That service maintains some durable state, and that state is accessible to actor logic via an IActorState interface, which is essentially a key-value store.


    There are a couple of unique advantages to this simple design:

    • Anything can be stored as a value, including delegates.  This allows us to blur the distinction between state and behavior – behavior is just state.  That means that actor behavior can be easily tweaked “on-the-fly” without recycling the service representing the actor, similar to dynamic languages such as JavaScript, Ruby, and Python.
    • By abstracting the IActorState interface to the durable store, ActorFx makes it possible to “mix and match” back ends while keeping the actor logic the same.  (We will show some actor logic examples later in this document.)

    ActorFx Basics

    The essence of the ActorFx model is captured in two interfaces: IActor and IActorState.

    IActorState is the interface through which actor logic accesses the persistent data associated with an actor; it is the interface implemented by the “this” pointer.

    public interface IActorState
    {
        void Set(string key, object value);
        object Get(string key);
        bool TryGet(string key, out object value);
        void Remove(string key);
        Task Flush(); // "Commit"
    }

    By design, the interface is an abstract key-value store.  The Set, Get, TryGet and Remove methods are all similar to what you might find in any Dictionary-type class, or a JavaScript object.  The Flush() method allows for transaction-like semantics in the actor logic; by convention, all side-effecting IActorState operations (i.e., Set and Remove) are stored in a local side-effect buffer until Flush() is called, at which time they are committed to the durable store (if the implementation of IActorState implements that).
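
    For illustration, here is a minimal sketch of actor logic relying on that convention (the method and key names are made up for this example; the deferred commit is the point of interest):

    [ActorMethod]
    public static object UpdateBalance(IActorState state, object[] parameters)
    {
        // Both side-effecting calls below land in the local side-effect buffer...
        state.Set("_balance", (int)parameters[0]);
        state.Remove("_pendingTransfer");

        // ...and are only committed to the durable store once Flush() is called.
        state.Flush().Wait();

        return null;
    }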

    The IActor interface

    An ActorFx actor can be thought of as a highly available service, and IActor serves as the computational interface for that service.  In its purest form, IActor would have a single “eval” method:

    public interface IActor
    {
        object Eval(Func<IActorState, object[], object> function, object[] parameters);
    }

    That is, the caller requests that the actor evaluate a delegate, accompanied by caller-specified parameters represented as .NET objects, against an IActorState object representing a persistent data store.  The Eval call eventually returns an object representing the result of the evaluation.

    Those familiar with object-oriented programming should be able to see a parallel here.   In OOP, an instance method call is equivalent to a static method call into which you pass the “this” pointer.  In the C# sample below, for example, Method1 and Method2 are equivalent in terms of functionality:

    class SomeClass
    {
        int _someMemberField;

        public void Method1(int num)
        {
            _someMemberField += num;
        }

        public static void Method2(SomeClass thisPtr, int num)
        {
            thisPtr._someMemberField += num;
        }
    }

    Similarly, the function passed to the IActor.Eval method takes an IActorState argument that can conceptually be thought of as the “this” pointer for the actor.  So actor methods (described below) can be thought of as instance methods for the actor.
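
    Putting that together, a direct Eval call might look like the following sketch (assuming an IActor reference named actor):

    object result = actor.Eval(
        delegate(IActorState state, object[] args)
        {
            // 'state' plays the role of the actor's "this" pointer.
            state.Set("greeting", "Hello, " + args[0]);
            return state.Get("greeting");
        },
        new object[] { "world" });

    // result is "Hello, world"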

    Actor Methods

    In practice, passing delegates to actors can be tedious and error-prone.  Therefore, the IActor interface calls methods using reflection, and allows for transmitting assemblies to the actor:

    public interface IActor
    {
        string CallMethod(string methodName, string[] parameters);
        bool AddAssembly(string assemblyName, byte[] assemblyBytes);
    }

    Though the Eval method is still an integral part of the actor implementation, it is no longer part of the actor interface (at least for our initial release).  Instead, it has been replaced in the interface by two methods:

    • The CallMethod method allows the user to call an actor method; it is translated internally to an Eval() call that looks up the method in the actor’s state, calls it with the given parameters, and then returns the result (see the conceptual sketch after this list).
    • The AddAssembly method allows the user to transport an assembly containing actor methods to the actor.
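
    Conceptually, that translation can be pictured like this (an illustration only, not the actual ActorFx implementation):

    public string CallMethod(string methodName, string[] parameters)
    {
        object result = Eval(
            delegate(IActorState state, object[] args)
            {
                // Look the method up in the actor's state, then invoke it.
                var method = (Func<IActorState, object[], object>)state.Get((string)args[0]);
                return method(state, (object[])args[1]);
            },
            new object[] { methodName, parameters });

        return result == null ? null : result.ToString();
    }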

    There are two ways to define actor methods:

    (1)   Define the methods directly in the actor service, “baking them in” to the service.

    myStateProvider.Set("SayHello",
        (Func<IActorState, object[], object>)
        delegate(IActorState astate, object[] parameters)
        {
            return "Hello!";
        });

    (2)   Define the methods on the client side.

            [ActorMethod]
            public static object SayHello(IActorState state, object[] parameters)
            {
                return "Hello!";
            }
    


    You would then transport them to the actor “on-the-fly” via the actor’s AddAssembly call.

    All actor methods must have identical signatures (except for the method name):

    • They must return an object.
    • They must take two parameters:
      • An IActorState object to represent the “this” pointer for the actor, and
      • An object[] array representing the parameters passed into the method.

    Additionally, actor methods defined on the client side and transported to the actor via AddAssembly must be decorated with the “ActorMethod” attribute, and must be declared as public and static.

    Publication/Subscription Support

    We wanted to be able to provide subscription and publication support for actors, so we added these methods to the IActor interface:

    public interface IActor
    {
        string CallMethod(string clientId, int clientSequenceNumber,
            string methodName, string[] parameters);
        bool AddAssembly(string assemblyName, byte[] assemblyBytes);
        void Subscribe(string eventType);
        void Unsubscribe(string eventType);
        void UnsubscribeAll();
    }

    As can be seen, event types are coded as strings.  An event type might be something like “Collection.ElementAdded” or “Service.Shutdown”.  Event notifications are received through the FabricActorClient.

    Each actor can define its own events, event names and event payload formats.  And the pub/sub feature is opt-in; it is perfectly fine for an actor to not support any events.
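
    As a sketch of what consuming an event might look like from the client side (using the FabricActorClient described later in this post, and the event type name from the example above):

    // Minimal observer that just prints event payloads.
    class ConsoleEventObserver : IObserver<string>
    {
        public void OnNext(string payload) { Console.WriteLine("Event: " + payload); }
        public void OnError(Exception error) { Console.WriteLine("Error: " + error.Message); }
        public void OnCompleted() { Console.WriteLine("No more events."); }
    }

    // 'client' is a connected FabricActorClient (see "The Actor Runtime Client" below).
    IDisposable subscription = client.Subscribe("Collection.ElementAdded", new ConsoleEventObserver());

    // Disposing the subscription stops the notifications.
    subscription.Dispose();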

    A simple example: Counter

    If you wanted your actor to support counter semantics, you could implement an actor method as follows:

        [ActorMethod]
        public static object IncrementCounter(IActorState state, object[] parameters)
        {
            // Grab the parameter
            var amountToIncrement = (int)parameters[0];

            // Grab the current counter value
            int count = 0; // default on first call
            object temp;
            if (state.TryGet("_count", out temp)) count = (int)temp;

            // Increment the counter
            count += amountToIncrement;

            // Store and return the new value
            state.Set("_count", count);
            return count;
        }

    Initially, the state for the actor would be empty.

    After an IncrementCounter call with parameters[0] set to 5, the actor’s state would look like this:

    Key         Value
    “_count”    5

    After another IncrementCounter call with parameters[0] set to -2, the actor’s state would look like this:

    Key         Value
    “_count”    3

    Pretty simple, right? Let’s try something a little more complicated.

    Example: Stack

    For a slightly more complicated example, let’s consider how we would implement a stack in terms of actor methods.  The code would be as follows:

            [ActorMethod]
            public static object Push(IActorState state, object[] parameters)
            {
                // Grab the object to push
                var pushObj = parameters[0];
     
                // Grab the current size of the stack
                int stackSize = 0; // default on first call
                object temp;
                if (state.TryGet("_stackSize", out temp)) stackSize = (int)temp;
    
                // Store the newly pushed value
                var newKeyName = "_item" + stackSize;
                var newStackSize = stackSize + 1;
                state.Set(newKeyName, pushObj);
                state.Set("_stackSize", newStackSize);
    
                // Return the new stack size
                return newStackSize;
            }
    
            [ActorMethod]
            public static object Pop(IActorState state, object[] parameters)
            {
                // No parameters to grab
    
                // Grab the current size of the stack
                int stackSize = 0; // default on first call
                object temp;
                if (state.TryGet("_stackSize", out temp)) stackSize = (int)temp;
    
                // Throw on attempt to pop from empty stack
            if (stackSize == 0)
                throw new InvalidOperationException("Attempted to pop from an empty stack");

            // Remove the popped value, update the stack size
            int newStackSize = stackSize - 1;
            var targetKeyName = "_item" + newStackSize;
            var retrievedObject = state.Get(targetKeyName);
            state.Remove(targetKeyName);
            state.Set("_stackSize", newStackSize);

            // Return the popped object
            return retrievedObject;
        }

        [ActorMethod]
        public static object Size(IActorState state, object[] parameters)
        {
            // Grab the current size of the stack, return it
            int stackSize = 0; // default on first call
            object temp;
            if (state.TryGet("_stackSize", out temp)) stackSize = (int)temp;
            return stackSize;
        }

    To summarize, the actor would contain the following items in its state:

    • The key “_stackSize” whose value is the current size of the stack.
    • One key “_itemXXX” corresponding to each value pushed onto the stack.

     

    After the items “foo”, “bar” and “spam” had been pushed onto the stack, in that order, the actor’s state would look like this:

    Key             Value
    “_stackSize”    3
    “_item0”        “foo”
    “_item1”        “bar”
    “_item2”        “spam”

    A pop operation would yield the string “spam”, and leave the actor’s state looking like this:

    Key             Value
    “_stackSize”    2
    “_item0”        “foo”
    “_item1”        “bar”

    The Actor Runtime Client

    Once you have actors up and running in the Actor Runtime, you can connect to those actors and manipulate them via the FabricActorClient. This is the FabricActorClient’s interface:

    public class FabricActorClient
    {
        public FabricActorClient(Uri fabricUri, Uri actorUri, bool useGateway);
        public bool AddAssembly(string assemblyName, byte[] assemblyBytes,
            bool replaceAllVersions = true);
        public Object CallMethod(string methodName, object[] parameters);
        public IDisposable Subscribe(string eventType,
            IObserver<string> eventObserver);
    }

    When constructing a FabricActorClient, you need to provide three parameters:

    • fabricUri: This is the URI associated with the Actor Runtime cluster on which your actor is running.  When in a local development environment, this is typically “net.tcp://127.0.0.1:9000”. When in an Azure environment, this would be something like “net.tcp://<yourDeployment>.cloudapp.net:9000”.
    • actorUri: This is the URI, within the ActorRuntime, that is associated with your actor.  This would be something like “fabric:/actor/list/list1” or “fabric:/actor/adhoc/myFirstActor”.
    • useGateway: Set this to false when connecting to an actor in a local development environment, true when connecting to an Azure-hosted actor.

    The AddAssembly method allows you to transport an assembly to the actor. Typically that assembly would contain actor methods, effectively adding behavior to the actor or changing its existing behavior. Take note that the “replaceAllVersions” parameter is currently ignored.
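
    Putting it all together, a client session might look like the following sketch (the URIs follow the examples above; the assembly file and method name are hypothetical):

    // Connect to an actor in a local development environment.
    var client = new FabricActorClient(
        new Uri("net.tcp://127.0.0.1:9000"),         // Actor Runtime cluster
        new Uri("fabric:/actor/adhoc/myFirstActor"), // the actor itself
        false);                                      // useGateway = false for local development

    // Ship an assembly of [ActorMethod] functions to the actor...
    byte[] assemblyBytes = File.ReadAllBytes("MyActorMethods.dll");
    client.AddAssembly("MyActorMethods", assemblyBytes);

    // ...and invoke one of them (for example, the counter method shown earlier).
    object newCount = client.CallMethod("IncrementCounter", new object[] { 5 });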

    What’s next?

    This is only the beginning of a journey. The code we are releasing today is an initial, basic framework that can be used to build a richer set of functionality that will make ActorFx a valuable solution for storing and processing data in the cloud. For now, we are starting with a playground for developers who want to explore how this new approach to data storage and management in the cloud can offer a new way to look at old problems. We will keep you posted on this blog, and you are of course more than welcome to follow our open source projects on our MSOpenTech CodePlex page. See you there!

  • Interoperability @ Microsoft

    New MS Open Tech Prototype of the HTTP/2.0 initial draft in an Apache HTTP server module


    We continue to see good momentum within the HTTP/2.0 Working Group (IETF 85 meeting) toward identifying suitable technical answers for the seven key areas of discussion, which we had identified back in August, including an update to the HTTP/2.0 Flow Control Principles draft, which Microsoft co-authored with Google and Ericsson.

    Through our continuing support of HTTP/2.0 standardization through code, we have made some updates to our prototypes and just posted them on HTML5 Labs. We have moved from the Node.js implementation used server-side by our earlier prototypes to a modified implementation of an existing Apache module, which we are making available in the associated patch.

    In this latest iteration, we have made three changes in particular to advance discussions on the HTTP/2.0 initial draft and thinking around interoperable implementations:

    Negotiation: we have improved upon our initial implementation of the protocol upgrade that we released last month, supporting the scenario where the server does not accept a protocol upgrade.

    Flow Control: our prototype uses an infinite Window Update size, which is effectively the simplest possible implementation and can be expected to be chosen for many real-world deployments, e.g. by specialized devices for the “Internet of Things.”

    Server push: we have implemented a behavior on the client that resets connections upon receipt of unrequested data from the server. This is particularly important where push might be especially unwelcome on mobile/low bandwidth connections.

    This iteration continues to demonstrate our ongoing commitment to the HTTP/2.0 standardization process. Throughout this journey, we have honored the tenets that we stated in earlier updates:

    • Maintain existing HTTP semantics.
    • Maintain the integrity of the layered architecture.
    • Use existing standards when available to make it easy for the protocol to work with the current Web infrastructure.
    • Be broadly applicable and flexible by keeping the client in control of content.
    • Account for the needs of modern mobile clients, including power efficiency, support for HTTP-based applications, and connectivity through tariffed networks.

    These tenets will continue to inform the direction of both our proposals to the IETF and of our engineering efforts.

    Please try out the prototype, give us feedback and we’ll keep you posted on next steps in the working group. We will also follow up soon with test data resulting from our work on this code.

    As we have stated throughout this process, we’re excited for the Web to get faster and more capable. HTTP/2.0 is an important part of that progress and we look forward to improving on the HTTP/2.0 initial draft in collaboration with our fellow working group participants and the Web community at large as we aim for an HTTP/2.0 that meets the needs of the entire Web, including browsers, apps, and mobile devices.

    Adalberto Foresti
    Principal Program Manager
    Microsoft Open Technologies, Inc.
    A subsidiary of Microsoft Corporation

  • Interoperability @ Microsoft

    Microsoft Open Technologies releases Windows Azure support for Solr 4.0


    Microsoft Open Technologies is pleased to share the latest update to the Windows Azure self-deployment option for Apache Solr 4.0.

    Solr 4.0 is the first release to use the shared 4.x branch for Lucene & Solr and includes support for SolrCloud functionality. SolrCloud allows you to scale a single index via replication over multiple Solr instances running multiple SolrCores for massive scaling and redundancy.

    To learn more about Solr 4.0, have a look at this 40-minute video covering Solr 4 highlights, by Mark Miller of LucidWorks, from Apache Lucene Eurocon 2011.

    To download and install Solr on Windows Azure, visit our GitHub page to learn more and download the SDK.

    Another alternative for implementing the best of Lucene/Solr on Windows Azure is provided by our partner LucidWorks. LucidWorks Search on Windows Azure delivers a high-performance search solution that enables quick and easy provisioning of Lucene/Solr search functionality without any need to install, manage or operate Lucene/Solr servers, and it supports pre-built connectors for various types of enterprise data, structured data, unstructured data and web sites.

  • Interoperability @ Microsoft

    Windows Azure Authentication module for Drupal using WS-Federation


    At Microsoft Open Technologies, Inc., we’re happy to share the news that single sign-on with Windows Live IDs and/or Google IDs is now available for Drupal Web sites hosted on Windows Azure. Users can now log in to your Drupal site using Windows Azure’s WS-Federation-based login system with their Windows Live or Google ID. Simple Web Tokens (SWT) are supported, and SAML 2.0 support is planned but not yet available.

    Setup and configuration are easy via your Windows Azure account administrator UI. Setup details are available via the Drupal project sandbox here, and full details of setup are here.

    Under the hood, WS-Federation is used to identify and authenticate users and identity providers. WS-Federation extends WS-Trust to provide a flexible federated identity architecture with clean separation between trust mechanisms (in this case Windows Live and Google), security token formats (in this case SWT), and the protocol for obtaining tokens.

    The Windows Azure Authentication module acts as a relying party application to authenticate users. When downloaded, configured and enabled on your Drupal Web site, the module:

    • Makes a request via the Drupal Web site for supported identity providers
    • Displays a list of supported identity providers with authentication links
    • Provides a return URL for authentication, parsing and validating the returned SWT
    • Logs the user in or directs the user to register

  • Interoperability @ Microsoft

    Using LucidWorks on Windows Azure (Part 2 of a multi-part MS Open Tech series)


    LucidWorks Search on Windows Azure delivers a high-performance search service based on Apache Lucene/Solr open source indexing and search technology. This service enables quick and easy provisioning of Lucene/Solr search functionality on Windows Azure without any need to manage and operate Lucene/Solr servers, and it supports pre-built connectors for various types of enterprise data, structured data, unstructured data and web sites.

    In June, we shared an overview of the LucidWorks Search service for Windows Azure, and in our first post in this series we provided more detail on features and benefits. For this post, we’ll start with the main feature of LucidWorks – quickly creating a LucidWorks instance by selecting LucidWorks from the Azure Marketplace and adding it to an existing Azure Instance. It takes a few clicks and a few minutes.

    Signing up

    LucidWorks Search is listed under applications in the Windows Azure Marketplace. To set up a new instance of LucidWorks on Windows Azure, just click on the Learn More button.

    That takes you to the LucidWorks account signup page. From here, you select a plan based on the type of storage being used and the number of documents to index. There are currently four plans available: Micro, which has no monthly fee; Small and Medium, which have pre-set fees; and Large, which is negotiated directly with LucidWorks based on several parameters. All of the account levels have fees for overages, and the option to move to the next tier is always available via the account page.

    The plans are differentiated by document limits in indexes, the number of queries that can be performed per month, the frequency with which indexes are updated, and index targets. Index targets are the types of content that can be indexed: for Micro, only websites can be indexed; for the larger plans, files, RDBMS, and XML content can also be indexed. For Large instances, ODBC data drivers can be used to make content available to indexes.


    Once the plan is selected, enter your information, including billing information.

    Once the payment is processed (or, in the case of Micro, no payment is needed), a new instance is generated and you’re redirected to an account page and invited to start building collections!

    Configuration


    In the next part of the series we’ll cover setting up collections in more detail; for now, let’s cover the account settings and configuration, starting with the main screen for collections.

    The first thing you see is the Access URL options. You can access your collections via the Solr or REST API, and this is where you get the predefined URL for either. When you drill down into a collection, you see a status screen first.

    This shows you the index size and stats about modification, queries per second, and updates per second, displayable by the last hour, day or week. This screen is also where you can see the most popular queries.

    Data Sources

    If you are managing external data sources, this is where you configure them, via the Manage Data Sources button.

    From here you can select a new data source from the drop-down. The list in this drop-down is current as of this writing and may change over time; check here for more information on currently supported data sources.

    Indexing

    The Indexing Settings are the next thing to manage in your LucidWorks on Azure account.

    Indexing Settings

    De-duplication manages how duplicate documents are handled. (As we discussed in our first post, any individual item that is indexed and/or searched is called a document.) Off ignores duplicates, Tag identifies duplicates with a unique tag, and Overwrite replaces duplicate documents with new documents when they are indexed. Remember that de-duplication only applies to the indexes of data, not the data itself – only the indexed reference to the document is de-duplicated – so duplicates will still exist in the source data even if data in the indexes has been de-duplicated. Duplicates are determined based on key fields that you set in the fields editing UI.

    Default Field Type is used for setting the type of data for fields whose type LucidWorks cannot determine using its built-in algorithms.

    Auto-commit and Auto-soft commit settings determine when the index will be updated. Max time is how long to wait before committing, and max docs is how many documents are collected before a commit. Soft commits are used for real time searching, while regular commits manage the disk-stored indexes.

    Activities manage the configuration of indexes, suggested autocomplete entries, and user result click logging.

    Full documentation of indexing settings can be found here.

    Field Settings

    Field Settings allow configuration of each field in the index. Fields are automatically defined by data extraction and have been indexed.

    Field types defined by LucidWorks have been optimized for most types of content and should not generally be changed. The other settings need to be configured once the index has run and defined your fields.

    For example, a URL field would be a good candidate for de-duplication, and you may want to index it for autocomplete as well. You can also indicate on Field Settings whether you want to display URLs in search results. Here is full documentation of Field Settings.

    Other Indexing Settings

    Dynamic Fields are almost the same as fields, but are created or modified when the index is created. For example, adding a value before or after a field value, or adding one or more fields together to form a single value.

    Field Types is where you add custom field types in addition to the default field types created by your LucidWorks installation.

    Schedules is where you add and view schedules for indexing.

    Querying

    Querying Settings is where you can edit the configuration for how queries are conducted.

    The Default Sort sets results to be sorted by relevance, date, or random.

    There are four query parsers available out of the box for LucidWorks: a custom LucidWorks parser, as well as standard Lucene, dismax and extended dismax. More information on the details of each parser is available here.

    Unsupervised feedback resubmits the query using the top 5 results of the initial query to improve results.

    This is also where you configure the rest of your more familiar query behavior, like where stop words will be used, auto complete, and other settings, the full details of which are here.

    Next up: Creating custom Web site Search using LucidWorks.

    In the next post in the series, we’ll demonstrate setting up a custom Web site that integrates LucidWorks Search, and the configuration settings we use to optimize search for that site. After that, in future posts we’ll discuss tips and tricks for working with specific types of data in LucidWorks.

    Brian Benz
    Senior Technical Evangelist
    Microsoft Open Technologies, Inc.
