• mwinkle.blog

    Tracking Objects and Querying For Them


    A sample that I like to give during WF talks is that the tracking service can be used to track things at an event level ("Workflow started") as well as things at a very granular level ("What is the Purchase Order amount?"). 

    I wanted to put together a sample showing these, and you can find the file on netfx3.com

    In order to track things, we first need to specify the tracking profile.  This tells the workflow runtime which events and pieces of data we are interested in collecting while the workflow executes.  I think that the xml representation of the tracking profile is pretty readable, but there is a tool that ships in the Windows SDK designed to make this even easier.  The tool can be found after extracting the workflow samples to a directory under \Technologies\Applications\TrackingProfileDesigner.  This tool will let you open up a workflow by pointing at its containing assembly and then design a tracking profile.  It will deploy the tracking profile to the database for you, but I borrowed some code from another sample that shows the same functionality.  The tool allows you to specify workflow level events as well as activity level events, and allows you to designate what information you would want to extract and specify the annotation as well.

    The output of the designer is an xml file, which we will edit further by hand.  The important parts to look at are the tracking points, that is, where and when do we extract what data.


                      <Type>TrackingObjectsSample.CreditCheck, TrackingObjectsSample, Version=, Culture=neutral, PublicKeyToken=null</Type>
                  <Annotation>data extracted from activity</Annotation>

    Here we specify the tracking location, in this case the CreditCheck activity, as well as the events we wish to listen for, and finally what data we want to extract.  I like to point out in the extracts section that we can take advantage of dot-notation to traverse an object hierarchy and get to the specific pieces of data we wish to extract.  If we don't get down to the simple types, the workflow runtime will attempt to serialize the object and store it as a binary blob (so make sure you mark the types you want tracked as serializable).  This is what is being done with IncomingOrder above.  This will allow us to bring back IncomingOrder, but we won't be able to query against that blob, which is why we might also extract OrderTotal in order to generate a report of high-value orders.

    When we track the information it will get stored to SQL (assuming we are using the SqlTrackingService).  There are a number of views we could use to build queries on top of.  The one I want to point out is the vw_TrackingDataItem view, which contains all of the tracking data items (so it is well named :-) ).  The FieldTypeId references back to vw_Type, where you will see the different types of objects and their respective assemblies which are being tracked.  This lets the consumer know what type of object they may need to instantiate if they wish to consume the object.  But what about querying on that object in plain old SQL?  Well, there is a column in the view designed for that.  The Data_Str column will show a string representation of the tracked item, so in the case of numbers and strings and other basic types, we will be able to query.  In the case of complex types, the name of the type will appear.  This is basically the ToString of the object being tracked.  The Data_Blob column contains the binary representation of the objects in order to restore them into objects inside of .NET code.
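    As a sketch of the kind of report this enables (the view and Data_Str column are described above; the connection string, the "OrderTotal" field name, and the threshold are assumptions based on this sample):

```csharp
// Query high-value orders directly from the tracking store.
// vw_TrackingDataItem and Data_Str come from the SQL tracking schema;
// the FieldName value and the 1000 threshold are sample assumptions.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    SqlCommand cmd = new SqlCommand(
        @"SELECT WorkflowInstanceInternalId, Data_Str
          FROM vw_TrackingDataItem
          WHERE FieldName = 'OrderTotal'
            AND CAST(Data_Str AS money) > 1000", conn);
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine("Instance {0}: total {1}", reader[0], reader[1]);
    }
}
```

    Because Data_Str is only a string representation, the CAST is what makes the numeric comparison possible; for complex types you would fall back to Data_Blob and deserialize in code.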

    So, back to the task at hand, the sample.  A simple workflow has been created that takes an Order object.  The workflow doesn't do much, although the last code activity modifies a value of the Order object.  We do this to show that the changed values can be tracked over time.  The workflow runtime uses the SqlTrackingService, and before this all starts the tracking profile is inserted into the database using the InsertTrackingProfile method that is used in the SDK (I grabbed it from the UserTrackPoints sample: \Technologies\Tracking\UserTrackPoints).  The workflow then executes.  Clicking on the "Get Tracking Info" button will then use the SqlTrackingQuery to get a SqlTrackingWorkflowInstance, an object representation of all of the available tracking information.  We then iterate over the various event sources in the SqlTrackingWorkflowInstance, such as the workflow events and the activity events, and place them into a TreeView.  This could easily be extended to include user track points as well.  The code that does this is below:


    SqlTrackingQuery query = new SqlTrackingQuery(connectionString);
    SqlTrackingWorkflowInstance sqlTrackingWorkflowInstance;
    if (query.TryGetWorkflow(new Guid(WorkflowInstanceIdLabel.Text),
                out sqlTrackingWorkflowInstance))
    {
        TreeNode tn1 = treeView1.Nodes.Add(string.Format("Activity Events ({0})",
            sqlTrackingWorkflowInstance.ActivityEvents.Count));
        foreach (ActivityTrackingRecord activityTrackingRecord in sqlTrackingWorkflowInstance.ActivityEvents)
        {
            // do something here
        }
    }

    It is useful to place a breakpoint in the foreach loop in order to inspect the different aspects of the ActivityTrackingRecord and the other tracking objects that live inside of SqlTrackingWorkflowInstance.  By looking at the TrackingDataItems that are placed in ActivityTrackingRecord.Body, we can find the field name as well as its value, which could be an object.  In break mode, the debugger will let you browse through the serialized object that is present inside TrackingDataItem.Data.
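    Instead of a breakpoint, the same information can be dumped from the loop body.  A sketch (the Console output format is mine; FieldName and Data are the TrackingDataItem properties discussed above):

```csharp
// Inside the foreach over sqlTrackingWorkflowInstance.ActivityEvents:
// each record carries the extracted TrackingDataItems in its Body collection.
foreach (TrackingDataItem item in activityTrackingRecord.Body)
{
    // item.Data is the tracked value: a string, a number, or a full
    // object graph if the tracked type was marked serializable.
    Console.WriteLine("{0}.{1} = {2}",
        activityTrackingRecord.QualifiedName, item.FieldName, item.Data);
}
```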

    Link to sample code

  • mwinkle.blog

    WF4, WCF and AppFabric Sessions @ PDC09


    I’m taking this week off to catch up on everything I haven’t done the last two months and to celebrate the Thanksgiving holiday here in the US.

    PDC was a blast!  It was incredibly awesome to meet so many folks interested in WF and to talk with others about how they are using (or plan to use) WF4.  I’ll be following up with a more detailed post on my talk, including demos and code, but I wanted to give a summary of the talks that came from my team at this PDC.  Below is the diagram that breaks down some of the “capabilities” of AppFabric, and I have color coded them for the various talks that we gave.  All of the PDC Sessions are available online here.


    WF Talks

    WCF Talks

    AppFabric Talks

    There’s a ton of great content up at the PDC site, plenty to sit back and enjoy! 

  • mwinkle.blog

    VS 2008 Beta 2 Shipped : 0 to Workflow Service in 60 seconds


    So, per Soma's blog, this great Channel9 video, and a bunch of other places, VS 2008 Beta 2 is available for download (go here).

    Others are covering their favorite feature in depth; I want to cover one of mine: the WCF test client, which I will show by way of creating a Workflow Service application.

    Real quick, for those of you who didn't read the readme file (I know, sometimes you just forget), there is an important note regarding how to get this to work (out of the box you will probably get an exception in svcutil.exe).

    Running a WCF Service Library results in svcutil.exe crashing and the test form not working

    Running a WCF Service Library starts the service in WcfSvcHost and opens a test form to debug operations on the service.  On the Beta2 build this results in crash of svcutil.exe and the test form doesn’t work due to a signing problem.

    To resolve this issue

    Disable strong name signing for svcutil.exe by opening a Visual Studio 2008 Beta2 Command Prompt. At the command prompt run: sn -Vr "<program files>\Microsoft SDKs\Windows\v6.0A\Bin\SvcUtil.exe"  (replace <program files> with your program files path - ex: c:\Program Files)

    Fire up VS 2008, create a new Sequential Workflow Service Library project:


    This creates a basic Sequential workflow with a Receive activity


    It also creates an app.config and IWorkflow1.cs


    IWorkflow1.cs contains the contract our service is going to implement:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel;

    namespace WFServiceLibrary1
    {
        // NOTE: If you change the interface name "IWorkflow1" here,
        // you must also update the reference to "IWorkflow1" in App.config.
        [ServiceContract]
        public interface IWorkflow1
        {
            [OperationContract]
            string Hello(string message);
        }
    }

    Now, we can modify this as needed, or we can delete it and create the contract as part of the Receive activity, see my previous post here on the topic.

    Return to the workflow and take a quick look at the properties of the Receive activity, and note that the parameters for the method (message and (returnValue)) have already been promoted and bound as properties on the workflow, which saves us a quick step or two:


    Drop a code activity in the Receive shape, and double click to enter some code:


    private void codeActivity1_ExecuteCode(object sender, EventArgs e)
    {
        returnValue = String.Format("You entered '{0}'.", inputMessage);
    }

    Now, we're pretty much there, but let's take a quick look at the app.config

    <service name="WFServiceLibrary1.Workflow1" behaviorConfiguration="WFServiceLibrary1.Workflow1Behavior">
      <host>
        <baseAddresses>
          <add baseAddress="http://localhost:8080/Workflow1" />
        </baseAddresses>
      </host>
      <endpoint address=""
                binding="wsHttpContextBinding"
                contract="WFServiceLibrary1.IWorkflow1" />
      <endpoint address="mex"
                binding="mexHttpBinding"
                contract="IMetadataExchange" />
    </service>

    We're going to use the wsHttpContextBinding, which you can think of as the standard wsHttpBinding with the addition of the Context channel to the channel stack.  Also note, we can right click the config and open it in the WCF config editor, slick!


    Let's hit F5.  We build, do a little bit of processing and up pops the WCF test client.  You may also note this little pop up from your task tray:


    What's this?  It's the "autohosting" of your service, just like you get with ASP.NET.  This saves me the trouble of having to write a host as well as my service when I just want to play around a bit.  The test client looks like this:


    Double click on the hello operation and fill in a message to send to the service:


    Clicking "Invoke" will invoke the service, which will soon return with the value we hope to see.  Sure enough, after a bit of chugging along, this returns:


    Finally, let's hit the XML tab to see what's in there, and we see it is the full XML of the request and the response.  There's an interesting little tidbit in the header of the response:

    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" ...>
      ...
      <a:Action s:mustUnderstand="1" u:Id="_2">...</a:Action>
      <a:RelatesTo u:Id="_3">urn:uuid:3f5b7eb5-cc35-4b01-b345-92f6edf728d7</a:RelatesTo>
      <Context u:Id="_4" xmlns="http://schemas.microsoft.com/ws/2006/05/context">
        ...
      </Context>
      ...
    </s:Envelope>

    This is the context token that lets me know how to continue conversing with this workflow.  In the test client, I can't find a way to attach it to a subsequent request, meaning we can't use the test client for testing multiple steps through a workflow, but this new feature lets me get up and running, verify connectivity, and be able to set breakpoints and debug my workflow service, which is pretty cool.
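    While the test client can't attach the token, your own client code can.  A sketch, assuming a generated Workflow1Client proxy and a savedContext dictionary captured from an earlier response (IContextManager is the channel property the context bindings expose for this):

```csharp
// Continue a conversation with an existing workflow instance.
// Workflow1Client is the proxy svcutil would generate for this service;
// savedContext is the context captured from an earlier response,
// e.g. { "instanceId", "<guid>" }.
Workflow1Client client = new Workflow1Client();
IContextManager contextManager =
    client.InnerChannel.GetProperty<IContextManager>();
contextManager.SetContext(savedContext);

// This call now routes to the same workflow instance.
string reply = client.Hello("second message");
```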

    I've posted the following video on c9 as a screencast, which I will try to do with my subsequent blog postings.

  • mwinkle.blog

    Sharing Functionality Between Designers (WF4 EditingContext Intro Part 2)


    This is part 2 of my 6-part series on the EditingContext.



    We will need a custom activity, EmptyOne, and a designer called InteractWithServicesDesigner.

    using System.Activities;
    using System.ComponentModel;

    namespace blogEditingContext
    {
        public sealed class EmptyOne : CodeActivity
        {
            // Define an activity input argument of type string
            public InArgument<string> Text { get; set; }

            // If your activity returns a value, derive from CodeActivity<TResult>
            // and return the value from the Execute method.
            protected override void Execute(CodeActivityContext context)
            {
                // Obtain the runtime value of the Text input argument
                string text = context.GetValue(this.Text);
            }
        }
    }

    What We Will See

    The designers for Foo will leverage a new service in order to display a list of database tables.  We will also need to publish this service to the editing context, and handle the fact that we don’t know who might publish it (or when it might be published).  Note that in VS, there is no way to inject services except by having an activity designer do it.   In a rehosted app, the hosting application could publish additional services (see part 4) that the activities can consume.  In this case though, we will use the activity designer as our hook.

    Publishing a Service

    Let’s look at the designer for Foo (as Foo is our generic and relatively boring activity).

    <sap:ActivityDesigner x:Class="blogEditingContext.InteractWithServicesDesigner" ...>
        <StackPanel>
            <ListBox Height="100" Name="listBox1" Width="120" />
            <Button Name="button1" Click="Button_Click">Publish Service</Button>
        </StackPanel>
    </sap:ActivityDesigner>

    Not much to this, except a list box that is currently unbound (but a name is provided).  Also note that there is a button labeled "Publish Service".  Let’s first look at the code for the button click:

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        if (!this.Context.Services.Contains<ISampleService>())
        {
            this.Context.Services.Publish<ISampleService>(new SampleServiceImpl());
        }
    }

    What are we doing here?  We first check if this service is already published using the Contains method.  We can do this because ServiceManager implements IEnumerable<Type>.

    [update, finishing this sentence.]  One could also consume the service using GetService<TResult>.  You may also note that there is a GetRequiredService<T>.  This is a call that we know won’t return null, as the services we are requesting must be there for the designer to work.  Rather than returning null, it will throw an exception if the service is missing.  Within the designer, we generally think of one service as required.

    Let’s look at the definition of the service.  Here you can see that we are using both an interface and then providing an implementation of that interface.  You could just as easily use an abstract class, or even a concrete class, there is no constraint on the service type.

    using System.Collections.Generic;

    namespace blogEditingContext
    {
        public interface ISampleService
        {
            IEnumerable<string> GetDropdownValues(string DisplayName);
        }

        public class SampleServiceImpl : ISampleService
        {
            public IEnumerable<string> GetDropdownValues(string DisplayName)
            {
                return new string[] {
                    DisplayName + " Foo",
                    DisplayName + " Bar",
                    "Baz " + DisplayName
                };
            }
        }
    }

    If there is not a service present, we will publish an instance of one.  This becomes the singleton instance for any other designer that may request it.  Right now, we have a designer that can safely publish a service.  Let’s look at consuming one.

    Consuming a Service

    Let’s look at some code to consume the service.  There are two parts to this.  One is simply consuming it, which we already saw above in discussing GetService and GetRequiredService .  The second is hooking into the notification system to let us know when a service is made available.  In this case, it’s a little contrived, as the service isn’t published until the button click, but it’s good practice to use the subscription mechanism as we make no guarantees on ordering, or timing of service availability.

    Subscribing to Service

    Here, using the Subscribe<TServiceType> method, we wait for the service to be available.  The documentation summarizes this method nicely:

    Invokes the provided callback when someone has published the requested service. If the service was already available, this method invokes the callback immediately.

    In the OnModelItemChanged method, we will subscribe and hook a callback.  The callback’s signature is as follows:

    public delegate void SubscribeServiceCallback<TServiceType>(
        TServiceType serviceInstance
    );
    As you can see, in this callback, the service instance is provided, so we can query it directly.  You may ask, “why not in Initialize?”  Well, there are no guarantees that the editing context will be available at that point.  We could either subscribe to the context being made available, or just use OnModelItemChanged:

    protected override void OnModelItemChanged(object newItem)
    {
        if (!subscribed)
        {
            this.Context.Services.Subscribe<ISampleService>(
                servInstance =>
                {
                    listBox1.ItemsSource = servInstance.GetDropdownValues(
                        this.ModelItem.Properties["DisplayName"].ComputedValue.ToString());
                    button1.IsEnabled = false;
                });
            subscribed = true;
        }
    }

    This wraps up a basic introduction to the ServiceManager type and how to leverage it effectively to share functionality between designers.

    Let’s look at a before and after shot in the designer:

    Before & After




    What about Items?

    Items follow generally the same Get and Subscribe pattern, but rather than Publish, there is a SetValue method.  If you have “just data” that you would like to share between designers (or between the host and the designer), an Item is the way to go about that.  The most commonly used item we’ve seen customers use is the Selection item, in order to get or set the currently selected model item.
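    A sketch of the Items pattern from inside an activity designer (Selection and its PrimarySelection property are part of the WF4 designer API; treat the exact shapes here as something to verify against the documentation):

```csharp
// Items: Get/Subscribe like Services, but SetValue instead of Publish.
void TrackSelection()
{
    // React whenever the Selection item changes.
    this.Context.Items.Subscribe<Selection>(selection =>
    {
        ModelItem current = selection.PrimarySelection;
        // ... update this designer's UI for the newly selected item ...
    });

    // Setting an item's value is the Items equivalent of Publish:
    // here, make this designer's ModelItem the current selection.
    this.Context.Items.SetValue(new Selection(this.ModelItem));
}
```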

    That’s our tour of basic publish and subscribe with Services and Items.



    [updated 12/22/2009 @ 10:23 am to finish an unclear sentence about GetService<>]

    [updated 12/22/2009 @ 8:50pm : Link to download sample code is here]

    Attachment(s): blogEditingContext.zip

  • mwinkle.blog

    Advanced Workflow Service Talk (Demo 4 of 4)


    When we start doing this two-way style of messaging, we open up the ability to model some interesting business problems.  In the previous post, you'll note that I did not include the code, because I mentioned we needed to be more clever in scenarios where we listen in parallel.

    First, a brief diversion into how the Receive activity works.  Everybody remembers the workflow queues, the technology that underlies all communication between a host and a workflow instance.  The Receive activity works by creating a queue that the WorkflowServiceHost (specifically the WorkflowOperationInvoker) will use to send the message received off the wire into the workflow.  Now, the Receive activity normally just creates a queue named the same as the operation the Receive activity is bound to.  However, if we have two Receive activities listening for the same operation at the same time, a single queue is no longer enough, as we need to route each message to the correct Receive activity instance.

    There is a property on the Receive activity called ContextToken.  In the simple case this is normally null.  However, when we want our Receive activity to operate in parallel, we need to indicate that it needs to be smarter when it creates a queue.


    By setting this property (you can just type in a name) and then selecting the common owner that all of the parallel Receives share, you tell the Receive activity to create a queue named [OperationName]+[ConversationId].  The conversation ID takes the form of a GUID, and is the second element inside a context token.

    The sample that I show for this talk is simply the conversations sample inside the SDK.  This is the sample to check out to understand all sorts of interesting ways to use the context tokens to model your processes.


    Conversations Sample Architecture


    Now, there are two conversation patterns here.  One is the one shown above, which I refer to as an n-party conversation where n is fixed at design time.  We can accomplish this with the Parallel activity.  The other is where n is arbitrary (imagine you send out to business partners stored in a database).  The way to do this is to use the Replicator activity.  The Replicator is a little-known gem shipped in 3.0 that essentially gives you "ForEach" semantics.  But, by flipping the ExecutionType switch to Parallel, I get the behavior of a Parallel activity, but operating over an arbitrary n branches.
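    The Replicator configuration can be sketched in code (the Initialized handler setting InitialChildData is the usual WF 3.x pattern; LoadPartnersFromDatabase is a hypothetical helper for this scenario):

```csharp
// Fan out one child branch per business partner, all running in parallel.
ReplicatorActivity replicator = new ReplicatorActivity();
replicator.ExecutionType = ExecutionType.Parallel;

// One child instance is created per element of InitialChildData;
// LoadPartnersFromDatabase (hypothetical) supplies the arbitrary-n list.
replicator.Initialized += (sender, e) =>
{
    replicator.InitialChildData = LoadPartnersFromDatabase();
};
```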

    So, in order to enable conversations, we need to tell our receive activity to be a little smarter about how it generates its queue name, and then we simply follow the duplex pattern we discussed in the last two posts.  Once we do that, we're in good shape to start modeling some more interesting communication patterns between multiple parties. 

    Where can we go from here?

    We can just make the patterns more interesting.  One interesting one would be the combination of the long running work with cancellation and a Voting activity in order to coordinate the responses and allow for progress to be made when some of the branches complete (if I have 3 yes votes, I can proceed).  The power of building composite activities is that it gives me a uniform programming model (and a single threaded one to boot) in order to handle the coordination of different units of work.  Get out there and write some workflows :-)

  • mwinkle.blog

    Using the Rules Engine outside of Workflow


    Moustafa has put together a fantastic sample application that shows how you can use the Rules Engine in Windows Workflow Foundation to execute rulesets against any target object.  In this example, the ruleset prepopulates fields in a Windows Form, validates entries as well as performs calculations.  This example shows the application obtaining the ruleset from an external database, allowing an easy way to separate the logic expressed in the ruleset from the compiled code that makes up the form.  In this way you can alter the specifics of the business logic of your application, say a pricing policy, without diving into the code of the application.
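    The core of that approach can be sketched in a few lines using the WF rules APIs (RuleValidation, RuleExecution, RuleSet.Execute); the OrderForm target type and the deserialized ruleSet variable here are assumptions standing in for the sample's own types:

```csharp
// Execute a WF ruleset against an arbitrary CLR object, no workflow required.
// 'ruleSet' would be deserialized from the external store (e.g. a database);
// 'form' is the target object the rules were authored against (assumed type).
RuleValidation validation = new RuleValidation(typeof(OrderForm), null);
if (!ruleSet.Validate(validation))
    throw new InvalidOperationException("RuleSet is not valid for OrderForm.");

RuleExecution execution = new RuleExecution(validation, form);
ruleSet.Execute(execution);   // rules read and update 'form' directly
```

    Because only the target type is compiled in, swapping the ruleset in the database changes the business logic without touching the application code.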

    If you're interested in learning more about the Rules Engine, these other samples are available on the community site.

    Also, Jurgen has written an excellent article on MSDN discussing the Rules Engine.

  • mwinkle.blog

    You say XAML, I say XOML, PoTAYto, PoTAHto, let's call the whole thing off


    With all due respect to George and Ira Gershwin, I have a quick question for the readers of this blog.  In V1, we have an interesting scenario that gets talked about frequently, and that's the file extension of our xml form of workflow.

    When we debuted at PDC05, there existed an XML representation of the workflow which conformed to a schema that the WF team had built, and it was called XOML.  Realizing that WPF was doing the same thing to serialize objects nicely to XML, we moved to that (XAML), but the file extensions had been cast in stone due to VS project setups.  So, we had XAML contained in a XOML file.

    Is this a problem for you?  I could see three possible solutions in the future <insert usual disclaimer, just gathering feedback>:

    • XOML -- we have a legacy now, let's not change it
    • XAML -- it's XAML, so change the file extension to match it (and introduce an overload to the XAML extension, which for now is associated with WPF)
    • something else, say .WFXAML -- this reflects the purpose, is unique to declarative workflows and doesn't have any weird connotations (What does xoml stand for???).

    Is this an issue?  Is this something you would like to see changed?  Do any of these solutions sound like a good idea, bad idea, etc?

    Thanks, we appreciate your feedback :-)

  • mwinkle.blog

    DinnerNow, the other thing shipping


    One of the things my team has been working on for the last few months has finally been released.  It started with a discussion in James' office and has come pretty far from there.  You may have seen it at some of the launch events, but unlike a lot of traditional demos, we're releasing all of the code with a great installer that David Aiken wrote in order to make sure all of the moving pieces and parts get put in the right place.

    What is it?

    DinnerNow demo

    Check out the video on soapbox

    Pieces and Parts, what do you mean by that?

    The only thing we couldn't get into v1 of this thing is the ability for it to flush toilets remotely.  Never fear though, we are working hard to get this into a future release :-)  Here's a list of the technologies that we have in this thing:

    • Vista and Longhorn Server platform APIs (things like the transactional file system)
    • IIS 7 modules
    • ASP.NET Ajax extensions
    • Linq
    • Windows Communication Foundation
      • Queued Services using MSMQ
      • "Normal" WS-* web services
      • POX and RSS over WCF
    • Windows Workflow Foundation (bunch of details on this to follow)
      • State Machine and Sequential
      • Correlation
      • Use of the ReplicatorActivity to execute in parallel
      • Designer Rehosting
      • Communication between parent and child workflows
    • Windows Presentation Foundation (even us server guys figured out a way to make it look pretty)
    • Windows Powershell (David's life wouldn't be complete if it wasn't in there)
      • Powershell commandlets that query the workflow tracking store!
    • Windows CardSpace
    • .NET Compact Framework (because we don't want our mobile folks to feel left out)
    • Incredibly cool installation experience (we worked really hard to make sure all of the above pieces are configured properly)

    Where can I get it?

    Check out the website and you can get the code from codeplex.  If you find something you don't like, file a work item on codeplex; it leads right into our TFS system.

    What now?

    Go get the code and install it on a Vista or Longhorn Server machine.  Let us know what you think!  We're going to be pushing out some additional information in the upcoming weeks.

  • mwinkle.blog

    WF4 ViewStateService


    A comment posted by Notre asked for some more details about view state and attached property services, so I thought I would dive into those next.  I will follow-up in a subsequent post on the AttachedPropertyService, as there is a little bit more going on there.


    Why do I care about viewstate?  Well, usually it is because we want to write down and store information that is not required at runtime.  A common example of viewstate is the position of nodes within a flowchart.  While the positions are not required to execute the flowchart, they are required to view it effectively.

    Where to write them down?

    This was a question that caused a fair amount of debate on the team.  There are basically two places to write down things like view state in a file-based world. 

    1. In the source document itself
    2. In a document that stays close to the source (usually referred to as a sidecar file)

    We had customers asking for both.  The motivation for the first is that for things like flowchart, where I may always care about the visual representation, I want to keep that metadata around and only deal with one file.  The second is motivated by wanting a clean source document that describes only the minimal artifact needed to run.  Now, there are certainly many stops along the spectrum (for instance, we might always want to keep annotations or source comments in the source document, and put positioning elsewhere).  For VS2010, we landed on a unified API, and we write in the source document.  This is something that is likely to change in future releases, as it does make things like textual diffs rather painful.

    So, that’s why we want to use it.

    How do we use it?

    We are going to create a simple activity designer that lets me write down a comment.

    A few simple steps:

    1. Create a new VS Project, let’s create an Activity Library
    2. Add a Designer to that activity library
    3. Add an attribute to the activity pointing to the designer
    4. Add a new WorkflowConsoleApp Project
    5. Build


    Now, let’s go and make our activity designer a little interesting.

    Let’s add a text box and a button.  We’ll make the text of the button something obvious like “commit comment”.  The XAML for the activity designer looks like this:

    <sap:ActivityDesigner x:Class="simpleActivity.CommentingDesigner" ...>
        <StackPanel Name="stackPanel1" VerticalAlignment="Top">
            <TextBox Name="commentBlock" />
            <Button Content="Load View State" Name="loadViewState" Click="loadViewState_Click" />
            <Button Content="Commit View State" Name="button1" Click="button1_Click" />
        </StackPanel>
    </sap:ActivityDesigner>

    Now, let’s add some code to the button (and to the initialization of the form)

    ViewStateService has a few useful methods on it, and I want to call out a subtle difference.  You will see StoreViewState and StoreViewStateWithUndo.  As the names imply, StoreViewState simply writes the view state down and bypasses the undo/redo stack.  That is appropriate for view state like an expanded/collapsed flag: you don’t really want Ctrl-Z to simply flip expanded versus collapsed for you.  But for something like flowchart node position, changing the view state is the kind of action you do want to be able to undo, and that is what StoreViewStateWithUndo gives you.

    So, our code for the button looks like this:

    private void button1_Click(object sender, RoutedEventArgs e)
    {
        ViewStateService vss = this.Context.Services.GetService<ViewStateService>();
        vss.StoreViewStateWithUndo(this.ModelItem, "comment", commentBlock.Text);
    }

    Now, on load, we want to be able to populate the value, so we will use the RetrieveViewState method in order to extract this.

    private void loadViewState_Click(object sender, RoutedEventArgs e)
    {
        ViewStateService vss = this.Context.Services.GetService<ViewStateService>();
        commentBlock.Text = vss.RetrieveViewState(this.ModelItem, "comment") as string;
    }

    Now, let’s go back to our workflow project and put an instance of this activity on the surface:


    Let’s add some viewstate information and commit it.  Now let’s look at the XAML:

    <s4:NotRealInterestingActivity Text="{x:Null}" sap:VirtualizedContainerService.HintSize="200,99">
      <sap:WorkflowViewStateService.ViewState>
        <scg3:Dictionary x:TypeArguments="x:String, x:Object">
          <x:String x:Key="comment">basic comment</x:String>
        </scg3:Dictionary>
      </sap:WorkflowViewStateService.ViewState>
    </s4:NotRealInterestingActivity>

    Ok, now, to show that we can pull this in, let’s change the text in the xaml and then reload our designer.


    You can muck around with Ctrl-Z to see that this does get handled correctly via the undo stack.

    The other important thing to note is that the value parameter is an object, so your viewstate is not limited to strings; you can store more full-featured objects if you’d like.  Finally, the ViewStateService also has a ViewStateChanged event you can subscribe to in order to handle, dispatch, and react to view state changes in the designer.
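    Subscribing looks like this, as a sketch; the event argument member names used here (Key, ParentModelItem, NewValue) are what I recall from the designer API and should be verified against the documentation:

```csharp
// Sketch: reacting to view state changes anywhere in the designer.
ViewStateService vss = this.Context.Services.GetService<ViewStateService>();
vss.ViewStateChanged += (sender, args) =>
{
    // Log which key changed, on which model item, and its new value.
    System.Diagnostics.Debug.WriteLine(string.Format(
        "View state '{0}' changed on {1} to '{2}'",
        args.Key, args.ParentModelItem, args.NewValue));
};
```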

  • mwinkle.blog

    Host Provided Capabilities (WF4 EditingContext Intro Part 3)


    This is part 3 of my 6-part series on the EditingContext.

  • Introduction
  • Sharing Functionality between Designers 
  • Host provided capabilities  (you are here)
  • Providing callbacks for the host
  • Subscription/Notification engine
  • Inspection, Default Services and Items

    EditingContext is used by our primary hosting application, Visual Studio, to provide concrete implementations of certain services.  The example that we will talk about here is the IExpressionEditorService.  Now, one thing that we would have really liked to have done in VS2010 is to provide a way to use the IntelliSense-enabled VB editor that you see within VS inside a rehosted app.  For various reasons, we were not able to ship with that dependency in the .NET Framework.  However, we needed a mechanism for our primary host to have the IntelliSense experience (and similarly, you could build an experience yourself, or maybe we’ll ship one in the future for rehosted apps).

    Let’s look at the design of IExpressionEditorService:


    Name                     Description
    CloseExpressionEditors   Closes all the active expression editors.
    CreateExpressionEditor   Overloaded. Creates a new expression editor.
    UpdateContext            Updates the context for the editing session.

    Inside the ExpressionTextBox control, when the control needs to render the editor, it has code that looks like the following (note, if it can’t find an instance, it skips and just uses a plain old text box):

    IExpressionEditorService editorService = this.Context.Services.GetService<IExpressionEditorService>();
    if (editorService != null)
        return editorService.CreateExpressionEditor(...);

    Using the following overload of CreateExpressionEditor:

    IExpressionEditorInstance CreateExpressionEditor(
        AssemblyContextControlItem assemblies,
        ImportedNamespaceContextItem importedNamespaces,
        List<ModelItem> variables,
        string text);

    Now, what happens is that inside the code we ship in Microsoft.VisualStudio.Activities.Addin.dll, there is both a concrete implementation of this type and the code which publishes an instance of it to the editing context.  Remember, this is the same thing you can do in your own app for these host-provided services.  In a subsequent post, I will get into more detail about which of these the designer has built in (like IExpressionEditorService). 
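The publish-then-look-up pattern itself is simple. Here is a toy Python sketch of a service manager; services are keyed by name for brevity (the real EditingContext keys them by type), and the service class is a stand-in, not the WF API:

```python
class ServiceManager:
    """Toy model of EditingContext.Services: the host publishes concrete
    implementations; designer code looks them up and degrades gracefully."""
    def __init__(self):
        self._services = {}

    def publish(self, name, instance):
        self._services[name] = instance

    def get_service(self, name):
        # None when the host published nothing for this service
        return self._services.get(name)

class FakeExpressionEditorService:
    """Stands in for a host-provided IExpressionEditorService."""
    def create_expression_editor(self, text):
        return "editor(%s)" % text

services = ServiceManager()
services.publish("IExpressionEditorService", FakeExpressionEditorService())

# Inside something like ExpressionTextBox: use the rich editor if one was
# published, otherwise fall back to a plain text box.
svc = services.get_service("IExpressionEditorService")
editor = svc.create_expression_editor("a + b") if svc else "plain text box"
```

The fallback branch mirrors what the real control does when no host has published an implementation.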

  • mwinkle.blog

    Advanced Workflow Services Talk (Demo 3 of 4)


    So, we've seen in part 1 how to manage context, and in part 2 how we can build on that basic knowledge to do duplex messaging.  Once we start doing duplex work, some interesting patterns emerge, and the first is one that we like to call "long running work".  Why are we interested in this?  Well, as you probably know, the execution of a workflow is single threaded (this is a feature, not a bug).  We also don't have a mechanism to force the workflow to be "pinned" in memory.  This means that things like the asynchronous programming model (APM) can't be used, since there is no guarantee that there will be something to call back when we are done.  In turn, the Send activity cannot take advantage of the APM to be more thread friendly.

    We may want to do things in parallel, like this


    If each of these branches takes 3 seconds, the whole of this workflow will complete in about 9 seconds.  The general expectation is that in parallel this would complete in the length of the longest branch plus some minor delta for overhead.  The trouble is, APM programming is tricky, especially relative to the layout above.
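To see why this matters, here is a toy Python timing sketch, scaled down from 3 seconds to 0.1 seconds per branch (the names are illustrative, not WF):

```python
import threading
import time

def call_service(duration):
    # stand-in for a slow, blocking service call
    time.sleep(duration)

BRANCHES = [0.1, 0.1, 0.1]  # three "3 second" branches, scaled down

# Single-threaded execution: total is roughly the SUM of the branches.
start = time.monotonic()
for d in BRANCHES:
    call_service(d)
sequential = time.monotonic() - start

# Truly parallel execution: total is roughly the LONGEST branch.
start = time.monotonic()
threads = [threading.Thread(target=call_service, args=(d,)) for d in BRANCHES]
for t in threads:
    t.start()
for t in threads:
    t.join()
parallel = time.monotonic() - start
```

With blocking calls on one thread you pay the sum; with concurrent calls you pay roughly the longest branch, which is the behavior the workflow above cannot get from the Send activity alone.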

    We want to model APM-style service calls while allowing the service operations to be extremely long running, where "extremely" is defined as "long enough that I would want to be able to persist."  The approach, then, is to model this as disjoint Send and Receive activities.


    One intermediate step is to simply use one-way messaging, but the problem there is that in a lot of cases I'm looking for some information to be sent back to me. 

    I'll hold off on the code for the above; the fact that we are listening in parallel for the same operation requires us to be a little more clever.

    Let's look first at our contract, and then our service implementation:

       1:  namespace Long_Running_Work
       2:  {
       3:      [ServiceContract]
       4:      public interface ILongRunningWork
       5:      {
       6:          [OperationContract]
       7:          string TakeAWhile(int i);
       9:          [OperationContract(IsOneWay = true)]
      10:          void OneWayTakeAWhile( int i);
      12:          [OperationContract(IsOneWay = true)]
      13:          void TakeAWhileAndTellMeLater(IDictionary<string,string> contextToken, int i);
      14:      }
      17:      [ServiceContract]
      18:      public interface IReverseContract
      19:      {
      20:          [OperationContract(IsOneWay = true)]
      21:          void TakeAWhileAndTellMeLaterDone(string s);
      22:      }
      24:  }

    And now for the implementation of these:

       1:  namespace Long_Running_Work
       2:  {
       3:     public class Service1 : ILongRunningWork
       4:      {
       6:          public Service1()
       7:          {
       9:          }
      11:          #region ILongRunningWork Members
      13:          public string TakeAWhile(int i)
      14:          {
      15:              Console.WriteLine("Starting TakeAWhile");
      16:              System.Threading.Thread.Sleep(new TimeSpan(0, 0, 3));
      17:              return i.ToString();
      18:          }
      22:          public void OneWayTakeAWhile( int i)
      23:          {
      24:              Console.WriteLine("Starting One Way TakeAWhile");
      25:              System.Threading.Thread.Sleep(new TimeSpan(0, 0, 3));
      26:              Console.WriteLine("Ending One Way TakeAWhile");
      29:          }
      32:          public void TakeAWhileAndTellMeLater(IDictionary<string, string> context, int i)
      33:          {
      34:              Console.WriteLine("Received the context Token");
      35:              System.Threading.Thread.Sleep(new TimeSpan(0, 0, 3));
      36:              Console.WriteLine("Need to Message Back Now {0}", i.ToString());
      37:              // could investigate a more useful pooling of these if we 
      38:              // really wanted to worry about perf
      39:              IReverseContractClient ircc = new IReverseContractClient(
      40:                  new NetTcpContextBinding(),
      41:                  new EndpointAddress("net.tcp://localhost:10003/ReverseContract")
      42:                  );
      43:              IContextManager icm = ircc.InnerChannel.GetProperty<IContextManager>();
      44:              icm.SetContext(context);
      45:              ircc.TakeAWhileAndTellMeLaterDone(i.ToString());
      46:          }
      50:          #endregion
      51:      } 
      53:     public class IReverseContractClient : ClientBase<IReverseContract>, IReverseContract
      54:     {
      55:          public IReverseContractClient() : base(){}
      56:          public IReverseContractClient(System.ServiceModel.Channels.Binding binding, EndpointAddress address) : base(binding, address) { }
      58:  #region IReverseContract Members
      62:         public void TakeAWhileAndTellMeLaterDone(string s)
      63:         {
      64:             base.Channel.TakeAWhileAndTellMeLaterDone(s);
      65:         }
      67:         #endregion
      68:     }
      70:  }

    Basically, we sit around and wait.  You'll also note that in TakeAWhileAndTellMeLater we take in a context token (similar to our previous approach), and we use it to new up a client at the end and call back in after setting the context; look at lines 39-44 above.  The nice thing about this is that my workflow client above can actually go idle, persist, and react to a message delivered later on.

    One thing to note is that you should not place a Delay between any of the Sends and Receives.  This could cause the workflow to go idle, which may allow you to miss messages.  This is generally considered a bad thing.  The reason this occurs is that the WorkflowOperationInvoker uses EnqueueOnIdle, which means that when the workflow goes idle, the message will be enqueued.  If the queue hasn't yet been created by the Receive activity, the message will not get delivered.
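A toy Python sketch of why the message can be missed (illustrative names only, not the WF runtime):

```python
class ToyWorkflowInstance:
    """Toy model of enqueue-on-idle: a message is delivered only if the
    Receive activity has already created its queue when the workflow idles."""
    def __init__(self):
        self.queues = {}

    def create_queue(self, name):
        # done when the Receive activity executes
        self.queues.setdefault(name, [])

    def enqueue_on_idle(self, name, message):
        if name not in self.queues:
            return False  # no queue yet: the message is effectively lost
        self.queues[name].append(message)
        return True

wf = ToyWorkflowInstance()
# A Delay between Send and Receive idles the workflow before the queue exists:
lost = wf.enqueue_on_idle("TakeAWhileAndTellMeLaterDone", "result")
wf.create_queue("TakeAWhileAndTellMeLaterDone")  # the Receive finally runs
delivered = wf.enqueue_on_idle("TakeAWhileAndTellMeLaterDone", "result")
```

Ordering is everything: the same message is lost before the Receive runs and delivered after.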

    For the final workflow above (the TakeAWhileAndTellMeLater workflow), I will need to spin this up in a WorkflowServiceHost (a la the Duplex Sample in part 2).

    using (WorkflowServiceHost wsh = new WorkflowServiceHost(typeof(CallLongRunningComponents.WorkflowWithmessaging)))
    {
        wsh.AddServiceEndpoint(typeof(IReverseContract),
            new NetTcpContextBinding(),
            "net.tcp://localhost:10003/ReverseContract");
        wsh.Open(); // don't forget to open up the wsh
        WorkflowRuntime wr = wsh.Description.Behaviors.Find<WorkflowRuntimeBehavior>().WorkflowRuntime;
        wr.WorkflowCompleted += ((o, e) => waitHandle.Set());
        wr.WorkflowIdled += ((o, e) => Console.WriteLine("We're idled"));
        WorkflowInstance wi = wr.CreateWorkflow(typeof(CallLongRunningComponents.WorkflowWithmessaging));
        wi.Start();
    }

    Why do I think this is cool?

    Two reasons:

    • If I assume that I can modify the called service to callback to me (or put such a wrapper at a runtime service level), this is easier to model than the APM (that code included at the end of this post)
    • This gives me a natural way to start exposing more advanced control over a service call.  Rather than just a send and receive, I can use a send and a listen, and in the listen have a receive, a cancel message receive, and a delay in order to expose more fine grained control points for my workflow, and model the way the process should work very explicitly and declaratively.



    Code for APM approach:

    call some services and wait:

       1:  Console.WriteLine("Press <enter> to execute APM approach");
       2:  Console.ReadLine();
       3:  waitHandle = new AutoResetEvent(false);
       4:  Stopwatch sw = new Stopwatch();
       5:  sw.Start();
       6:  lrwc = new WorkflowHost.ServiceReference1.LongRunningWorkClient();
       7:  lrwc.BeginTakeAWhile(1, HandleClientReturn, "one");
       8:  lrwc.BeginTakeAWhile(2, HandleClientReturn, "two");
       9:  lrwc.BeginTakeAWhile(3, HandleClientReturn, "three");
      10:  lrwc.BeginTakeAWhile(4, HandleClientReturn, "four");
      11:  while (!areDone)
      12:  {
      13:      System.Threading.Thread.Sleep(25);
      14:  }
      15:  Console.WriteLine("APM approach completed in {0} milliseconds", sw.ElapsedMilliseconds);
      16:  Console.WriteLine("All Done, press <enter> to exit");
      17:  Console.ReadLine();

    Ignore the busy wait on line 11; I should use a wait handle here but was having trouble getting it to work correctly (this is tricky code to get right).

    The callback and respective state:

       1:  static ServiceReference1.LongRunningWorkClient lrwc;
       2:  static Int32 countOfFinished = 0;
       4:  static void HandleClientReturn(IAsyncResult result)
       5:  {
       6:      string s = (string)result.AsyncState;
       7:      string resultString = lrwc.EndTakeAWhile(result);
       8:      Console.WriteLine("received {0}", resultString);
       9:      if (Interlocked.Increment(ref countOfFinished) == 4)
      10:      {
      11:          areDone = true;
      12:      }
      13:  }

    I have had some people say that line 9 should use Interlocked.CompareExchange to do this correctly, but the point stands: this is tricky code, and modeling it in WF is pretty nice.  [Ignoring for the moment the work required to realize the assumption that we can make the service message back.] 
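For what it's worth, the same countdown can be written against a wait handle instead of the busy wait. Here is a toy Python equivalent, where threading.Event plays the role of the AutoResetEvent and the lock plays the role of Interlocked.Increment (everything here is illustrative, not the WCF client code):

```python
import threading

COUNT = 4
count_lock = threading.Lock()  # plays the role of Interlocked.Increment
finished = 0
all_done = threading.Event()   # plays the role of the AutoResetEvent

def handle_client_return(result):
    """Callback fired as each of the four service calls completes."""
    global finished
    with count_lock:
        finished += 1
        if finished == COUNT:
            all_done.set()     # signal instead of flipping a shared bool

threads = [threading.Thread(target=handle_client_return, args=(i,))
           for i in range(COUNT)]
for t in threads:
    t.start()
all_done.wait(timeout=5)       # blocks here instead of spinning on a flag
```

Blocking on the event removes both the busy wait and the unsynchronized `areDone` flag from the original listing.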

  • mwinkle.blog

    Finding the Variables in Scope Within the WF Designer


    In this thread, one of our forum customers asked the question:

    “how do I find all of the variables that are in scope for a given activity in the designer?”

    We do not have a public API to do this.  Internally we have a helper type called VariableHelper that we use to display the list of variables in the variable designer, and that is also passed into the expression text box IntelliSense control.



    One thing I would like to point out: if you are implementing your own expression editor and want to enumerate the variables (say, to highlight the tokens that are in scope), your implementation of IExpressionEditorService.CreateExpressionEditor() will be given a list of model items that correspond to the in-scope variables. 

    Now, if you are not building an expression editor and want to enumerate all of the in-scope variables, you will need to do a little more work.  Here are the basic steps:

    1. Go to the parent container
    2. Does it contain variables?*
    3. Does it have a parent?
    4. Go to the parent’s parent
    5. Repeat from step 2

    * This is basically the tricky part, because it is not something we can cleanly determine (and the set can also be altered at runtime by adding additional variables via the CacheMetadata() method).  I’ll describe what the designer currently does to accumulate this list, and then talk about designs we could have in a future release.
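The walk above can be sketched, language-neutrally, in a few lines of Python. Node and its properties are stand-ins for model items, not the designer API:

```python
class Node:
    """Toy model item: activities may declare Variables; ActivityAction-style
    parents may also inject delegate arguments (the starred step above)."""
    def __init__(self, parent=None, variables=(), delegate_args=()):
        self.parent = parent
        self.variables = list(variables)
        self.delegate_args = list(delegate_args)

def in_scope_variables(item):
    found = []
    node = item.parent                     # 1. go to the parent container
    while node is not None:                # 3./5. repeat while there is a parent
        found.extend(node.variables)       # 2. does it contain variables?
        found.extend(node.delegate_args)   # *  injected ActivityAction arguments
        node = node.parent                 # 4. go to the parent's parent
    return found

# Mirrors the Sequence / ForEach / Sequence / WriteLine test workflow:
root = Node(variables=["order"])
foreach = Node(parent=root, delegate_args=["foo"])
inner = Node(parent=foreach, variables=["total"])
writeline = Node(parent=inner)
```

Calling `in_scope_variables(writeline)` collects nearest scopes first, the same order the tree walk in the real code produces.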

    Current Approach

    Currently, there are two things that we look for when we navigate up the model item tree.

    1. Collections of Variables named "Variables"
    2. Variables injected by the use of ActivityAction

    As we walk up the tree, we need to find these.  Let’s first consider the following workflow we will use for our tests:

       1:  wd.Load(new Sequence
       2:  {
       3:      Activities =
       4:      {
       5:          new Sequence
       6:          {
       7:              Activities = 
       8:              {
       9:                  new ForEach<string>
      10:                  {
      11:                      Body = new ActivityAction<string>
      12:                      {
      13:                           Argument = new DelegateInArgument<string> { Name="foo" },
      14:                           Handler = 
      15:                              new Sequence
      16:                              {
      17:                                  Activities = 
      18:                                  {
      19:                                      new WriteLine()
      20:                                  }
      21:                              }
      22:                      }
      23:                  }
      24:              }
      25:          }
      26:      }
      27:  });

    Line 13 is the only one that might look a little different here; I’ll eventually have a post up about ActivityAction that talks in more detail about what’s going on there.

    So, this loaded in a rehosted designer looks like the following:


    Now, let’s look at the code to figure out what is in scope of the selected element.  First, let’s get the selected element :-)

     Selection sel = wd.Context.Items.GetValue<Selection>();
     ModelItem mi = null;
     if (sel != null)
     {
         mi = sel.PrimarySelection;
     }

    mi is now the model item of my selected item.  Let’s write some code to walk up the tree and add to a collection called variables:

       1:  while (mi.Parent != null)
       2:  {
       3:      Type parentType = mi.Parent.ItemType;
       4:      if (typeof(Activity).IsAssignableFrom(parentType))
       5:      {
       6:          // we have encountered an activity derived type
       7:          // look for variable collection
       8:          ModelProperty mp = mi.Parent.Properties["Variables"];
       9:          if (null != mp && mp.PropertyType == typeof(Collection<Variable>))
      10:          {
      11:              mp.Collection.ToList().ForEach(item => variables.Add(item));
      12:          }
      13:      }
      14:      // now we need to look for action handlers 
      15:      // this will ideally return a bunch of DelegateArguments
      16:      var dels = mi.Properties.Where(p => typeof(ActivityDelegate).IsAssignableFrom(p.PropertyType));
      17:      foreach (var actdel in dels)
      18:      {
      19:          if (actdel.Value != null)
      20:          {
      21:              foreach (var innerProp in actdel.Value.Properties)
      22:              {
      23:                  if (typeof(DelegateArgument).IsAssignableFrom(innerProp.PropertyType) && null != innerProp.Value)
      24:                  {
      25:                      variables.Add(innerProp.Value);
      26:                  }
      27:              }
      28:          }
      29:      }
      32:      mi = mi.Parent;
      33:  }

    Lines 4-13 handle the case where I just encounter an activity.  Lines 16-29 handle the slightly trickier case where I need to handle ActivityAction (which ultimately derives from ActivityDelegate).  There are a few loops there, but basically I look through all of the properties of the item that inherit from ActivityDelegate.  For each one of those, I look at each of its properties for ones assignable from DelegateArgument.  As I find them, I add them to my collection of model items for variables. 

    In my WPF app, I have a simple list box that shows all of these.  Because of the loosely typed data template I use, I get a pretty decent display, since the items share common things like ItemType from ModelItem and a Name that routes through to the underlying name property both elements share.


    <ListBox Name="listBox1" Grid.Row="1" Grid.ColumnSpan="2" Grid.Column="0">
        <ListBox.ItemTemplate>
            <DataTemplate>
                <StackPanel Orientation="Horizontal">
                    <TextBlock FontWeight="Bold" Text="{Binding Path=ItemType}"/>
                    <TextBlock Text="{Binding Path=Name}"/>
                </StackPanel>
            </DataTemplate>
        </ListBox.ItemTemplate>
    </ListBox>


    Potential Future Approach

    Our approach outlined above is generally correct.  That said, you can imagine a deeply nested data structure on an activity that contains variables that get injected into the scope at runtime.  We need a way for an activity to declare how to get at all of its variables.  The current approach is one of inspection, which works for all of our activities.  In the future we may need an extensibility point that lets an activity author specify how to find the variables available to its children.  One way to do that would be to introduce a "VariableResolver" attribute which points to a type (or maybe a Func<>) that operates on the activity and returns the list of variables.  Actually, today you could likely introduce a custom type descriptor that lifts the variables out of the activity and surfaces them in such a way that they would be discovered by the inspection above. 

    Disclaimer: the Potential Future Approach is simply something that we’ve discussed that we could do, it does not represent something that we will do.  If you think that is an interesting extensibility point that you might want to take advantage of down the road, let me know. 

  • mwinkle.blog

    Thoughts on Flowchart


    Last night I saw that Maurice had a few tweets that caught my attention about flowchart, but this is the one that I want to talk about:

    Come to think of it I also really mis a parallel execution in a flowchart. But other than that flowcharts rock! #wf4 #dm

    I thought about replying on twitter, but it’s a bit of a longer topic.   When we were building the flowchart for VS2010, we considered a number of possible capabilities, but ultimately the capabilities customers were looking for fell into two categories:

    • Arbitrary rework patterns
    • Coordination patterns

    Arbitrary Rework

    We can describe the execution model of the current flowchart as one that supports arbitrary rework.  You could also refer to this as GOTO.  One benefit (and downside) of breaking out of the tightly scoped, hierarchical execution model that we see with the Sequential style of workflows (and in most procedural languages) is that there is more freedom in defining the “next” step to be taken.  The simplest way to think about this is the following flowchart:


    Now, this isn’t particularly interesting, and most folks who look at it will simply ask “why not a loop?”, which in this case is a valid question.  As a process gets more sophisticated (or if humans tend to be involved), the number of possible loops required gets rather tricky to manage in a sequential flow (consider a rework scenario which includes “next steps” across conditional boundaries, and from some branches but not others).


    Now, mathematically, we can describe the machinery in both of these models as a set of valid transitions, and there likely exists a complete mapping from any flowchart into a procedural model.


    Coordination Patterns

    The second pattern we consistently saw was the use of a free-form style of execution in order to build complex coordination patterns.  The example I consistently return to is this one (pointing back to a blog post I made back in the WF3 days):


    Here I want to coordinate a few different things: 3 executes when 1 completes, 5 when 2 completes, 4 when 1 AND 2 complete, and 6 when 3 AND 4 AND 5 complete.  We’ve created a dependency graph that can’t be handled with the procedural constructs that we have.  How could this happen?  Imagine we’re trying to model the ranking process for a series of horse races.  Task 3 can happen when Race 1 completes, just as Task 5 can happen when Race 2 completes.  Task 4 represents some work that requires the data from both races.  When those three tasks (3, 4, and 5) complete, I can move forward and take some action (like bet on the next race). 

    This type of pattern can be very powerful, and is often described by Petri nets.  There exists a multitude of join semantics, from simple AND joins to conditional joins (ranging from a simple “count” to a condition based on data). 
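You can think of those rules as a small dependency graph. This toy Python sketch schedules tasks as their joins are satisfied; the task numbers match the diagram, everything else is illustrative:

```python
# deps[task] = the set of tasks that must complete before it can run
deps = {1: set(), 2: set(), 3: {1}, 4: {1, 2}, 5: {2}, 6: {3, 4, 5}}

completed = []
pending = set(deps)
while pending:
    # every task whose join is satisfied by what has completed so far
    ready = sorted(t for t in pending if deps[t] <= set(completed))
    for t in ready:
        completed.append(t)  # "run" the task
        pending.discard(t)
```

Task 4 only becomes ready once both 1 and 2 are done, and 6 waits for 3, 4, and 5, which is exactly the dependency structure a sequential or parallel-block model cannot express directly.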

    How’d we get to where we are today?

    Today, the flowchart in the .NET Framework only supports the former pattern, that of arbitrary rework.  How’d we get there?  While we found a lot of value in both patterns, when we tried to compose them we often got into places where it became very hard to have predictable execution semantics.  The basic crux of the design issue is the “re-arming” of a join.  If I decide to jump back to task 3 at some point, what does it mean for 3 to complete?  Do I execute 6 again right away, or do I wait for 4 and 5 to complete a second time?  What happens if I jump to 3 a second time?  Now, there certainly exist formal models for this, and things like Petri nets define a set of semantics for these cases.  What we found, though, was that any expression of the model that took these things into account required a pretty deep understanding of the process model, and we lost one of the key benefits we see from a flowchart, which is that it is a simple, approachable model.  This is not to say that we don’t think the coordination pattern is useful, or that Petri nets don’t ultimately hold the unified approach; we just did not have the time to implement a coordination pattern in VS 2010, and creating an approachable Petri-net-backed flowchart modeling style was something we’d need more time to get correct. 
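To make the re-arming question concrete, here is a toy AND-join in Python. The policy it implements when a branch signals a second time is exactly the kind of decision a formal model has to pin down (the naive policy below re-fires on any later signal, which may or may not be what a process author wants):

```python
class AndJoin:
    """Toy AND-join: the join fires once every incoming branch has signaled.
    After that, this naive policy fires again on ANY subsequent signal,
    which is the ambiguity that makes composing rework with joins hard."""
    def __init__(self, branches):
        self.waiting = set(branches)
        self.fired = 0

    def signal(self, branch):
        self.waiting.discard(branch)
        if not self.waiting:
            self.fired += 1

join = AndJoin({3, 4, 5})
for branch in (3, 4, 5):
    join.signal(branch)
first_round = join.fired   # node 6 ran once 3, 4 and 5 all completed

# Arbitrary rework jumps back to 3, and 3 completes a second time:
join.signal(3)
second_round = join.fired  # fired again without waiting for 4 and 5
```

Whether `second_round` should be 2 (fire immediately), stay 1 (ignore), or wait for 4 and 5 to re-complete is precisely the semantic choice discussed above.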

    Where does this leave us?

    If you’re looking at this and saying, “but I want coordination (or a full blown petri-net)”, there are a couple of options:

    • Roll up your sleeves and write that activity.  In the blog post above, I outline how you could go about building the coordination pattern, and with the new activity model, this would be simpler.  The harder part is still writing a designer, and there is now some good guidance on free form designers in the source of the state machine on codeplex. 
    • Provide feedback on connect, when you do this it opens a bug in our bug database that our team is looking at daily.  Describe your scenario, what you’re trying to do, and what you’d like to see.  This type of customer feedback is vital to helping us plan the right features for the next release.  
  • mwinkle.blog

    Usability Testing the WF Designer vNext (or, Yelling at Customers)


    One of the things that my team is working on is the next version of the workflow designer.  In order to help us get real feedback, we engaged with our usability teams to design and execute a usability study. 

    For details on what the test looks like, see this great Channel 9 video from when we did them 3 years ago for the first version of the WF designer.  The setup is still the same (one-way glass, cameras tracking the face, screen, and posture of the subject); the only difference is the software.  We're busy testing out some new concepts to make workflow development much more productive.  At this stage of the lifecycle, we're really experimenting with some different designer metaphors, and a usability test is a great way to get real feedback.

    One thing I've always tried to avoid since I came to Microsoft is being sucked into the Redmond bubble.  The symptoms of placement inside said bubble are a gradual removal from the reality that everyday developers face.  When I came to the company two years ago, I was chock full of great thoughts and ideas from the outside, and much less tolerant of the "well, that's just how it works" defense. 

    Slowly, though, as you get deep into thinking about a problem, and tightly focus on that problem, those concerns start to fade away as you look to optimize the experience you are providing.  Sitting in on the usability labs yesterday was a great reminder of how easily one can slip into the bubble.  Our test subject was working with a workflow in the designer and had a peculiar style of working with the property window in VS.  Now, when I use VS, I use the property grid in one way: I have it docked, and I have the dock set to automatically hide.  I have known some developers who prefer the Apple/Photoshop style where the property pane floats.  The customer's way of working with the property grid was that he had it floating, but he would close it after every interaction.  This required him to do one of two things to display the grid again: either go to the View menu, or (his preferred style) right-click on an element and select Properties.

    The prototype we were doing the usability testing with, however, does not have that feature wired up; in fact, it currently doesn't display the Properties item in the context menu at all.  Not because we have an evil, nefarious plan to remove the Properties item inconsistently throughout our designer, but rather because no one gave it any thought when we put the prototype together, as we had other UI elements we wanted to focus on. 

    This became a serious problem for our customer, as the way he expected to work was completely interrupted.  At one point, we asked him to dock the property window so we could continue with the test.  This was the most fascinating part of the study to me: watching him work to dock the property grid in the left panel.  I've become so used to the docking behavior in VS (see screenshot below) that it didn't even occur to me that this might present a problem for the user.  Instead, we watched for 3 minutes or so as he attempted to figure out how to move the window, and then tried to process the feedback that the UX elements give.  About 60 seconds in, the property grid was at a location similar to the screenshot, just a centimeter or two away from being in "the right place".  Watching his face, we saw him look slightly confused and then move it elsewhere.  Two more times he came back to that same spot, just far enough away to not get the feedback that might have nudged him in the right direction.  It was at this point that the spontaneous yelling started among the observers in the room.  It was becoming crystal clear how much difficulty something that has become so obvious to us, something we have internalized and accepted as "just the way the world works", was causing.  The yelling was things like "Move up, move up", "no, wait, over, over", "oh, you almost, almost, no...", as we tried to will the customer, through the soundproof wall, to do what we wanted him to do.

    This situation repeated itself time and time again with different UI elements, and it was very, very educational to see the way different users manage their workspace and interact with a tool I've become so familiar with that I can't see the forest for the trees.  I also realized that although I had worked with a lot of customers and other developers, very rarely had I paid attention to how they work, rather than simply their work. 

    Now, here's where I open up the real can of worms.  We're looking to make usability improvements in the WF designer.  Are there any that really bother you?  What can we do to make you a more productive WF developer? 

  • mwinkle.blog

    AttachedProperty Part 2, Putting it Together


    In my last post, Jason jumped right to the punchline in his comment here.  He asks “if there is an easy way to have the property’s value serialized out to the XAML.”

    First, let’s look at what we need to do from the XAML side.

    To start, create a helper type with a getter and setter for the property you want to attach.  Here we’re going to attach comments:

    public class Comment
    {
        static AttachableMemberIdentifier CommentTextName = new AttachableMemberIdentifier(typeof(Comment), "CommentText");
        public static object GetCommentText(object instance)
        {
            object viewState;
            AttachablePropertyServices.TryGetProperty(instance, CommentTextName, out viewState);
            return viewState;
        }
        public static void SetCommentText(object instance, object value)
        {
            AttachablePropertyServices.SetProperty(instance, CommentTextName, value);
        }
    }

    Next, let’s use the AttachableMemberIdentifier and the AttachablePropertyServices to do something interesting with this:

    AttachableMemberIdentifier ami = new AttachableMemberIdentifier(typeof(Comment), "CommentText");
    Dog newDog = new Dog { Age = 12, Name = "Sherlock", Noise = "Hooowl" };
    AttachablePropertyServices.SetProperty(newDog, ami, "A very good dog");
    string s = XamlServices.Save(newDog);
    Dog aSecondNewDog = XamlServices.Load(new StringReader(s)) as Dog;
    string output;
    AttachablePropertyServices.TryGetProperty<string>(aSecondNewDog, ami, out output);
    Console.WriteLine("read out: {0}", output);

    Let’s see the output from this:

    <Dog Age="12" Name="Sherlock" Noise="Hooowl"
         Comment.CommentText="A very good dog"
         xmlns="clr-namespace:AttachedPropertiesBlogPosting;assembly=AttachedPropertiesBlogPosting" />
    read out: A very good dog

    You’ll note that the value is contained in the XAML under Comment.CommentText.

    Pulling it all Together

    Let’s take what we did in the last post and combine it with the above stuff in order to have an attached property that writes through and is stored inside the XAML.

    AttachedProperty<string> Comment = new AttachedProperty<string>
    {
        IsBrowsable = true,
        Name = "Comment",
        Getter = (mi =>
        {
            string temp;
            AttachablePropertyServices.TryGetProperty<string>(mi.GetCurrentValue(), ami, out temp);
            return temp;
        }),
        Setter = ((mi, val) => AttachablePropertyServices.SetProperty(mi.GetCurrentValue(), ami, val))
    };
    dogMi.Properties["Comment"].SetValue("I think I like that dog");
    string xaml = XamlServices.Save(dogMi.GetCurrentValue());

    What are we doing here?  Well, we basically just use the Getter and Setter to write through to the underlying instance.  You’ll note that usually we never want to use GetCurrentValue(), since any changes made there are not made via the ModelItem tree, which means we might miss a change notification.  However, given that the only place where we can store the XAML attached property is on the instance itself, this gives us a good way to write through.  The XAML output below shows that this works as expected:

    <Dog Age="12" Name="Sherlock" Noise="Hooowl"
         Comment.CommentText="I think I like that dog"
         xmlns="clr-namespace:AttachedPropertiesBlogPosting;assembly=AttachedPropertiesBlogPosting" />
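The side-table idea behind AttachablePropertyServices can be sketched in a few lines of toy Python (illustrative only; the real machinery lives in System.Xaml):

```python
class AttachedPropertyStore:
    """Toy side table: attached values are keyed by (instance id, member),
    never stored on the instance's own type."""
    def __init__(self):
        self._table = {}

    def set_property(self, instance, member, value):
        self._table[(id(instance), member)] = value

    def try_get_property(self, instance, member):
        key = (id(instance), member)
        return (key in self._table, self._table.get(key))

class Dog:
    def __init__(self, name):
        self.name = name

store = AttachedPropertyStore()
dog = Dog("Sherlock")
store.set_property(dog, "Comment.CommentText", "I think I like that dog")
found, comment = store.try_get_property(dog, "Comment.CommentText")
```

Because the value lives in the side table rather than on Dog, any type can carry a comment without being modified, which is the whole appeal of attached properties.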
  • mwinkle.blog

    WF inside of SharePoint


    I'm a little late to notice this, but Eilene Hao, a PM on the SharePoint workflow team, has put together a massive 7-part series on writing workflows in SharePoint.  I haven't focused a lot on workflows inside of Office; looks like I need to spin up a VPC and dive through some articles!


  • mwinkle.blog

    WF 4.0 -- Designer Type Visibility


    One of the exercises I'm going through now is scrubbing our API and taking a hard look at types marked as public and trying to decide "does it really need to be public."  While on first blush, you can

    What I'd love to get from folks are answers to the following questions:

    • Were there designer types in 3.0/3.5 that were not public that caused you problems?
      • If so, why?  What and how did you want to use that type?
    • Were there scenarios where you found our decisions on type visibility odd, inconsistent or painful?  Let me know what those were.


    Looking forward to your thoughts!

  • mwinkle.blog

    Oslo PDC Sessions Posted


    Late last night, the PDC team posted an additional "bunch" of sessions, including one I'm particularly interested in:

    Extending Windows Workflow Foundation v.Next with Custom Activities

    Presenter: Matt Winkler

    Windows Workflow Foundation (WF) coordinates and manages individual units of work, encapsulated into activities. The next version of WF comes with a library of activities, including Database and PowerShell. Learn how to extend this library by encapsulating your own APIs with custom activities. See how to compose those basic activities into higher level units using rules, flowchart, or state machine control flow styles. Learn how to extend beyond WF control styles by building your own. Learn how to customize and re-host the workflow authoring experience using the new WF designer framework.


    Advanced, WF

    They also posted a number of other interesting Oslo sessions that folks might be interested in, these cover a wide range of the things that our group is doing:

    • "Oslo:" The Language - Don Box and David Langworthy
      • "Oslo" provides a language for creating schemas, queries, views, and values. Learn the key features of the language, including its type system, instance construction, and query. Understand supporting runtime features such as dynamic construction and compilation, SQL generation, and deployment. Learn to author content for the "Oslo" repository and understand how to programmatically construct and process the content.

    • "Oslo:" Customizing and Extending the Visual Design Experience - Florian Voss
      • "Oslo" provides visual tools for writing data-driven applications and services. Learn how to provide a great experience over domain-specific schemas, and explore the basic user model, data-driven viewer construction, user-defined queries, and custom commands. See how the design experience itself is an "Oslo" application and is driven by content stored in the "Oslo" repository.

    • Hosting Workflows and Services in "Oslo" - Ford McKinstry
      • "Oslo" builds on Windows Workflow (WF) and Windows Communication Foundation (WCF) to provide a feature-rich middle-tier execution and deployment environment. Learn about the architecture of "Oslo" and the features that simplify deployment, management, and troubleshooting of workflows and services.

    • "Oslo" Repository and Schemas - Martin Gudgin, Chris Sells
      • "Oslo" uses schematized data stored in the "Oslo" repository to drive the development and execution of applications and services. Tour the schemas and see how user-defined content can be created and related to them. Learn how to utilize platform schemas, such as workflow, services, and hosting. Also, learn how to extend the repository and how to use repository-extended SQL database services to support critical lifecycle capabilities such as versioning, security, and deployment.

    It'll be a good time, can't wait to see ya there!

    • mwinkle.blog

      Subscription / Notification Engine (WF4 EditingContext Intro Part 5)


      This is part 5 of my 6-part series on the EditingContext.

      In this post, we’re going to tie together a few of the things we’ve seen in the last few posts and show how we can wire up parts of the designer (or the hosting application) to changes made to the Items collection of the EditingContext to do some interesting things.

      You will note that both the ServiceManager and ContextItemManager have Subscribe methods, and I’ve talked in previous posts about how the publish mechanism is a little different.  I want to dive a little deeper into how these work, the different overloads, and what you can expect to happen on the subscribe side of things.


      For services, there are four different Publish overloads.  I will list all four, and then talk about how there are really only two :-)



      • Publish(Type, PublishServiceCallback): Publishes the specified service type, but does not declare an instance. When the service is requested, the PublishServiceCallback will be invoked to create the instance. The callback is invoked only once; after that, the instance it returned is cached.
      • Publish(Type, Object): Publishes the given service. After it is published, the service instance remains in the service manager until the editing context is disposed of.
      • Publish<TServiceType>(TServiceType): Publishes the given service. After it is published, the service instance remains in the service manager until the editing context is disposed of.
      • Publish<TServiceType>(PublishServiceCallback<TServiceType>): Publishes the given service type, but does not declare an instance yet. When the service is requested, the PublishServiceCallback will be invoked to create the instance. The callback is invoked only once; after that, the instance it returned is cached.

      There are really only two methods here, and some generic sugar for the other two.  They are Publish(Type, PublishServiceCallback), and Publish(Type, Object).  If you were to look at the implementation of the generic versions, they simply turn around and call the un-generic form.

      The difference between the basic one (Publish(Type, Object)) and the version with the callback is that the callback enables us to be a little more lazy and wait to actually create the instance of the object until it is first requested.  Let’s look at how PublishServiceCallback is defined:

      public delegate Object PublishServiceCallback(
          Type serviceType
      );

      What this lets us do is that the first time someone calls GetService, this method will be called and is responsible for returning an instance of the service type.  Subsequent calls to GetService will simply return the instance returned by the method provided for the PublishServiceCallback.  It is important to note that on Publish, any subscribers will be notified.  As the Subscribe callback takes an instance, we will internally call GetService on the notification, which will in turn call the PublishServiceCallback to instantiate an object.  If we have subscribers, our publish is less lazy (but that’s by design; we have consumers who are ready and waiting to consume).
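      To see the lazy behavior in action, here is a minimal sketch, assuming the WF4 designer's EditingContext from System.Activities.Presentation; ISpellCheckService and SimpleSpellChecker are hypothetical types invented for illustration:

```csharp
using System;
using System.Activities.Presentation;

// Hypothetical service types used only for this sketch.
public interface ISpellCheckService { bool Check(string text); }

public class SimpleSpellChecker : ISpellCheckService
{
    public bool Check(string text) { return !string.IsNullOrEmpty(text); }
}

public class LazyPublishDemo
{
    public static void Main()
    {
        EditingContext context = new EditingContext();

        // Publish the type with a callback; no SimpleSpellChecker is constructed here.
        context.Services.Publish(typeof(ISpellCheckService),
            serviceType => new SimpleSpellChecker());

        // The callback fires on the first GetService call...
        ISpellCheckService first =
            (ISpellCheckService)context.Services.GetService(typeof(ISpellCheckService));

        // ...and subsequent calls return the cached instance.
        ISpellCheckService second =
            (ISpellCheckService)context.Services.GetService(typeof(ISpellCheckService));

        Console.WriteLine(object.ReferenceEquals(first, second));
    }
}
```

      Per the caching behavior described above, both GetService calls should come back with the same instance.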

      Let’s now look at the subscribe methods.  Again, here there are two methods (generic and non-generic), but they both do the same thing:



      • Subscribe(Type, SubscribeServiceCallback): Invokes the provided callback when someone has published the requested service. If the service was already available, this method invokes the callback immediately.
      • Subscribe<TServiceType>(SubscribeServiceCallback<TServiceType>): Invokes the provided callback when someone has published the requested service. If the service was already available, this method invokes the callback immediately.

      Both of these use a SubscribeServiceCallback, defined as the following:

      public delegate void SubscribeServiceCallback(
          Type serviceType,
          Object serviceInstance
      );

      This allows any consumer to be notified when a service is initially published.  This is an important distinction to call out versus items, which provide a more advanced subscription model (namely, subscription to changes).
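      The subscribe side can be sketched the same way (again a hedged sketch against the System.Activities.Presentation EditingContext; IStatusService and StatusService are made-up types):

```csharp
using System;
using System.Activities.Presentation;

// Hypothetical service types used only for this sketch.
public interface IStatusService { }
public class StatusService : IStatusService { }

public class SubscribeDemo
{
    public static void Main()
    {
        EditingContext context = new EditingContext();

        // Subscribe before anything is published; the callback waits.
        context.Services.Subscribe(typeof(IStatusService),
            (serviceType, serviceInstance) =>
                Console.WriteLine("{0} came online", serviceType.Name));

        // The waiting callback fires as soon as the service is published.
        context.Services.Publish(typeof(IStatusService), new StatusService());

        // A late subscriber is invoked immediately with the existing instance.
        context.Services.Subscribe(typeof(IStatusService),
            (serviceType, serviceInstance) =>
                Console.WriteLine("late subscriber got a {0}",
                    serviceInstance.GetType().Name));
    }
}
```

      This ordering independence is exactly why the publish/subscribe model is useful when the host controls when (or whether) services come online.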

      Why is this Useful?

      Generally we find this useful for a few reasons:

      • Services may not be available at designer initialization, or the order in which they are created may not be fixed (it is ultimately up to the host to determine this). 
      • Services may be provided by the host.  It is possible your activity designer may be used in a host that does not provide that service.  You may want to be flexible in handling that.
      • Services can be injected at any time.  A publish – subscribe model lets us have a little more flexibility to react to new services as they are added.  You could imagine a spell checking service that a host only provides on the first time someone hits “spell check.”  When this service comes online, then our designers can decide to consume this.
      • Flipping things around, you may want to use a service to call out to a host, and you would like the host to subscribe for when a certain activity designer will publish a service.

      Now let’s talk about Items:


      Items do not have a publish method, per se, but they have the SetValue method, which basically publishes an instance to the context.  The semantics of SetValue are that it will first attempt to store the new value.  Provided that succeeds, we then call OnItemChanged on the ContextItem itself.  This basically notifies the object itself (giving it a chance to react, clean up, or throw if something is really wrong).  If this throws, the old value is preserved.  If this succeeds, we then notify anyone who has subscribed to the changes. 

      GetValue allows me to retrieve the ContextItem.  There are two GetValues: one generic, the other non-generic with a type as its parameter.  It is important to note a point that is also in the docs: if there is no item present when this is called, the default value will be instantiated and returned. 

      Provided items are written using SetValue, all of the subscribers will be subsequently notified.  If I just do an arbitrary GetValue and then make a few changes without calling SetValue, by default nothing interesting is going to happen (that is, no subscribers will be notified; subsequent calls to GetValue will get the updated object, however).  Subscribe (and its generic counterpart) allows me to provide a SubscribeContextCallback which will be invoked whenever SetValue is called.   This functions basically the same way it does for services.
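      To make the SetValue/GetValue/Subscribe semantics concrete, here is a small sketch under the same System.Activities.Presentation assumptions; ZoomLevelItem is a hypothetical context item invented for illustration:

```csharp
using System;
using System.Activities.Presentation;

// A hypothetical context item holding a designer zoom level.
public class ZoomLevelItem : ContextItem
{
    public double Zoom { get; set; }

    public override Type ItemType
    {
        get { return typeof(ZoomLevelItem); }
    }

    protected override void OnItemChanged(EditingContext context, ContextItem previousItem)
    {
        // Throwing here would preserve the previous value.
        base.OnItemChanged(context, previousItem);
    }
}

public class ItemsDemo
{
    public static void Main()
    {
        EditingContext context = new EditingContext();

        // Invoked whenever SetValue stores a new ZoomLevelItem.
        context.Items.Subscribe<ZoomLevelItem>(
            item => Console.WriteLine("zoom is now {0}", item.Zoom));

        context.Items.SetValue(new ZoomLevelItem { Zoom = 1.5 });  // notifies subscribers

        // Mutating the retrieved item without SetValue raises no notification.
        ZoomLevelItem current = context.Items.GetValue<ZoomLevelItem>();
        current.Zoom = 2.0;
    }
}
```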

      An interesting pattern for Items that we use in a few places throughout the designer is to create an AttachedProperty on the ModelItem (similar to this post) which, in the implementation of the Getter and Setter, calls out to the editing context to get or set the value from a ContextItem.  This gives me a WPF-friendly binding surface (foo.Bar binding syntax) that we can wire up to be change-aware.  We do this for a number of our triggers within our style implementation for things like selection.  A future post note for me: I should go through all of the attached properties present on a ModelItem that you could use to bind to :-)


      This wraps up a tour of the Subscription / Notification engine present within the EditingContext. 

    • mwinkle.blog

      Types, Metatypes and Bears, Redux


      I made a quick post a few months back where I tried to talk about the way the designer works and lets us design types, as well as simply configure instances of types.

      There were a couple of key points that I wanted to make in that post:

      • The workflow designer can configure instances of activity graphs, and create entire types as well
      • Types are designed by editing an instance of a type that represents the type being designed, the metatype
      • There is some XAML trickery required to serialize and deserialize this
      • This same type of work is done to enable the DynamicActivity capability

      A few folks have noticed (and sent me mail), that things look a little different in Beta2.  While we are going to have a more thorough, “here’s everything that changed” doc, I want to go ahead and update at least some of the things that I’ve been talking about here.

      What’s New

      In reality, very little is new; we’ve primarily just moved stuff around.  One thing that you may remember is that the DesignTimeXamlReader was not public in beta1, and if you are looking around now, you may not find it.  We have made this functionality public, however.  Thus, see the “what’s changed" bit.

      What’s Changed

      We took a long look at things and realized we had a bunch of XAML stuff all over the place.  We felt it would be a good idea to try to consolidate that into one place in WF, so System.Activities.XamlIntegration.ActivityXamlServices becomes your one stop shop for most things Activity and XAML related.  Let’s take a quick look and see what’s in there:

      ActivityXamlServices Members


      Creates an instance of an activity tree described in XAML.

      The ActivityXamlServices type exposes the following members.

      Methods





      • Overloaded. Maps an x:Class activity tree to an ActivityBuilder or ActivityBuilder<TResult>.
      • Maps an ActivityBuilder or ActivityBuilder<TResult> from the specified writer to an x:Class activity tree.
      • Overloaded. Maps an x:Class activity tree to a DynamicActivity or DynamicActivity<TResult>.
      • Overloaded. Creates an instance of a declarative workflow.



      Load is used to generally take some XAML and return an Activity which you can then use to execute.  If Load encounters a XAML stream for <Activity x:Class, it will subsequently generate a DynamicActivity.  This functions basically the same way WorkflowXamlServices.Load() did in beta1. 

      You also see CreateBuilderReader and CreateBuilderWriter, which surface the DesignTimeXaml capabilities that we used in beta1.  These return an instance of a XamlReader/XamlWriter that handles the transformation between the metatype and the <Activity x:Class XAML.   The metatype has changed names from ActivitySchemaType to ActivityBuilder.

      The table below should help summarize the uses and changes between beta1 and beta2.  In this area, I don’t expect any changes between what you see now, and what you will see in RTM.




      • Metatype (type to build types): beta1 used ActivitySchemaType; beta2 uses ActivityBuilder.
      • Mechanism to load a DynamicActivity: beta1 used WorkflowXamlServices.Load(); beta2 uses ActivityXamlServices.Load().
      • Mechanism to load an ActivityBuilder: beta1 used WorkflowDesigner.Load() to get an ActivitySchemaType; beta2 uses the reader from CreateBuilderReader() passed into XamlServices.Load().
      • Mechanism to save an ActivityBuilder to XAML: beta1 created a new DesignTimeXamlWriter and passed it to XamlServices.Save(); beta2 uses the writer returned from CreateBuilderWriter() passed into XamlServices.Save().

      To explore this, use CreateBuilderReader() and XamlServices.Load() on a workflow that you’ve built in the designer and poke around a bit to see what’s going on.

      Here is some sample code that walks through this:

      ActivityBuilder ab1 = new ActivityBuilder();
      ab1.Name = "helloWorld.Foo";
      ab1.Properties.Add(new DynamicActivityProperty { Name = "input1", Type = typeof(InArgument<string>) });
      ab1.Properties.Add(new DynamicActivityProperty { Name = "input2", Type = typeof(InArgument<string>) });
      ab1.Properties.Add(new DynamicActivityProperty { Name = "output", Type = typeof(OutArgument<string>) });
      ab1.Implementation = new Sequence
      {
          Activities =
          {
              new WriteLine { Text = "Getting Started " },
              new Delay { Duration = TimeSpan.FromSeconds(4) },
              new WriteLine { Text = new VisualBasicValue<string> { ExpressionText = "input1 + input2" } },
              new Assign<string>
              {
                  To = new VisualBasicReference<string> { ExpressionText = "output" },
                  Value = new VisualBasicValue<string> { ExpressionText = "input1 + input2 + \"that's it folks\"" }
              }
          }
      };

      // serialize the ActivityBuilder to x:Class XAML
      StringBuilder sb = new StringBuilder();
      StringWriter tw = new StringWriter(sb);
      XamlWriter xw = ActivityXamlServices.CreateBuilderWriter(
          new XamlXmlWriter(tw, new XamlSchemaContext()));
      XamlServices.Save(xw, ab1);
      string serializedAB = sb.ToString();

      // load the XAML as a DynamicActivity and invoke it
      DynamicActivity da2 = ActivityXamlServices.Load(new StringReader(serializedAB)) as DynamicActivity;
      var result = WorkflowInvoker.Invoke(da2, new Dictionary<string, object> { { "input1", "hello" }, { "input2", "world" } });
      Console.WriteLine("result text is {0}", result["output"]);

      // round-trip the XAML back into an ActivityBuilder
      ActivityBuilder ab = XamlServices.Load(
          ActivityXamlServices.CreateBuilderReader(
              new XamlXmlReader(new StringReader(serializedAB)))) as ActivityBuilder;
      Console.WriteLine("there are {0} arguments in the activity builder", ab.Properties.Count);
      Console.WriteLine("Press enter to exit");
      Console.ReadLine();


      Good luck, and happy metatyping!

    • mwinkle.blog

      Custom WF Designer Sample

      As mentioned below, the sample is now available here.
    • mwinkle.blog

      Navigating the WF4 Beta 2 Samples


      Hot off the presses (and the download center) come the WF4 Beta 2 samples here.  The team has invested a lot of time into these samples and they provide a good way to get up to speed on the way a particular feature or group of features work together.

      Note, there are 2300 files to be unzipped, so hopefully there is a sample in here for everyone.

      At a high level, the directory structure works down from technology, to sample type, to some functional grouping of samples. 



      Within the “Sample Type” we have a few different categories we use.

      • Basic
        • These are “one feature” samples that illustrate how to use a given feature.  Oftentimes they are hosted in the most basic wrapper necessary to get the feature to a point where it can be shown. 
        • These are grouped within feature-level areas; a few examples from the samples are:
          • \WF\Basic
            • \BuiltInActivities – how to configure and use the activities that are in the box
            • \CustomActivities
              • \CodeBodied\ – writing activities in code, including async, composite, and leveraging ActivityAction’s
              • \Composite – writing composite activities
              • \CustomActivityDesigners – writing activity designers
            • \Designer – programming against the infrastructure, a tree view designer, and a XamlWriter which will remove view state.
            • \Persistence
            • \Tracking
      • Scenario
        • These are higher level samples that require pulling together a number of features in order to highlight how the features can be combined to enable an application of the technology that might be of interest.  In the WF bucket, you will see things like Compensation and Services, which pull together a number of individual features.
        • The ActivityLibrary folder is chock full of interesting sample activities that are useful for seeing how to write activities, as well as code that might be useful in your application.  Some of these are items which we aren’t shipping in the framework but may in the future.   Many of these also include interesting designers as well.  Some of the interesting sample activities in here:
      • Application
        • These samples are used to show how to pull everything together within the context of an application.  For instance, the WorkflowSimulator is described this way:
          • This sample demonstrates how to write a workflow simulator that displays the running workflow graphically. The application executes a simple flowchart workflow (defined in Workflow.xaml) and re-hosts the workflow designer to display the currently executing workflow. As the workflow is executed, the currently executing activity is shown with a yellow outline and debug arrow. In addition tracking records generated by the workflow are also displayed in the application window. For more information about workflow tracking, see Workflow Tracking and Tracing. For more information about re-hosting the workflow designer, see Rehosting the Workflow Designer.

            The workflow simulator works by keeping two dictionaries. One contains a mapping between the currently executing activity object and the XAML line number in which the activity is instantiated. The other contains a mapping between the activity instance ID and the activity object. When tracking records are emitted using a custom tracking profile, the application determines the instance ID of the currently executing activity and maps it back to the XAML file that instantiated it. The re-hosted workflow designer is then instructed to highlight the activity on the designer surface and use the same method as the workflow debugger, specifically drawing a yellow border around the activity and displaying a yellow arrow along the left side of the designer.

      • Extensibility
        • This is a section inside the WCF samples that focuses on the various mechanisms and levels of extensibility


      I will be blogging more on some of the interesting (to me) individual samples.  What do you think?  Are there samples you’d like to see?  How are you using these, is there anything we can do to make these more useful?

    • mwinkle.blog

      Workflows that don't start with a Receive


      A question recently came up on an internal list about how to start a workflow to do some work and then have it accept a message via a Receive activity.  This led to an interesting discussion that provides some insight into how the WorkflowServiceHost instantiates workflows in conjunction with the ContextChannel.

      Creating a Message Activated Workflow

      By default, the WorkflowServiceHost will create a workflow when the following two conditions are true:

      • The message received is headed for an operation that is associated with a Receive activity that has the CanCreateInstance property set to true
      • The message contains no context information

      It is interesting to note that you don't even need to use a binding element collection that contains a ContextBindingElement.  The ContextBindingElement is responsible for creating the ContextChannel.  The job of the ContextChannel is to do two things on the receive side:

      • Extract the context information and pass that along up the stack (hand it off into the service model)
      • On the creation, and only on the creation, of a new instance, return the context information to the caller in the header of the response.

      So, if we want to create workflows based on messages dropped into an MSMQ queue, we can do that by not adding the ContextBindingElement into a custom binding on top of the netMsmqBinding, and associating the operation with a Receive activity that has CanCreateInstance set to true. Note that any subsequent communication with the workflow will have to occur over a communication channel through which we can pass context.

      Creating a Non-Message Activated Workflow

      In the case that this post is about, we do not want to activate off an inbound message.  The way to do this doesn't require much additional work.  We first need to make sure none of our Receive activities has CanCreateInstance set to true.  This means that no incoming message can activate the workflow.  Our workflow will then do some work prior to executing the Receive activity and waiting for the next message.  Our workflow will look like this (pretty simple):


      When we want to start a workflow, we need to reach into the workflow service host and extract the workflow runtime and initiate the workflow:

      WorkflowServiceHost myWorkflowServiceHost = new WorkflowServiceHost(typeof(Workflow1), null);
      // do some work to set up the workflow service host
      // then, when it's time to start the workflow:
      WorkflowRuntime wr = myWorkflowServiceHost.Description.Behaviors.Find<WorkflowRuntimeBehavior>().WorkflowRuntime;
      WorkflowInstance wi = wr.CreateWorkflow(typeof(Workflow1));
      wi.Start();
      // need to send wi.InstanceId somewhere for others to communicate with it
      That last note is important.  In order for a client to eventually communicate with the workflow, the workflow instance ID will need to be relayed to that client. 
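      Once the client has that instance ID, it can attach it to the channel's context before calling.  Here is a sketch assuming the .NET 3.5 context exchange channel; IOrderService, its SubmitOrder operation, and the factory parameter are hypothetical names, while IContextManager and the well-known "instanceId" context key are the standard context machinery:

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Channels;

// Hypothetical contract matching a Receive activity's operation.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void SubmitOrder();
}

public class ContextClientSketch
{
    public static void CallRunningInstance(ChannelFactory<IOrderService> factory, Guid instanceId)
    {
        IOrderService proxy = factory.CreateChannel();

        // Attach the workflow's instance id to the channel context so the
        // WorkflowServiceHost routes the message to the existing instance.
        IContextManager contextManager =
            ((IClientChannel)proxy).GetProperty<IContextManager>();
        contextManager.SetContext(new Dictionary<string, string>
        {
            { "instanceId", instanceId.ToString() }
        });

        proxy.SubmitOrder();
        ((IClientChannel)proxy).Close();
    }
}
```

      The binding used by the factory would need to be one that flows context (for example, one of the context bindings), since that is what carries the instance ID header back to the service.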
    • mwinkle.blog

      wf.netfx3.com new rss feeds


      One thing that isn't so nice about the new http://wf.netfx3.com site is that the file listings do not roll up into one single, nice syndication feed.  I want the ability to aggregate all of the files into RSS feeds so I can stay on top of samples, activities, etc. To enable this, I had to create new blogs that aggregate the individual folder feeds, and then another one to aggregate those blogs.

      So, here we go:

      And, the rss feed that rules them all

      All Content From wf.netfx3.com


    • mwinkle.blog

      Recent WF Content Summary


      I've been having some fun playing around with Visual Studio 2008 and the .NET Framework 3.5, and wanted to summarize some of the content I've put up on channel9 and other places.


      • The Conversation Sample remixed -- if there is one sample in the SDK to help you understand what is going on with context passing and duplex messaging, this is the sample that helped me learn it.  I had this sample reworked a little bit so that you don't have 5 console windows open.
      • Pageflow sample 1, live hosted -- watch this as pageflow is hosted "live" in the cloud.  This lets you interact with a pageflow as well as dive into the code using some tools my team has built.
      • Pageflow sample 2, live hosted as above -- this is the sample that shows how we can leverage the navigator workflow type to be in multiple paths at the same time (a parallel state machine almost).


      Channel9 Videos


      In the upcoming months, we've got more samples and content coming out about these features.  If you've got questions, keep 'em coming.
