• mwinkle.blog

    Introducing the Pageflow Sample

    • 40 Comments

      Most people think of workflows as a tool to represent and automate back-end business processes. Back-end processes usually involve some user interaction, but their main purpose is not to drive the user experience or manage the UI. However, there is a growing class of application that leverages workflow to drive the user interaction and the overall experience of an interactive process. This type of technology is called page flow.

      Last year at TechEd, we showed off some bits we had been working on internally that were designed to make exactly that possible: the ability to model the user interaction of an application using workflow. This approach lets developers continue managing the complexity of their application in a structured and scalable manner. It turned out that the code we showed at TechEd wasn't going to end up in any of the product releases, so the dev team requested permission to release it as a sample of how one can implement a generic navigation framework using WF that can support multiple UI technologies (i.e. ASP.NET and WPF).  This year, I just finished giving a talk showing this off and announcing that it will be available today!

      Thanks go to Shelly Guo, the developer, and Israel Hilerio, the PM, who worked on this feature, and to Jon Flanders for providing packaging and quality control.

      Now for the good stuff, download the bits from here!

      Navigate to setup.exe and run the setup. This will copy the sample projects and their source code, as well as some new Visual Studio project templates.

      Now, let's open up a sample project. Navigate to the samples directory and open the ASPWorkflow sample, which shows off both an ASP.NET front end and a WPF controller (you can actually use the two together). Let's get to the good stuff right away and open up the workflow file.

      Wow… what's going on here? It kind of looks like a state machine, but not really. What we've done here is create a new base workflow type. Things like SequentialWorkflow and StateMachineWorkflow aren't the only ways to write workflows; they are just two common patterns of execution. A NavigatorWorkflow type has been created (you can inspect the source and the architecture document to see what it does), and a WorkflowDesigner has been created for it as well (again, this source is available as a guide for those of you who are creating your own workflow types).

      Each of the activities you see on the diagram above is an InteractionActivity, representing the interaction between the user (via the UI technology of their choosing) and the process. A nice model is to think of the InteractionActivity as mapping to a page within a UI. The Output property is the information that is sent to that page (a list of orders or addresses to display), and the Input property is the information that is received from the page when the user clicks "submit". The InteractionActivity is a composite activity, allowing one to place other activities within it to be executed when input is received. The interesting property of the InteractionActivity is the Transitions collection. By selecting this and opening its designer, we are presented with the following dialog:

      This allows us to specify n transitions from this InteractionActivity, or "page," to other InteractionActivities, and we can specify each transition via a WF activity condition. This way, we could forward orders greater than $1000 to a credit verification process, or orders containing fragile goods through a process to obtain insurance from a shipper. What's cool about this is that my page does not know about that process; it just says "GoForward" and my process defines what comes next. This de-couples the pages from the logic of the process.
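      For example, a transition condition can be expressed as a WF code condition along the lines of the sketch below (the CurrentOrder property and the $1000 threshold are hypothetical; a declarative rule condition would work just as well):

      private void IsHighValueOrder(object sender, ConditionalEventArgs e)
      {
          // route to the credit verification InteractionActivity when the order is large
          e.Result = this.CurrentOrder.Total > 1000;
      }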

      We then need to wire things up in config:

      <configSections>
        <section name="NavigationManagerSettings"
                 type="Microsoft.Samples.Workflow.UI.NavigationManagerConfigSection, Microsoft.Samples.Workflow.UI, Version=1.0.0.0, Culture=neutral, PublicKeyToken=40B940EB90393A19"/>
        <section name="AspNavigationSettings"
                 type="Microsoft.Samples.Workflow.UI.Asp.AspNavigationConfigSection, Microsoft.Samples.Workflow.UI, Version=1.0.0.0, Culture=neutral, PublicKeyToken=40B940EB90393A19"/>
      </configSections>
      
      

      <NavigationManagerSettings StartOnDemand="false">
        <Workflow mode="Compiled" value="ASPUIWorkflow.Workflow1, ASPUIWorkflow"/>
        <!--<Workflow mode="XOML" value="WebSite/XAMLWorkflow.xoml" rulesFile="WebSite/XAMLWorkflow.rules" />-->
        <Services>
          <add type="System.Workflow.Runtime.Hosting.DefaultWorkflowCommitWorkBatchService, System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          <add type="System.Workflow.Runtime.Hosting.SqlWorkflowPersistenceService, System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" ConnectionString="Initial Catalog=WorkflowStore;Data Source=localhost;Integrated Security=SSPI;" UnloadOnIdle="true"/>
        </Services>
      </NavigationManagerSettings>
      <AspNavigationSettings>
        <PageMappings>
          <add bookmark="Page1" location="/WebSite/Default.aspx"/>
          <add bookmark="Page2" location="/WebSite/Page2.aspx"/>
          <add bookmark="Page3" location="/WebSite/Page3.aspx"/>
          <add bookmark="Page4" location="/WebSite/Page4.aspx"/>
          <add bookmark="Page5" location="/WebSite/Page5.aspx"/>
          <add bookmark="LastPage" location="/WebSite/LastPage.aspx"/>
        </PageMappings>
        <ExceptionMappings>
          <add type="Microsoft.Samples.Workflow.UI.WorkflowNotFoundException" location="/WebSite/ErrorPage.aspx"/>
          <add type="Microsoft.Samples.Workflow.UI.WorkflowCanceledException" location="/WebSite/ErrorPage.aspx"/>
          <add type="System.ArgumentException" location="/WebSite/ErrorPage.aspx"/>
          <add type="System.Security.SecurityException" location="/WebSite/ErrorPage.aspx"/>
        </ExceptionMappings>
      </AspNavigationSettings>
      
      

       

      Finally, let's look inside an ASP.NET page and see what we need to do to interact with the process:

      AspNetUserInput.GoForward("Submit", userInfo, this.User);

      This code specifies the action and submits a userInfo object (containing various information gathered from the page) to the InteractionActivity (in this case, the Page2 InteractionActivity). If we look at what we've configured as the Input for this InteractionActivity, we see the following, which we can then refer to in the transition rules in order to make decisions about where to go next:
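      To put that call in context, a page's submit handler might look roughly like the sketch below (UserInfo and the text box names are hypothetical; only the GoForward call comes from the sample):

      protected void SubmitButton_Click(object sender, EventArgs e)
      {
          // gather the fields the workflow expects as the Input of this InteractionActivity
          UserInfo userInfo = new UserInfo
          {
              Name = nameTextBox.Text,
              Email = emailTextBox.Text
          };

          // tell the NavigatorWorkflow to move forward; the workflow decides which page comes next
          AspNetUserInput.GoForward("Submit", userInfo, this.User);
      }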

      There is plenty of other stuff we could talk about here (support for the back button, persistence, etc.), and I could continue to ramble on in another record-length blog post, but I will stop here for now. I will continue to blog about this, and I look forward to hearing any and all feedback about what you'd be interested in seeing. Moving forward, there aren't any formal plans around this, but if there is enough interest in the community, we could get it created as a project on CodePlex. If that sounds intriguing, either contact me through this blog or leave a comment so that I can gauge the interest in such a scenario.

      Go, grab the bits!  And, if you have feedback, please contact me.

  • mwinkle.blog

    WCF and WF in "Orcas"

    • 15 Comments

    The wheels of evangelism never stop rolling.  Just a few months ago I was blogging that .NET 3.0 was released.  I've been busy since then, and now I can talk about some of that.  Today, the March CTP of Visual Studio "Orcas" was released to the web.  You can get your fresh hot bits here.  Samples will be coming shortly. Thom has a high level summary here.

    UPDATE: Wednesday, 2/28/2007 @ 11pm.  The readme file is posted here; a few minor corrections have been made to the caveats below.

    More updates... corrections to another caveat (a post-build event is required to get the config to be read).

    A Couple of Minor Caveats

    Since this is a CTP, it's possible that sometimes the wrong bits end up in the right place at the wrong time. Here are a few things to be aware of (not intended to be a comprehensive list):

    • Declarative Rules in Workflows:  There is an issue right now where the .rules file does not get hooked into the build process correctly.
      • Solution: Use code conditions, or load declarative rules for policy activities using the RulesFromFile activity available at the community site
    • WF Project templates are set to target the wrong version: As a result, trying to add assemblies that are 3.0.0.0 or greater will not be allowed.
      • Solution: Right click the project, select properties, and change the targeted version of the framework to 3.0.0.0 or 3.5.0.0
    • A ServiceHost may not read config settings because the app config does not get copied to the bin (update: only on Server 2003):  You will get an exception that "no application endpoints can be found"
      • Solution: Add the following post-build event: copy "$(ProjectDir)\app.config" $(TargetName).config
      • Solution: For the time being, configure the WorkflowServiceHost in code (using AddServiceEndpoint() and referencing the WorkflowRuntime property to configure any services on the workflow runtime); see the sketch after this list
      • This also means that a number of the workflow-enabled services samples will not work out of the box.  Replace the config-based approach with the code-based approach and you will be fine.  I will try to post modified versions of these to the community site shortly.
    • WorkflowServiceHost exception on closing: You will get an exception that "Application image file could not be loaded... System.BadImageFormatException:  An attempt was made to load the program with an incorrect format"
      • Solution: Use the typical "These are not the exceptions you are looking for" jedi mind trick.  Catch the exception and move along in your application, as if there is nothing to see here.
    • Tools from the Windows SDK that you've come to know and love, like SvcConfigEditor and SvcTraceViewer are not available on the VPC. 
      • Solution: Copy these in from somewhere else and they will work fine. The SvcConfigEditor will even pick up the new bindings and behaviors to configure the services for some of the new functionality.
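    As referenced in the config caveat above, here is a rough sketch of configuring the WorkflowServiceHost in code instead of app.config (IMyContract, the address, and the context binding class name are placeholders and may differ slightly in this CTP):

    WorkflowServiceHost host = new WorkflowServiceHost(typeof(MyWorkflow));

    // add the endpoint in code rather than relying on the (uncopied) app.config
    host.AddServiceEndpoint(typeof(IMyContract),
                            new WSHttpContextBinding(),
                            "http://localhost:8080/MyWorkflow");

    // persistence, tracking, etc. can be wired up through the WorkflowRuntime property
    // host.WorkflowRuntime.AddService(new SqlWorkflowPersistenceService(connectionString));

    host.Open();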

    The CTP is not something that is designed for you to go into production with, it's designed to let you explore the technology. There is no go-live license associated with this, it's for you to learn more about the technology. Since most of these issues have some work around, this shouldn't prevent you from checking these things out (because they are some kind of neat).

    New Features In WF and WCF in "Orcas"

    Workflow Enabled Services

    We've been talking about this since we launched at PDC 2005.  There was a session at TechEd 2006 in the US and Beijing that mentioned bits and pieces of this.  One of the key focus areas is the unification of WCF and WF.  Not only have the product teams joined internally, the two technologies are very complementary.  So complementary that everyone usually asks "so how do I use WCF services here?" when I show a workflow demo.  That's fixed now! 

    Workflow enabled services allow two key things:

    • Easily consume WCF services inside of a workflow
    • Expose a workflow as a WCF service

    This is accomplished by the following additions built on top of v1:

    • Messaging Activities (Send and Receive)
      • With designer support to import or create contracts
    • WorkflowServiceHost, a derivation of ServiceHost
    • Behavior extensions that handle message routing and instantiation of workflows exposed via services.
    • Channel extensions for managing conversation context.

     The Send and Receive activities live inside of the workflow that we define.  The cool part of the Receive activity is that we have a contract designer, so you don't have to dive in and create an interface for the contract; you can just specify it right on the Receive activity, allowing a "workflow-first" approach to building services.

    Once we've built a workflow, we need a place to expose it as a service.  We use the WorkflowServiceHost, a subclass of ServiceHost, to host these workflow-enabled services.  The WorkflowServiceHost takes care of the nitty-gritty details of managing workflow instances, routing incoming messages to the appropriate workflow, and performing security checks as well.  This means that the code required to host a workflow as a WCF service is now reduced to four lines of code or so.  In the sample below, we are not setting the endpoint info in code due to the issue mentioned above.

       1:  WorkflowServiceHost wsh = new WorkflowServiceHost(typeof(MyWorkflow));
       2:  wsh.Open();
       3:  Console.WriteLine("Press <Enter> to Exit");
       4:  Console.ReadLine();
       5:  wsh.Close();

    To support some of the more sophisticated behavior, such as routing messages to a running workflow, we introduce a new channel extension responsible for managing context.  In the simple case, this context just contains the workflowId, but in a more complicated case, it can contain information similar to the correlation token in v1 that allows the message to be delivered to the right activity (think three Receive activities in parallel, all listening on the same operation).  Out of the box there are the wsHttpContextBinding and the netTcpContextBinding, which implicitly support the idea of maintaining this context token.  You can also roll your own binding and attach a Context element into the binding definition.

    The Send activity allows the consumption of a service, and relies on configuration to determine exactly how we will call that service.  If the service we are calling is another workflow, the Send activity and the Receive activity are aware of the context extensions and will take advantage of them.

    With the Send and Receive activities, it gets a lot easier to do workflow-to-workflow communication, as well as more complicated messaging patterns.

    Another nice feature of the work that was done to enable this is that we now have the ability to easily support durable services.  These are "normal" WCF services written in code that utilize an infrastructure similar to the workflow persistence store in order to provide durable storage of state between method calls.

    As you can imagine, I'll be blogging about this a lot more in the future.

    JSON / AJAX Support

    While there has been a lot of focus on the UI side of AJAX, there still remains the task of creating the sources for the UI to consume.  One can return POX (Plain Old XML) and then manipulate it in the JavaScript, but that can get messy.  JavaScript Object Notation (JSON) is a compact, text-based serialization of a JavaScript object.  This lets me do something like:

    var stuff = {"foo" : 78, "bar" : "Forty-two"};
    document.write("The meaning of life is " + stuff.bar);

    In WCF, we can now return JSON with a few switches of config.  The following config:

       1:  <service name="CustomerService">
       2:      <endpoint contract="ICustomers"
       3:        binding="webHttpBinding"
       4:        bindingConfiguration="jsonBinding"
       5:        address="" behaviorConfiguration="jsonBehavior" />
       6:  </service>
       7:   
       8:  <webHttpBinding>
       9:          <binding name="jsonBinding" messageEncoding="Json" />
      10:  </webHttpBinding>
      11:   
      12:  <behaviors>
      13:     <endpointBehaviors>
      14:          <behavior name ="jsonBehavior">
      15:            <webScriptEnable />
      16:          </behavior>
      17:      </endpointBehaviors>
      18:   </behaviors>

    will allow a function like this:

       1:  public Customer[] GetCustomers(SearchCriteria criteria)
       2:  {
       3:     // do some work here
       4:     return customerListing;
       5:  }

    to return JSON when called.  In JavaScript, I would then have an array of Customer objects to manipulate.  We can also serialize from JavaScript to JSON, so this provides a nice way to send parameters to a method.  So, in the above method, we can send in the complex SearchCriteria object from our JavaScript.  There is an extension to the behavior that creates a JavaScript proxy, so by referencing /js as the source of the script, you can get IntelliSense in the IDE, and we can call our services directly from our AJAX UI.

    We can also use the JSON support in other languages like Ruby to quickly call our service and manipulate the object that is returned.

    I think that's pretty cool.

    Syndication Support

    While we have the RSS Toolkit in V1, we wanted to make syndication part of the toolset out of the box.  This allows a developer to quickly return a feed from a service.  Think of using this as another way to expose your data for consumption.  We have introduced a SyndicationFeed object that is an abstraction of the idea of a feed that you program against.  We then leave it up to config to determine if that is an ATOM or RSS feed (and would it be WCF if we didn't give you a way to implement a custom encoding as well?).  This is handy if you just want to create a simple feed, but it also allows you to create a more complicated feed whose content is not just plain text.  For instance, the Digg feed has information about the submission, and the Flickr feed has info about the photos.  Your customer feed may want to have an extension that contains the customer info that you will allow your consumers to have access to.  The SyndicationFeed object allows you to create these extensions, and then the work of encoding it to the specific format is taken care of for you.  So, let's see some of that code (note, this is from the samples above):

       1:  public SyndicationFeed GetProcesses()
       2:  {
       3:      Process[] processes = Process.GetProcesses();
       4:   
       5:      //SyndicationFeed also has convenience constructors
       6:      //that take in common elements like Title and Content.
       7:      SyndicationFeed f = new SyndicationFeed();            
       8:   
       9:      //Create a title for the feed
      10:      f.Title = SyndicationContent.CreatePlaintextTextSyndicationContent("Currently running processes");
      11:      f.Links.Add(SyndicationLink.CreateSelfLink(OperationContext.Current.IncomingMessageHeaders.To));
      12:   
      13:      //Create a new RSS/Atom item for each running process
      14:      foreach (Process p in processes)
      15:      {
      16:          //SyndicationItem also has convenience constructors
      17:          //that take in common elements such as Title and Content
      18:          SyndicationItem i = new SyndicationItem();
      19:   
      20:          //Add an item title.
      21:          i.Title = SyndicationContent.CreatePlaintextTextSyndicationContent(p.ProcessName);
      22:   
      23:          //Add some HTML content in the summary
      24:          i.Summary = new TextSyndicationContent(String.Format("<b>{0}</b>", p.MainWindowTitle), TextSyndicationContentKind.Html);
      25:          
      26:          //Add some machine-readable XML in the item content.
      27:          i.Content = SyndicationContent.CreateXmlSyndicationContent(new ProcessData(p));
      28:   
      29:          f.Items.Add(i);
      30:      }
      31:   
      32:      return f;
      33:  }

    And the config associated with this would be:

     

       1:  <system.serviceModel>
       2:    <services>
       3:      <service name="ProcessInfo">
       4:         <endpoint address="rss"
       5:             behaviorConfiguration="rssBehavior" binding="webHttpBinding"
       6:             contract="HelloSyndication.IDiagnosticsService" />
       7:        <endpoint address="atom"
       8:             behaviorConfiguration="atomBehavior" binding="webHttpBinding"
       9:             contract="HelloSyndication.IDiagnosticsService" />    
      10:      </service>
      11:    </services>
      12:    <behaviors>
      13:      <endpointBehaviors>
      14:        <behavior name="rssBehavior">
      15:          <syndication version="Rss20"/>
      16:        </behavior>
      17:        <behavior name="atomBehavior">
      18:          <syndication version="Atom10"/>
      19:        </behavior>
      20:      </endpointBehaviors>
      21:    </behaviors>  
      22:  </system.serviceModel>

    This config will actually create an RSS and an ATOM endpoint.  The feed returned would have the process information embedded as follows (in this case, in ATOM):

       1:  <entry>
       2:    <id>fe1f1d2e-d676-417d-85bf-b7969dd07661</id>
       3:    <title type="text">devenv</title>
       4:     <summary type="html">&lt;b&gt;Conversations - Microsoft Visual Studio&lt;/b&gt;</summary>
       5:     <content type="text/xml">
       6:       <ProcessData xmlns="http://schemas.datacontract.org/2004/07/HelloSyndication" 
       7:              xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
       8:              <PeakVirtualMemorySize>552157184</PeakVirtualMemorySize>
       9:              <PeakWorkingSetSize>146124800</PeakWorkingSetSize>
      10:              <VirtualMemorySize>456237056</VirtualMemorySize>
      11:        </ProcessData>
      12:     </content>
      13:  </entry>

    We can also use the Syndication support to consume feeds!

       1:  SyndicationFeed feed = new SyndicationFeed();
       2:  feed.Load(new Uri("http://blogs.msdn.com/mwinkle/atom.xml"));
       3:  foreach (SyndicationItem item in feed.Items)
       4:  {
       5:     // process the feed here
       6:  }

    In the case where there have been extensions to the feed, we can access those as the raw XML or we can attempt to deserialize into an object.  This is accomplished in the reverse of the above:

       1:  foreach (SyndicationItem i in feed.Items)
       2:  {
       3:      XmlSyndicationContent content = i.Content as XmlSyndicationContent;
       4:      ProcessData pd = content.ReadContent<ProcessData>();
       5:   
       6:      Console.WriteLine(i.Title.Text);
       7:      Console.WriteLine(pd.ToString());
       8:  }
     

    HTTP Programming Support

    In order to enable both of the above scenarios (Syndication and JSON), there has been work done to create the webHttpBinding to make it easier to do POX and HTTP programming.

    Here's an example of how we can influence this behavior and return POX.  First the config:

       1:  <service name="GetCustomers">
       2:    <endpoint address="pox" 
       3:              binding="webHttpBinding" 
       4:              contract="Sample.IGetCustomers" />
       5:  </service>

    Now the code for the interface:

       1:  public interface IRestaurantOrdersService
       2:  {
       3:     [OperationContract(Name="GetOrdersByRestaurant")]
       4:     [HttpTransferContract(Method = "GET")]  
       5:     CustomerOrder[] GetOrdersByRestaurant();  
       6:  }

    The implementation of this interface does the work to get the CustomerOrder objects (a data contract defined elsewhere), and the returned XML is the data contract serialization of CustomerOrder (omitted here for brevity).  With parameters this gets more interesting, as these are things we can pass in via the query string or via a POST, allowing arbitrary clients that can form URLs and receive XML to consume our services.
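    For instance, a parameterized version of the contract above might be sketched like this (restaurantId is a hypothetical parameter, and exactly how the CTP maps it onto the query string is not shown here):

    public interface IRestaurantOrdersService
    {
        [OperationContract(Name = "GetOrdersByRestaurant")]
        [HttpTransferContract(Method = "GET")]
        // e.g. GET .../orders?restaurantId=42, or the same value POSTed in the body
        CustomerOrder[] GetOrdersByRestaurant(string restaurantId);
    }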

    Partial Trust for WCF

    I'm not fully up to date on all of the details here, but there has been some work done to enable some of the WCF functionality to operate in a partial trust environment.  This is especially important for situations where you want to use WCF to expose a service in a hosted situation (like creating a service that generates an rss feed off of some of your data).  I'll follow up with more details on this one later.

    WCF Tooling

    You now get a WCF project template that also includes a self-hosting option (similar to the magic Visual Studio ASP.NET hosting).  This means that you can create a WCF project, hit F5, and have your service available.  This is another area where I will follow up later on.

    Wrap Up

    So, what now? 

    • Grab the bits
    • Explore the new features
    • Give us feedback (through my blog or a channel9 wiki I am putting together now)! What works, what doesn't, what do you like, what don't you like, etc.
    • Look forward to more posts, c9 videos and screencasts on the features in Orcas.
  • mwinkle.blog

    Swiss Cheese and WF4, or, An Introduction to ActivityAction

    • 8 Comments

    swiss cheese

    One common scenario often requested by customers of WF 3 was the ability to have templated or "grey box" activities, or "activities with holes" in them (hence the Swiss cheese photo above).  In WF4 we've done this with something we call ActivityAction.

    Motivation

    First I’d like to do a little bit more to motivate the scenario. 

    Consider an activity that you have created for your ERP system called CheckInventory.  You’ve gone ahead and encapsulated all of the logic of your inventory system, maybe you have some different paths of logic, maybe you have interactions with some third party systems, but you want your customers to use this activity in their workflows when they need to get the level of inventory for an item. 

    Consider more generally an activity where you have a bunch of work you want to get done, but at various, and specific, points throughout that work, you want to allow the consumer of that activity to receive a callback and provide their own logic to handle that.  The mental model here is one of delegates.

    Finally, consider providing the ability for a user to specify the work that they want to have happen, but also make sure that you can strongly type the data that is passed to it.  In the first case above, you want to make sure that the Item in question is passed to the action that the consumer supplies. 

    In WF3, we had a lot of folks who wanted to be able to do something like this. It's a very natural extension of wanting to model things as activities and compose them into higher-level activities.  We like being able to string together 10 items as a black box for reuse, but we really want the user to specify exactly what should happen between steps 7 and 8.

    A slide that I showed at PDC showed it this way (the Approval and Payment boxes represent the places I want a consumer to supply additional logic):

    image 

     

     

    Introducing ActivityAction

    Very early on in the release, we knew this was one of the problems that we needed to tackle. The mental model that we are most aligned with is that of a delegate/callback in C#.  If you think about a delegate, what are you doing? You are giving an object the implementation of some bit of logic that the object will subsequently call.  That's the same thing that's going on with an ActivityAction.  There are three important parts to an ActivityAction:

    • The Handler (this is the logic of the ActivityAction)
    • The Shape (this determines the data that will be passed to the handler)
    • The way that we invoke it from our activity

    Let's start with some simple code (this is from a demo that I showed in my PDC talk).  This is a Timer activity which allows us to time the execution of the contained activity and then uses an ActivityAction to act on the result.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Activities;
    using System.Diagnostics;
     
    namespace CustomActivities.ActivityTypes
    {
     
        public sealed class Timer : NativeActivity<TimeSpan>
        {
            public Activity Body { get; set; }
            public Variable<Stopwatch> Stopwatch { get; set; }
            public ActivityAction<TimeSpan> OnCompletion { get; set; }
     
            public Timer()
            {
                Stopwatch = new Variable<Stopwatch>();
            }
     
            protected override void CacheMetadata(NativeActivityMetadata metadata)
            {
                metadata.AddImplementationVariable(Stopwatch);
                metadata.AddChild(Body);
                metadata.AddDelegate(OnCompletion);
            }
     
            protected override void Execute(NativeActivityContext context)
            {
                Stopwatch sw = new Stopwatch();
                Stopwatch.Set(context, sw);
                sw.Start();
                // schedule body and completion callback
                context.ScheduleActivity(Body, Completed);
     
            }
     
            private void Completed(NativeActivityContext context, ActivityInstance instance)
            {
                if (!context.IsCancellationRequested)
                {
                    Stopwatch sw = Stopwatch.Get(context);
                    sw.Stop();
                    Result.Set(context, sw.Elapsed);
                    if (OnCompletion != null)
                    {
                        context.ScheduleAction<TimeSpan>(OnCompletion, Result.Get(context));
                    }
                }
            }
     
            protected override void Cancel(NativeActivityContext context)
            {
                context.CancelChildren();
                if (OnCompletion != null)
                {
                    context.ScheduleAction<TimeSpan>(OnCompletion, TimeSpan.MinValue);
                }
            }
        }
    }

    A few things to note about this code sample:

    • The declaration of an ActivityAction<TimeSpan> as a member of the activity.  You’ll note we use the OnXXX convention often for activity actions.
    • The usage of ActivityAction<T> with one type argument.  The way to read this, or any of the 15 other variations, is that T is the type of the data that will be passed to the activity action's handler.
      • Think of ActivityAction<Foo> as corresponding to a void DoSomething(Foo argument1) method
    • The call to NativeActivityMetadata.AddDelegate() which lets the runtime know that it needs to worry about the delegate
    • The code in the Completed() method which checks to see if OnCompletion is set and then schedules it using ScheduleAction.  I want to call out that line of code.
    if (OnCompletion != null)
    {
        context.ScheduleAction<TimeSpan>(OnCompletion, Result.Get(context));
    }

    It is important to note that I use the second parameter (and the third through sixteenth, if that version is provided) in order to provide the data.  This way, the activity determines what data will be passed to the handler, allowing the activity to determine what data is visible where.  This is a much better approach than allowing an invoked child to access any and all data from the parent; it lets us be very specific about what data goes to the ActivityAction.  Also, you could make it so that OnCompletion must be provided, that is, the only way to use the activity is to supply an implementation.  If you have something like "ProcessPayment" you likely want that to be required.  You can use the CacheMetadata method to check and validate this.
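    For example, requiring the action could be sketched in CacheMetadata along these lines (AddValidationError is the assumed way to surface the error):

    protected override void CacheMetadata(NativeActivityMetadata metadata)
    {
        metadata.AddImplementationVariable(Stopwatch);
        metadata.AddChild(Body);
        metadata.AddDelegate(OnCompletion);

        // surface a validation error if the consumer has not supplied a handler
        if (OnCompletion == null || OnCompletion.Handler == null)
        {
            metadata.AddValidationError("Timer requires an OnCompletion action to be supplied.");
        }
    }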

    Now, let's look at the code required to consume this Timer activity:

    DelegateInArgument<TimeSpan> time = new DelegateInArgument<TimeSpan>();
    Activity a = new Timer
    {
        Body = new HttpGet { Url = "http://www.microsoft.com" },
        OnCompletion = new ActivityAction<TimeSpan>
        {
            Argument = time,
            Handler = new WriteLine { 
                Text = new InArgument<string>(
                    ctx => 
                        "Time input from timer " + time.Get(ctx).TotalMilliseconds)
            } 
    
        }
    };

    There are a couple of interesting things here:

    • Creation of DelegateInArgument<TimeSpan> : This is used to represent the data passed by the ActivityAction to the handler
    • Creation of the ActivityAction to pass in.  You’ll note that the Argument property is set to the DelegateInArgument, which we can then use in the handler
    • The Handler is the "implementation" that we want to invoke.  Here it's pretty simple: it's a WriteLine, and when we construct its Text argument we construct it from a lambda that uses the passed-in context to resolve the DelegateInArgument when it executes.

    At runtime, when we get to that point in the execution of the Timer activity, the WriteLine that the hosting app provided will be scheduled when ScheduleAction is called.  This means we will output the timing information that the Timer observed.  A different implementation could use an If activity to determine whether an SLA was met and, if not, send a nasty email to the WF author.  The possibilities are endless, and they open up scenarios for you to provide specific extension points for your activities.
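    That alternative handler might look something like the sketch below (the two second SLA and the WriteLine notification are placeholders for whatever enforcement you want):

    OnCompletion = new ActivityAction<TimeSpan>
    {
        Argument = time,
        Handler = new If
        {
            // true when the observed elapsed time exceeds the (hypothetical) two second SLA
            Condition = new InArgument<bool>(ctx => time.Get(ctx) > TimeSpan.FromSeconds(2)),
            Then = new WriteLine { Text = "SLA violated - time to send that nasty email" }
        }
    }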

    That wraps up a very brief tour of ActivityAction.  ActivityAction provides an easy way to create an empty place in an activity that the consumer can use to supply the logic that they want executed.  In the second part of this post, we'll dive into how to create a designer for one of these, how to represent this in XAML, and a few other interesting topics.

    It’s that time of year that I’ll be taking a little bit of time off for the holidays, so I will see y’all in 2010!

    Photo Credit

  • mwinkle.blog

    Introducing the WF4 Designer

    • 6 Comments

    // standard disclaimer applies, this is based on the released Beta 1 bits, things are subject to change, if you are reading this in 2012, things may be, look, smell, work differently.  That said, if it’s 2012 and you’re reading this, drop me a line and let me know how you found this!

     

    As you might have heard, Beta1 of VS is out the door and available to the public sometime today.  As you may know, we've done a bunch of work for WF4, and I wanted to give a quick, high-level overview of the designer.  Here's a good overview of the new WF bits all up.

    First, let's start with your existing WF projects.  What happens if I want to create a 3.5 workflow?  We're still shipping that designer; in fact, let's start there on our tour.  This shows off a feature of VS that's pretty cool: multi-targeting.

    Click New Project

    image

    Notice the “Framework Version” dropdown in the upper right hand corner.

    image

    This tells VS which version of the framework you would like the project you are creating to target.  This means you can still work on your existing projects in VS 2010 without upgrading your app to the new framework.  Let's pick something that's not 4.0, namely 3.5.  You'll note that the templates may have been updated a bit; select Workflow from the left hand tree view and see what shows up.

    image

    There isn't anything magical about what happens next: you will now see the 3.5 designer inside of VS2010.  You're able to build, create, edit and update your existing WF applications.

    image

    Let’s move on and switch over to a 4.0 workflow.

    Create a new project and select 4.0

    image

    Create a new WF Sequential Console application and name it “SampleProject”.  Click Ok.

    We'll do a little bit of work here, and you will shortly see the WF 4.0 designer.  It looks a little different from the 3.x days; we've taken this time to update the designer pretty substantially.  We've built it on top of WPF, which opens the door for us to do a lot of interesting things.  If you were at PDC and saw any Quadrant demos, you might think that these look similar. We haven't locked on the final look and feel yet, so expect to see some additional changes there, but submit your feedback early and often, we want to know what you think.

    image

    Let’s drop some activities into our sequence and see what’s there to be seen.

    image

    We've categorized the toolbox into functional groupings for the key activities.  We heard a lot of feedback that it was tough to know what to use when, so we wanted to provide a little more help with some richer default categories.  Add an Assign activity, a WriteLine activity and a Delay activity to the canvas by clicking and dragging over to the sequence designer.

    image

    You'll note that we've now got some icons on each activity indicating something is not correct.  This is the result of validation executing and returning details about what is wrong.  Think of these as the little red squiggles that show up when you spell something wrong.  You can hover over the icon to see what's wrong.

    image

    You can also see that errors will bubble up to their container, so hovering over sequence will tell you that there is a problem with the child activities.

    image

    What if I have a big workflow, and I want to see a more detailed listing of errors?  Open up the Error View and you will see the validation results are also displayed there.  You'll note there is some minor formatting weirdness; this is a bug that we fixed, but not in time for the Beta1 release.

    image

    Now, let's actually wire up some data to this workflow.  WF4 has done a lot of work to be much more crisp about the way we think about data within the execution environment of a workflow. We divide the world into two types of data: Arguments and Variables.  If you mentally map these to the way you write a method in code (parameters, and state internal to the method), you are on the right track.  Arguments determine the shape of an activity: what goes in and what goes out.  Variables allocate storage within the context of an activity's execution.  The neat thing about variables is that once the containing activity is done, we can get rid of them, as our workflow no longer needs them (note, we pass the important data in and out through the arguments).  To do this, we have two special designers on the canvas that contain information about the arguments and variables in your workflow.

    image

    First, let’s click on the Argument designer and pass in some data. 

    Arguments consist of a few important elements

    • Name
    • Type
    • Direction
    • Default Value (Optional)

    image

    Most of these are self-explanatory, with the one exception being the Direction.  You'll note that this has In, Out and Property.  When you are editing the arguments, you are actually editing the properties of the underlying type you are creating (I'll explain more about this in a future post).  A more appropriate name might be "Property Editor," but the vast majority of what you'll be creating with it is arguments.  Anyway, if you select In or Out, this basically wraps the type T in an InArgument (or OutArgument), so it becomes a property of type InArgument<T>.  We just provide a bit of shorthand so you don't always have to pick InArgument as the type.  The default value takes an expression, but in this case we won't be using it.
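    To make that concrete, an In argument of type TimeSpan named, say, TimeToWait surfaces on the generated workflow type as a property shaped roughly like this (a sketch of the shape, not the actual generated code):

    public InArgument<TimeSpan> TimeToWait { get; set; }

    You can see exactly this shape later in the XAML view, where TimeToWait shows up as an x:Property of type InArgument(s:TimeSpan).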

    Let's go ahead and add an argument of type TimeSpan named TimeToWait.  You'll need to select Browse for Types… and then search for TimeSpan.

    image

    Variables are similar, but slightly different; they have a few important elements:

    • Name
    • Type
    • Scope
    • Default Value (Optional)

     

    Remember earlier I mentioned that a variable is part of an activity; this is what Scope refers to.  Variables only show up within the scope of the selected activity, so if you don't see any, make sure to select the Sequence, and then you will be able to add a variable.  Let's add a new variable, named StringToPrint, of type String.

    image

    Now let's do something with these in the workflow.  One thing I'm particularly happy with on the designer side of things is that we've made it easier for people to build activity designers.  There are lots of times where you have activities with just a few key properties that need to be set, and you'd like to be able to see those "at a glance."  The Assign designer is like that.

    image

    Now, let's dig into expressions.  One big piece of feedback from 3.0 was that people really wanted richer binding experiences.  You see this as well with WPF data binding.  We've taken it to the next level and allow any expression to be expressed as a set of activities.  What this means is that we do have to "compile" textual expressions into a tree of activities, and this is one of the reasons we use VB to build expressions.  In the fullness of time, other languages will come on board. But how do we use it? Let's see.  Click on the "To" text box on the Assign activity.  You will see a brief change of the text box, and then you will be in a VB expression editor, or what we've come to refer to as "The Expression Text Box" or ETB.  Start typing S, and already you will see IntelliSense begin to scope down the choices.  This picks up all of the variables and arguments in scope.

    image

    On the right side (the Value), rather than just using one of the passed-in arguments directly, we'll show off a richer expression.  Now, the space on the right side of the designer is kind of tight for something lengthy, so go to the property grid and click on the "…" button for the Value property.

    image

    String.Format("Someone wants to wait {0} seconds", TimeToWait.TotalSeconds)

    This just touches the surface of what is possible with expressions in WF4; we can build much richer expressions (3.x expressions are similar to WPF data binding in that they are really just an "object" + "target" structure).

    Not everything makes it to the canvas of the designer surface, and for that, we have the property grid.  If you've used the WPF designer in VS2008, this should look pretty familiar to you.  Select the Delay activity, and use the property grid to set the Duration property to the argument you created above.  This experience is similar, with the ETB embedded into the property grid for arguments.

    Finally, repeat with the WriteLine and bind to the StringToPrint variable.

    Navigating the Workflow

    There are two different things to help you navigate the workflow: the breadcrumb bar at the top and the overview map (which appears as the "Mini Map" in the beta).  Let's look at the overview map.  This gives you a view of the entire workflow and the ability to quickly scrub and scroll across it.

     

    image

    Finally, across the top we display breadcrumbs, which are useful when you have a highly nested workflow.  Double click on one of the activities, and you should see the designer "drill into" that activity.  Now notice the breadcrumb bar: it displays where you have been, and by clicking you can navigate back up the hierarchy.  In Beta1, we have a pretty aggressive breadcrumb behavior, so you see "collapsed in place" as the default for many of our designers.  We're probably going to relax that a bit in upcoming milestones to reduce clicking and provide a better overview of the workflow.

    image

    Finally, there may be times when we don't want the designer view but would rather see the XAML.  To get there, just right-click on the file and choose "View Code".

    image

    This will currently ask you if you are sure that you want to close the designer, and you will then see the XAML displayed in the XML editor.  For the workflow we just created, this is what it looks like:

     

       1:  <p:Activity mc:Ignorable=""
       2:       x:Class="WorkflowConsoleApplication1.Sequence1" 
       3:       xmlns="http://schemas.microsoft.com/netfx/2009/xaml/activities/design"
       4:       xmlns:__Sequence1="clr-namespace:WorkflowConsoleApplication1;" 
       5:       xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
       6:       xmlns:p="http://schemas.microsoft.com/netfx/2009/xaml/activities"
       7:       xmlns:s="clr-namespace:System;assembly=mscorlib"
       8:       xmlns:sad="clr-namespace:System.Activities.Debugger;assembly=System.Activities" 
       9:       xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
      10:    <x:Members>
      11:      <x:Property Name="TimeToWait" Type="p:InArgument(s:TimeSpan)" />
      12:    </x:Members>
      13:    <__Sequence1:Sequence1.TimeToWait>
      14:      <p:InArgument x:TypeArguments="s:TimeSpan">[TimeSpan.FromSeconds(10)]</p:InArgument>
      15:    </__Sequence1:Sequence1.TimeToWait>
      16:    <p:Sequence>
      17:      <p:Sequence.Variables>
      18:        <p:Variable x:TypeArguments="x:String" Name="StringToPrint" />
      19:      </p:Sequence.Variables>
      20:      <p:Assign>
      21:        <p:Assign.To>
      22:          <p:OutArgument x:TypeArguments="x:String">
      23:              [StringToPrint]
      24:          </p:OutArgument>
      25:        </p:Assign.To>
      26:        <p:Assign.Value>
      27:          <p:InArgument x:TypeArguments="x:String">
      28:                    [String.Format("Someone wants to wait {0} seconds", TimeToWait.TotalSeconds)]
      29:           </p:InArgument>
      30:        </p:Assign.Value>
      31:      </p:Assign>
      32:      <p:Delay>[TimeToWait]</p:Delay>
      33:      <p:WriteLine>[StringToPrint]</p:WriteLine>
      34:    </p:Sequence>
      35:  </p:Activity>

     

    Executing the Workflow

    Inside the project you will see the Program.cs used to execute the workflow; let's take a look at that.

     

       1:  namespace WorkflowConsoleApplication1
       2:  {
       3:      using System;
       4:      using System.Linq;
       5:      using System.Threading;
       6:      using System.Activities;
       7:      using System.Activities.Statements;
       8:      using System.Collections.Generic;
       9:   
      10:      class Program
      11:      {
      12:          static void Main(string[] args)
      13:          {
      14:              AutoResetEvent syncEvent = new AutoResetEvent(false);
      15:   
      16:              WorkflowInstance myInstance =
      17:                  new WorkflowInstance(new Sequence1(),
      18:                      new Dictionary<string, object>
      19:                      {
      20:                          {"TimeToWait", TimeSpan.FromSeconds(3.5) }
      21:                      }
      22:   
      23:   
      24:   
      25:                      );
      26:              myInstance.OnCompleted = delegate(WorkflowCompletedEventArgs e) { syncEvent.Set(); };
      27:              myInstance.OnUnhandledException = delegate(WorkflowUnhandledExceptionEventArgs e)
      28:              {
      29:                  Console.WriteLine(e.UnhandledException.ToString());
      30:                  return UnhandledExceptionAction.Terminate;
      31:              };
      32:              myInstance.OnAborted = delegate(WorkflowAbortedEventArgs e)
      33:              {
      34:                  Console.WriteLine(e.Reason);
      35:                  syncEvent.Set();
      36:              };
      37:   
      38:              myInstance.Run();
      39:   
      40:              syncEvent.WaitOne();
      41:              Console.WriteLine("Press <Enter> to exit");
      42:              Console.ReadLine();
      43:   
      44:          }
      45:      }
      46:  }

     

    This is the standard Program.cs template with two modifications.  The first is passing data into the workflow, indicated by the dictionary we create to pass into the WorkflowInstance.  This should look familiar if you have used WF in the past.

     WorkflowInstance myInstance =
                    new WorkflowInstance(new Sequence1(),
                        new Dictionary<string, object>
                        {
                            {"TimeToWait", TimeSpan.FromSeconds(3.5) }
                        }
                     );

     

    The second is a break at the end to keep our console window open (lines 41 and 42).  Hitting F5 from our project results in the following output (as expected).

    image

    This concludes our brief tour through the new WF designer.  I’ll be talking a lot more in upcoming days about some of the more programmatic aspects of it and how it’s put together.

  • mwinkle.blog

    Introduction to WF Designer Rehosting (Part 2)

    • 3 Comments

    Standard beta disclaimer: this is written against the Beta1 APIs.  If this is 2014, the bits will look different.  When the bits update, I will make sure to have a new post that updates these (or points to SDK samples that do).

    In yesterday's post, we went over the core components of the designer.  Let's now take that and build an application that rehosts the designer, and then we'll circle back around and talk about what we did and what comes next.

    Start VS, Create a new project, and select a WPF project

    image

    Inside the VS project add references to the System.Activities.* assemblies.  For now, that list looks like

    • System.Activities.dll
    • System.Activities.Design.Base.dll
    • System.Activities.Design.dll
    • System.Activities.Core.Design.dll

    image

    You might think the list of design assemblies is excessive.  In subsequent milestones we'll probably collapse these into two design assemblies: one with the designer infrastructure and one with the activity designers.

    Create some layout in the WPF window to hold the various designer elements.  I usually do a three column grid for toolbox, property grid and designer canvas.

    The XAML for this looks roughly like this:

    <Window x:Class="BlogPostRehosting.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Window1" Height="664" Width="831">
        <Grid Name="grid1">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="1*"/>
                <ColumnDefinition Width="3*"/>
                <ColumnDefinition Width="2*"/>
            </Grid.ColumnDefinitions>
        </Grid>
    </Window>

    Now that we've got the layout down, let's get down to business.  First, let's just get an app that displays the workflow designer, and then we will add some other interesting features. We wanted to make it easy to get a canvas onto your host application, and to program against it.  The key type that we use is WorkflowDesigner; this encapsulates all of the functionality, and operating context, required.  Let's take a quick look at the type definition.

     

    • Context: Gets or sets an EditingContext object that is a collection of services shared between all elements contained in the designer and used to interact between the host and the designer. Services are published and requested through the EditingContext.
    • ContextMenu: Gets the context menu for this designer.
    • DebugManagerView: Provides a DebuggerService that is used for runtime debugging.
    • PropertyGridFontAndColorData: Sets the property grid font and color data.
    • PropertyInspectorView: Returns a UI element that allows the user to view and edit properties of the workflow.
    • Text: Gets or sets the XAML string representation of the workflow.
    • View: Returns a UI element that allows the user to view and edit the workflow visually.

     

    The EditingContext is where we will spend more time in the future; for now the View is probably what's most interesting, as this is the primary designer canvas.  There are also some useful methods to load and persist the workflow.
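    As a rough sketch of what load and persist might look like in a host (the file-based Load overload and Flush are assumptions about the Beta 1 surface; the path is a placeholder):

    WorkflowDesigner wd = new WorkflowDesigner();
    wd.Load(@"c:\demo\Workflow1.xaml");       // load an existing workflow definition

    // ... the user edits the workflow in the rehosted canvas ...

    wd.Flush();                               // push the designer state into the Text property
    System.IO.File.WriteAllText(@"c:\demo\Workflow1.xaml", wd.Text);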

    Let’s start off real simple, and write some code that will display a basic sequence, and we’ll get more sophisticated as we go along.

       1:  using System.Windows;
       2:  using System.Windows.Controls;
       3:  using System.Activities.Design;
       4:  using System.Activities.Core.Design;
       5:  using System.Activities.Statements;
       6:   
       7:  namespace BlogPostRehosting
       8:  {
       9:      /// <summary>
      10:      /// Interaction logic for Window1.xaml
      11:      /// </summary>
      12:      public partial class Window1 : Window
      13:      {
      14:          public Window1()
      15:          {
      16:              InitializeComponent();
      17:              LoadWorkflowDesigner();
      18:          }
      19:   
      20:          private void LoadWorkflowDesigner()
      21:          {
      22:              WorkflowDesigner wd = new WorkflowDesigner();
      23:              (new DesignerMetadata()).Register();
      24:              wd.Load(new Sequence 
      25:                              { 
      26:                                  Activities = 
      27:                                  {
      28:                                      new Persist(), 
      29:                                      new WriteLine()
      30:                                  }
      31:                              });
      32:              Grid.SetColumn(wd.View, 1);
      33:              grid1.Children.Add(wd.View);
      34:          }
      35:      }
      36:  }

    Let’s walk through this line by line:

    • Line 22, construct the workflow designer
    • Line 23, Call Register on the DesignerMetadata class.  Note that this associates all of the out of the box activities with their out of the box designers.  This is optional as a host may wish to provide custom editors for all or some of the out of box activities, or may not be using the out of box activities.
    • Line 24-31, Call Load, passing in an instance of an object graph to display.  This gives the host some flexibility, as this instance could come from XAML, a database, JSON, user input, etc.  We simply create a basic sequence with two activities
    • Line 32, set the column for the view
    • Line 33, add the view to the display

    This gives us the following application:

    image

    Now, that was pretty simple, but we're also missing some key things, namely the property grid.  It's important to note, however, that this has all of the functionality of the designer (the variables designer, the overview map, etc.).  This will react just the same as if you were building the workflow in VS.

    Let’s add the property grid by adding the following two lines:

    Grid.SetColumn(wd.PropertyInspectorView, 2);
    grid1.Children.Add(wd.PropertyInspectorView);

    This will let us see the property grid (so things get a little more interesting).

    image

    So, we're able to display the workflow and interact with it, but we probably also want a constrained authoring experience (not just editing), and that comes in the form of the ToolboxControl.  For the sake of this blog post, we'll use it in XAML, but we can certainly code against it imperatively as well (see the sketch at the end of this post).

    <Window x:Class="BlogPostRehosting.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:sad="clr-namespace:System.Activities.Design;assembly=System.Activities.Design"
            xmlns:sys="clr-namespace:System;assembly=mscorlib"
            Title="Window1" Height="664" Width="831">
        <Window.Resources>
            <sys:String x:Key="AssemblyName">System.Activities, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35</sys:String>
        </Window.Resources>
        <Grid Name="grid1">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="1*"/>
                <ColumnDefinition Width="3*"/>
                <ColumnDefinition Width="2*"/>
            </Grid.ColumnDefinitions>
            <sad:ToolboxControl>
                <sad:ToolboxControl.Categories>
                    <sad:ToolboxCategoryItemsCollection CategoryName="Basic">
                        <sad:ToolboxItemWrapper AssemblyName="{StaticResource AssemblyName}" ToolName="System.Activities.Statements.Sequence"/>
                        <sad:ToolboxItemWrapper AssemblyName="{StaticResource AssemblyName}" ToolName="System.Activities.Statements.If"/>
                        <sad:ToolboxItemWrapper AssemblyName="{StaticResource AssemblyName}" ToolName="System.Activities.Statements.Parallel"/>
                        <sad:ToolboxItemWrapper AssemblyName="{StaticResource AssemblyName}" ToolName="System.Activities.Statements.WriteLine"/>
                        <sad:ToolboxItemWrapper AssemblyName="{StaticResource AssemblyName}" ToolName="System.Activities.Statements.Persist"/>
                    </sad:ToolboxCategoryItemsCollection>
                </sad:ToolboxControl.Categories>
            </sad:ToolboxControl>
        </Grid>
    </Window>

     

     

    This lets me specify the items I want to allow a user to drop.
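
    If you would rather build the same toolbox imperatively, a minimal sketch might look like the following. This assumes the Beta1 types used in the XAML above (ToolboxControl, ToolboxCategoryItemsCollection, ToolboxItemWrapper) expose the same members in code that they do as XAML attributes; treat it as a sketch rather than the definitive API.

    // Sketch only: the imperative equivalent of the toolbox XAML above. Assumes the
    // Beta1 types expose AssemblyName/ToolName/CategoryName as settable properties,
    // just as their XAML usage suggests.
    string activitiesAssembly = "System.Activities, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35";
    ToolboxControl toolbox = new ToolboxControl();
    ToolboxCategoryItemsCollection basic = new ToolboxCategoryItemsCollection { CategoryName = "Basic" };
    basic.Add(new ToolboxItemWrapper { AssemblyName = activitiesAssembly, ToolName = "System.Activities.Statements.Sequence" });
    basic.Add(new ToolboxItemWrapper { AssemblyName = activitiesAssembly, ToolName = "System.Activities.Statements.WriteLine" });
    toolbox.Categories.Add(basic);
    // place it in the first column, next to the designer view and property grid
    Grid.SetColumn(toolbox, 0);
    grid1.Children.Add(toolbox);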

    image

    The interesting thing to point out here is that we’ve built a full-featured, constrained editor (with things like copy/paste, undo/redo, etc.) with not much code.

    Next time, we’ll get into some more interesting bits: interacting with the item being edited, serializing to XAML, and exploring the editing context some more.  Let me know what you think!

  • mwinkle.blog

    Different Execution Patterns with WF (or, Going beyond Sequential and State Machine)

    • 7 Comments

    How do I do this?

    image

    A lot of times people get stuck with the impression that there are only two workflow models available: sequential and state machine. True, out of the box these are the two that are built in, but only because there is a set of common problems that map nicely into their execution semantics. As a result of these two being "in the box," I often see people doing a lot of very unnatural things in order to fit their problem into a certain model.

    The drawing above illustrates the flow of one such pattern. In this case, the customer wanted parallel execution with two branches ((1,3) and (2,5)). But they had an additional factor: 4 could execute, but only once both 1 and 2 had completed. 4 didn't need to wait for 3 and 5 to finish; since 3 and 5 could take a long period of time, 4 could at least start once 1 and 2 were completed. Before we dive into a simpler solution, let's look at some of the ways they tried to solve the problem in an attempt to use "what's in the box."

    The "While-polling" approach

    image

     

    The basic idea behind this approach is that we will use a parallel activity, and in the third branch we will place a while loop that loops on the condition of "if activity x is done" with a brief delay activity in there so that we are not busy polling. What's the downside to this approach:

    • The model is unnatural, and gets more awkward as the process grows in complexity (what do we do if activity 7 has a dependency on 4 and 5?)
    • The polling and waiting is just not an efficient way to solve the problem
    • This is a lot to ask of a developer in order to translate the representation she has in her head (the first diagram) into the model we are forcing her into.

    The SynchScope approach

    WF V1 does have the idea of synchronizing some execution by using the SynchronizationScope activity. The basic idea behind the SynchronizationScope is that one can specify a set of handles that the activity must be able to acquire before allowing its contained activities to execute. This lets us serialize access and execution. We could use this to mimic some of the behavior that the polling is doing above. We will use sigma(x, y, z) to indicate the synchronization scope and its handles (just because I don't get to use nearly as many Greek letters as I used to).

    image

    This should work, provided the synchronization scopes can obtain the handles in the "correct" or "intended" order. Again, the downside here is that this gets to be pretty complex: how do we model 4 having a dependency on 3 and 2? Well, our first synchronization scope now needs to extend to cover the whole left branch, and then it should work. For a simple case like the process map I drew at the beginning, this will probably work, but as the dependency map gets deeper, we are going to run into more problems trying to make this work.
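
    For reference, a scope like the ones in that sketch would be declared along these lines in WF3 (a rough sketch; SynchronizationScopeActivity and its SynchronizationHandles collection are the pieces described above, and the handle names are placeholders):

    // Sketch: a SynchronizationScope guarding "four" behind two handles that the
    // branches containing "one" and "two" would also acquire. Handle names are
    // placeholders, and the surrounding parallel structure is omitted.
    SynchronizationScopeActivity scopeForFour = new SynchronizationScopeActivity { Name = "scopeForFour" };
    scopeForFour.SynchronizationHandles = new List<string> { "handleOne", "handleTwo" };
    scopeForFour.Activities.Add(new CodeActivity { Name = "four" });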

    Creating a New Execution Pattern

    WF is intended to be a general purpose process engine, not just a sequential or state machine process engine. We can write our own process execution patterns by writing our own custom composite activity. Let's first describe what this needs to do:

    • Allow activities to be scheduled based on all of their dependent activities having executed.

      • We will start by writing a custom activity that has a property for expressing dependencies. A more robust implementation would use attached properties to push those down to any contained activity
    • Analyze the list of dependencies to determine which activities we can start executing (perhaps in parallel)
    • When any activity completes, check where we are at and if any dependencies are now satisfied. If they are, schedule those for execution.

    So, how do we go about doing this?

    Create a simple activity with a "Preconditions" property

    In the future, this will be any activity using an attached property, but I want to start small and focus on the execution logic. This one is a simple Activity with a "Preconditions" array of strings where the strings will be the names of the activities which must execute first:

    public partial class SampleWithPreconProperty: Activity
    {
        public SampleWithPreconProperty()
        {
            InitializeComponent();
        }
    
        private string[] preconditions = new string[0];
    
        public string[] Preconditions
        {
            get { return preconditions; }
            set { preconditions = value; }
        }
    }

    Create the PreConditionExecutor Activity

    Let's first look at the declaration and the members:

    [Designer(typeof(SequentialActivityDesigner),typeof(IDesigner))]
    public partial class PreConditionExecutor : CompositeActivity
    {
        // this is a dictionary of the executed activities to be indexed via
        // activity name
        private Dictionary<string, bool> executedActivities = new Dictionary<string, bool>();
    
        // this is a dictionary of activities marked to execute (so we don't 
        // try to schedule the same activity twice)
        private Dictionary<string, bool> markedToExecuteActivities = new Dictionary<string, bool>();
    
        // dependency maps
        // currently dictionary<string, list<string>> that can be read as 
        // activity x has dependencies in list a, b, c
        // A more sophisticated implementation will use a graph object to track
        // execution paths and be able to check for completeness, loops, all
        // that fun graph theory stuff I haven't thought about in a while
        private Dictionary<string, List<string>> dependencyMap = new Dictionary<string, List<string>>();
    

    We have three dictionaries, one to track which have completed, one for which ones are scheduled for execution, and one to map the dependencies. As noted in the comments, a directed graph would be a better representation of this so that we could do some more sophisticated analysis on it.

    Now, let's look at the Execute method, the one that does all the work.

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        if (0 == Activities.Count)
        {
            return ActivityExecutionStatus.Closed;
        }
        // loop through the activities and mark those that have no preconditions
        // as ok to execute and put those in the queue
        // also generate the graph which will determine future activity execution.
        foreach (Activity a in this.Activities)
        {
            // start with our basic activity
            SampleWithPreconProperty preconActivity = a as SampleWithPreconProperty;
            if (null == preconActivity)
            {
                throw new Exception("Not right now, we're not that fancy");
            }
            // construct the execution dictionary
            executedActivities.Add(a.Name, false);
            markedToExecuteActivities.Add(a.Name, false);
            List<string> actDependencies = new List<string>();
            if (null != preconActivity.Preconditions)
            {
                foreach (string s in preconActivity.Preconditions)
                {
                    actDependencies.Add(s);
                }
            }
            dependencyMap.Add(a.Name, actDependencies);
        }
        // now we have constructed our execution map and our dependency map
        // let's do something with those, like find those activities with
        // no dependencies and schedule those for execution.
        foreach (Activity a in this.Activities)
        {
            if (0 == dependencyMap[a.Name].Count)
            {
                Activity executeThis = this.Activities[a.Name];
                executeThis.Closed += currentlyExecutingActivity_Closed;
                markedToExecuteActivities[a.Name] = true;
                executionContext.ExecuteActivity(this.Activities[a.Name]);
                Console.WriteLine("Scheduled: {0}", a.Name);
            }
        }
        return ActivityExecutionStatus.Executing;
    }

    Basically, we first construct the execution tracking dictionaries, initializing those to false. We then create the dictionary of dependencies. We then loop through the activities and see if there are any that have no dependencies (there has to be at least one; this would be a good point to raise an exception if there isn't). We record in the dictionary that this one has been marked to execute and then we schedule it for execution (after hooking the Closed event so that we can do some more work later). So what happens when we close?

    void currentlyExecutingActivity_Closed(object sender, ActivityExecutionStatusChangedEventArgs e)
    {
        e.Activity.Closed -= this.currentlyExecutingActivity_Closed;
        if (this.ExecutionStatus == ActivityExecutionStatus.Canceling)
        {
            ActivityExecutionContext context = sender as ActivityExecutionContext;
            context.CloseActivity();
        }
        else if (this.ExecutionStatus == ActivityExecutionStatus.Executing)
        {
            // set the Executed Dictionary
            executedActivities[e.Activity.Name] = true;
            if (executedActivities.ContainsValue(false) /* keep going */)
            {
                // find all the activities that have this one as a precondition
                // and remove it, then cycle through any that now have 0
                // preconditions (and have not already executed or been marked
                // to execute).
                // who contains this precondition? 
                foreach (Activity a in this.Activities)
                {
                    // filter out those activities executed or executing
                    if (!(executedActivities[a.Name] || markedToExecuteActivities[a.Name]))
                    {
                        if (dependencyMap[a.Name].Contains(e.Activity.Name))
                        {
                            // we found it, remove it
                            dependencyMap[a.Name].Remove(e.Activity.Name);
                            // if we now have no dependencies, let's schedule it
                            if (0 == dependencyMap[a.Name].Count)
                            {
                                a.Closed += currentlyExecutingActivity_Closed;
                                ActivityExecutionContext context = sender as ActivityExecutionContext;
                                markedToExecuteActivities[a.Name] = true;
                                context.ExecuteActivity(a);
                                Console.WriteLine("Scheduled: {0}", a.Name);
                            }
                        }
                    }
                }
            }
            else //close activity
            {
                ActivityExecutionContext context = sender as ActivityExecutionContext;
                context.CloseActivity();
            }
        }
    }

     

    There are a few lines of code here, but it's pretty simple what's going on.

    • We remove the event handler
    • If we're still executing, mark the list of activities appropriately
    • Loop through and see if any of them have dependencies on the activity which just now completed being done
    • If they do, remove that entry from the dependency list and check if we can run it (if the count == 0). If we can, schedule it, otherwise keep looping.
    • If all the activities have completed (there is no false in the Executed list) then we will close out this activity.

    To actually use this activity, we place it in the workflow, place a number of the child activity types within it (again, with the attached property, you could put nearly any activity in there) and specify the activities that each one depends on. Since I haven't put a designer on it, I just use the SequenceDesigner. Here's what it looks like (this is like the graph I drew above, but it kicks off with the "one" activity executing first):

    image
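
    In code, composing the dependency graph from the first diagram by hand looks roughly like this (a sketch; there is no designer support yet, so the Preconditions are set explicitly):

    // Sketch: "three" waits on "one", "five" waits on "two", and "four" waits on
    // both "one" and "two", matching the first diagram in this post.
    PreConditionExecutor executor = new PreConditionExecutor();
    executor.Activities.Add(new SampleWithPreconProperty { Name = "one" });
    executor.Activities.Add(new SampleWithPreconProperty { Name = "two" });
    executor.Activities.Add(new SampleWithPreconProperty { Name = "three", Preconditions = new[] { "one" } });
    executor.Activities.Add(new SampleWithPreconProperty { Name = "five", Preconditions = new[] { "two" } });
    executor.Activities.Add(new SampleWithPreconProperty { Name = "four", Preconditions = new[] { "one", "two" } });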

     

    Where can we go from here

    • Validation, remember all that fun graph theory stuff checking for cycles and completeness and no gaps. Yeah, we should probably wire some of that stuff up here to make sure we can actually execute this thing
    • Analysis of this might be interesting, especially as the process gets more complex (identifying complex dependencies, places for optimization, capacity stuff)
    • A designer to actually do all of this automatically. Right now, it is left as an exercise to the developer to express the dependencies by way of the properties. It would be nice to have a designer that would figure that out for you, and also validate so you don't try to do the impossible.
    • Make this much more dynamic and pull in the preconditions and generate the context for the activities on the fly.  This would be cool if you had a standard "approval" activity that you wanted to have a more configurable execution pattern.  You could build the graph through the designer and then use that to drive the execution

    I'm going to hold off on posting the code, as I've got a few of these and I'd like to come up with some way to put them out there that would make it easy to get to them and use them. You should be able to pretty easily construct your own activity based on the code presented here.

  • mwinkle.blog

    Deep Dive into the WF4 Designer Data Model: ModelItem and ModelProperty

    • 0 Comments

    In this post I published an overview of the designer architecture.  I’ll copy the picture and the description of what I want to talk about today.

    image

    There are a few key components here

    • Source
      • In VS, this is xaml, but this represents the durable storage of the “thing” we are editing
    • Instance
      • This is the in memory representation of the item being edited.  In vs2010 for the WF designer, this is a hierarchy of System.Activities instances (an object tree)
    • Model Item Tree
      • This serves as an intermediary between the view and the instance, and is responsible for change notification and tracking things like undo state
    • Design View
      • This is the visual editing view that is surfaced to the user, the designers are written using WPF, which uses plain old data binding in order to wire up to the underlying model items (which represent the data being edited).
    • Metadata Store
      • This is a mapping between type and designers, an attribute table for lack of a better term.  This is how we know what designer to use for what type

    Motivating the ModelItem tree

    One observation that you could make when looking at the diagram above is “I should just be able to bind the view to the instance.”  This approach could work, but has a couple of implementation problems:

    • It is unrealistic to expect that instances of runtime types will all be built to fit perfectly into a design oriented world.  The most obvious problem is that our activity types aren’t going to inherit from DependencyObject, or implement INotifyPropertyChanged.  We can’t specify that all collections are ObservableCollection  [interesting trivia, ObservableCollection has moved from WindowsBase.dll into System.dll].  If we could, that would make life easy, but that’s not the case.  
      • Additionally, there are design time services that we need to think about supporting (such as Undo/Redo), and we need to make sure we can consistently handle this across many object types, including those that have not been written by us.
    • There may be a case for not actually operating on a live instance of the object graph.  Note, in VS2010, we do, but if we want to do work that would enable a design time XAML experience, we would need our instance to actually contain a lot of information about the source document. 
    • If we go directly from the view to the instance, we tightly couple the two together, and that makes doing more interesting things in the future tricky.  For instance, if we want to add refactoring support to update instances of objects, we need more than just the object graph to do that (the model item tree also keeps track of things like back pointers, so I know everybody that references the object). 

    These reasons cause us to think about an abstraction we can use to intermediate the implementation details of the instance and the view with a common layer.  If you have programmed at the WPF designer extensibility level, you will likely be familiar with the idea (and some of the types) here.

    The ModelItem tree

    The way that I think about the ModelItem/ModelProperty tree is that it forms a very thin proxy layer on top of the shape of the instance being edited. 

    Let’s start with a very simple type:

    public class Animal
    {
        // simple property
        public string Name { get; set; }
        // complex property 
        public Location Residence { get; set; } 
        // list 
        public List<Animal> CloseRelatives { get; set; }
        // dictionary
        public Dictionary<string, object> Features { get; set; } 
    }
    
    public class Location
    {
        public string StreetAddress { get; set; }
        public string City { get; set; }
        public string State { get; set; } 
    }

    Ignore for a moment that I just gave an animal Features, I’m a PM, it’s how we think :-)

    Now, let’s create some instances of that, and then actually create a ModelItem.

       1:  EditingContext ec = new EditingContext();
       2:  var companion1 = new Animal { Name = "Houdini the parakeet" };
       3:  var companion2 = new Animal { Name = "Groucho the fish" };
       4:  var animal = new Animal 
       5:                   {
       6:                       Name = "Sasha the pug",
       7:                       Residence = new Location 
       8:                       {
       9:                           StreetAddress = "123 Main Street",
      10:                           City = "AnyTown",
      11:                           State = "Washington"
      12:                       },
      13:                       Features = { 
      14:                          {"noise", "snort" },
      15:                          {"MeanTimeUntilNaps", TimeSpan.FromMinutes(15) }
      16:                       },
      17:                       CloseRelatives = { companion1, companion2 } 
      18:                   };
      19:  ModelTreeManager mtm = new ModelTreeManager(ec);  mtm.Load(animal);
      20:  ModelItem mi = mtm.Root;

    One thing to note here is that I am using ModelTreeManager and EditingContext outside the context (no pun intended) of the designer (see lines 1, 19, and 20 in the above snippet).  This isn’t the usual way we interact with these, but it’s for this sample so that we can focus just on the data structure itself. [as an aside, my brother did have a parakeet named Houdini]. 

    Let’s take a quick look at a visualization of what the data structure will look like.  Remember to think about the ModelItem tree as a thin proxy to the shape of the instance.

    Rather than spend an hour in powerpoint,  I’ll just include a sketch :-)

    IMAG0111

    On the left, you see the object itself.  For that object, there will be one ModelItem which “points” to that object.  You can call ModelItem.GetCurrentValue() and that will return the actual object. If you look at the ModelItem type, you will see some interesting properties which describe the object.

    •  ItemType is the type of the object pointed to (in this case, Animal)
    • Parent is the item in the tree which “owns” this model item (in the case of the root, this is null)
    • Source is the property that provided the value (in the case of the root, this is null)
    • Sources is a collection of all the backpointers to all of the properties which hold this value
      • Note, the distinction between Source and Sources and Parent and Parents is a topic worthy of another post
    • View is the DependencyObject that is the visual (in the case above, this is null as there is no view service hooked into the editing context)
    • Properties is the collection of properties of this object

    Properties is the part where things get interesting.  There is a collection of ModelProperty objects which correspond to the shape of the underlying objects.  For the example above, let’s break in the debugger and see what there is to see.

    image

    As we might expect, there are 4 properties, and you will see all sorts of properties that describe the properties.  A few interesting ones:

    • ComputedValue is a short circuit to return the underlying object that is pointed to by the property.  This is equivalent to ModelProperty.Value.GetCurrentValue(), but has an interesting side effect in that setting it is equivalent to SetValue().  
    • Name, not surprisingly, is the name of the property
    • PropertyType is the type of the property
    • Collection and Dictionary are interesting little shortcuts that we’ll learn about in a future blog post.
    • Value points to a model item that is in turn the pointer to the value
    • Parent points to the ModelItem which “owns” this property

    As Value points to another model item, you can see how this begins to wrap the object, and how this can be used to program against the data model.  Let’s look at a little bit of code. 

    root.Properties["Residence"].
                    Value.
                    Properties["StreetAddress"].
                    Value.GetCurrentValue()

    You might say “hey, that’s a little ugly”  and I have two bits of good news for you.

    1. ModelItem has a custom type descriptor, which means that in WPF XAML we can bind in the way we expect (that is, I can bind to ModelItem.Residence.StreetAddress, and the WPF binding mechanism will route that to mi.Properties[“Residence”].Value.Properties[“StreetAddress”]).  So, if you don’t use C# (and just use XAML), you don’t need to worry about this
    2. In RTM, we will likely add support for the dynamic keyword in C# that will let you have a dynamic object and then program against it just like you would from WPF XAML.  It’s pretty cool and I hope we get to it, if we do I will blog about it.

    Here’s a set of tests which show the different things we’ve talked about:

     

       1:  ModelItem root = mtm.Root;
       2:  Assert.IsTrue(root.GetCurrentValue() == animal, "GetCurrentValue() returns same object");
       3:  Assert.IsTrue(root.ItemType == typeof(Animal),"ItemType describes the item");
       4:  Assert.IsTrue(root.Parent == null,"root parent is null");
       5:  Assert.IsTrue(root.Source == null, "root source is null");
       6:  Assert.IsTrue(((List<Animal>)root.Properties["CloseRelatives"].ComputedValue)[0] == companion1, 
       7:             "ComputedValue of prop == actual object");
       8:  Assert.IsFalse(((List<Animal>)root.Properties["CloseRelatives"].ComputedValue)[0] == companion2, 
       9:             "ComputedValue of prop == actual object");
      10:  Assert.AreEqual(root.Properties["Residence"].
      11:      Value.
      12:      Properties["StreetAddress"].
      13:      Value.GetCurrentValue(), "123 Main Street", "get actual value back out");
      14:  Assert.AreEqual(root, root.Properties["Residence"].Parent, "property points to owner");
      15:  ModelItem location = root.Properties["Residence"].Value;
      16:  Assert.AreEqual(root.Properties["Residence"], location.Source, "sources point to the right place");

    Oh my, won’t this get explosively large?

    Good question, and the truth is that yes, this could get large if you were to spelunk the whole object graph.  The good news is that we’re incredibly lazy about loading this: we only fill in the properties collection on demand, and we won’t generate a ModelItem until it is requested.  When we combine this with the view virtualization work we have done, we will only ever load as much in the WF designer as you need.  This keeps the overhead minimal, and it does not represent a substantial memory cost.

    Why should I be careful about ModelItem.GetCurrentValue()

    One might be tempted to just say “Hey, I’m in C#, I’ll just call GetCurrentValue() and party on that.”  If you do that, you are entering dangerous waters where you can mess up the data model.  Since the underlying instance most likely doesn’t support any change notification mechanism, the model item tree will get out of sync with the underlying instance.  This will manifest itself as problems in the designer, because our serialization is based off the instance, not the ModelItem tree (note, that’s a vs2010 implementation detail that could change in a subsequent release).  The net result is that your view gets out of sync with your instance and serialization, and that’s generally considered problematic. 
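
    To make the distinction concrete, here is a small sketch against the Animal example above (SetValue is the tracked path that the ComputedValue discussion referred to earlier):

    // Risky: edits the instance directly, so the ModelItem tree gets no change
    // notification, no undo unit is recorded, and serialization can drift.
    ((Animal)root.GetCurrentValue()).Name = "Renamed behind the designer's back";

    // Better: go through the ModelProperty so the change is tracked by the tree.
    root.Properties["Name"].SetValue("Sasha the pug, esq.");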

    Summary

    Wow, that’s a longer post than I intended.  What have we covered:

    • The ModelItem tree and why we need it
    • The relationship between the underlying instance and ModelItem/ModelProperty
    • The shape and use of ModelItem / ModelProperty
    • Imperative programming against the tree

    What haven’t we covered yet

    • ModelItemCollection and ModelItemDictionary
    • How to use Sources and Parents to understand how the object sits in the graph and manipulation we can do for fun and profit

    I’ll get there.  In the meantime, if you have questions, let me know.

     

    **** Minor update on 10/29 to fix a bug in the code ****

    This is the way to use ModelTreeManager to generate ModelItems (Line 3 is the critical piece that was missing):

       1:  EditingContext ec = new EditingContext();
       2:  ModelTreeManager mtm = new ModelTreeManager(ec);
       3:  mtm.Load(new Sequence());
       4:  mtm.Root.Properties["Activities"].Collection.Add(new WriteLine());
  • mwinkle.blog

    Implementing the N of M Pattern in WF

    • 11 Comments

    The second in my series of alternate execution patterns (part 1)

    I recently worked with a customer who was implementing what I would call a "basic" human workflow system. It tracked approvals, rejections and managed things as they moved through a customizable process. It's easy to build workflows like this with an Approval activity, but they wanted to implement a pattern that's not directly supported out of the box. This pattern, which I have taken to calling "n of m", is also referred to as a "Canceling partial join for multiple instances" in the van der Aalst taxonomy.

    The basic description of this pattern is that we start m concurrent actions, and when some subset of those, n, complete, we can move on in our process and cancel the other concurrent actions. A common scenario is where I want to send a document for approval to 5 people, and when 3 of them have approved it, I can move on. This comes up frequently in human or task-based workflows. There are a couple of "business" questions which have to be answered as well; the implementation can support any set of answers to these:

    • What happens if an individual rejects? Does this stop the whole group from completing, or is it simply noted as a "no" vote?
    • How should delegation be handled? Some businesses want this to break out from the approval process at this point.

    The first approach the customer took was to use the ConditionedActivityGroup (CAG). The CAG is probably one of the most sophisticated out of the box activities that we ship in WF today, and it does give you a lot of control. It also gives you the ability to set the Until condition, which lets us specify the condition under which the CAG completes, at which point the other branches are cancelled (see Using ConditionedActivityGroup).

    ConditionedActivityGroup

    What are pros and cons of this approach:

    Pros

    • Out of the box activity, take it and go
    • Focus on approval activity
    • Possibly execute same branch multiple times

    Cons

    • Rules get complex (what happens if an individual rejection causes everything to stop?)
    • I need to repeat the same activity multiple times (especially in this case, it's an approval, we know what activity needs to be in the loop)
    • I can't control what else a developer may put in the CAG
    • We may want to execute on some set of approvers that we don't know at design time, imagine an application where one of the steps is defining the list of approvers for the next step. The CAG would make that kind of thing tricky.

    This led us to the decision to create a composite activity that would model this pattern of execution. Here are the steps we went through:

    Build the Approval activity

    The first thing we needed was the approval activity. Since we know this is going to eventually have some complex logic, we decided to take the basic approach of inheriting from SequenceActivity and composing our approval activity out of other activities (sending email, waiting on notification, handling timeouts, etc.). We quickly mocked up this activity to have an "Approver" property and a property for a timeout (which will go away in the real version, but is useful to put some delays into the process). We also added some code activities which Console.WriteLine'd some information out so we knew which one was executing. We can come back to this later and make it arbitrarily complex. We also added the cancel handler so that we can catch when this activity is canceled (and send out a disregard email, clean up the task list, etc.). Implementing ICompensatableActivity may also be a good idea so that we can play around with compensation if we want to (note that we will only compensate the closed activities, not the ones marked as canceled).

    Properties of the Approval Activity

    Placing the Approval Activity inside our NofM activity.

    What does the execution pattern look like?

    Now that we have our approval activity, we need to determine how this new activity is going to execute. This will be the guide that we use to implement the execution behavior. There are a couple of steps this will follow

    1. Schedule the approvals to occur in parallel, one per each approver submitted as one of the properties
    2. Wait for each of those to finish.
    3. When one finishes, check to see if the condition to move onward is satisfied (in this case, we increment a counter towards a "number of approvers required" variable).
    4. If we have not met the criteria, we keep on going. [we'll come back to this, as we'll need to figure out what to do if this is the last one and we still haven't met all of the criteria.]
    5. If we have met the criteria, we need to cancel the other running activities (they don't need to make a decision any more).

    Implement the easy part of this (scheduling the approvals to occur in parallel)

    I say this is the easy part as this is documented in a number of places, including Bob and Dharma's book. The only trickery occurring here is that we need to clone the template activity, that is the approval activity that we placed inside this activity before we started working on it. This is a topic discussed in Nate's now defunct blog.

        protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
        {
            // here's what we need to do.
            // 1.> Schedule these for execution, subscribe to when they are complete
            // 2.> When one completes, check if rejection, if so, barf
            // 3.> If approve, increment the approval counter and compare to above
            // 4.> If reroute, cancel the currently executing branches.
            ActivityExecutionContextManager aecm = executionContext.ExecutionContextManager;
            int i = 1;
            foreach (string approver in Approvers)
            {
                // this will start each one up.
                ActivityExecutionContext newContext = aecm.CreateExecutionContext(this.Activities[0]);
                GetApproval ga = newContext.Activity as GetApproval;
                ga.AssignedTo = approver;
                // this is just here so we can get some delay and "long running ness" to the
                // demo
                ga.MyProperty = new TimeSpan(0, 0, 3 * i);
                i++;
                // I'm interested in what happens when this guy closes.
                newContext.Activity.RegisterForStatusChange(Activity.ClosedEvent, this);
                newContext.ExecuteActivity(newContext.Activity);
            }
            return ActivityExecutionStatus.Executing;
        }
    

    Code in the execute method

    One thing that we're doing here is RegisterForStatusChange(). This is a friendly little method on Activity that allows me to register for a status change event (thus it is very well named), and I can register for different activity events, like Activity.ClosedEvent or Activity.CancelingEvent. On my NofM activity, I implement IActivityEventListener<ActivityExecutionStatusChangedEventArgs> (check out this article as to what that does and why). This causes me to implement OnEvent, which, since it comes from a generic interface, is strongly typed to accept the right type of event arguments. That's always a neat trick that makes me thankful for generics. That's going to lead us to the next part.
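
    Putting that together, the shell of the activity looks roughly like this (a sketch: the class name and the simple CLR properties are placeholders of my own, while the base class, interface, and field names match the code shown in this post):

    // Sketch of the activity shell. A real activity would likely back Approvers and
    // NumRequired with dependency properties; plain properties keep the sketch short.
    public class NofMApproval : CompositeActivity,
        IActivityEventListener<ActivityExecutionStatusChangedEventArgs>
    {
        private int numProcessed;          // branches that have closed
        private int numApproved;           // branches that came back approved
        private int numRequested;          // branches scheduled in Execute
        private bool approvalsCompleted;   // set once NumRequired approvals arrive

        public List<string> Approvers { get; set; }
        public int NumRequired { get; set; }

        protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
        {
            // body shown earlier: clone/schedule one GetApproval per approver and
            // register for Activity.ClosedEvent on each
            return ActivityExecutionStatus.Executing;
        }

        public void OnEvent(object sender, ActivityExecutionStatusChangedEventArgs e)
        {
            // body shown below: tally the result, cancel the remaining branches once
            // NumRequired approvals arrive, and close when every branch has closed
        }
    }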

    Implement what happens when one of the activities complete

    Now we're getting to the fun part: how we handle what happens when one of these approval activities returns. For the sake of keeping this somewhat brief, I'm going to work off the assumption that a rejection does not adversely affect the outcome; it is simply one less person who will vote for approval. We can certainly get more sophisticated, but that is not the point of this post! ActivityExecutionStatusChangedEventArgs has a very nice Activity property which returns the Activity that raised the event. This lets us find out what happened, what the decision was, who it was assigned to, etc. I'm going to start by putting the code for my method in here and then we'll walk through the different pieces and parts.

    public void OnEvent(object sender, ActivityExecutionStatusChangedEventArgs e)
    {
        ActivityExecutionContext context = sender as ActivityExecutionContext;
        // I don't need to listen any more
        e.Activity.UnregisterForStatusChange(Activity.ClosedEvent, this);
        numProcessed++;
        GetApproval ga = e.Activity as GetApproval;
        Console.WriteLine("Now we have gotten the result from {0} with result {1}", ga.AssignedTo, ga.Result.ToString());
        // here's where we can have some additional reasoning about why we quit
        // this is where all the "rejected cancels everyone" logic could live.
        if (ga.Result == TypeOfResult.Approved)
            numApproved++;
        // close out the activity
        context.ExecutionContextManager.CompleteExecutionContext(context.ExecutionContextManager.GetExecutionContext(e.Activity));
        if (!approvalsCompleted  && (numApproved >= NumRequired))
        {
            // we are done!, we only need to cancel all executing activities once
            approvalsCompleted = true;
            foreach (Activity a in this.GetDynamicActivities(this.EnabledActivities[0]))
                if (a.ExecutionStatus == ActivityExecutionStatus.Executing)
                    context.ExecutionContextManager.GetExecutionContext(a).CancelActivity(a);
        }
        // are we really done with everything? we have to check so that all of the 
        // canceling activities have finished cancelling
        if (numProcessed == numRequested)
            context.CloseActivity();  
    }
    

    Code from "OnEvent"

    The steps here, in English

    • UnregisterForStatusChange - we're done listening.
    • Increment the number of activities which have closed (this will be used to figure out if we are done)
    • Write out to the console for the sake of sanity
    • If we've been approved, increment the counter tracking how many approvals we have
    • Use the ExecutionContextManager to CompleteExecutionContext, this marks the execution context we created for the activity done.
    • Now let's check if we have the right number of approvals. If we do, mark a flag so we know we're done worrying about approves and rejects, and then proceed to cancel the other running activities with CancelActivity. CancelActivity schedules the cancellation; it is possible that this is not a synchronous thing (we can go idle waiting for a cancellation confirmation, for instance).
    • Then we check if all of the activities have closed. What will happen once the activities are scheduled for cancellation is that each one will eventually cancel and then close. This will cause the event to be raised and we step through the above pieces again. Once every activity is done, we finally close out the activity itself.

    Using it

    I placed the activity in a workflow, configured it with five approvers, and set it to require two approvals to move on. After it, I placed a code activity outputting "Ahhh, I'm done". I also put a Throw activity in there to raise an exception and cause compensation to occur, illustrating that only the two that completed are compensated for.
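
    In code form, that test setup looks roughly like this (a sketch using the placeholder class name from the shell above; the real workflow was assembled in the designer, and the Throw activity's fault configuration is omitted):

    // Sketch: five approvers, two required, then a CodeActivity and a Throw to
    // trigger compensation of the approvals that completed.
    SequentialWorkflowActivity workflow = new SequentialWorkflowActivity();

    NofMApproval approval = new NofMApproval
    {
        Approvers = new List<string> { "alice", "bob", "carol", "dan", "erin" },
        NumRequired = 2
    };
    approval.Activities.Add(new GetApproval());   // the template cloned once per approver
    workflow.Activities.Add(approval);

    CodeActivity done = new CodeActivity { Name = "done" };
    done.ExecuteCode += (sender, e) => Console.WriteLine("Ahhh, I'm done");
    workflow.Activities.Add(done);

    workflow.Activities.Add(new ThrowActivity { Name = "forceCompensation" });   // FaultType setup omitted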

    So, what did we do?

    • Create a custom composite activity with the execution logic to implement an n-of-m pattern
    • Saw how we can use IActivityEventListener in order to handle events raised by our child activities
    • Saw how to handle potentially long running cancellation logic, and how to cancel running activities in general.
    • Saw how compensation only occurs for activities that have completed successfully

    Extensions to this idea:

    • More sophisticated rules surrounding the approval (if a VP or two GM's say no, we must stop)
    • Non binary choices (interesting for scoring scenarios, if the average score gets above 95%, regardless of how many approvers remaining, we move on)
    • Create a designer to visualize this, especially when displayed in the workflow monitor to track it
    • Validation (don't let me specify 7 approvals required, and only 3 people)
  • mwinkle.blog

    Introduction to WF Designer Rehosting (Part 1)

    • 1 Comments

    Standard beta disclaimer: this is written against the Beta1 APIs.  If this is 2014, the bits will look different.  When the bits update, I will make sure to have a new post that updates these (or points to SDK samples that do)

     

    In WF3, we allowed our customers to rehost the WF designer inside their own applications.  There are many reasons to do this, usually around monitoring a workflow or allowing an end user to customize a constrained workflow (visual construction of an approval process, for instance). This article became the gold standard for writing rehosted applications.  As we were planning our work for the WF4 designer, this was certainly a scenario we considered, and one we wanted to make easier. 

    This post consists of a few parts

    • Designer architecture – introduce the pieces, parts and terms we’ll use throughout
    • Simple rehosting – getting it up and running
    • What to do next

     

    Designer Architecture

    image

    There are a few key components here

    • Source
      • In VS, this is xaml, but this represents the durable storage of the “thing” we are editing
    • Instance
      • This is the in memory representation of the item being edited.  In vs2010 for the WF designer, this is a hierarchy of System.Activities instances (an object tree)
    • Model Item Tree
      • This serves as an intermediary between the view and the instance, and is responsible for change notification and tracking things like undo state
    • Design View
      • This is the visual editing view that is surfaced to the user, the designers are written using WPF, which uses plain old data binding in order to wire up to the underlying model items (which represent the data being edited).
    • Metadata Store
      • This is a mapping between type and designers, an attribute table for lack of a better term.  This is how we know what designer to use for what type

    I’ll go into more detail about these pieces and parts in future posts as well, but this is the mental model of the things I will be talking about as we go through the designer.
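
    Before we get to the walkthrough, here is roughly how those pieces surface on WorkflowDesigner in a rehosted application (a sketch using the Beta1 members that show up elsewhere in these posts):

    // Sketch: where each architectural piece shows up when rehosting.
    WorkflowDesigner wd = new WorkflowDesigner();

    // Metadata store: associate the out of the box activities with their designers.
    new DesignerMetadata().Register();

    // Instance: the object graph being edited (it could equally be loaded from XAML,
    // which plays the role of the source).
    wd.Load(new Sequence { Activities = { new WriteLine() } });

    // Model item tree: the change-tracking layer between the instance and the view.
    ModelService modelService = wd.Context.Services.GetService<ModelService>();

    // Design view: the WPF visual you place in your host's UI.
    UIElement designView = wd.View;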

     

    Stay tuned, part 2 will come tomorrow!

  • mwinkle.blog

    Types, Metatypes and Bears, Oh my!

    • 1 Comments

    ***** UPDATE: Please see this post for how these features and functionality work in Beta2 *****

     

    Polar Bear

    image courtesy of flickr user chodhound

    This post comes about after a little conversation on the forums where I was talking about using the xaml stack to save and load objects.

    Here’s what I said:

    Bob  (it's a little late in Seattle, so I don't have a code editor handy, so there may be some minor errors below),
    If you want to serialize any object graph to xaml, simply look at the XamlServices.Save() api that's in System.Xaml.  I'm sure there are a few SDK samples around that as well.  It takes in a file name to output to, a text writer or a stream, so you get to pick your poison for destination.  Similarly, if you want to get an object graph deserialized from Xaml, you can just use XamlServices.Load() again, with overloads for files, streams, text, xml, and xaml readers.
    To see this API, just do something like

    Sequence s = new Sequence { Activities = { new Persist(), new Persist() } };
    XamlServices.Save(fileNameVar, s);

    If you want to read, basically do the reverse.
    Save and Load are convenient helper functions to operate on the whole of the doc; the Xaml stack surfaces much more programmability and has a nice node stream style API that lets you plug in while nodes are being read and written.
    Now, if you want to deserialize a Xaml file that contains an x:Class directive, you are going to need to do a bit more work (and it depends what you want to serialize to or from).  I'll try to blog about that in the next week or so.

    Now, I want to take a little bit of time to explain the last part.

    A convenient way to think about XAML is as a way to write down objects, really, instances of objects.  That said, I am not limited to writing down just instances; I can actually write down type definitions as well.  I do this using the x:Class attribute. 

    Consider the following XAML snippet

    <Activity x:Class="foo" ...

    This is roughly equivalent to the following C#

    public class foo : Activity
    {
    ...

    Now, “normally” what happens with XAML like this in .xaml files is that it is used in a VS project and it is set up so that there is a build task whose job is to generate a type from that XAML so that you can use that type subsequently in your application.  This works basically the same way for WPF as well as WF.

    However, if you think about trying to simply deserialize this, it’s a little confusing what this will actually deserialize to.  This is a type definition, so if you simply try to pass it to XamlServices.Load(), you will encounter an exception:

    Error System.Xaml.XamlObjectWriterException: No matching constructor found on type System.Activities.Activity. You can use the Arguments or FactoryMethod directives to construct this type. ---> System.MissingMethodException: No default constructor found for type System.Activities.Activity. You can use the Arguments or FactoryMethod directives to construct this type.
       at System.Xaml.Runtime.ClrObjectRuntime.DefaultCtorXamlActivator.EnsureConstructorDelegate(XamlType xamlType)
       at System.Xaml.Runtime.ClrObjectRuntime.DefaultCtorXamlActivator.CreateInstance(XamlType xamlType)
       at System.Xaml.Runtime.ClrObjectRuntime.CreateInstanceWithCtor(XamlType xamlType, Object[] args)
       at System.Xaml.Runtime.ClrObjectRuntime.CreateInstance(XamlType xamlType, Object[] args)
       --- End of inner exception stack trace ---

    So, if we want to deserialize that, we need to ask the question first: “Why do we want to deserialize it?”

    Deserialize to Execute

    If we want to simply use the activity we’ve created and have it execute, we have a special type, DynamicActivity.  DynamicActivity lets you execute the activity, and rather than creating a type for it, will allow you to pass in the arguments in the typical Dictionary<string,object> way that we are used to. 

    Imagine a xaml file, just sitting on disk somewhere that looks like this (a workflow that concats two strings): 

    <p:Activity mc:Ignorable=""
        x:Class="WorkflowConsoleApplication1.Sequence1" 
         xmlns="http://schemas.microsoft.com/netfx/2009/xaml/activities/design"
         xmlns:__Sequence1="clr-namespace:WorkflowConsoleApplication1;" 
         xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
         xmlns:p="http://schemas.microsoft.com/netfx/2009/xaml/activities" 
         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
      <x:Members>
        <x:Property Name="argument1" Type="p:InArgument(x:String)" />
        <x:Property Name="argument2" Type="p:InArgument(x:String)" />
      </x:Members>
      <p:Sequence>
        <p:WriteLine>[argument1 + " " + argument2]</p:WriteLine>
      </p:Sequence>
    </p:Activity>

    The code for this is the following:

    object o = WorkflowXamlServices.Load(File.OpenRead("Sequence1.xaml"));
    Console.WriteLine("Success {0}", o.GetType());
    DynamicActivity da = o as DynamicActivity;
    da.Properties.ToList().ForEach(ap => Console.WriteLine("argument: {0}", ap.Name));
    WorkflowInvoker.Invoke(da, new Dictionary<string, object> { { "argument1", "foo" }, { "argument2", "bar" } });

    Deserialize to “Design”

    At design time, we have an even more interesting problem.  Our designer is an instance editor, and, as such, it must always edit an instance of “something”.  In our case, we actually do some work in our design time xaml reader and writer to deserialize into a metatype, an instance of a type whose sole purpose in life is to describe the activity.  In Beta1, this is called ActivitySchemaType.  ActivitySchemaType simply models the type structure of an activity, complete with properties, etc.  If you want to construct an instance of an ActivitySchemaType in code, you can, and then you could use the DesignTimeXamlWriter in order to properly serialize out to an Activity x:class xaml file.  The following code works on an ActivitySchemaType in memory and then serializes it:

    XamlSchemaContext xsc = new XamlSchemaContext();
    ActivitySchemaType ast = new ActivitySchemaType()
    {
        Name = "foo",
        Members = 
         {
            new Property { Name="argument1", Type = xsc.GetXamlType(typeof(InArgument<string>)) }
    
         },
        Body = new Sequence { Activities = { new Persist(), new Persist() } }
    };
    StringBuilder sb = new StringBuilder();
    
    DesignTimeXamlWriter dtxw = new DesignTimeXamlWriter(
        new StringWriter(sb),
        xsc, "foo", "bar.rock");
    
    XamlServices.Save(dtxw, ast);
    Console.WriteLine("you wrote:");
    ConsoleColor old = Console.ForegroundColor;
    Console.ForegroundColor = ConsoleColor.DarkGray;
    Console.WriteLine(sb.ToString());
    Console.ForegroundColor = old;

    This is what the output looks like:

    <p:Activity mc:Ignorable=""
         x:Class="foo" 
         xmlns="http://schemas.microsoft.com/netfx/2009/xaml/activities/design" 
         xmlns:__foo="clr-namespace:bar.rock;" 
         xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
         xmlns:p="http://schemas.microsoft.com/netfx/2009/xaml/activities" 
         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
      <x:Members>
        <x:Property Name="argument1" Type="p:InArgument(x:String)" />
      </x:Members>
      <p:Sequence>
        <p:Persist />
        <p:Persist />
      </p:Sequence>
    </p:Activity>

    If you want to read in from this, you have to do a little bit of trickery with the designer as DesignTimeXamlReader is not a public type in beta1. 

    WorkflowDesigner wd = new WorkflowDesigner();
    wd.Load("Sequence1.xaml");
    object obj = wd.Context.Services.GetService<ModelService>().Root.GetCurrentValue();
    Console.WriteLine("object read type: {0}", obj.GetType());
    ActivitySchemaType schemaType = obj as ActivitySchemaType;
    Console.WriteLine("schema type name: {0}", schemaType.Name);
    schemaType.Members.ToList().ForEach(p => Console.WriteLine("argument: {0}, type: {1}", p.Name, p.Type));

    That wraps up our tour of the ways to read (and write) <Activity x:Class=...> xaml.

    Here’s the full text of the little program I put together to execute this, also make sure to drop a sequence1.xaml into your execution directory if you want this to not throw :-)


    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Xaml;
    using System.Activities.Statements;
    using System.Activities;
    using System.IO;
    using System.Activities.Design.Xaml;
    using System.Windows.Markup;
    using System.Xml;
    using System.Activities.Design;
    using System.Activities.Design.Services;
     
    namespace ConsoleApplication2
    {
        class Program
        {
            [STAThread()]
            static void Main(string[] args)
            {
                bool doStuff = true;
                Guid g = Guid.NewGuid();
                while (doStuff)
                {
                    Console.WriteLine();
                    Console.WriteLine("What do you want to do?");
                    Console.WriteLine("    read [x]:class xaml with XamlServices");
                    Console.WriteLine("    read x:class xaml with W[o]rkflowXamlServices");
                    Console.WriteLine("    [w]rite an ActivitySchemaType");
                    Console.WriteLine("    r[e]ad an ActivitySchemaType");
                    Console.WriteLine("    [q]uit");
                    Console.WriteLine();
                    char c = Console.ReadKey(true).KeyChar;
                    switch (c)
                    {
                        case 'w':
                            XamlSchemaContext xsc = new XamlSchemaContext();
                            ActivitySchemaType ast = new ActivitySchemaType()
                            {
                                Name = "foo",
                                Members = 
                                 {
                                    new Property { Name="argument1", Type = xsc.GetXamlType(typeof(InArgument<string>)) }
     
                                 },
                                Body = new Sequence { Activities = { new Persist(), new Persist() } }
                            };
                            StringBuilder sb = new StringBuilder();
                            
                            DesignTimeXamlWriter dtxw = new DesignTimeXamlWriter(
                                new StringWriter(sb),
                                xsc, "foo", "bar.rock");
                            
                            XamlServices.Save(dtxw, ast);
                            Console.WriteLine("you wrote:");
                            ConsoleColor old = Console.ForegroundColor;
                            Console.ForegroundColor = ConsoleColor.DarkGray;
                            Console.WriteLine(sb.ToString());
                            Console.ForegroundColor = old;
                            break;
     
                        case 'x':
                            try
                            {
                                object o = XamlServices.Load(File.OpenRead("Sequence1.xaml"));
                                Console.WriteLine("Success{0}", o.GetType());
                            }
                            catch (Exception ex)
                            {
                                Console.WriteLine("Error {0}", ex);
                            }
                            break;
                        case 'o':
                            try
                            {
                                object o = WorkflowXamlServices.Load(File.OpenRead("Sequence1.xaml"));
                                Console.WriteLine("Success {0}", o.GetType());
                                DynamicActivity da = o as DynamicActivity;
                                da.Properties.ToList().ForEach(ap => Console.WriteLine("argument: {0}", ap.Name));
                                WorkflowInvoker.Invoke(da, new Dictionary<string, object> { { "argument1", "foo" }, { "argument2", "bar" } });
                            }
                            catch (Exception ex)
                            {
                                Console.WriteLine("Error {0}", ex);
                            }
                            break;
                        case 'e':
                            WorkflowDesigner wd = new WorkflowDesigner();
                            wd.Load("Sequence1.xaml");
                            object obj = wd.Context.Services.GetService<ModelService>().Root.GetCurrentValue();
                            Console.WriteLine("object read type: {0}", obj.GetType());
                            ActivitySchemaType schemaType = obj as ActivitySchemaType;
                            Console.WriteLine("schema type name: {0}", schemaType.Name);
                            schemaType.Members.ToList().ForEach(p => Console.WriteLine("argument: {0}, type: {1}", p.Name, p.Type));
                            break;
                        case 'q':
                            doStuff = false;
                            break;
     
                        default:
                            break;
                    }
     
                }
                Console.WriteLine("All done");
                Console.ReadLine();
            }
     
            private static object CreateWfObject()
            {
                return new Sequence { Activities = { new Persist(), new Persist() } };
            }
        }
    }
  • mwinkle.blog

    WF4 Design Time AttachedPropertiesService and Attached Properties

    • 3 Comments

    I’ve been meaning to throw together some thoughts on attached properties and how they can be used within the designer.  Basically, you can think about attached properties as injecting some additional “stuff” onto an instance that you can use elsewhere in your code.

    Motivation

    In the designer, we want to be able to have behavior and view tied to interesting aspects of the data.  For instance, we would like to have a view updated when an item becomes selected.  In WPF, we bind the style based on the “isSelectionProperty.”  Now, our data model doesn’t have any idea of selection; it’s something we’d like the view layer to “inject” onto any model item so that a subsequent view can take advantage of it.  You can kind of view attached properties as nice syntactic sugar so you don’t have to keep a bunch of lookup lists around.  As things like WPF bind to the object very well, and not so much to a lookup list, this ends up being an interesting model.

    To be clear, you could write a number of value converters that take the item being bound, look up in a lookup list somewhere, and return the result that will be used.  The problem we found is that we were doing this in a bunch of places, and we really wanted to have clean binding statements inside our WPF XAML, rather than hiding a bunch of logic in the converters.

    How Does it Work

    First, some types.

    • AttachedPropertiesService: service in the editing context for managing attached properties
    • AttachedProperty: base attached property type (abstract)
    • AttachedProperty<T>: strongly typed attached property with interesting getter/setter programmability

    in diagram form:

    image

    One thing that might look a little funny to folks who have used attached properties in other contexts (WF3, WPF, XAML) is the "IsBrowsable" property.  The documentation is a little sparse right now, but what this does is determine how discoverable the property is.  If it is set to true, the attached property will show up in the Properties collection of the ModelItem to which it is attached.  That means it can show up in the property grid, and you can bind WPF statements directly to it as if it were a real property of the object.  Attached properties by themselves have no actual storage representation, so these exist as design-time-only constructs.

    Getter/ Setter?

    One other thing you will see on AttachedProperty&lt;T&gt; is the Getter and Setter properties.  These are of type Func&lt;ModelItem,T&gt; and Action&lt;ModelItem,T&gt; respectively.  They let you perform some computation whenever a get or set is invoked against the AttachedProperty.  Why is this interesting?  Well, let's say you'd like a computed value, such as "IsPrimarySelection" checking with the Selection context item to see whether an item is selected.  Or you could customize the setter to store the value somewhere more durable, or to update a few different values.  The other thing that happens is that, since all of these updates go through the ModelItem tree, any changes will be propagated to other listeners throughout the designer.
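    For example, a minimal sketch of such a computed attached property might look like this (assuming an EditingContext named ec is in scope, as in the sample later in this post):

    // Sketch: a computed attached property that asks the Selection context item
    // whether a given ModelItem is currently the primary selection.
    AttachedProperty<bool> isPrimarySelection = new AttachedProperty<bool>
    {
        Name = "IsPrimarySelection",
        IsBrowsable = true,
        Getter = (mi => ec.Items.GetValue<Selection>().PrimarySelection == mi)
    };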

    Looking at Some Code

    Here is a very small console based app that shows how you can program against the attached properties.  An interesting exercise for the reader would be to take this data structure, put it in a WPF app and experiment with some of the data binding.

    First, two types:

    public class Dog
    {
        public string Name { get; set; }
        public string Noise { get; set; }
        public int Age { get; set; }
       
    }
    
    public class Cat
    {
        public string Name { get; set; }
        public string Noise { get; set; }
        public int Age { get; set; }
    }

    Note that the two types share no common base type; that actually makes this a little more interesting, as we will see.

    Now, let's write some code.  First, let's initialize an EditingContext and a ModelTreeManager:

       1:       static void Main(string[] args)
       2:          {
       3:              EditingContext ec = new EditingContext();
       4:              ModelTreeManager mtm = new ModelTreeManager(ec);
       5:              mtm.Load(new object[] { new Dog { Name = "Sasha", Noise = "Snort", Age = 5 },
       6:                                      new Cat { Name="higgs", Noise="boom", Age=1 } });
       7:              dynamic root = mtm.Root;
       8:              dynamic dog = root[0];
       9:              dynamic cat = root[1];
      10:              ModelItem dogMi = root[0] as ModelItem;
      11:              ModelItem catMi = root[1] as ModelItem;

    Note: lines 7-9 will not work in Beta2 (they're a preview of coming attractions).  To get lines 10-11 working in Beta2, cast root to ModelItemCollection and then use the indexers to extract the values.
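    In Beta2, that workaround looks roughly like this (a sketch):

    // Beta2 sketch: treat the root as a ModelItemCollection and index into it
    ModelItemCollection rootCollection = (ModelItemCollection)mtm.Root;
    ModelItem dogMi = rootCollection[0];
    ModelItem catMi = rootCollection[1];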

    Now, let's build an attached property, and we will assign it only to the Dog type:

       1:  // Add an attached Property
       2:  AttachedProperty<bool> ap = new AttachedProperty<bool>
       3:  {
       4:      IsBrowsable = true,
       5:      Name = "IsAnInterestingDog",
       6:      Getter = (mi => mi.Properties["Name"].ComputedValue.ToString() == "Sasha"),
       7:      OwnerType = typeof(Dog)
       8:  };
       9:  ec.Services.Publish<AttachedPropertiesService>(new AttachedPropertiesService());
      10:  AttachedPropertiesService aps = ec.Services.GetService<AttachedPropertiesService>();
      11:  aps.AddProperty(ap);
      12:   
      13:  Console.WriteLine("---- Enumerate properties on dog (note new property)----");
      14:  dogMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
      15:   
      16:  Console.WriteLine("---- Enumerate properties on cat (note  no new property) ----");
      17:  catMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));

    Let’s break down what happened here.

    • Lines 2-8 create an AttachedProperty&lt;bool&gt;
      • We set IsBrowsable to true because we want to see it in the output
      • Name is what the property will be projected as
      • OwnerType means we only want this to apply to Dogs, not to Cats or Objects or whatever
      • Finally, the Getter: we operate on the model item to do some computation and return a bool (in this case, we check whether the Name property equals “Sasha”)
    • Lines 9-11 create an AttachedPropertiesService, publish it into the editing context, and add the property to it
    • Lines 13-17 output the properties; let’s see what that looks like:
    ---- Enumerate properties on dog (note new property)----
     Property : Name
     Property : Noise
     Property : Age
     Property : IsAnInterestingDog
    ---- Enumerate properties on cat (note  no new property) ----
     Property : Name
     Property : Noise
     Property : Age

    Ok, so that’s interesting: we’ve injected a new property, only on the Dog type.  If I get dogMi.Properties[“IsAnInterestingDog”], I have a value that I can manipulate (albeit returned via the getter).
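    For example (a small sketch), the value can be read right off the ModelProperty collection:

    // Reading the injected, browsable property as if it were a real property;
    // the value comes back through the Getter we supplied.
    ModelProperty interesting = dogMi.Properties["IsAnInterestingDog"];
    Console.WriteLine("IsAnInterestingDog = {0}", interesting.ComputedValue);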

    Let’s try something a little different:

       1:  AttachedProperty<bool> isYoungAnimal = new AttachedProperty<bool>
       2:  {
       3:      IsBrowsable = false,
       4:      Name = "IsYoungAnimal",
       5:      Getter = (mi => int.Parse(mi.Properties["Age"].ComputedValue.ToString()) < 2)
       6:  };
       7:   
       8:  aps.AddProperty(isYoungAnimal);
       9:   
      10:  // expect to not see isYoungAnimal show up
      11:  Console.WriteLine("---- Enumerate properties on dog  (note isYoungAnimal doesn't appear )----");
      12:  dogMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
      13:  Console.WriteLine("---- Enumerate properties on cat (note isYoungAnimal doesn't appear )----");
      14:  catMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
      15:   
      16:  Console.WriteLine("---- get attached property via GetValue ----");
      17:  Console.WriteLine("getting non browsable attached property on dog {0}", isYoungAnimal.GetValue(dogMi));
      18:  Console.WriteLine("getting non browsable attached property on cat {0}", isYoungAnimal.GetValue(catMi));

    Let’s break this down:

    • Lines 1-6 create a new attached property
      • IsBrowsable is false
      • No OwnerType being set
      • The Getter does some computation to return true or false
    • Lines 10-14 write out the properties (as above)
    • Lines 17-18 extract the value with AttachedPropertyInstance.GetValue(ModelItem)

    Let’s see the output there:

    ---- Enumerate properties on dog  (note isYoungAnimal doesn't appear )----
     Property : Name
     Property : Noise
     Property : Age
     Property : IsAnInterestingDog
    ---- Enumerate properties on cat (note isYoungAnimal doesn't appear )----
     Property : Name
     Property : Noise
     Property : Age
    ---- get attached property via GetValue ----
    getting non browsable attached property on dog False
    getting non browsable attached property on cat True

    As we can see, we’ve now injected this behavior, and we can extract the value. 

    Let’s get a little more advanced and do something with the setter.  Here, if isYoungAnimal is set to true, we will change the age (it’s a bit contrived, but it shows the dataflow on simple objects; we’ll see a more interesting case in a minute).

       1:  // now, let's do something clever with the setter. 
       2:  Console.WriteLine("---- let's use the setter to have some side effect ----");
       3:  isYoungAnimal.Setter = ((mi, val) => { if (val) { mi.Properties["Age"].SetValue(10); } });
       4:  isYoungAnimal.SetValue(cat, true);
       5:  Console.WriteLine("cat's age now {0}", cat.Age);

    Pay attention to what the Setter does now.  We supply the method through which subsequent SetValue calls will be pushed.  Here’s that output:

    ---- let's use the setter to have some side effect ----
    cat's age now 10

    Finally, let’s show an example of how this can really function as some nice sugar to eliminate the need for a lot of value converters in WPF, by using this capability as a way to store the relationship somewhere (rather than just using it as a nice proxy to change a value):

   1:  // now, let's have a browsable one with a setter.
       2:  // this plus dynamics are a mini "macro language" against the model items
       3:   
       4:  List<Object> FavoriteAnimals = new List<object>();
       5:   
       6:  // we maintain state in FavoriteAnimals, and use the getter/setter func
       7:  // in order to query or edit that collection.  Thus changes to an "instance"
       8:  // are tracked elsewhere.
       9:  AttachedProperty<bool> isFavoriteAnimal = new AttachedProperty<bool>
      10:  {
      11:      IsBrowsable = false,
      12:      Name = "IsFavoriteAnimal",
      13:      Getter = (mi => FavoriteAnimals.Contains(mi)),
      14:      Setter = ((mi, val) => 
      15:          {
      16:              if (val)
      17:                  FavoriteAnimals.Add(mi);
      18:              else
      19:              {
      20:                  FavoriteAnimals.Remove(mi);
      21:              }
      22:          })
      23:  };
      24:   
      25:   
      26:  aps.AddProperty(isFavoriteAnimal);
      27:   
      28:  dog.IsFavoriteAnimal = true;
      29:  // remove that cat that isn't there
      30:  cat.IsFavoriteAnimal = false;
      31:  cat.IsFavoriteAnimal = true;
      32:  cat.IsFavoriteAnimal = false;
      33:   
      34:  Console.WriteLine("Who are my favorite animal?");
      35:  FavoriteAnimals.ForEach(o => Console.WriteLine((o as ModelItem).Properties["Name"].ComputedValue.ToString()));

    That's a little bit of code, so let’s break it down one last time:

    • Lines 14-22 – Create a setter that acts on the FavoriteAnimals collection to either add or remove the element
    • Lines 28-32 – Do a few different sets on this attached property
      • NOTE: you can’t do that in Beta2, as the dynamic support hasn’t been turned on.  Instead you would have to call isFavoriteAnimal.SetValue(dogMi, true) (see the sketch after the output below).
    • Line 35 then prints the output to the console, and as expected we only see the dog there:
    Who are my favorite animals?
    Sasha
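    As noted above, the dynamic sets don't work yet in Beta2; the equivalent against the ModelItems would look roughly like this (a sketch):

    // Beta2 sketch: the same sets as lines 28-32 above, written against the ModelItems
    isFavoriteAnimal.SetValue(dogMi, true);
    isFavoriteAnimal.SetValue(catMi, false);
    isFavoriteAnimal.SetValue(catMi, true);
    isFavoriteAnimal.SetValue(catMi, false);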

    I will attach the whole code file at the bottom of this post, but this shows you how you can use the following:

    • Attached properties to create “computed values” on top of existing types
    • Attached properties to inject a new (and discoverable) property entry into the designer's data model
    • The Setter capability both to propagate real changes to the type (providing a cleaner interface) and to store data about the object outside the object, while still accessing it as if it were part of the object
      • This is some really nice syntactic sugar that we sprinkle on top of things

    What do I do now?

    Hopefully this post gave you some ideas about how the attached property mechanisms work within the WF4 designer.  These give you a nice way to complement the data model and create nice bindable targets that your WPF Views can layer right on top of.

    A few ideas for these things:

    • Use the Setters to clean up a “messy” activity API into a single property type that you then build a custom editor for in the property grid. 
    • Use the Getters (and the integration into the ModelProperty collection) in order to create computational properties that are used for displaying interesting information on the designer surface.
    • Figure out how to bridge the gap to take advantage of the XAML attached property storage mechanism, especially if you author runtime types that look for attached properties at runtime. 
    • Use these, with a combination of custom activity designers to extract and display interesting runtime data from a tracking store

     

    Full Code Posting

    using System;
    using System.Activities.Presentation;
    using System.Activities.Presentation.Model;
    using System.Collections.Generic;
    using System.Linq;
    
    namespace AttachedPropertiesBlogPosting
    {
        class Program
        {
            static void Main(string[] args)
            {
                EditingContext ec = new EditingContext();
                ModelTreeManager mtm = new ModelTreeManager(ec);
                mtm.Load(new object[] { new Dog { Name = "Sasha", Noise = "Snort", Age = 5 },
                                        new Cat { Name="higgs", Noise="boom", Age=1 } });
                dynamic root = mtm.Root;
                dynamic dog = root[0];
                dynamic cat = root[1];
                ModelItem dogMi = root[0] as ModelItem;
                ModelItem catMi = root[1] as ModelItem;
              
                // Add an attached Property
                AttachedProperty<bool> ap = new AttachedProperty<bool>
                {
                    IsBrowsable = true,
                    Name = "IsAnInterestingDog",
                    Getter = (mi => mi.Properties["Name"].ComputedValue.ToString() == "Sasha"),
                    OwnerType = typeof(Dog)
                };
                ec.Services.Publish<AttachedPropertiesService>(new AttachedPropertiesService());
                AttachedPropertiesService aps = ec.Services.GetService<AttachedPropertiesService>();
                aps.AddProperty(ap);
    
                Console.WriteLine("---- Enumerate properties on dog (note new property)----");
                dogMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
    
                Console.WriteLine("---- Enumerate properties on cat (note  no new property) ----");
                catMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
    
    
                
                AttachedProperty<bool> isYoungAnimal = new AttachedProperty<bool>
                {
                    IsBrowsable = false,
                    Name = "IsYoungAnimal",
                    Getter = (mi => int.Parse(mi.Properties["Age"].ComputedValue.ToString()) < 2)
                };
    
                aps.AddProperty(isYoungAnimal);
    
                // expect to not see isYoungAnimal show up
                Console.WriteLine("---- Enumerate properties on dog  (note isYoungAnimal doesn't appear )----");
                dogMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
                Console.WriteLine("---- Enumerate properties on cat (note isYoungAnimal doesn't appear )----");
                catMi.Properties.ToList().ForEach(mp => Console.WriteLine(" Property : {0}", mp.Name));
    
                Console.WriteLine("---- get attached property via GetValue ----");
                Console.WriteLine("getting non browsable attached property on dog {0}", isYoungAnimal.GetValue(dogMi));
                Console.WriteLine("getting non browsable attached property on cat {0}", isYoungAnimal.GetValue(catMi));
                
                
                // now, let's do something clever with the setter. 
                Console.WriteLine("---- let's use the setter to have some side effect ----");
                isYoungAnimal.Setter = ((mi, val) => { if (val) { mi.Properties["Age"].SetValue(10); } });
                isYoungAnimal.SetValue(cat, true);
                Console.WriteLine("cat's age now {0}", cat.Age);
    
                // now, let's have a browsable one with a setter.
                // this plus dynamics are a mini "macro language" against the model items
    
                List<Object> FavoriteAnimals = new List<object>();
    
                // we maintain state in FavoriteAnimals, and use the getter/setter func
                // in order to query or edit that collection.  Thus changes to an "instance"
                // are tracked elsewhere.
                AttachedProperty<bool> isFavoriteAnimal = new AttachedProperty<bool>
                {
                    IsBrowsable = false,
                    Name = "IsFavoriteAnimal",
                    Getter = (mi => FavoriteAnimals.Contains(mi)),
                    Setter = ((mi, val) => 
                        {
                            if (val)
                                FavoriteAnimals.Add(mi);
                            else
                            {
                                FavoriteAnimals.Remove(mi);
                            }
                        })
                };
                aps.AddProperty(isFavoriteAnimal);
                dog.IsFavoriteAnimal = true;
                // remove that cat that isn't there
                cat.IsFavoriteAnimal = false;
                cat.IsFavoriteAnimal = true;
                cat.IsFavoriteAnimal = false;
                Console.WriteLine("Who are my favorite animals?");
                FavoriteAnimals.ForEach(o => Console.WriteLine((o as ModelItem).Properties["Name"].ComputedValue.ToString()));
                Console.ReadLine();
            }
        }
    
        public class Dog
        {
            public string Name { get; set; }
            public string Noise { get; set; }
            public int Age { get; set; }
        }
    
        public class Cat
        {
            public string Name { get; set; }
            public string Noise { get; set; }
            public int Age { get; set; }
        }
    }
  • mwinkle.blog

    Inspection, Default Services and Items (WF4 EditingContext Intro Part 6)

    • 0 Comments

    This is part 6 of my 6-part series on the EditingContext.

  • Introduction
  • Sharing Functionality between Designers 
  • Host provided capabilities  
  • Providing callbacks for the host 
  • Subscription/Notification engine
  • Inspection, Default Services and Items (you are here)

    I want to wrap up this series of posts by posting some code for an activity designer that functions as a diagnostic tool: it displays all of the items and services in the EditingContext from within the designer.  This will be useful from an investigation perspective, and hopefully as a diagnostic aid.  We will use it to understand what services are available out of the box in VS, as well as in a rehosted application.

    We first need to create an empty activity to attach a designer to.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Activities;
    using System.ComponentModel;
    
    namespace blogEditingContext
    {
        [Designer(typeof(DiagnosticDesigner))]
        public sealed class Diagnosticator : CodeActivity
        {
            // Define an activity input argument of type string
            public InArgument<string> Text { get; set; }
    
            // If your activity returns a value, derive from CodeActivity<TResult>
            // and return the value from the Execute method.
            protected override void Execute(CodeActivityContext context)
            {
                // Obtain the runtime value of the Text input argument
                string text = context.GetValue(this.Text);
            }
        }
    }

    Now, let’s create our designer.  We could do fancy treeviews or object-browser-style UIs, but as this is a blog post, I want to give you the basics and then let you figure out what is most useful to you.  So we will just create a designer that writes the relevant information out to debug output.

    <sap:ActivityDesigner x:Class="blogEditingContext.DiagnosticDesigner"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
        xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation">
      <Grid>
            <Button Click="Button_Click">Debug.WriteLine Context Data</Button>
        </Grid>
    </sap:ActivityDesigner>

    And now the code

    using System.Diagnostics;
    using System.Linq;
    using System.Windows;
    
    namespace blogEditingContext
    {
        // Interaction logic for DiagnosticDesigner.xaml
        public partial class DiagnosticDesigner
        {
            public DiagnosticDesigner()
            {
                InitializeComponent();
            }
    
            private void Button_Click(object sender, RoutedEventArgs e)
            {
                // the goal here is to output meaningful and useful information about 
                // the contents of the editing context here. 
                int level = Debug.IndentLevel;
                Debug.WriteLine("Items in the EditingContext");
                Debug.IndentLevel++;
                foreach (var item in Context.Items.OrderBy(x => x.ItemType.ToString()))
                {
                    Debug.WriteLine(item.ItemType);
                }
    
                Debug.IndentLevel = level;
                Debug.WriteLine("Services in the EditingContext");
                foreach (var service in Context.Services.OrderBy(x => x.ToString()))
                {
                    Debug.WriteLine(service);
                }
            }
        }
    }

    Let’s break this down.  The work happens in the button click, where we simply order the types by their string representations and output them to the debug writer (a more robust implementation might use a trace writer that could be configured in the app, but for this purpose this will be sufficient).
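    If you do want something more configurable, here is a hedged sketch of that trace-writer variant, living inside the same designer class (the source name "EditingContextDiagnostics" is made up):

    // Sketch: same dump as above, but routed through a TraceSource so a host can
    // turn it on/off and redirect it from configuration.
    private static readonly TraceSource trace = new TraceSource("EditingContextDiagnostics");

    private void DumpContextToTrace()
    {
        trace.TraceInformation("Items in the EditingContext");
        foreach (var item in Context.Items.OrderBy(x => x.ItemType.ToString()))
        {
            trace.TraceInformation(item.ItemType.ToString());
        }

        trace.TraceInformation("Services in the EditingContext");
        foreach (var service in Context.Services.OrderBy(x => x.ToString()))
        {
            trace.TraceInformation(service.ToString());
        }
    }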

    So, what output do we get?

    VS Standard Services and Items

    We determine this by using the activity in a freshly opened WF project

    Items

     

    System.Activities.Presentation.Hosting.AssemblyContextControlItem
    System.Activities.Presentation.Hosting.ReadOnlyState
    System.Activities.Presentation.Hosting.WorkflowCommandExtensionItem
    System.Activities.Presentation.View.Selection
    System.Activities.Presentation.WorkflowFileItem

    Services

    System.Activities.Presentation.Debug.IDesignerDebugView
    System.Activities.Presentation.DesignerPerfEventProvider
    System.Activities.Presentation.FeatureManager
    System.Activities.Presentation.Hosting.ICommandService
    System.Activities.Presentation.Hosting.IMultiTargetingSupportService
    System.Activities.Presentation.Hosting.WindowHelperService
    System.Activities.Presentation.IActivityToolboxService
    System.Activities.Presentation.IIntegratedHelpService
    System.Activities.Presentation.IWorkflowDesignerStorageService
    System.Activities.Presentation.IXamlLoadErrorService
    System.Activities.Presentation.Model.AttachedPropertiesService
    System.Activities.Presentation.Model.ModelTreeManager
    System.Activities.Presentation.Services.ModelService
    System.Activities.Presentation.Services.ViewService
    System.Activities.Presentation.UndoEngine
    System.Activities.Presentation.Validation.IValidationErrorService
    System.Activities.Presentation.Validation.ValidationService
    System.Activities.Presentation.View.ActivityTypeDesigner+DisplayNameUpdater
    System.Activities.Presentation.View.DesignerView
    System.Activities.Presentation.View.IExpressionEditorService
    System.Activities.Presentation.View.ViewStateService
    System.Activities.Presentation.View.VirtualizedContainerService

     

    Basic Rehosted Application Standard Services and Items

    Items

    System.Activities.Presentation.Hosting.ReadOnlyState
    System.Activities.Presentation.Hosting.WorkflowCommandExtensionItem
    System.Activities.Presentation.View.Selection

    Services

    System.Activities.Presentation.DesignerPerfEventProvider
    System.Activities.Presentation.FeatureManager
    System.Activities.Presentation.Hosting.WindowHelperService
    System.Activities.Presentation.Model.AttachedPropertiesService
    System.Activities.Presentation.Model.ModelTreeManager
    System.Activities.Presentation.Services.ModelService
    System.Activities.Presentation.Services.ViewService
    System.Activities.Presentation.UndoEngine
    System.Activities.Presentation.Validation.ValidationService
    System.Activities.Presentation.View.DesignerView
    System.Activities.Presentation.View.ViewStateService
    System.Activities.Presentation.View.VirtualizedContainerService

    Comparison Table View

    Items | VS | Rehosted
    System.Activities.Presentation.Hosting.AssemblyContextControlItem | Yes | No
    System.Activities.Presentation.Hosting.ReadOnlyState | Yes | Yes
    System.Activities.Presentation.Hosting.WorkflowCommandExtensionItem | Yes | Yes
    System.Activities.Presentation.View.Selection | Yes | Yes
    System.Activities.Presentation.WorkflowFileItem | Yes | No

    Services | VS | Rehosted
    System.Activities.Presentation.Debug.IDesignerDebugView | Yes | No
    System.Activities.Presentation.DesignerPerfEventProvider | Yes | Yes
    System.Activities.Presentation.FeatureManager | Yes | Yes
    System.Activities.Presentation.Hosting.ICommandService | Yes | No
    System.Activities.Presentation.Hosting.IMultiTargetingSupportService | Yes | No
    System.Activities.Presentation.Hosting.WindowHelperService | Yes | Yes
    System.Activities.Presentation.IActivityToolboxService | Yes | No
    System.Activities.Presentation.IIntegratedHelpService | Yes | No
    System.Activities.Presentation.IWorkflowDesignerStorageService | Yes | No
    System.Activities.Presentation.IXamlLoadErrorService | Yes | No
    System.Activities.Presentation.Model.AttachedPropertiesService | Yes | Yes
    System.Activities.Presentation.Model.ModelTreeManager | Yes | Yes
    System.Activities.Presentation.Services.ModelService | Yes | Yes
    System.Activities.Presentation.Services.ViewService | Yes | Yes
    System.Activities.Presentation.UndoEngine | Yes | Yes
    System.Activities.Presentation.Validation.IValidationErrorService | Yes | No
    System.Activities.Presentation.Validation.ValidationService | Yes | Yes
    System.Activities.Presentation.View.ActivityTypeDesigner+DisplayNameUpdater | Yes | No
    System.Activities.Presentation.View.DesignerView | Yes | Yes
    System.Activities.Presentation.View.IExpressionEditorService | Yes | No
    System.Activities.Presentation.View.ViewStateService | Yes | Yes
    System.Activities.Presentation.View.VirtualizedContainerService | Yes | Yes

    Conclusion

    This wraps up our series on the editing context.  We’ve gone through the basics of why we need it and what we can do with it, and then we moved on to how to use it, from the very simple to the very complex.  We’ve finished with a diagnostic tool to help understand what items and services are available to bind to.

    What’s Next From Here?

    A few ideas for the readers who have read all of these:

    • Wire up a few attached properties to reflect back through to some interesting data (like if it is selected).  These attached properties could then be used directly by your UI (via the binding in XAML) to let your designers display and react to changes in the data
    • Think about ideas for services you might want to add in VS without depending on an activity to inject it (and send me mail, I am trying to compile a list of interesting things)
    • Are there service/item implementations you want to override in VS?
    • Is there a service/item you expect to see that is not there?

    Thanks for now!


  • mwinkle.blog

    Making Swiss Cheese Look Good, or Designers for ActivityAction in WF4

    • 4 Comments

    In my last post, I covered using ActivityAction in order to provide a schematized callback, or hole, for the consumers of your activity to supply.  What I didn’t cover, and what I intend to here, is how to create a designer for that.

    If you’ve been following along, or have written a few designers using WorkflowItemPresenter, you may have a good idea of how we might go about solving this.  There are a few gotchas along the way that we’ll cover as we go.

    First, let’s familiarize ourselves with the Timer example in the previous post:

    using System;
    using System.Activities;
    using System.Diagnostics;
     
    namespace WorkflowActivitiesAndHost
    {
        public sealed class TimerWithAction : NativeActivity<TimeSpan>
        {
            public Activity Body { get; set; }
            public Variable<Stopwatch> Stopwatch { get; set; }
            public ActivityAction<TimeSpan> OnCompletion { get; set; }
     
            public TimerWithAction()
            {
                Stopwatch = new Variable<Stopwatch>();
            }
     
            protected override void CacheMetadata(NativeActivityMetadata metadata)
            {
                metadata.AddImplementationVariable(Stopwatch);
                metadata.AddChild(Body);
                metadata.AddDelegate(OnCompletion);
            }
     
            protected override void Execute(NativeActivityContext context)
            {
                Stopwatch sw = new Stopwatch();
                Stopwatch.Set(context, sw);
                sw.Start();
                // schedule body and completion callback
                context.ScheduleActivity(Body, Completed);
     
            }
     
            private void Completed(NativeActivityContext context, ActivityInstance instance)
            {
                if (!context.IsCancellationRequested)
                {
                    Stopwatch sw = Stopwatch.Get(context);
                    sw.Stop();
                    Result.Set(context, sw.Elapsed);
                    if (OnCompletion != null)
                    {
                        context.ScheduleAction<TimeSpan>(OnCompletion, Result.Get(context));
                    }
                }
            }
     
            protected override void Cancel(NativeActivityContext context)
            {
                context.CancelChildren();
                if (OnCompletion != null)
                {
                    context.ScheduleAction<TimeSpan>(OnCompletion, TimeSpan.MinValue);
                }
            }
        }
    }
     

     

    So, let’s build a designer for this.  First we have to provide a WorkflowItemPresenter bound to the Body property.  This is pretty simple.  Here is the “simple” XAML that will let us easily drop something onto the Body property:

    <sap:ActivityDesigner x:Class="actionDesigners.ActivityDesigner1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
        xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation">
        <StackPanel>
            <sap:WorkflowItemPresenter 
                     HintText="Drop the body here" 
                     BorderBrush="Black" 
                     BorderThickness="2" 
                     Item="{Binding Path=ModelItem.Body, Mode=TwoWay}"/>
            <Rectangle Width="80" Height="6" Fill="Black" Margin="10"/>
        </StackPanel>
    </sap:ActivityDesigner>

    Not a whole lot of magic here yet.  What we want to do is add another WorkflowItemPresenter, but what do I bind it to?  Well, let’s look at how ActivityDelegate is defined (the root class for ActivityAction and ActivityFunc, which I’ll get to in my next post):

    image

    Hmmm, Handler is an Activity; that looks kind of useful.  Let’s try that:

    [warning, this XAML won’t work, you will get an exception, this is by design :-) ]

    <sap:ActivityDesigner x:Class="actionDesigners.ActivityDesigner1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
        xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation">
        <StackPanel>
            <sap:WorkflowItemPresenter HintText="Drop the body here" BorderBrush="Black" BorderThickness="2" Item="{Binding Path=ModelItem.Body, Mode=TwoWay}"/>
            <Rectangle Width="80" Height="6" Fill="Black" Margin="10"/>
    <!-- this next line will not work like you think it might --> 
            <sap:WorkflowItemPresenter HintText="Drop the completion here" BorderBrush="Black" BorderThickness="2" Item="{Binding Path=ModelItem.OnCompletion.Handler, Mode=TwoWay}"/>
    
        </StackPanel>
    </sap:ActivityDesigner>

    While this gives us what we want visually, there is a problem with the second WorkflowItemPresenter (just try dropping something on it):

    image

    Now, if you look at the XAML after dropping, the activity you dropped is not present.  What’s happened here:

    • The OnCompletion property is null, so binding to OnCompletion.Handler will fail
    • We (and WPF) are generally very forgiving of binding errors, so things appear to have succeeded. 
    • The instance was created fine, the ModelItem was created fine, and it was put in the right place in the ModelItem tree, but there is no link in the underlying object graph; basically, the activity that you dropped is not connected
    • Thus, on serialization, there is no reference to the new activity in the actual object, and so it does not get serialized.

    How can we fix this?

    Well, we need to patch things up in the designer, so we will need to write a little bit of code using the OnModelItemChanged event.  The code is pretty simple: when something is assigned to ModelItem, if the value of “OnCompletion” is null, initialize it.  If it is already set, we don’t need to do anything (for instance, if you used an IActivityTemplateFactory to initialize it).  One important thing here (putting on the bold glasses): YOU MUST GIVE THE DELEGATEINARGUMENT A NAME.  VB expressions require a named token to reference, so please put a name in there (or bind it, more on that below).

    using System;
    using System.Activities;
    
    namespace actionDesigners
    {
        // Interaction logic for ActivityDesigner1.xaml
        public partial class ActivityDesigner1
        {
            public ActivityDesigner1()
            {
                InitializeComponent();
            }
    
            protected override void OnModelItemChanged(object newItem)
            {
                if (this.ModelItem.Properties["OnCompletion"].Value == null)
                {
                    this.ModelItem.Properties["OnCompletion"].SetValue(
                        new ActivityAction<TimeSpan>
                        {
                            Argument = new DelegateInArgument<TimeSpan>
                            {
                                Name = "duration"
                            }
                        });
                }
                base.OnModelItemChanged(newItem);
    
            }
        }
    }

    Well, this works :-)  Note that you can see the duration DelegateInArgument that was added.

    image

    Now, you might say something like the following “Gosh, I’d really like to not give it a name and have someone type that in” (this is what we do in our ForEach designer, for instance).  In that case, you would need to create a text box bound to OnCompletion.Argument.Name, which is left as an exercise for the reader.

    Alright, now you can get out there and build activities with ActivityActions, and have design time support for them!

    One question brought up in the comments on the last post was “what if I want to not let everyone see this” which is sort of the “I want an expert mode” view.  You have two options.  Either build two different designers and have the right one associated via metadata (useful in rehosting), or you could build one activity designer that switches between basic and expert mode and only surfaces these in expert mode.

  • mwinkle.blog

    Sudoku Validator, part I - Rules and Collections

    • 1 Comments

    When I go out and talk to partners and customers about WF, there is a lot of interest in leveraging the rules capabilities.  Whether they are looking to have declarative rules inside their workflows, or to execute complex policy against a set of custom objects from any .NET code, there's a lot about rules to like. 

    I'm working on a sample which uses Rules to validate a Sudoku board, namely the one from the Sudoku sample on the community site.

    Now, there are plenty of complicated ways to determine if a Sudoku configuration is valid (and plenty of other ways to solve the Sudoku [including Don Knuth's Dancing Links approach to solve the Exact Cover problem for Sudoku]).  But I'm going to focus on a relatively simple, validate rows, validate columns and validate boxes approach. 

    I'm going to need to operate my rules across a collection, namely a 2-dimensional array of integers.  So, how do we operate across a collection?

    Searching in the SDK, which is full of fantastic information, leads us to the "Using Rule Conditions in Workflows" section.  In there you'll find the "Collection Processing" section that outlines how you can process over collections.  It goes a little something like this (in descending priority):

    • Rule 1: Create an initialization rule, with a higher priority than any of the rules that follow.  This is where you can get the enumerator for the collection.
    • Rule 2: Get the current object (this is where we check whether we should keep enumerating, by having enumerator.MoveNext() as our condition).  To re-evaluate this rule, we either need to modify enumerator, or we need to explicitly call Update("this/enumerator") to cause this rule to re-execute.
    • Rule 3: Execute rules against this instance of the item
    • Rule 4: Lowest priority.  We need to make sure this gets evaluated every time, so something like this.currentInstance == this.currentInstance is a good bet.  Because we update currentInstance, this rule will eventually re-fire, but due to its lower priority it will execute after the actions on the current instance.  The action of this rule is to update the enumerator (the fact that rule 2 depends upon), causing us to loop back up to rule 2 and begin executing on our next instance.
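    Expressed as ordinary imperative C#, the four rules together accomplish roughly this (just a sketch to make the control flow concrete; "collection" and "ProcessItem" are placeholders, not part of the sample):

    // Sketch of what the four chained rules accomplish, in plain C#
    IEnumerator enumerator = collection.GetEnumerator();   // Rule 1: initialization
    while (enumerator.MoveNext())                          // Rule 2: keep enumerating?
    {
        object currentInstance = enumerator.Current;       // Rule 2: grab the current object
        ProcessItem(currentInstance);                      // Rule 3: rules against this instance
        // Rule 4: touching the enumerator is what makes rule 2 re-evaluate (chaining)
    }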

    Now, I have an array.  Sure, I can get the enumerator from this, but I'd like to use an integer which I update to navigate through this array.  This is important to me because I need to know where I am at in the process (the 1st element or the 5th element) in order to convey some relevant information out to the user.

    For an array, we've got a slightly simpler pattern that incorporates some shortcuts.  We could certainly take the approach above, but if we're sure of our incoming variables (namely the integer we use to keep track of our current position), we can do it in one rule.  The rule looks like this:

    if (i < ItemCount)
    THEN
        total = total + items[i]
        i = i + 1   <== this is what causes the rule to re-evaluate itself due to chaining

    We have an initial rule, with a priority of 2, that initializes i to 0.  But wait, you may say, why does my class need to have this iterator?  The short story: it doesn't, but you have to have it somewhere, so what I have done is create a rule helper class that contains an instance of the class I am interested in executing the ruleset on, along with whatever "support" variables I need.  We then use the rules engine's chaining capabilities in our second rule, with priority 1, to force the re-execution of the rule as many times as we need.  And thus we iterate over our simple array, the collection we all start from!
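    A minimal sketch of what such a helper class might look like (the names here are hypothetical):

    // Hypothetical rule helper: wraps the array the ruleset runs against, plus the
    // "support" variables the rules need (the current index i and a running total).
    public class ArrayRuleHelper
    {
        public int[] Items { get; set; }
        public int ItemCount
        {
            get { return Items == null ? 0 : Items.Length; }
        }
        public int i { get; set; }      // initialized to 0 by the priority-2 rule
        public int Total { get; set; }  // accumulated by the priority-1 rule
    }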

    Ok, you may say, but I want a sample of this working when I truly have a collection of custom objects.  Let's do that as well.  There's one tricky thing to note here: the IntelliSense-like interface sometimes makes it hard to case things consistently.  I spent a while chasing down why my ruleset was only executing twice when I had 30 objects in the list; casing things consistently made it work right.  The problem was that the final rule was cased to the member variables, all lower case, while the rule where I set the current item to the next one in the stack was cased to the property.  Because the symbol names (the facts, if you will) didn't match up, the engine did not know to re-execute my final rule, which in turn did not call Update to cause everything to re-evaluate.  I also had some odd behavior with a typed IEnumerator (from System.Collections.Generic), but things appear to be working fine after discovering and correcting the cAsInG issues.  So, like I said, pay attention to your casing, especially if you follow a variableName / VariableName naming scheme for C# private variables and properties.

    Also, check out Moustafa's post on how you can use rule tracing to see what rules executed, this is what clued me in to what was and wasn't happening.

    In the meantime, you can check out the sample that I have posted to the community site here.

    Steps to get the application to work:

    • Download and extract the RulesWithCollectionSample
    • Open the RulesWithCollectionSample solution and build it
    • Download and extract the External RuleSet Toolkit, and create a rules database.
    • Import the rules from the .rules file in the RulesWithCollectionSample folder (you will need to point it to the assemblies of the RulesWithCollectionSample solution that you just built)
    • Save this to the rules database
    • Make sure the app.config file points to the right rules database.
    • Hit F5 to run, and you will be able to add numbers in one of two ways (via an array or a collection of objects)

    I look forward to your feedback!

  • mwinkle.blog

    Thoughts on Waiting until 20xx for WF

    • 10 Comments

    Usual msblog disclaimer applies, this represents my opinion!

    While I was on break, a number of folks pinged me asking me about this blog post by Tad Anderson.

    I find the investment in time to learn how to use 3.0/3.5 has been a complete waste time. So we have release 1.0 and 1.5 of WWF becoming obsolete in favor of version 2.0. These are the real release numbers on these libraries, and that is how they should have been labeled. They are not release 3.0 and 3.5.

    First, your investment in the existing technologies is not a "waste of time."  The idea of modeling your app logic declaratively, via workflows, doesn't change, nor do the ideas surrounding how one builds an application with workflows.  What we are doing is making it substantially easier to use, and enabling more advanced scenarios (like implicit message correlation).  What you will not be able to re-use is some of the things you did the first time and thought, "hmmm, I wonder why I have to do that [activity execution context cloning, handleExternalEvent, I'm looking at you]."  From a designer perspective, you're not going to have to keep remembering the quirks of the v1 designer.  I think about this similarly to the way we went from ASMX web services to WCF.  The APIs changed, but the underlying thinking of building an app on services did not.  Regarding version numbers, all of our libraries are versioned to the version of the framework they ship with (see WPF, WCF, etc).  Internally we struggled with what to call the thing we're working on now and decided to stick with the framework version (so WF 4.0, rather than WF 2.0). 

    Secondly, it's important to note that we're not getting rid of the 3.0/3.5 technologies.  We're investing to port them to the new CLR and to make the designers operate in VS 2010.  If you get sufficient return by using WF in your apps today, use WF today.  If WF doesn't meet your needs today, and we're fixing that with something we're doing in 4.0, then it makes sense to wait.  Note, I'm not defining "return" for you; depending on how you define it, you may reach a different conclusion than someone in a similar setting. 

    Thirdly, activities you write today on 3.0/3.5 will continue to work, even inside a 4.0 workflow by way of the interop activity.  Much as WPF has the ability to host existing WinForms content, we have the ability to execute 3.0-based activities. 
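    For the curious, here is a rough sketch of what that might look like in the 4.0 object model (names may shift before release, and MyLegacyActivity stands in for one of your existing 3.x activities):

    // Sketch: running an existing WF 3.x activity inside a WF4 workflow via the
    // Interop activity.  MyLegacyActivity is a hypothetical 3.x activity of yours.
    Activity workflow = new Sequence
    {
        Activities =
        {
            new Interop { ActivityType = typeof(MyLegacyActivity) }
        }
    };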

    There is a larger issue of how we (the big, Microsoft "we") handle a combination of innovation, existing application compatibility, and packaging of features.  I'm not sure how we avoid the fact that inevitably, any framework version n+1 will introduce new features, some of which will not be compatible with framework version n, and some of which may do similar things to features in framework version n.  Folks didn't stop writing WinForms apps when WPF was announced (they still write WinForms apps).  As I mentioned, this is a big issue, but not one I intend to tackle in this post :-)

    The feedback we got from customers around WF was centered around the need for a few things:

    • Activities and Workflows
      • A fully declarative experience (declare everything in XAML)
      • Make it easier to write complex activities (see my talk for the discussion on writing Parallel or NofM)
      • Make data binding easier
    • Runtime
      • Better control over persistence
      • Flow-in transactions
      • Support partial trust
      • Increase perf
    • Tooling
      • Fix Perf and usability
      • Make rehosting and extensibility easier

    Most of these would require changes to the existing code base, and breaking changes would become unavoidable.  The combination of doing all of these things makes the idea of breaking all existing customers absolutely untenable.  We're doing the work to make sure that your WF apps you write today will keep on working, and with services as the mechanisms to communicate between them, one can gradually introduce 4.0 apps as well.  Given the commitment we have to our v1  (or netfx3) customers, we don't want to introduce those kinds of breaking changes.

    Kathleen's article summarizes this very nicely, and rather than be accused of cherry-picking quotes, I encourage you to read the whole article.

    Questions, comments, violent flames, post 'em here or mwinkle_at_largeredmondbasedsoftwarecompany_dot_com

  • mwinkle.blog

    Hosting WF inside a Windows Service

    • 0 Comments

    Dennis has put together a nice sample showing how to host a workflow inside of a Windows Service, a common request for people looking to host long running processes.  Check out his posting here.  This sample is also available on the community site here.

    We're working on some other samples around hosting, so stay tuned for details on those!

  • mwinkle.blog

    Dynamically Generating an Operation Contract in Orcas using WF

    • 3 Comments

    This kicks off a set of posts where I'll be discussing some of the interesting features coming out in Orcas.

    I want to focus in this post on the Receive activity, and on a nice little feature in the designer that lets you create a contract on the fly, without having to drop into code and write a decorated interface.  This allows us to divide the world into two approaches:

    • One where I design my contract first, and then start creating a workflow to implement the operations on the contract.
    • Design my workflow first, and have it figure out the contract for me (this is what I will focus on in more detail)

    Designing a Contract First

    This is what most WCF folks will be familiar with:

    [ServiceContract]
    public interface IOrderProcessing
    {
        [OperationContract]
        bool SubmitOrder(Order order);

        [OperationContract]
        Order[] GetOrders(int customerId);
    }

    When I drop a receive activity onto a workflow, I can now import this contract:

    This will bring up a type chooser that lets me pick my service contract:

    This imports all of the details, and we can see the operation picker.

    If we look at the activity properties we now see the parameters to the method.  The (ReturnValue) is the object that we need to return in the operation.  The order parameter is the message that is going to be passed when the method is called.  I can take that and bind that to values in my workflow or whatever I want to do with it.

     

    Designing a Workflow First

    The other approach we can take is to create the contract as we create the workflow.  That's right, we don't need to create the contract explicitly in code.  To do that, drop a receive activity onto the designer and double click.  Instead of selecting "Import Contract" select "Add Contract".  This will create a new contract with a basic operation.  By selecting the contract or the operation we can name it something a little nicer.

    By selecting the operation, I can customize all of its behavior.  I can create parameters to be passed in, I can set the types of those parameters (as well as the return type of the operation). 

    It's relevant to point out that I can select any type that would be valid in a WCF contract.  The drop-down list displays the basic types, but by selecting "Browse Type" I am brought into a type picker where I can select custom types.  As you can see below, I have created a "CancelOrder" operation that takes in an order, the reason, and who authorized the cancellation.

    When I click ok, my activity has had new dependency properties added to it, as can be seen in the property grid for the activity.

     

    So what's happening here?

    In the workflow I created, I used a code separation workflow, so I have an .xoml file which contains the workflow definition.  Let's take a quick peek at how the receive activity is defined (note, some of the xml is truncated; if you view it in an RSS reader or copy and paste it you can see all the details, and I'll work on updating the blog layout):

    <ns0:ReceiveActivity x:Name="receiveActivity2">
      <ns0:ReceiveActivity.ServiceOperationInfo>
        <ns0:OperationInfo PrincipalPermissionRole="administrators" Name="CancelOrder" ContractName="MyContract">
          <ns0:OperationInfo.Parameters>
            <ns0:OperationParameterInfo Attributes="Out, Retval" ParameterType="{x:Type p9:Boolean}" Name="(ReturnValue)" Position="-1" xmlns:p9="clr-namespace:System;Assembly=mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            <ns0:OperationParameterInfo Attributes="In" ParameterType="{x:Type p9:Order}" Name="order" Position="0" xmlns:p9="clr-namespace:WcfServiceLibrary1;Assembly=WcfServiceLibrary1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
            <ns0:OperationParameterInfo Attributes="In" ParameterType="{x:Type p9:String}" Name="reason" Position="1" xmlns:p9="clr-namespace:System;Assembly=mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            <ns0:OperationParameterInfo Attributes="In" ParameterType="{x:Type p9:String}" Name="authorizedBy" Position="2" xmlns:p9="clr-namespace:System;Assembly=mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          </ns0:OperationInfo.Parameters>
        </ns0:OperationInfo>
      </ns0:ReceiveActivity.ServiceOperationInfo>
    </ns0:ReceiveActivity>

    Here you can see that within the metadata of the activity, we have the definition for the contract and the operation that I defined.  When we spin up a WorkflowServiceHost to host this workflow as a service, the host will inspect the metadata of the workflow, looking for all of the endpoints, and will create them.  You can also see within the OperationInfo element that I am able to define the PrincipalPermissionRole, defining the role of allowed callers and taking advantage of the static security checks, which I will talk about in another post.  So, defined declaratively in the XAML is the contract for the operations.  I didn't need to write the interface or the contract explicitly; I was able to write a workflow, specify in the workflow how it will communicate, and then let the WorkflowServiceHost figure out the nitty-gritty details of how to create the endpoints.  The other part that is important to mention is that the config determines the transport channel and other such details.  Within the config, when we set up an endpoint we need to specify that the contract is "MyContract", or whatever name we assigned when we created it.
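    For completeness, here is a minimal sketch of spinning up the host from code (Workflow1 stands in for the xoml-based workflow type in the project; the endpoint, binding, and the "MyContract" name all come from config):

    // Sketch: self-host the workflow as a WCF service; WorkflowServiceHost reads the
    // workflow's metadata and the config file to build the endpoints.
    WorkflowServiceHost host = new WorkflowServiceHost(typeof(Workflow1));
    host.Open();
    Console.WriteLine("Workflow service is listening.  Press Enter to exit.");
    Console.ReadLine();
    host.Close();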

    Summary

    We talked about the way that we can implement contracts that already exist in a receive activity, as well as how we can use the designer to actually create our contract while we are designing the workflow.  The WorkflowServiceHost does the heavy lifting here in order to enable this little bit of nifty-ness.

  • mwinkle.blog

    Advanced Workflow Services Talk (Demo 1 of 4)

    • 5 Comments

    So, last week I wrapped up a talk at TechReady, our internal conference, about the integration between WF and WCF in .NET 3.5.  This talk was somewhat bittersweet; it's the last conference where I'm scheduled to talk about WF 3.0/3.5, and I'll start talking about WF 4.0 at PDC this fall. 

    There are a series of 4 demos that we'll talk about in this series:

    1. Basic Context Management
    2. Simple Duplex
    3. Long Running Work Pattern
    4. Conversations Pattern

    I've gotten a lot of requests to post the code samples, so I want to do that here:

    Sample 1, Basic Management of Context

    The goal of this sample is to show the way that the context channel works, and how to interact with it from imperative code.

    Ingredients:

    • One basic workflow service that simply has two Receive activities bound to the same operation inside of a sequence.
    • image
    • Inside each Receive, I have placed a Code activity that simply outputs a little bit of info (the vars declared on lines 1 and 2 are used by the Receive activities):

     

         1:  public String returnValue = default(System.String);
         2:  public String inputMessage = default(System.String);
         3:   
         4:  private void codeActivity1_ExecuteCode(object sender, EventArgs e)
         5:  {
         6:      returnValue = string.Format("first activity {0}", inputMessage);
         7:      Output(inputMessage + " Activity 1");
         8:  }
         9:   
        10:  private void Output(string message)
        11:  {
        12:      Console.WriteLine("Workflow {0} : Message {1}", this.WorkflowInstanceId, message);
        13:  }
        14:   
        15:  private void codeActivity2_ExecuteCode(object sender, EventArgs e)
        16:  {
        17:      returnValue = string.Format("second activity {0}", inputMessage);
        18:      Output(inputMessage + " Activity 2");
        19:  }

     

    Instructions:

    • Create a client type that will call the service for us
      class IWorkflowClient : ClientBase<Intro1.IWorkflow1>, Intro1.IWorkflow1
      {
          public IWorkflowClient() : base() { }
          public IWorkflowClient(Binding binding, EndpointAddress address) : base(binding, address) { }
          public string Hello(string message)
          {
              return base.Channel.Hello(message);
          }
      }
    • Create a utility function CheckAndPrintContext()
      private static void CheckAndPrintContext(IContextManager icm)
      {
          if (null != icm) Console.WriteLine("Context contains {0} elements", icm.GetContext().Count);
          if (null != icm)
          {
              if (icm.GetContext().Count > 0)
              {
                  foreach (string xmlName in icm.GetContext().Keys)
                  {
                      Console.WriteLine("key : {0}", xmlName);
                      Console.WriteLine("value : {0}", icm.GetContext()[xmlName]);
                  }
              }
          }
      }
      • The thing to note here is that we need to traverse the dictionary, since there could be more than one key in here, although there won't be in this sample.
    • Now, let's run the three different bits of code: first we'll show the happy path, then show how to break it, and finally show how to explicitly manage the context token.
    • Scenario 1: The Happy Path

         1:  private static void DemoOne()
         2:  {
         3:      Console.WriteLine("Press Enter to Send a Message and reuse proxy");
         4:      // Console.ReadLine();
         5:      Debugger.Break();
         6:      IWorkflowClient iwc = new IWorkflowClient(new NetTcpContextBinding(),
         7:          new EndpointAddress("net.tcp://localhost:10001/Intro1"));
         8:      IContextManager icm = iwc.InnerChannel.GetProperty<IContextManager>();
         9:      if (null != icm) Console.WriteLine("Context contains {0} elements", icm.GetContext().Count);
        10:      string s = iwc.Hello("message1");
        11:      Console.WriteLine("the service returned the message '{0}'", s);
        12:      CheckAndPrintContext(icm);
        13:      s = iwc.Hello("message2");
        14:      icm = iwc.InnerChannel.GetProperty<IContextManager>();
        15:      CheckAndPrintContext(icm);
        16:      Console.WriteLine("the service returned the message '{0}'", s);
        17:      Console.WriteLine("Press Enter to Continue");
        18:  }
      • What's going on here?
        • Line 5: a more convenient way in demos to hit a breakpoint
        • Line 10: Call the service
        • Line 12: CheckAndPrint the Context Token.  In this case, this will print the Guid of the initiated workflow that is contained in the token
        • Line 13: Call the service a second time
          • Look at the service window; you'll see that this message has been routed to the same instance of the workflow.
          • You can also see in Line 16 that the second activity's return message is included.
    • Scenario 2: The Path Grows Darker

         1:  // show this not working using a second client
         2:  private static void DemoTwo()
         3:  {
         4:      Console.WriteLine("Press Enter to Send a Message (it will break this time)");
         5:      //Console.ReadLine();
         6:      Debugger.Break();
         7:      IWorkflowClient iwc = new IWorkflowClient(new NetTcpContextBinding(),
         8:          new EndpointAddress("net.tcp://localhost:10001/Intro1"));
         9:      IContextManager icm = iwc.InnerChannel.GetProperty<IContextManager>();
        10:      if (null != icm) Console.WriteLine("Context contains {0} elements", icm.GetContext().Count);
        11:      string s = iwc.Hello("message1");
        12:      Console.WriteLine("the service returned the message '{0}'", s);
        13:      CheckAndPrintContext(icm);
        14:      iwc = new IWorkflowClient(new NetTcpContextBinding(),
        15:         new EndpointAddress("net.tcp://localhost:10001/Intro1"));
        16:      s = iwc.Hello("message2");
        17:      Console.WriteLine("the service returned the message '{0}'", s);
        18:      icm = iwc.InnerChannel.GetProperty<IContextManager>();
        19:      CheckAndPrintContext(icm);
        20:      Console.WriteLine("Press Enter to Continue");
        21:  }
      • What's going on here? (Same until line 14)
        • Line 14: Let's create a new proxy. 
        • Line 16: Call the service using the new proxy.  You'll note on the server side that a second workflow instance has been created.  This is where we break.
        • Line 19: On the client side, you'll see the second GUID being returned.
    • Scenario 3: Finding the Light

         1:  // show this working with a second client by caching the context
         2:  private static void DemoThree()
         3:  {
         4:      Console.WriteLine("Press Enter to Send a Message (we'll cache the context and apply it to the new proxy)");
         5:      // Console.ReadLine();
         6:      Debugger.Break();
         7:      IWorkflowClient iwc = new IWorkflowClient(new NetTcpContextBinding(),
         8:          new EndpointAddress("net.tcp://localhost:10001/Intro1"));
         9:      IContextManager icm = iwc.InnerChannel.GetProperty<IContextManager>();
        10:      if (null != icm) Console.WriteLine("Context contains {0} elements", icm.GetContext().Count);
        11:      string s = iwc.Hello("message1");
        12:      Console.WriteLine("the service returned the message '{0}'", s);
        13:      CheckAndPrintContext(icm);
        14:      IDictionary<string, string> context = icm.GetContext();
        15:      icm = null;
        16:      iwc = new IWorkflowClient(new NetTcpContextBinding(),
        17:         new EndpointAddress("net.tcp://localhost:10001/Intro1"));
        18:      icm = iwc.InnerChannel.GetProperty<IContextManager>();
        19:      icm.SetContext(context);
        20:      s = iwc.Hello("message2");
        21:      Console.WriteLine("the service returned the message '{0}'", s);
        22:      icm = iwc.InnerChannel.GetProperty<IContextManager>();
        23:      CheckAndPrintContext(icm);
        24:      Console.WriteLine("Press Enter to Exit");
        25:   
        26:  }
      • Line 14 is where the magic happens: here we grab the context token from the IContextManager. 
      • Line 19 is where the magic completes: we apply this token to the new proxy.  Note, this proxy could be running on a different machine somewhere, but once I get the context token, I can use it to communicate with the same workflow instance that the first call did.

    So, what have we shown:

    • Manipulating context in workflow and imperative code
      • How to extract the context token
      • How to explicitly set the context token
    • The caching behavior of the context channel (as seen in Scenario 1)
    • The behavior of the context channel to return the context token only on the activating message
  • mwinkle.blog

    Paste XML as Serializable Type

    • 3 Comments

    Every now and then, there's a really cool feature that's buried somewhere that just hits you and makes you say "Wow, that's insanely helpful, why didn't somebody think of this sooner." 

    I was playing around with the BizTalk Services SDK, specifically the different web programming bits and pieces, and stumbled upon the Paste As Serializable Visual Studio add-in (navigate to BizTalk Services SDK\Samples\Web\Tooling). 

    Let's say you're interested in programming against some services that return normal, plain XML. Well, nobody likes writing code to query XML; Linq to XML is fun, but I'd just really like to mess around with some objects and not really deal with how it looks over the wire or under the hood.

    This great little utility lets you select some snippet of XML (say I get it from here, because I am working on a Twitter mashup).  Copy the XML to the clipboard (here's a snippet of what the XML looks like).

    <statuses>
      <status>
        <created_at>Wed May 02 16:54:45 +0000 2007</created_at>
        <id>47434042</id>
        <text>has advil &amp; beineigts in the back of Steve and Don's talk in 4101B.  Anybody needing either is welcome to some, the pastries will go fast.</text>
        <user>
          <id>5440022</id>
          <name>Matt Winkler</name>
          <screen_name>mwinkle</screen_name>
          <location>Seattle</location>
          <description>workflow technical evangelist</description>
          <profile_image_url>http://assets2.twitter.com/system/user/profile_image/5440022/normal/Matt_Winkler-72dpi.jpg?1177350091</profile_image_url>
          <url>http://blogs.msdn.com/mwinkle</url>
          <protected>false</protected>
        </user>
      </status>

    Go into a code file in Visual Studio and open up the Edit menu.

    image

    There, clearly indicated by the "this is just an alpha icon," is the option to "Paste XML as Serializable Type".  Here's what it outputs:

       [System.Xml.Serialization.XmlRootAttribute(Namespace = "", ElementName = "statuses")]
        public class StatusesClass
        {
    
            private System.Collections.Generic.List<StatusClass> statusField;
    
            [System.Xml.Serialization.XmlElementAttribute(Namespace = "", ElementName = "status")]
            public virtual System.Collections.Generic.List<StatusClass> Status
            {
                get
                {
                    return this.statusField;
                }
                set
                {
                    this.statusField = value;
                }
            }
    
            [System.Xml.Serialization.XmlRootAttribute(Namespace = "", ElementName = "status")]
            public class StatusClass
            {
    
                private string created_atField;
                private string idField;
                private string textField;
                private UserClass userField;
                [System.Xml.Serialization.XmlElementAttribute(Namespace = "", ElementName = "created_at")]
                public virtual string Created_at
                {
                    get
                    {
                        return this.created_atField;
                    }
                    set
                    {
                        this.created_atField = value;
                    }
                }
    

    ...

     

    Remainder of code truncated for the sake of the readers. 

    This lets me do some cool stuff like this:

    static void Main(string[] args)
    {
        WebHttpClient twc = new WebHttpClient();
        twc.UriTemplate = "http://twitter.com/statuses/user_timeline/mwinkle.xml";
        StatusesClass sc = twc.Get().GetBody<StatusesClass>();
        Console.WriteLine("Got {0}", sc.Status.Count);
        
    }

    And get my result back in a nice typed object that I can then use elsewhere in my code (or at least debug)

     

    image
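    Since the generated classes are plain XmlSerializer types, you aren't tied to the WebHttpClient helper either.  Here's a minimal sketch (reusing the StatusesClass generated above; the WebClient call is just for illustration) of hydrating the same objects by hand:

    using System;
    using System.IO;
    using System.Net;
    using System.Xml.Serialization;

    class Program
    {
        static void Main(string[] args)
        {
            // pull down the raw XML and deserialize it into the generated types
            XmlSerializer serializer = new XmlSerializer(typeof(StatusesClass));
            using (WebClient web = new WebClient())
            {
                string xml = web.DownloadString("http://twitter.com/statuses/user_timeline/mwinkle.xml");
                StatusesClass sc = (StatusesClass)serializer.Deserialize(new StringReader(xml));
                Console.WriteLine("Got {0}", sc.Status.Count);
            }
        }
    }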

    Steve, thanks, this thing rocks!

  • mwinkle.blog

    Introduction to the WF4 Designer Editing Context (Part 1)

    • 2 Comments

    I want to briefly touch on the editing context and give a little introduction to its capabilities.  This is part 1 of a 6 part series

    The way to think about the editing context is that it is the point of contact between the hosting application and the designer (and elements on the designer).  In my PDC talk, I had the following slide, which I think captures the way these elements are layered together. 

    image

    Motivation

    The editing context represents a common boundary between the hosting application and the designer, and the mechanism to handle interaction with the designer (outside of the most common methods that have been promoted on WorkflowDesigner).  If you were to look at the implementation of some of the more common methods on WorkflowDesigner, you would see that almost all of these use the editing context in order to get anything done.  For instance, the Flush method (and Save, which calls Flush) will acquire an IDocumentPersistenceService from the Services collection, and then use that in order to properly serialize the document. 

    The EditingContext type has two important properties:

    Items

    The Items collection is for data that is shared between the host and the designer, or data that is available to all designers.  These need to derive from ContextItem, which provides the mechanism to hook into subscription and change notification. There are a couple of interesting methods on the ContextItemManager type (GetValue, SetValue, and Subscribe, among others).
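    To make that concrete, here's a small sketch (the SelectedThemeItem type and the theme scenario are invented for illustration, and wd is assumed to be a WorkflowDesigner the host has created) of pushing an item into Items and subscribing to changes:

    using System;
    using System.Activities.Presentation;

    // hypothetical context item shared between the host and the designers
    public class SelectedThemeItem : ContextItem
    {
        public string ThemeName { get; set; }

        // ContextItem asks us which type keys this item in the Items collection
        public override Type ItemType
        {
            get { return typeof(SelectedThemeItem); }
        }
    }

    public static class ItemsSketch
    {
        public static void WireUp(WorkflowDesigner wd)
        {
            // get notified whenever someone pushes a new SelectedThemeItem
            wd.Context.Items.Subscribe<SelectedThemeItem>(
                item => Console.WriteLine("theme is now {0}", item.ThemeName));

            // publish a value; any designer can read it back with GetValue
            wd.Context.Items.SetValue(new SelectedThemeItem { ThemeName = "Dark" });
            SelectedThemeItem current = wd.Context.Items.GetValue<SelectedThemeItem>();
        }
    }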

    Services

    Services represent functionality that is either provided by the host for the designer to use, or is used by the designer to make functionality available to all designers within the editor.  Generally these are defined as interfaces, but they can also be concrete types.  An implementation or an instance is then required to be provided, and that instance is shared as a singleton.  There are a few interesting methods on the ServiceManager type as well (Publish, Subscribe, and GetService, among others).
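    In the same hedged spirit, here's a sketch (IThemeService and its implementation are invented for illustration) of the host publishing a service and designer code pulling it back out as a shared singleton:

    using System;
    using System.Activities.Presentation;

    // hypothetical service contract the host makes available to the designer
    public interface IThemeService
    {
        string GetCurrentTheme();
    }

    public class ThemeService : IThemeService
    {
        public string GetCurrentTheme() { return "Dark"; }
    }

    public static class ServicesSketch
    {
        public static void WireUp(WorkflowDesigner wd)
        {
            // the host publishes a singleton instance for everyone to share
            wd.Context.Services.Publish<IThemeService>(new ThemeService());

            // later, designer code (an activity designer, for instance) retrieves it
            IThemeService themes = wd.Context.Services.GetService<IThemeService>();
            Console.WriteLine(themes.GetCurrentTheme());
        }
    }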

    We’ll start walking through these over the next few posts.

  • mwinkle.blog

    Emitting the mc:Ignorable Instruction In Your WF4 XAML

    • 1 Comments

    Frequent forum guest Notre posed this question to the forums the other day noting that the XAML being produced from ActivityXamlServices.CreateBuilderWriter() was slightly different than the XAML being output from WorkflowDesigner.Save().  The reason for this stems from the fact that WorkflowDesigner leverages an additional internal type (which derives from XamlXmlWriter) in order to attach the mc:Ignorable attribute. 

    Why use mc:Ignorable?

    From the source at MSDN:

    The mc XML namespace prefix is the recommended prefix convention to use when mapping the XAML compatibility namespace http://schemas.openxmlformats.org/markup-compatibility/2006.

    Elements or attributes where the prefix portion of the element name are identified as mc:Ignorable will not raise errors when processed by a XAML processor. If that attribute could not be resolved to an underlying type or programming construct, then that element is ignored. Note however that ignored elements might still generate additional parsing errors for additional element requirements that are side effects of that element not being processed. For instance, a particular element content model might require exactly one child element, but if the specified child element was in an mc:Ignorable prefix, and the specified child element could not be resolved to a type, then the XAML processor might raise an error.

    Basically, this lets a XAML reader gracefully ignore any content marked from that namespace if it cannot be resolved.  So, imagine a runtime scenario where we don’t want to load System.Activities.Presentation every time we read a WF XAML file that may contain viewstate.  As a result, we use mc:Ignorable, which means the reader will skip that content when it does not have that assembly referenced at runtime. 

    This is what the output from the designer usually contains:

    <Sequence 
         mc:Ignorable="sap" 
         mva:VisualBasic.Settings="Assembly references and imported namespaces serialized as XML namespaces"
         xmlns="http://schemas.microsoft.com/netfx/2009/xaml/activities" 
         xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
         xmlns:mva="clr-namespace:Microsoft.VisualBasic.Activities;assembly=System.Activities" 
         xmlns:sap="http://schemas.microsoft.com/netfx/2009/xaml/activities/presentation" 
         xmlns:scg="clr-namespace:System.Collections.Generic;assembly=mscorlib" 
         xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
      <sap:WorkflowViewStateService.ViewState>
        <scg:Dictionary x:TypeArguments="x:String, x:Object">
          <x:Boolean x:Key="IsExpanded">True</x:Boolean>
        </scg:Dictionary>
      </sap:WorkflowViewStateService.ViewState>
      <Persist sap:VirtualizedContainerService.HintSize="211,22" />
      <Persist sap:VirtualizedContainerService.HintSize="211,22" />
      <WriteLine sap:VirtualizedContainerService.HintSize="211,61" />
    </Sequence>

    mc:Ignorable will cause the ViewState and HintSize to be ignored.
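    As a quick illustration of that graceful skipping (a sketch, assuming the markup above has been saved to a file named Workflow1.xaml), the runtime can load the file without System.Activities.Presentation anywhere in sight:

    using System;
    using System.Activities;
    using System.Activities.XamlIntegration;

    class Program
    {
        static void Main(string[] args)
        {
            // the sap:* viewstate and hint sizes are skipped because they are mc:Ignorable,
            // so no reference to System.Activities.Presentation is needed here
            Activity workflow = ActivityXamlServices.Load("Workflow1.xaml");
            Console.WriteLine("Loaded a {0}", workflow.GetType().Name);
        }
    }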

     

    Why do I have to worry about this?

    If you use WorkflowDesigner.Save(), you don’t.  If you want to serialize the ActivityBuilder yourself and get the same XAML the designer produces, you will need to add a XamlXmlWriter into the XamlWriter stack in order to get the right output. You may also care about this if you are implementing your own storage and plan on writing some additional XAML readers or writers for additional extensibility and flexibility.

    How Do I Get the Same Behavior?

    The code below describes the approach you would need to take to implement a XamlXmlWriter that does what our internal type does.  While I can’t copy and paste the product code, this accomplishes the same thing.  We do two things:

    • Override WriteNamespace() to collect all of the namespaces being emitted.  We do this to specifically check for ones that we should ignore, and to also gather all of the prefixes to make sure we don’t have a collision
    • Override WriteStartObject() to generate and write out the Ignorable attribute within the start (first) member for any namespaces we should ignore.

    What namespaces do we ignore in the designer?  Just one: http://schemas.microsoft.com/netfx/2009/xaml/activities/presentation

    using System.Collections.Generic;
    using System.IO;
    using System.Xaml;
    using System.Xml;
     
    namespace IgnorableXamlWriter
    {
        class IgnorableXamlXmlWriter : XamlXmlWriter
        {
     
            HashSet<NamespaceDeclaration> ignorableNamespaces = new HashSet<NamespaceDeclaration>();
            HashSet<NamespaceDeclaration> allNamespaces = new HashSet<NamespaceDeclaration>();
            bool objectWritten;
            bool hasDesignNamespace;
            string designNamespacePrefix;
     
            public IgnorableXamlXmlWriter(TextWriter tw, XamlSchemaContext context)
                : base(XmlWriter.Create(tw,
                                        new XmlWriterSettings { Indent = true, OmitXmlDeclaration = true }),
                                        context,
                                        new XamlXmlWriterSettings { AssumeValidInput = true })
            {
     
            }
     
            public override void WriteNamespace(NamespaceDeclaration namespaceDeclaration)
            {
                if (!objectWritten)
                {
                    allNamespaces.Add(namespaceDeclaration);
                    // if we find one, add that to ignorable namespaces
                    // the goal here is to collect all of them that might point to this
                    // if you had a broader set of things to ignore, you would collect 
                    // those here.
                    if (namespaceDeclaration.Namespace == "http://schemas.microsoft.com/netfx/2009/xaml/activities/presentation")
                    {
                        hasDesignNamespace = true;
                        designNamespacePrefix = namespaceDeclaration.Prefix;
                    }
                }
                base.WriteNamespace(namespaceDeclaration);
            }
     
            public override void WriteStartObject(XamlType type)
            {
                if (!objectWritten)
                {
                    // we should check if we should ignore 
                    if (hasDesignNamespace)
                    {
                        // note this is not robust as mc could naturally occur
                        string mcAlias = "mc";
                        this.WriteNamespace(
                            new NamespaceDeclaration(
                                "http://schemas.openxmlformats.org/markup-compatibility/2006",
                                mcAlias)
                                );
     
                    }
                }
                base.WriteStartObject(type);
                if (!objectWritten)
                {
                    if (hasDesignNamespace)
                    {
                        XamlDirective ig = new XamlDirective(
                            "http://schemas.openxmlformats.org/markup-compatibility/2006",
                            "Ignorable");
                        WriteStartMember(ig);
                        WriteValue(designNamespacePrefix);
                        WriteEndMember();
                        objectWritten = true;
                    }
                }
            }
     
        }
    }

    One note on the code above: the generation of the namespace prefix “mc” is not robust.  In the product code we check whether mc1, mc2, … are available, up to mc1000, and only if all of those are taken do we append a GUID, producing the ugliest XML namespace known to mankind.  A collision all the way up to 1000 would be a highly extreme edge case.
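    If you wanted to approximate that behavior yourself, a rough sketch (assuming it lives inside the IgnorableXamlXmlWriter class above, since it reads the collected allNamespaces set; this is not the product code) might look like this:

    private string GenerateIgnorablePrefix()
    {
        // collect the prefixes we have already seen so we don't collide with them
        HashSet<string> taken = new HashSet<string>();
        foreach (NamespaceDeclaration ns in allNamespaces)
        {
            taken.Add(ns.Prefix);
        }
        if (!taken.Contains("mc"))
        {
            return "mc";
        }
        for (int i = 1; i <= 1000; i++)
        {
            if (!taken.Contains("mc" + i))
            {
                return "mc" + i;
            }
        }
        // last resort: append a GUID, the ugliest XML namespace prefix known to mankind
        return "mc" + System.Guid.NewGuid().ToString("N");
    }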

    How would I use this? The following code shows feeding an IgnorableXamlXmlWriter into ActivityXamlServices.CreateBuilderWriter(), which is then passed to XamlServices.Save() (wd below is a WorkflowDesigner with a workflow loaded):

    StringBuilder sb = new StringBuilder();
    XamlSchemaContext xsc = new XamlSchemaContext();
    var bw = ActivityXamlServices.CreateBuilderWriter(
        new IgnorableXamlXmlWriter(new StringWriter(sb), xsc));
    
    XamlServices.Save(bw,
                      wd.Context.Services.GetService<ModelTreeManager>().Root.GetCurrentValue());
  • mwinkle.blog

    More Details On WF/WCF in .NET 4.0

    • 1 Comments

    Steve Martin, a director of product management for CSD, has a blog post containing more information on the work that we are doing for the next versions of WF and WCF that we will release as a CTP at PDC.  He also introduces "Dublin," the name for our efforts around creating a manageable and scalable host for WF and WCF applications, something that I know a few customers would be interested in. 

    For you WF and WCF fans, there is some more information about some of the features that you'll hear more about at PDC.  I think customers who are using either today will see something on the list below that gets them interested.  And if you're not using WF or WCF today, I think there are a few things that might change that. We think that the features we're introducing (especially in WF, which is close to my heart) will make it easier to use WF, in more places, and by more people.  Let us know what you think: what's exciting in the list below, what do you want to hear more about, and is there something else you'd like to see on the list?

     

    WF Features

    • Significant improvements in performance and scalability
      • Ten-fold improvement in performance
    • New workflow flow-control models and pre-built activities
      • Flowcharts, rules
      • Expanded built-in activities – PowerShell, database, messaging, etc.
    • Enhancements in workflow modeling
      • Persistence control, transaction flow, compensation support, data binding and scoping
      • Rules composable and seamlessly integrated with workflow engine
    • Updated visual designer
      • Easier to use by end-users
      • Easier to rehost by ISVs
    • Ability to debug XAML

    WCF Features

    • RESTful enhancements
      • Simplifying the building of REST Singleton & Collection Services, ATOM Feed and Publishing Protocol Services, and HTTP Plain XML Services using WCF
      • WCF REST Starter Kit to be released on Codeplex to get early feedback
    • Messaging enhancements
      • Transports - UDP, MQ, Local in-process
      • Protocols - SOAP over UDP, WS-Discovery, WS-BusinessActivity, WS-I BP 1.2
      • Duplex durable messaging
    • Correlation enhancements
      • Content and context driven, One-way support
    • Declarative Workflow Services
      • Seamless integration between WF and WCF and unified XAML model
      • Build entire application in XAML, from presentation to data to services to workflow

  • mwinkle.blog

    VS 2008 Beta 2 Shipped : 0 to Workflow Service in 60 seconds

    • 2 Comments

    So, per Soma's blog, this great Channel9 video, and a bunch of other places, VS 2008 Beta 2 is available for download (go here).

    Others are covering their favorite features in depth; I want to cover one of mine: the WCF test client, which I will show by way of creating a Workflow Service application.

    Real quick, for those of you who didn't read the readme file (I know, sometimes you just forget), there is an important note regarding how to get this to work (out of the box you will probably get an exception in svcutil.exe).

    Running a WCF Service Library results in svcutil.exe crashing and the test form not working

    Running a WCF Service Library starts the service in WcfSvcHost and opens a test form to debug operations on the service.  On the Beta2 build this results in a crash of svcutil.exe, and the test form doesn’t work due to a signing problem.

    To resolve this issue

    Disable strong name signing for svcutil.exe by opening a Visual Studio 2008 Beta2 Command Prompt. At the command prompt run: sn -Vr "<program files>\Microsoft SDKs\Windows\v6.0A\Bin\SvcUtil.exe" (replace <program files> with your program files path, e.g. c:\Program Files)

    Fire up VS 2008, create a new Sequential Workflow Service Library project:

    image

    This creates a basic Sequential workflow with a Receive activity

    image

    It also creates an app.config and IWorkflow1.cs

    image

    IWorkflow1.cs contains the contract our service is going to implement:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel;
    
    namespace WFServiceLibrary1
    {
        // NOTE: If you change the interface name "IWorkflow1" here, 
        // you must also update the reference to "IWorkflow1" in App.config.
        [ServiceContract]
        public interface IWorkflow1
        {
            [OperationContract]
            string Hello(string message);
        }
    }

    Now, we can modify this as needed, or we can delete it and create the contract as part of the Receive activity, see my previous post here on the topic.

    Return to the workflow and take a quick look at the properties of the Receive activity, and note that the parameters for the method (message and (returnValue)) have already been promoted and bound as properties on the workflow, which saves us a quick step or two:

    image

    Drop a code activity in the Receive shape, and double click to enter some code:

    image

    private void codeActivity1_ExecuteCode(object sender, EventArgs e)
    {
        returnValue = String.Format("You entered '{0}'.", inputMessage);
    }

    Now, we're pretty much there, but let's take a quick look at the app.config

    <service name="WFServiceLibrary1.Workflow1" behaviorConfiguration="WFServiceLibrary1.Workflow1Behavior">
      <host>
        <baseAddresses>
          <add baseAddress="http://localhost:8080/Workflow1" />
        </baseAddresses>
      </host>
      <endpoint address=""
                binding="wsHttpContextBinding"
                contract="WFServiceLibrary1.IWorkflow1" />
      <endpoint address="mex" 
                binding="mexHttpBinding" 
                contract="IMetadataExchange" />
    </service>

    We're going to use the wsHttpContextBinding, which you can think of as the standard wsHttpBinding with the addition of the Context channel to the channel stack.  Also note, we can right click the config and open it in the WCF config editor, slick!

    image
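    If you'd rather poke at the service from your own code instead of the test client, a minimal sketch of a hand-rolled client might look like this (the address matches the baseAddress above, and WSHttpContextBinding is the code equivalent of the wsHttpContextBinding entry in config):

    using System;
    using System.ServiceModel;
    using WFServiceLibrary1;

    class Program
    {
        static void Main(string[] args)
        {
            // WSHttpContextBinding adds the context channel, so the instance id the
            // service hands back is captured and re-sent on later calls through this proxy
            ChannelFactory<IWorkflow1> factory = new ChannelFactory<IWorkflow1>(
                new WSHttpContextBinding(),
                new EndpointAddress("http://localhost:8080/Workflow1"));
            IWorkflow1 proxy = factory.CreateChannel();
            Console.WriteLine(proxy.Hello("hello from a hand-rolled client"));
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }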

    Let's hit F5.  We build, do a little bit of processing and up pops the WCF test client.  You may also note this little pop up from your task tray:

    image

    What's this? It's the "autohosting" of your service, just like you get with ASP.NET on a machine.  This saves me the trouble of having to write a host as well as my service when I just want to play around a bit.  The test client looks like this:

    image

    Double click on the hello operation and fill in a message to send to the service:

    image

    Clicking "Invoke" will invoke the service, which will soon return with the value we hope to see.  Sure enough, after a bit of chugging along, this returns:

    image

    Finally, let's hit the XML tab to see what's in there, and we see it is the full XML of the request and the response.  There's an interesting little tidbit in the header of the response:

    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" 
    xmlns:a="http://www.w3.org/2005/08/addressing" 
    xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <s:Header>
        <a:Action s:mustUnderstand="1" u:Id="_2">
           http://tempuri.org/IWorkflow1/HelloResponse
         </a:Action>
        <a:RelatesTo u:Id="_3">urn:uuid:3f5b7eb5-cc35-4b01-b345-92f6edf728d7</a:RelatesTo>
        <Context u:Id="_4" xmlns="http://schemas.microsoft.com/ws/2006/05/context">
          <InstanceId>fc0f47fd-dd7b-4030-9883-acbf358583c3</InstanceId>
        </Context>

    This is the context token that lets me know how to continue conversing with this workflow.  In the test client, I can't find a way to attach it to a subsequent request, which means we can't use the test client to walk through multiple steps of a workflow.  Still, this new feature lets me get up and running, verify connectivity, and set breakpoints to debug my workflow service, which is pretty cool.

    I've posted the following video on c9 as a screencast, which I will try to do with my subsequent blog postings.

  • mwinkle.blog

    Tracking Objects and Querying For Them

    • 3 Comments

    A sample that I like to give during WF talks is that the tracking service can be used to track things at an event level ("Workflow started") as well as things at a very granular level ("What is the Purchase Order amount?"). 

    I wanted to put together a sample showing these, and you can find the file on netfx3.com

    In order to track things, we first need to specify the tracking profile.  This tells the workflow runtime which events and pieces of data we are interested in collecting while the workflow executes.  I think that the xml representation of the tracking profile is pretty readable, but there is a tool that ships in the Windows SDK designed to make this even easier.  The tool can be found after extracting the workflow samples to a directory under \Technologies\Applications\TrackingProfileDesigner.  This tool will let you open up a workflow by pointing at its containing assembly and then design a tracking profile.  It will deploy the tracking profile to the database for you, but I borrowed some code from another sample that shows the same functionality.  The tool allows you to specify workflow level events as well as activity level events, and allows you to designate what information you would want to extract and specify the annotation as well.

    The output of the designer is an xml file, which we will edit further by hand.  The important parts to look at are the tracking points, that is, where and when do we extract what data.

     

    <ActivityTrackPoint>
      <MatchingLocations>
        <ActivityTrackingLocation>
          <Activity>
            <Type>TrackingObjectsSample.CreditCheck, TrackingObjectsSample, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null</Type>
            <MatchDerivedTypes>false</MatchDerivedTypes>
          </Activity>
          <ExecutionStatusEvents>
            <ExecutionStatus>Closed</ExecutionStatus>
            <ExecutionStatus>Executing</ExecutionStatus>
          </ExecutionStatusEvents>
        </ActivityTrackingLocation>
      </MatchingLocations>
      <Annotations>
        <Annotation>data extracted from activity</Annotation>
      </Annotations>
      <Extracts>
        <ActivityDataTrackingExtract>
          <Member>IncomingOrder.OrderCustomer.Name</Member>
        </ActivityDataTrackingExtract>
        <ActivityDataTrackingExtract>
          <Member>IncomingOrder.OrderTotal</Member>
        </ActivityDataTrackingExtract>
        <ActivityDataTrackingExtract>
          <Member>IncomingOrder</Member>
        </ActivityDataTrackingExtract>
      </Extracts>
    </ActivityTrackPoint>

    Here we specify the tracking location, in this case the CreditCheck activity, as well as the events we wish to listen for, and finally what data we want to extract.  I like to point out in the extracts section that we can take advantage of dot-notation in order to traverse an object hierarchy to get to specific pieces of data we wish to extract.  If we don't get down to the simple types, the workflow runtime will attempt to serialize the object and store it as a binary blob (so make sure you mark the types you want tracked as serializable).  This is what is being done with IncomingOrder above.  This will allow us to bring back IncomingOrder, but we won't be able to query against that blob, which is why we might also extract OrderTotal in order to generate a report of high value orders.
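    Just to make that last point concrete, here's a sketch (the type and property names are inferred from the profile above, not lifted from the actual sample) of what the tracked types need to look like for the whole-object extract to work:

    using System;

    // without [Serializable], extracting IncomingOrder as a whole object would fail
    [Serializable]
    public class Customer
    {
        public string Name { get; set; }
    }

    [Serializable]
    public class Order
    {
        public Customer OrderCustomer { get; set; }
        public decimal OrderTotal { get; set; }
    }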

    When we track the information it will get stored to SQL (assuming we are using the SqlTrackingService).  There are a number of views we could use to build queries on top of.  The one I want to point out is the vw_TrackingDataItem view, which contains all of the tracked data items (so it is well named :-) ).  The FieldTypeId references back to vw_Type, where you will see the different types of objects and their respective assemblies which are being tracked.  This lets the consumer know what type of object they may need to instantiate if they wish to consume the object.  But what about querying on that object in plain old SQL?  Well, there is a column in the view designed for that.  The Data_Str column will show a string representation of the tracked item, so in the case of numbers and strings and other basic types, we will be able to query.  In the case of complex types, the name of the type will appear.  This is basically the ToString of the object being tracked.  The Data_Blob column contains the binary representation of the objects in order to restore them into objects inside of .NET code.

    So, back to the task at hand, the sample.  A simple workflow has been created that takes an Order object.  The workflow doesn't do much, although the last code activity modifies a value of the Order object.  We do this to show that the changed values can be tracked over time.  The workflow runtime uses the SqlTrackingService, and before this all starts the tracking profile is inserted into the database using the InsertTrackingProfile method that is used in the SDK (I grabbed it from the UserTrackPoints sample: \Technologies\Tracking\UserTrackPoints).  The workflow then executes.  Clicking on the "Get Tracking Info" button will then use the SqlTrackingQuery to get a SqlTrackingWorkflowInstance, an object representation of all of the available tracking information.  We then iterate over the various event sources in the SqlTrackingWorkflowInstance, such as the workflow events and the activity events, and place them into a TreeView.  This could easily be extended to include user track points as well.  The bits of code which do this are below:

     

     
    SqlTrackingQuery query = new SqlTrackingQuery(connectionString);
    SqlTrackingWorkflowInstance sqlTrackingWorkflowInstance;
    if (query.TryGetWorkflow(new Guid(WorkflowInstanceIdLabel.Text),
                out sqlTrackingWorkflowInstance))
    {
         TreeNode tn1 = treeView1.Nodes.Add(string.Format("Activity Events ({0})", 
                     sqlTrackingWorkflowInstance.ActivityEvents.Count));
         foreach (ActivityTrackingRecord activityTrackingRecord in sqlTrackingWorkflowInstance.ActivityEvents)
         {
            // do something here
         }
    }  
                

    It is useful to place a breakpoint in the foreach loop in order to inspect the different aspects of the ActivityTrackingRecord and the other tracking objects that live inside of SqlTrackingWorkflowInstance.  By looking at the TrackingDataItems that are placed in ActivityTrackingRecord.Body, we can find the field name as well as its value, which could be an object.  By browsing around in break mode, the debugger will let you move through the serialized object that is present inside TrackingDataItem.Data.
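    For what it's worth, the "do something here" spot in the loop above could be filled in with something like the following sketch, which adds a node per activity event and a child node per extracted data item (tn1 comes from the sample's form code shown above):

    // add a node for this activity event, then one child per extracted data item
    TreeNode activityNode = tn1.Nodes.Add(string.Format("{0} : {1}",
        activityTrackingRecord.QualifiedName, activityTrackingRecord.ExecutionStatus));
    foreach (TrackingDataItem item in activityTrackingRecord.Body)
    {
        // item.Data may be a simple value or a deserialized object (like IncomingOrder)
        activityNode.Nodes.Add(string.Format("{0} = {1}", item.FieldName, item.Data));
    }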

    Link to sample code

  • mwinkle.blog

    Advanced Workflow Services Talk (Demo 2 of 4)

    • 2 Comments

    A continuation of my series of demos from my advanced workflow services talk.  Here we focus on duplex message exchange patterns.

    Duplex messaging is something that we model at the application level (as opposed to the infrastructure level), because the message exchange itself is part of the application's logic.  Here are some scenarios where I could use duplex messaging:

    • [concrete] I submit an order, and you tell me when it ships
    • [abstract] I ask you to do some long running work, let me know when it is done
    • [abstract] I ask you to start doing something, you update me on the status

    One may ask, "But what about the wsDualHttpBinding, or the WCF duplex bindings?"  That's a valid question, but it's important to point out that those bindings really describe the behavior of a given proxy instance (and associated service artifacts).  When my proxy dies, or the underlying connection goes away, I lose the ability for the service to call back to me.  Additionally, this binds me to listen in the same way that I sent out the initial message. 

    By modeling this at the application layer, we do lose some of the "automagicity" of the WCF duplex behavior, but I get more flexibility, and I get the ability to survive potentially repeated recycling of the services and clients.  Also, you could imagine a service that I call that turns around and calls a third party service.  That third party service could call back directly to the client that made the initial call.  Note that once we start doing duplex communication (and we'll encounter this in part 4, conversations), the definitions of "client" and "service" become a bit muddier.

     

    So, to the code:

     

    Ingredients:

    • My service workflow: I listen for three different messages (start, add item, complete), and then I will send a message back to the client:
    • image
      • You'll note that there is a loop so that we can keep adding items until we eventually get the complete order message and we then exit the loop.
    • A "client" workflow, which will call this service:
    • image
      • You'll note that some of the magic happens here.  After I start, add to, and complete the order, you'll see that instead of sending messages, I'll now flip around and wait on a Receive activity in order to get the shipping cost from the service.

     

    Details

    The first thing we need to do to enable this duplex messaging is have the "client" workflow explicitly provide its context token to the service, so that the service can address the appropriate instance of the client workflow.

    Note: in the real world you'll probably need to supply more than just the context token; you will also need some address and binding information.
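    For example, a slightly richer callback payload (purely hypothetical, not part of this sample) might bundle the context dictionary together with the address to call back on:

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    // hypothetical shape for richer callback information
    [DataContract]
    public class CallbackInfo
    {
        [DataMember]
        public string Address { get; set; }

        [DataMember]
        public IDictionary<string, string> Context { get; set; }
    }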

    Let's look at the contract of the service:

    [ServiceContract(Namespace ="http://microsoft.com/dpe/samples/duplex")]
    public interface  IOrderProcessing
    {
        [OperationContract()]
        void SubmitOrder(string customerName, IDictionary<string, string> context);
    
        [OperationContract(IsOneWay = true )]
        void AddItem(OrderItem orderItem);
    
        [OperationContract(IsOneWay = true )]
        void CompleteOrder();
    }

    You'll note that on the SubmitOrder method, I pass in a context token.  This is my callback correlation identifier; this is how I will figure out which instance on the client side I want to talk to.  Now, I need to do some work to get the context token in order to send it, so let's look at how we do this:

    On the client side, on the first Send activity, let's hook the BeforeSend event.

    image

     

    Let's look at the implementation of GrabToken:

       1:  private void GrabToken(object sender, SendActivityEventArgs e)
       2:  {
       3:      ContextToSend = receiveActivity1.Context;
       4:      Console.WriteLine("Received token to send along");
       5:  }
       6:   
       7:  public static DependencyProperty ContextToSendProperty = DependencyProperty.Register("ContextToSend", typeof(System.Collections.Generic.IDictionary<string, System.String>), typeof(OrderSubmitter.Workflow1));
       8:   
       9:  [DesignerSerializationVisibilityAttribute(DesignerSerializationVisibility.Visible)]
      10:  [BrowsableAttribute(true)]
      11:  [CategoryAttribute("Parameters")]
      12:  public System.Collections.Generic.IDictionary<string, String> ContextToSend
      13:  {
      14:      get
      15:      {
      16:          return ((System.Collections.Generic.IDictionary<string, string>)(base.GetValue(OrderSubmitter.Workflow1.ContextToSendProperty)));
      17:      }
      18:      set
      19:      {
      20:          base.SetValue(OrderSubmitter.Workflow1.ContextToSendProperty, value);
      21:      }
      22:  }

    First, note that on lines 7-22 we declare a dependency property called ContextToSend.  Think of this simply as a bindable storage space.  On line 3, we assign to it the value of receiveActivity1.Context.  "But Matt, couldn't I just build a context token off the workflow ID?"  You could, but you're only going to be correct in the "simple scenario."  You can see we then take that ContextToSend and pass it into the context parameter for the service operation. Always walk up and ask a Receive activity for its context token; don't try to build one on your own.

    Now, on the service side, we need to extract that, and we need to apply the value to the send activity in the service workflow that needs to call back.  We basically can do the reverse:

    private void codeActivity1_ExecuteCode(object sender, EventArgs e)
    {
        //set callback context
        sendActivity1.Context = callbackContext;
    }

    This is inside a code activity in the first receive activity.  callbackContext is a dependency property that is bound to the inbound context on the Receive activity. 
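    In case you want to see it spelled out, a sketch of that service-side dependency property (assuming the service workflow class is also named Workflow1) mirrors the client-side ContextToSend property:

    public static DependencyProperty callbackContextProperty =
        DependencyProperty.Register("callbackContext",
            typeof(System.Collections.Generic.IDictionary<string, string>),
            typeof(Workflow1));

    // the Receive activity's context is bound to this property in the designer,
    // and codeActivity1_ExecuteCode copies it onto sendActivity1 as shown above
    public System.Collections.Generic.IDictionary<string, string> callbackContext
    {
        get
        {
            return (System.Collections.Generic.IDictionary<string, string>)GetValue(callbackContextProperty);
        }
        set
        {
            SetValue(callbackContextProperty, value);
        }
    }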

     

    The final trick is that both workflows have to be hosted inside a WorkflowServiceHost.  This makes sense for the "service" workflow, since it will be message activated.  On the client side, we have to do a little bit of work in order to get to the workflow runtime to spin up a workflow instance.  In the early betas, we had an easy way to get to the runtime, WorkflowServiceHost.WorkflowRuntime.  In order to conform more with the extensibility of WCF, this has been moved to the extensions of the service host.  We get there by:

       1:  static void Main(string[] args)
       2:  {
       3:      using (WorkflowServiceHost wsh = new WorkflowServiceHost(typeof(Workflow1)))
       4:      {
       5:          Console.WriteLine("Press <ENTER> to start the workflow");
       6:          Console.ReadLine();
       7:          wsh.Open();
       8:          WorkflowRuntime wr = wsh.Description.Behaviors.Find<WorkflowRuntimeBehavior>().WorkflowRuntime;
       9:          WorkflowInstance wi = wr.CreateWorkflow(typeof(Workflow1));
      10:          AutoResetEvent waitHandle = new AutoResetEvent(false);
      11:          wr.WorkflowCompleted += delegate { waitHandle.Set(); };
      12:          wr.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e) { Console.WriteLine("error {0}", e); waitHandle.Set(); };
      13:          wi.Start();
      14:          waitHandle.WaitOne();
      15:          Console.WriteLine("Workflow Completed");
      16:          Console.ReadLine();
      17:      }
      18:  }

    On line 3, you'll see we new up a WorkflowServiceHost based on the service type (it will do this to find and open the respective endpoints).  On line 8, we reach in and grab the WorkflowRuntimeBehavior and get the WorkflowRuntime, and we use that to create an instance of the workflow. 

    So, here's what we have done:

    • Figure out how to grab the context token from a Receive activity
    • Modify the contract to explicitly send the "callback info" to the service
    • On the service side, figure out how to grab that and apply it to a Send activity
    • Finally, on the client side, how to manually kick off workflows, rather than waiting for them to be message activated (the usual path we have is the infrastructure creating the workflow instance). 