• mwinkle.blog

    Dogs and Cats Living Together, Mass Hysteria (or, Source Available for the .NET Framework)

    • 0 Comments

    The title quote brought to you courtesy of the original Ghostbusters film.  As Scott just announced on his blog, we're making the source available under the Microsoft Reference License (MS-RL).

    This is cool.  But the really cool part is that there will be a symbol server in the cloud that will let you dynamically, and auto-magically, download the symbols and source on demand from us. This means you can now keep stepping into code past the point where you get to DataAdapter.Update(), for instance, and trace all the way down the stack.  This is going to make it a lot easier to dive deep into debugging and see what is really going on when you hand off a bunch of parameters to a method in the .NET Framework.  I can think of a number of times this would have been incredibly helpful in tracking down those "oh, I should have set parameter x to something that could have been cast to a y" moments.

    I'll be doing a channel9 video today or tomorrow; if you have any questions for the team, let me know!

  • mwinkle.blog

    Tracking Gem in SDK Documentation

    • 2 Comments

    I am always amazed when I find something cool in the SDK; it is a great source for all sorts of different details regarding WF.  The gem today originated with a customer question:

    Columns 6 and 7 are EventArgTypeId and EventArg, which seem to be NULL.
    I would like to know if I can use these 2 fields (EventArgTypeId, EventArg) to track information when these events occurred. What is the use of these fields? I did not find much information on the web.

    I had a good guess what the fields were for, but one of our support engineers answered by pointing out this page in the SDK, which describes:

    The SQL tracking service in Windows Workflow Foundation lets you add tracking information about workflows and their associated activities. The SqlTrackingQuery class provides high-level access to the data that is contained in the tracking database. However, you can also use direct queries against the SQL tracking service database views for more detailed information. These views map directly to the underlying SQL tracking service table schemas.

    This then goes on to describe all of the fields inside the many tables of the SQL tracking service.  This is a great way to figure out how to write custom queries against the views and what to expect in what columns.  I've talked previously about the tracking service and databases here.

     

    The answer to the original email question, by the way, is that these two fields are written to when the event raised contains event args we want to capture, say an exception.  If we want to track specific pieces of data, my blog post talks about where we can find those.
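
    As a quick illustration of the high-level route, the SqlTrackingQuery class mentioned in the SDK excerpt can pull tracked events for an instance without writing any SQL.  This is a rough sketch, not tested against a live tracking database; the connection string and instance id are placeholders:

    ```csharp
    using System;
    using System.Workflow.Runtime.Tracking;

    class TrackingDump
    {
        static void Main()
        {
            // Placeholder values -- point these at your tracking database and instance.
            Guid instanceId = new Guid("00000000-0000-0000-0000-000000000000");
            string connectionString =
                @"Initial Catalog=Tracking;Data Source=localhost\sqlexpress;Integrated Security=SSPI";

            SqlTrackingQuery query = new SqlTrackingQuery(connectionString);
            SqlTrackingWorkflowInstance instance;
            if (query.TryGetWorkflow(instanceId, out instance))
            {
                // ActivityEvents surfaces the same data the activity tracking views expose.
                foreach (ActivityTrackingRecord record in instance.ActivityEvents)
                {
                    Console.WriteLine("{0} -> {1}", record.QualifiedName, record.ExecutionStatus);
                }
            }
        }
    }
    ```

    For anything this doesn't surface (like the EventArg columns in the question above), dropping down to direct queries against the views is the way to go.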

  • mwinkle.blog

    Recent WF Content Summary

    • 6 Comments

    I've been having some fun playing around with Visual Studio 2008 and the .NET Framework 3.5, and wanted to summarize some of the content I've put up on channel9 and other places.

    Samples

    • The Conversation Sample remixed -- if there is one sample in the SDK to look at to understand what is going on with context passing and duplex messaging, this is it; it's the sample that helped me learn it.  I had this sample reworked a little bit so that you don't have 5 console windows open.
    • Pageflow sample 1, live hosted -- watch this as pageflow is hosted "live" in the cloud.  This lets you interact with a pageflow as well as dive into the code using some tools my team has built.
    • Pageflow sample 2, live hosted as above -- this is the sample that shows how we can leverage the navigator workflow type to be in multiple paths at the same time (a parallel state machine almost).

    Screencasts

    Channel9 Videos

     

    In the upcoming months, we've got more samples and content coming out about these features.  If you've got questions, keep 'em coming.

  • mwinkle.blog

    Picture Services

    • 1 Comment

    As Justin announces here, my team recently released the picture services sample.  This is a cool way to expose the pictures on a machine, found via Windows Desktop Search, through a simple, easy-to-consume REST endpoint.

    There are a few things here that I think are pretty cool:

    • Returning POX and Syndication formatted data pretty easily, and creating a URI hierarchy (here)
    • Querying Windows Desktop Search (here)
    • Adding Simple List Extensions (here)

    Justin has a screencast here.

  • mwinkle.blog

    VS 2008 Beta 2 Shipped : 0 to Workflow Service in 60 seconds

    • 2 Comments

    So, per Soma's blog, this great Channel9 video, and a bunch of other places, VS 2008 Beta 2 is available for download (go here).

    Others are covering their favorite features in depth; I want to cover one of mine: the WCF test client, which I will show by way of creating a Workflow Service application.

    Real quick, for those of you who didn't read the readme file (I know, sometimes you just forget), there is an important note regarding how to get this to work; out of the box you will probably get an exception in svcutil.exe.

    Running a WCF Service Library results in svcutil.exe crashing and the test form not working

    Running a WCF Service Library starts the service in WcfSvcHost and opens a test form to debug operations on the service.  On the Beta2 build this results in a crash of svcutil.exe, and the test form doesn't work due to a signing problem.

    To resolve this issue:

    Disable strong name signing for svcutil.exe by opening a Visual Studio 2008 Beta2 Command Prompt. At the command prompt, run: sn -Vr "<program files>\Microsoft SDKs\Windows\v6.0A\Bin\SvcUtil.exe" (replace <program files> with your program files path -- ex: c:\Program Files)

    Fire up VS 2008, create a new Sequential Workflow Service Library project:

    image

    This creates a basic Sequential workflow with a Receive activity:

    image

    It also creates an app.config and IWorkflow1.cs:

    image

    IWorkflow1.cs contains the contract our service is going to implement:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel;
    
    namespace WFServiceLibrary1
    {
        // NOTE: If you change the interface name "IWorkflow1" here, 
        // you must also update the reference to "IWorkflow1" in App.config.
        [ServiceContract]
        public interface IWorkflow1
        {
            [OperationContract]
            string Hello(string message);
        }
    }

    Now, we can modify this as needed, or we can delete it and create the contract as part of the Receive activity; see my previous post here on the topic.

    Return to the workflow and take a quick look at the properties of the Receive activity, and note that the parameters for the method (message and (returnValue)) have already been promoted and bound as properties on the workflow; that saves us a quick step or two:

    image

    Drop a code activity in the Receive shape, and double click to enter some code:

    image

    private void codeActivity1_ExecuteCode(object sender, EventArgs e)
    {
        returnValue = String.Format("You entered '{0}'.", inputMessage);
    }

    Now, we're pretty much there, but let's take a quick look at the app.config:

    <service name="WFServiceLibrary1.Workflow1" behaviorConfiguration="WFServiceLibrary1.Workflow1Behavior">
      <host>
        <baseAddresses>
          <add baseAddress="http://localhost:8080/Workflow1" />
        </baseAddresses>
      </host>
      <endpoint address=""
                binding="wsHttpContextBinding"
                contract="WFServiceLibrary1.IWorkflow1" />
      <endpoint address="mex" 
                binding="mexHttpBinding" 
                contract="IMetadataExchange" />
    </service>

    We're going to use the wsHttpContextBinding, which you can think of as the standard wsHttpBinding with the addition of the Context channel to the channel stack.  Also note that we can right-click the config and open it in the WCF config editor. Slick!

    image

    Let's hit F5.  We build, do a little bit of processing and up pops the WCF test client.  You may also note this little pop up from your task tray:

    image

    What's this?  The "autohosting" of your service, just like you get with ASP.NET.  This saves me the trouble of having to write a host as well as my service when I just want to play around a bit.  The test client looks like this:

    image

    Double-click on the Hello operation and fill in a message to send to the service:

    image

    Clicking "Invoke" will invoke the service, which will soon return with the value we hope to see.  Sure enough, after a bit of chugging along, this returns:

    image

    Finally, let's hit the XML tab to see what's in there, and we see it is the full XML of the request and the response.  There's an interesting little tidbit in the header of the response:

    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" 
    xmlns:a="http://www.w3.org/2005/08/addressing" 
    xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <s:Header>
        <a:Action s:mustUnderstand="1" u:Id="_2">
           http://tempuri.org/IWorkflow1/HelloResponse
         </a:Action>
        <a:RelatesTo u:Id="_3">urn:uuid:3f5b7eb5-cc35-4b01-b345-92f6edf728d7</a:RelatesTo>
        <Context u:Id="_4" xmlns="http://schemas.microsoft.com/ws/2006/05/context">
          <InstanceId>fc0f47fd-dd7b-4030-9883-acbf358583c3</InstanceId>
        </Context>

    This is the context token that lets me know how to continue conversing with this workflow.  In the test client, I can't find a way to attach it to a subsequent request, which means we can't use the test client to step through multiple stages of a workflow.  Still, this new feature lets me get up and running, verify connectivity, and set breakpoints to debug my workflow service, which is pretty cool.
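
    In code, though, grabbing the token off the channel is straightforward.  Here's a rough sketch of a client doing that (it assumes a proxy for the IWorkflow1 contract above; the "instanceId" key and the WSHttpContextBinding usage are from memory, so treat this as illustrative rather than tested):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.ServiceModel.Channels;

    class WorkflowClient
    {
        static void Main()
        {
            // Same address and context-aware binding as the app.config above.
            ChannelFactory<IWorkflow1> factory = new ChannelFactory<IWorkflow1>(
                new WSHttpContextBinding(),
                "http://localhost:8080/Workflow1");
            IWorkflow1 proxy = factory.CreateChannel();

            Console.WriteLine(proxy.Hello("first message"));

            // After the first call, the channel's IContextManager holds the context
            // token that came back in the response header; subsequent calls on this
            // same channel will carry it automatically.
            IContextManager contextManager =
                ((IClientChannel)proxy).GetProperty<IContextManager>();
            IDictionary<string, string> context = contextManager.GetContext();
            Console.WriteLine("Talking to instance {0}", context["instanceId"]);
        }
    }
    ```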

    I've posted this walkthrough on c9 as a screencast, something I will try to do for my subsequent blog postings as well.

  • mwinkle.blog

    Introducing the Windows Server 2008 Developer Training Kit

    • 0 Comments

    Finally, a post of mine without code; haven't had one of those in a while.

    Through some interesting organizational hierarchies, I actually report up through the Longhorn Server, strike that, Windows Server 2008 evangelism team, and every now and then I deliver some content that is relevant to Windows Server 2008 (there, got it right that time).

    Check out this Developer Training Kit we just released over on James' blog.

    This thing contains about 15 presentations on topics relevant to Windows Server 2008 development, including some cool stuff on TxF (the transactional file system) and, of course, WF and WCF.

    Check it out, here.

  • mwinkle.blog

    N of M Question (Why Use ActivityExecutionContextManager?)

    • 1 Comment

    In this post, mstiefel asked the following:

    # re: Implementing the N of M Pattern in WF

    Since you are not looping, do you have to use the ActivityExecutionContextManager to generate a new context for the child activities? Couldn't you use the context passed into the Execute method?

    Friday, July 06, 2007 7:54 PM by mstiefel

    If I were creating a parallel activity, and simply wanted to execute a number of distinct child activities, I would just use the passed-in context, as you suggest, to schedule execution of each of the activities.  The difference here is subtle: I don't have distinct child activities, I have one child activity which I need to clone m times, and I execute those cloned children separately.  The context needs to be created so that each one of those clones maintains its own execution context: what variables are where, who is in what state, etc.  This is important while executing, and after completion in the case where I want to compensate for the individual activities.

    This line

    ActivityExecutionContext newContext = aecm.CreateExecutionContext(this.Activities[0]);

    is what will cause the activity to be cloned, allowing me to schedule the execution of the clone, and not the template activity (which, incidentally, will never be scheduled), leaving me with m+1 copies of the activity.  This is the same behavior that I get in a While or a Replicator (or the CAG, depending upon how it is configured).  A state machine workflow will do a similar thing, as I may re-enter a state multiple times.

    If, instead of allowing a user to specify the list of approvers and dynamically creating the activities, I designed it so that a user would have to drop and configure an activity for each approval (similar to the parallel activity), I would have used code like this:

    // Code to schedule distinct activities in parallel, aka code similar to the parallel activity
    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        foreach (Activity activity in this.EnabledActivities)
        {
            // i is a counter (a field on this activity) tracking how many branches
            // there are so I know when I am done
            i++;
            // I'm interested in what happens when this guy closes.
            activity.RegisterForStatusChange(Activity.ClosedEvent, this);
            executionContext.ExecuteActivity(activity);
        }
        return ActivityExecutionStatus.Executing;
    }
    

  • mwinkle.blog

    WF and BizTalk

    • 1 Comment

    Before joining Microsoft, I spent a fair amount of time in the BizTalk world, and to this day, it remains one of the most common sources of questions I am asked when presenting on WF.

    Paul showed off some cool stuff at TechEd, and yesterday released the code to enable a pretty interesting pattern where processes can be modeled in WF and then an orchestration can be created from the workflow to handle the messaging.  This gives you the flexible process modeling of WF while relying on BizTalk to handle all those messy real-world details like transforming messages in the send port, communicating via the built-in adapters, and handling retries.

    This is a cool project that lets you use both technologies together today.  Check it out and give feedback at the connect site!

  • mwinkle.blog

    Pageflow Sample Update

    • 2 Comments

    Updated to version 1.1

    Changes

    • Corrected the install to display the EULA at a more opportune time (thanks to this comment)
    • Updated to fix the exception being thrown at the completion of a workflow
      • NavigationManager.cs
      • AspUserInput.cs
      • WebHostingModule.cs

    To install this, download the bits (same location), uninstall the existing version and install this version.

    Other Setup Notes

    I got a few pieces of feedback on the setup to get things working, here are some hints to get up and running:

    • If you extract the samples.zip file in the install directory (Program Files\Windows Workflow Foundation Samples\...), the files may be marked as read-only.  Clear the read-only flag on the whole directory (so you can write to it) and then you will be able to compile successfully.
    • To run the sample, you need to set up a persistence database; this will be used to store the state of the workflow while we are not interacting with it
      • Then go into the App.config or web.config, find the ConnectionString setting, and update it to point to the correct database.  On most default VS installs, you can point at localhost\sqlexpress, but it can certainly be any SQL Server install.
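
    If you haven't created a persistence database before, it usually amounts to running the two scripts that ship with .NET 3.0.  From a command prompt, something along these lines should work (the database name is arbitrary and the script paths will vary with your install, so treat this as a sketch):

    ```
    sqlcmd -S localhost\sqlexpress -E -Q "CREATE DATABASE WFPersistence"
    sqlcmd -S localhost\sqlexpress -E -d WFPersistence -i "%WINDIR%\Microsoft.NET\Framework\v3.0\Windows Workflow Foundation\SQL\EN\SqlPersistenceService_Schema.sql"
    sqlcmd -S localhost\sqlexpress -E -d WFPersistence -i "%WINDIR%\Microsoft.NET\Framework\v3.0\Windows Workflow Foundation\SQL\EN\SqlPersistenceService_Logic.sql"
    ```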

    Other Notes

    For the folks using WCSF, Glenn has a good summary that fills in some of the story on the WCSF side, and sneaks an Eminem reference into the title :-)

  • mwinkle.blog

    Implementing the N of M Pattern in WF

    • 11 Comments

    The second in my series of alternate execution patterns (part 1)

    I recently worked with a customer who was implementing what I would call a "basic" human workflow system. It tracked approvals, rejections and managed things as they moved through a customizable process. It's easy to build workflows like this with an Approval activity, but they wanted to implement a pattern that's not directly supported out of the box. This pattern, which I have taken to calling "n of m", is also referred to as a "Canceling partial join for multiple instances" in the van der Aalst taxonomy.

    The basic description of this pattern is that we start m concurrent actions, and when some subset of those, n, complete, we can move on in our process and cancel the other concurrent actions. A common scenario for this is where I want to send a document for approval to 5 people, and when 3 of them have approved it, I can move on. This comes up frequently in human or task-based workflows. There are a couple of "business" questions which have to be answered as well; the implementation can support any set of answers to these:

    • What happens if an individual rejects? Does this stop the whole group from completing, or is it simply noted as a "no" vote?
    • How should delegation be handled? Some businesses want this to break out from the approval process at this point.

    The first approach the customer took was to use the ConditionedActivityGroup (CAG). The CAG is probably one of the most sophisticated out-of-the-box activities that we ship in WF today, and it does give you a lot of control. It also gives you the ability to set the Until condition, which lets us specify the condition under which the CAG completes, with the remaining branches cancelled (see Using ConditionedActivityGroup).

    ConditionedActivityGroup

    What are the pros and cons of this approach?

    Pros

    • Out of the box activity, take it and go
    • Focus on approval activity
    • Possibly execute same branch multiple times

    Cons

    • Rules get complex (what happens if an individual rejection causes everything to stop?)
    • I need to repeat the same activity multiple times (especially in this case: it's an approval, so we know what activity needs to be in the loop)
    • I can't control what else a developer may put in the CAG
    • We may want to execute on some set of approvers that we don't know at design time, imagine an application where one of the steps is defining the list of approvers for the next step. The CAG would make that kind of thing tricky.

    This led us to the decision to create a composite activity that would model this pattern of execution. Here are the steps we went through:

    Build the Approval activity

    The first thing we needed was the approval activity. Since we know this is going to eventually have some complex logic, we decided to take the basic approach of inheriting from SequenceActivity and composing our approval activity out of other activities (sending email, waiting on notification, handling timeouts, etc.). We quickly mocked up this activity to have an "Approver" property and a property for a timeout (which will go away in the real version, but is useful for putting some delays into the process). We also added some code activities which Console.WriteLine'd some information so we knew which one was executing. We can come back to this later and make it arbitrarily complex. We also added the cancel handler so that we can catch when this activity is canceled (and send out a disregard email, clean up the task list, etc.). Implementing ICompensatableActivity may also be a good idea so that we can play around with compensation if we want to (note that we will only compensate the closed activities, not the ones marked as canceled).

    Properties of the Approval Activity

    Placing the Approval Activity inside our NofM activity.

    What does the execution pattern look like?

    Now that we have our approval activity, we need to determine how this new activity is going to execute. This will be the guide that we use to implement the execution behavior. There are a couple of steps this will follow:

    1. Schedule the approvals to occur in parallel, one per each approver submitted as one of the properties
    2. Wait for each of those to finish.
    3. When one finishes, check to see if the condition to move onward is satisfied (in this case, we increment a counter towards a "number of approvers required" variable).
    4. If we have not met the criteria, we keep on going. [we'll come back to this, as we'll need to figure out what to do if this is the last one and we still haven't met all of the criteria.]
    5. If we have met the criteria, we need to cancel the other running activities (they don't need to make a decision any more).

    Implement the easy part of this (scheduling the approvals to occur in parallel)

    I say this is the easy part as this is documented in a number of places, including Bob and Dharma's book. The only trickery occurring here is that we need to clone the template activity, that is, the approval activity that we placed inside this activity before we started working on it. This is a topic discussed in Nate's now defunct blog.

        protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
        {
            // here's what we need to do.
            // 1.> Schedule these for execution, subscribe to when they are complete
            // 2.> When one completes, check if rejection, if so, barf
            // 3.> If approve, increment the approval counter and compare to above
            // 4.> If reroute, cancel the currently executing branches.
            ActivityExecutionContextManager aecm = executionContext.ExecutionContextManager;
            int i = 1;
            foreach (string approver in Approvers)
            {
                // this will start each one up.
                ActivityExecutionContext newContext = aecm.CreateExecutionContext(this.Activities[0]);
                GetApproval ga = newContext.Activity as GetApproval;
                ga.AssignedTo = approver;
                // this is just here so we can get some delay and "long running ness" to the
                // demo
                ga.MyProperty = new TimeSpan(0, 0, 3 * i);
                i++;
                // I'm interested in what happens when this guy closes.
                newContext.Activity.RegisterForStatusChange(Activity.ClosedEvent, this);
                newContext.ExecuteActivity(newContext.Activity);
            }
            return ActivityExecutionStatus.Executing;
        }
    

    Code in the execute method

    One thing that we're doing here is RegisterForStatusChange().  This is a friendly little method that will allow me to register for a status change event (thus it is very well named). It is a method on Activity, and I can register for different activity events, like Activity.ClosedEvent or Activity.CancelingEvent. On my NofM activity, I implement IActivityEventListener<ActivityExecutionStatusChangedEventArgs> (check out this article as to what that does and why). This causes me to implement OnEvent, which, since it comes from a generic interface, is strongly typed to accept the right type of event arguments. That's always a neat trick that causes me to be thankful for generics. That's going to lead us to the next part.
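
    Putting that together, the overall shape of the activity's declaration looks something like this (just the skeleton; the fields mirror the counters used in the OnEvent code, and the real activity obviously carries more, like the Approvers and NumRequired properties):

    ```csharp
    using System;
    using System.Workflow.ComponentModel;

    // The NofM activity is a composite that listens for status changes on the
    // clones it schedules, hence the generic event listener interface.
    public class NofMActivity : CompositeActivity,
        IActivityEventListener<ActivityExecutionStatusChangedEventArgs>
    {
        private int numProcessed;        // how many children have closed
        private int numApproved;         // how many approvals we've collected
        private bool approvalsCompleted; // set once the n-of-m condition is met

        // Strongly typed by the generic interface -- no casting of EventArgs needed.
        public void OnEvent(object sender, ActivityExecutionStatusChangedEventArgs e)
        {
            // ... handle the child's completion here ...
        }
    }
    ```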

    Implement what happens when one of the activities complete

    Now we're getting to the fun part: how we handle what happens when one of these approval activities returns. For the sake of keeping this somewhat brief, I'm going to work off the assumption that a rejection does not adversely affect the outcome; it is simply one less person who will vote for approval. We can certainly get more sophisticated, but that is not the point of this post! ActivityExecutionStatusChangedEventArgs has a very nice Activity property which will return the Activity that caused the event. This lets us find out what happened, what the decision was, who it was assigned to, etc. I'm going to start by putting the code for my method in here and then we'll walk through the different pieces and parts.

    public void OnEvent(object sender, ActivityExecutionStatusChangedEventArgs e)
    {
        ActivityExecutionContext context = sender as ActivityExecutionContext;
        // I don't need to listen any more
        e.Activity.UnregisterForStatusChange(Activity.ClosedEvent, this);
        numProcessed++;
        GetApproval ga = e.Activity as GetApproval;
        Console.WriteLine("Now we have gotten the result from {0} with result {1}", ga.AssignedTo, ga.Result.ToString());
        // here's where we can have some additional reasoning about why we quit
        // this is where all the "rejected cancels everyone" logic could live.
        if (ga.Result == TypeOfResult.Approved)
            numApproved++;
        // close out the activity
        context.ExecutionContextManager.CompleteExecutionContext(context.ExecutionContextManager.GetExecutionContext(e.Activity));
    if (!approvalsCompleted && (numApproved >= NumRequired))
        {
            // we are done!, we only need to cancel all executing activities once
            approvalsCompleted = true;
            foreach (Activity a in this.GetDynamicActivities(this.EnabledActivities[0]))
                if (a.ExecutionStatus == ActivityExecutionStatus.Executing)
                    context.ExecutionContextManager.GetExecutionContext(a).CancelActivity(a);
        }
        // are we really done with everything? we have to check so that all of the 
        // canceling activities have finished cancelling
        if (numProcessed == numRequested)
            context.CloseActivity();  
    }
    

    Code from "OnEvent"

    The steps here, in English:

    • UnregisterForStatusChange - we're done listening.
    • Increment the number of activities which have closed (this will be used to figure out if we are done)
    • Write out to the console for the sake of sanity
    • If we've been approved, increment the counter tracking how many approvals we have
    • Use the ExecutionContextManager to CompleteExecutionContext; this marks the execution context we created for the activity as done.
    • Now let's check if we have the right number of approvals; if we do, mark a flag so we know we're done worrying about approves and rejects, and then proceed to cancel the other executing activities with CancelActivity. CancelActivity schedules the cancellation; it is possible that this is not a synchronous thing (we can go idle waiting for a cancellation confirmation, for instance).
    • Then we check if all of the activities have closed. What will happen once the activities are scheduled for cancellation is that each one will eventually cancel and then close. This will cause the event to be raised and we step through the above pieces again. Once every activity is done, we finally close out the activity itself.

    Using it

    I placed the activity in a workflow, configured it with five approvers, and set it to require two in order to move on. I also placed a code activity outputting "Ahhh, I'm done", and put a Throw activity in there to raise an exception and cause compensation to occur, to illustrate that only the two that completed are compensated for.

    So, what did we do?

    • Create a custom composite activity with the execution logic to implement an n-of-m pattern
    • Saw how we can use IActivityEventListener in order to handle events raised by our child activities
    • Saw how to handle potentially long running cancellation logic, and how to cancel running activities in general.
    • Saw how compensation only occurs for activities that have completed successfully

    Extensions to this idea:

    • More sophisticated rules surrounding the approval (if a VP or two GMs say no, we must stop)
    • Non-binary choices (interesting for scoring scenarios: if the average score gets above 95%, regardless of how many approvers remain, we move on)
    • Create a designer to visualize this, especially when displayed in the workflow monitor to track it
    • Validation (don't let me specify 7 approvals required, and only 3 people)

  • mwinkle.blog

    Pageflow questions: "What about WCSF / Acropolis / Codename 'foo'"?

    • 1 Comment

    So, I got a little bit of feedback from my initial post.  First, thanks, it's great to see all of the interest in the technology.  I want to use this as the place to answer common questions that arise about the sample. 

    Here's one that I got internally as well as externally (here, here):

    What about WCSF and Acropolis?  Does this change how we think about the problem today? 

    The short answers are "they are still here" and "no".

    WCSF 

    There are a lot of people who are using the Web Client Software Factory.  It is definitely something you should check out; it is a great toolkit for building composite web applications.  Part of what it provides, among many other things, is the Pageflow Application Block, designed to model navigation.   This is what most people are wondering about when they first hear about the pageflow sample we released.  Don't they do the same thing?  At a functional level, yes, they both provide a nice abstraction to model flow through an application.  This is a sample usage of WF to solve a similar problem in a little bit of a different way.  The model inside the WCSF is extensible, allowing a different provider of pageflow information, so it is probably even possible to put this pageflow sample inside of the WCSF, though I have yet to give that a try.

    At a deeper level, there are some differences that stem from the implementation of the Pageflow Application Block as a state machine, which I will get into in my next post.

    Acropolis

    Acropolis, announced at TechEd last week, is still early on in its lifecycle, as part of the ".NET Client Futures".  I haven't spent a lot of time looking at it yet, but it is something else which will have a way to model the flow within an application, in this case between WPF forms. 

    Conclusion

    From a support perspective, it's important to note that this sample is unsupported (save for myself and a few others), as opposed to Acropolis and WCSF, which have teams working on them.

    It is important to note that both WCSF and Acropolis aim to solve a much larger problem than simple UI navigation.  In that sense, in no way does the pageflow sample released represent a wholesale replacement of either of them.  The sample that we released aims to illustrate a way to use WF to solve the problem of UI navigation.  The important thing to take away from this is that it can be incredibly valuable to model the flow through one's application using a technology which allows separation from the UI.  This makes our UI more loosely coupled with the rest of our application, and increases our agility to react to changes in the process, the data required, or the information we need to track.

    Also, even if you don't want to use this for pageflow, tune in, because there are some pretty valuable things that any WF developer can learn by taking a look at this sample.  I will continue to discuss those as well.

  • mwinkle.blog

    Pageflow Questions: Why not a state machine?

    • 1 Comment

    Here's a comment from my initial post introducing the pageflow sample from wleong:

    NavigatorWorkflow looks like a state machine to me.  Why create a new workflow type?

    Tuesday, June 12, 2007 3:44 AM by wleong

    This is a good question.  There are a couple of reasons why we created our own workflow type:

    • More accurately model a process
    • Enable different execution semantics
    • Make development faster by focusing on the model, not the implementation details.

    For a little more background on the problem, see my previous post on "Different Execution Patterns for WF (or, Going beyond Sequential and State Machine)", which talks in a little more depth about the trouble one can encounter by only thinking about the two out-of-the-box models.

    For this problem, in particular, all of the reasons are relevant.

    Accurately modeling a process

    It is very natural to think of UI navigation as a series of places we can be, and a set of transitions from any one of those places to another.  That's what the navigator workflow models.  There is a subtle difference from a state machine that plays to point 3 here.  This model lets me avoid putting an IEventActivity at the top of a state, making some decision, and then setting a new state.  Defining the events is abstracted away and taken care of for us, which lets us model the process naturally.  In a state machine, modeling pageflow with transitions has me place one event and then an IfElse activity that decides which state to transition to.  This adds mental overhead to the model, moving me much closer to the implementation details (again, point 3).

    Enable Different Execution Semantics

    This was the primary reason I wrote the original article: my customer was doing some very unnatural acts in order to model their process in a sequential workflow.  In the case of UI navigation, we have the ability to be in multiple interactions at the same time.  The WF state machine has the (very natural) restriction that we can only be in one state at a given time (although there are certainly other state machine models that do not have that restriction).  The pageflow sample allows you to be in multiple interactions at the same time.  Think about filling out a mortgage application: I can be in the midst of many minor sub-processes that all roll up into the larger application process, and I can jump from filling out my salary history to the details of the property I am buying simply by clicking a tab at the top of the page.

    Make development faster by focusing on the model, not the implementation details.

    If we decide to use a state machine, we are coupling our model to the implementation and execution details of the state machine, which will cause us to do things that we don't need to do for this application.  The WF state machine is generic: any state can receive n different events and react to each one differently.  In pageflow, we know there is only one event, "GoForward", and a set of rules operates on it to determine where we transition to.  By spending the time to create our own root activity, we remove the burden of the implementation details from the workflow developer, allowing them to focus on the details of their process, not the configuration of a given activity.

  • mwinkle.blog

    Introducing the Pageflow Sample

    • 40 Comments

      Most people think of workflows as a tool to represent and automate back-end business processes. Back-end business processes normally require some user interaction, but their main purpose is not to drive the user experience or manage the UI. However, there is a growing type of application that leverages workflow to drive the user interaction and user experience of an interactive process. This type of technology is called page flow.

      Last year at TechEd, we showed off some bits we had been working on internally that were designed to make that possible: the ability to model the user interaction of an application using workflow. This approach gives developers the ability to continue managing the complexity of their application in a structured and scalable manner. It turned out that the code we showed at TechEd wasn't going to end up in any of the product releases, so the dev team requested permission to release that code as a sample of how one can implement a generic navigation framework using WF that can support multiple UI technologies (i.e. ASP.NET and WPF).  This year, I just finished giving a talk showing this off and talking about how it is available today!

      Thanks go to Shelly Guo, the developer, and Israel Hilerio, the PM, who worked on this feature, and to Jon Flanders for providing packaging and quality control.

      Now for the good stuff, download the bits from here!

      Navigate to setup.exe and run the setup; this will copy the sample projects and the source code for the sample, as well as some new Visual Studio project templates.

      Now, let's open up a sample project. Navigate to the samples directory and open the ASPWorkflow sample, which shows off both an ASP.NET front end and a WPF controller (you can actually use the two together). Let's get to the good stuff right away and open up the workflow file.

      Wow… what's going on here? It kind of looks like a state machine, but not really. What has been done here is to create a new base workflow type. Things like SequentialWorkflow and StateMachineWorkflow aren't the only ways to write workflows, they are just two common patterns of execution. A NavigatorWorkflow type has been created (and you can inspect the source and the architecture document to see what this does) and a WorkflowDesigner has been created for it as well (again, this source is available as a guide for those of you who are creating your own workflow types).

      Each of the activities you see on the diagram above is an InteractionActivity, representing the interaction between the user (via the UI technology of their choosing) and the process. A nice model is to think of the InteractionActivity as mapping to a page within a UI. The output property is the information that is sent to that page (a list of orders or addresses to display) and the input is the information that is received from the page when the user clicks "submit". The InteractionActivity is a composite activity, allowing one to place other activities within the activity to be executed when input is received. The interesting property of the InteractionActivity is the Transitions collection. By selecting this and opening its designer, we are presented with the following dialog:

      This allows us to specify n transitions from this InteractionActivity, or "page," to other InteractionActivities. And we can specify each one via a WF activity condition. This way, we could forward orders greater than $1000 to a credit verification process, or orders containing fragile goods through a process to obtain insurance from a shipper. What's cool about this is that my page does not know about that process; it just says "GoForward" and my process defines what comes next. This de-couples the pages from the logic of your process.
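      To make the transition-rule idea concrete, here's a plain-C# sketch (hypothetical names, not the sample's API): each page holds an ordered list of (condition, next page) pairs, and "GoForward" just evaluates them in order.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of condition-based routing (not the sample's API):
// the page only ever says "go forward"; the rules decide where that leads.
class Order { public decimal Total; public bool Fragile; }

static class Router
{
    // Ordered rules: the first matching condition wins; the last is the default.
    static readonly List<(Func<Order, bool> When, string NextPage)> CheckoutTransitions =
        new List<(Func<Order, bool>, string)>
        {
            (o => o.Total > 1000m, "CreditVerification"),
            (o => o.Fragile,       "ShippingInsurance"),
            (o => true,            "Confirmation"),
        };

    public static string GoForward(Order input)
    {
        foreach (var (when, nextPage) in CheckoutTransitions)
            if (when(input)) return nextPage;
        throw new InvalidOperationException("No transition matched.");
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(Router.GoForward(new Order { Total = 1500m }));               // CreditVerification
        Console.WriteLine(Router.GoForward(new Order { Total = 50m, Fragile = true })); // ShippingInsurance
        Console.WriteLine(Router.GoForward(new Order { Total = 50m }));                 // Confirmation
    }
}
```

      Note that the page never names its successor; reordering or adding rules changes the flow without touching any page.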

      We then need to wire things up in config:

      <configSections>
        <section name="NavigationManagerSettings"
                 type="Microsoft.Samples.Workflow.UI.NavigationManagerConfigSection, Microsoft.Samples.Workflow.UI, Version=1.0.0.0, Culture=neutral, PublicKeyToken=40B940EB90393A19"/>
        <section name="AspNavigationSettings"
                 type="Microsoft.Samples.Workflow.UI.Asp.AspNavigationConfigSection, Microsoft.Samples.Workflow.UI, Version=1.0.0.0, Culture=neutral, PublicKeyToken=40B940EB90393A19"/>
      </configSections>
      
      

      <NavigationManagerSettings StartOnDemand="false">
        <Workflow mode="Compiled" value="ASPUIWorkflow.Workflow1, ASPUIWorkflow"/>
        <!--<Workflow mode="XOML" value="WebSite/XAMLWorkflow.xoml" rulesFile="WebSite/XAMLWorkflow.rules" />-->
        <Services>
          <add type="System.Workflow.Runtime.Hosting.DefaultWorkflowCommitWorkBatchService, System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          <add type="System.Workflow.Runtime.Hosting.SqlWorkflowPersistenceService, System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" ConnectionString="Initial Catalog=WorkflowStore;Data Source=localhost;Integrated Security=SSPI;" UnloadOnIdle="true"/>
        </Services>
      </NavigationManagerSettings>
      <AspNavigationSettings>
        <PageMappings>
          <add bookmark="Page1" location="/WebSite/Default.aspx"/>
          <add bookmark="Page2" location="/WebSite/Page2.aspx"/>
          <add bookmark="Page3" location="/WebSite/Page3.aspx"/>
          <add bookmark="Page4" location="/WebSite/Page4.aspx"/>
          <add bookmark="Page5" location="/WebSite/Page5.aspx"/>
          <add bookmark="LastPage" location="/WebSite/LastPage.aspx"/>
        </PageMappings>
        <ExceptionMappings>
          <add type="Microsoft.Samples.Workflow.UI.WorkflowNotFoundException" location="/WebSite/ErrorPage.aspx"/>
          <add type="Microsoft.Samples.Workflow.UI.WorkflowCanceledException" location="/WebSite/ErrorPage.aspx"/>
          <add type="System.ArgumentException" location="/WebSite/ErrorPage.aspx"/>
          <add type="System.Security.SecurityException" location="/WebSite/ErrorPage.aspx"/>
        </ExceptionMappings>
      </AspNavigationSettings>
      
      

       

      Finally, let's look inside an ASP.NET page and see what we need to do to interact with the process:

      AspNetUserInput.GoForward("Submit", userInfo, this.User);

      This code is specifying the action and is submitting a userInfo object (containing various information gathered from the page) to the InteractionActivity (in this case, it submits to the Page2 InteractionActivity). If we look at what we've configured as the Input for this InteractionActivity, we see the following, which we can then refer to in the transition rules in order to make decisions about where to go next:

      There's plenty of other stuff we could talk about here (support for the back button, persistence, etc.), and I could continue to ramble on in another record-length blog post, but I will stop here for now. I will continue to blog about this, and I look forward to hearing any and all feedback on what you'd be interested in seeing. Moving forward, there aren't any formal plans around this, but if there is enough interest in the community, we could get it created as a project on CodePlex. If that sounds intriguing, either contact me through this blog or leave a comment so that I can gauge the interest in such a scenario.

      Go, grab the bits!  And, if you have feedback, please contact me.

  • mwinkle.blog

    Down In Orlando

    • 0 Comments

    David and I arrived in Orlando yesterday morning via the redeye for TechEd 2007. We're settled into the Port Orleans French Quarter, and will be heading on over to the conference center later today.  Once we get all checked in, I'll post some info on the talks I'll be giving, and the talks that I wouldn't want to miss.  For now, I'm out to enjoy the non-Seattle-like weather and adjust to the time change. 

  • mwinkle.blog

    HELP WANTED

    • 0 Comments

    My boss, James, has a post on his blog about two positions that are open on our team.  One focuses on Orcas evangelism, while the other is for IIS7.  Our team does a ton of cool stuff, and if you're interested, certainly drop James a line.  If you want to create, deliver, and scale your passion for .NET (or IIS) to the world, give it a look!

  • mwinkle.blog

    Different Execution Patterns with WF (or, Going beyond Sequential and State Machine)

    • 7 Comments

    How do I do this?

    image

    A lot of times people get stuck with the impression that there are only two workflow models available: sequential and state machine. True, out of the box these are the two that are built in, but only because there is a set of common problems that map nicely onto their execution semantics. As a result of these two being "in the box," I often see people doing a lot of very unnatural things in order to fit their problem into a certain model.

    The drawing above illustrates the flow of one such pattern. In this case, the customer wanted parallel execution with two branches ((1,3) and (2,5)). But they had an additional factor: 4 could execute, but only when both 1 and 2 had completed. 4 didn't need to wait for 3 and 5 to finish; since 3 and 5 could take a long time, 4 could at least start once 1 and 2 were completed. Before we dive into a simpler solution, let's look at some of the ways they tried to solve the problem in an attempt to use "what's in the box."

    The "While-polling" approach

    image

     

    The basic idea behind this approach is that we will use a parallel activity, and in the third branch we will place a while loop that loops on the condition "is activity x done," with a brief delay activity in there so that we are not busy-polling. What are the downsides to this approach?

    • The model is unnatural, and it gets more awkward as the complexity of the process grows (what do we do if activity 7 has a dependency on 4 and 5?)
    • Polling and waiting is simply not an efficient way to solve the problem
    • This is a lot to ask a developer to do in order to translate the representation she has in her head (the first diagram) into the model we are forcing on her.

    The SynchScope approach

    WF v1 does have a way to synchronize some execution: the SynchronizationScope activity. The basic idea behind SynchronizationScope is that one can specify a set of handles that the activity must be able to acquire before its contained activities are allowed to execute. This lets us serialize access and execution. We could use this to mimic some of the behavior that the polling does above. We will use sigma(x, y, z) to indicate the synchronization scope and its handles (just because I don't get to use nearly as many Greek letters as I used to).

    image

    This should work, provided the synchronization scopes can acquire the handles in the "correct" or "intended" order. Again, the downside is that this gets pretty complex: how do we model 4 having a dependency on 3 and 2? Our first synchronization scope now needs to extend to cover the whole left branch, and then it should work. For a simple case like the process map I drew at the beginning this will probably work, but as the dependency map gets deeper, we are going to run into more problems trying to make this work.
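    To see what handle-based serialization buys us outside of WF, here's a plain-C# analogy (my own sketch, not WF code): a "scope" acquires all of its named handles, in a stable order, before its body runs, so any two scopes sharing a handle execute serially.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Plain-C# analogy for SynchronizationScope-style handles (not WF code):
// a scope must hold every named handle before its body runs.
static class SyncScopes
{
    static readonly ConcurrentDictionary<string, object> Handles =
        new ConcurrentDictionary<string, object>();

    public static void Run(string[] handleNames, Action body)
    {
        // Acquire handles in a stable (sorted) order so two scopes
        // sharing handles cannot deadlock by acquiring in opposite orders.
        var names = (string[])handleNames.Clone();
        Array.Sort(names, StringComparer.Ordinal);
        RunLocked(names, 0, body);
    }

    static void RunLocked(string[] names, int i, Action body)
    {
        if (i == names.Length) { body(); return; }
        lock (Handles.GetOrAdd(names[i], _ => new object()))
            RunLocked(names, i + 1, body);
    }
}

class Program
{
    static int counter;

    static void Main()
    {
        var threads = new Thread[4];
        for (int t = 0; t < threads.Length; t++)
        {
            threads[t] = new Thread(() =>
            {
                for (int i = 0; i < 1000; i++)
                    SyncScopes.Run(new[] { "x" }, () => counter++);
            });
            threads[t].Start();
        }
        foreach (var th in threads) th.Join();
        Console.WriteLine(counter); // 4000: the shared "x" handle serialized every increment
    }
}
```

    The WF activity works at the level of workflow scheduling rather than threads, but the contract is the same: shared handle names imply serialized execution.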

    Creating a New Execution Pattern

    WF is intended to be a general purpose process engine, not just a sequential or state machine process engine. We can write our own process execution patterns by writing our own custom composite activity. Let's first describe what this needs to do:

    • Allow activities to be scheduled based on all of their dependent activities having executed.

      • We will start by writing a custom activity that has a property for expressing dependencies. A more robust implementation would use attached properties to push those down to any contained activity
    • Analyze the list of dependencies to determine which activities we can start executing (perhaps in parallel)
    • When any activity completes, check where we are at and if any dependencies are now satisfied. If they are, schedule those for execution.

    So, how do we go about doing this?

    Create a simple activity with a "Preconditions" property

    In the future, this will be any activity using an attached property, but I want to start small and focus on the execution logic. This one is a simple Activity with a "Preconditions" array of strings where the strings will be the names of the activities which must execute first:

    public partial class SampleWithPreconProperty : Activity
    {
        public SampleWithPreconProperty()
        {
            InitializeComponent();
        }
    
        private string[] preconditions = new string[0];
    
        // names of the activities that must complete before this one runs
        public string[] Preconditions
        {
            get { return preconditions; }
            set { preconditions = value; }
        }
    }

    Create the PreConditionExecutor Activity

    Let's first look at the declaration and the members:

    [Designer(typeof(SequentialActivityDesigner),typeof(IDesigner))]
    public partial class PreConditionExecutor : CompositeActivity
    {
        // this is a dictionary of the executed activities to be indexed via
        // activity name
        private Dictionary<string, bool> executedActivities = new Dictionary<string, bool>();
    
        // this is a dictionary of activities marked to execute (so we don't 
        // try to schedule the same activity twice)
        private Dictionary<string, bool> markedToExecuteActivities = new Dictionary<string, bool>();
    
        // dependency maps
        // currently dictionary<string, list<string>> that can be read as 
        // activity x has dependencies in list a, b, c
        // A more sophisticated implementation will use a graph object to track
        // execution paths and be able to check for completeness, loops, all
        // that fun graph theory stuff I haven't thought about in a while
        private Dictionary<string, List<string>> dependencyMap = new Dictionary<string, List<string>>();
    

    We have three dictionaries, one to track which have completed, one for which ones are scheduled for execution, and one to map the dependencies. As noted in the comments, a directed graph would be a better representation of this so that we could do some more sophisticated analysis on it.

    Now, let's look at the Execute method, the one that does all the work.

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        if (0 == Activities.Count)
        {
            return ActivityExecutionStatus.Closed;
        }
        // loop through the activities and mark those that have no preconditions
        // as ok to execute and put those in the queue
        // also generate the graph which will determine future activity execution.
        foreach (Activity a in this.Activities)
        {
            // start with our basic activity
            SampleWithPreconProperty preconActivity = a as SampleWithPreconProperty;
            if (null == preconActivity)
            {
                throw new Exception("Not right now, we're not that fancy");
            }
            // construct the execution dictionary
            executedActivities.Add(a.Name, false);
            markedToExecuteActivities.Add(a.Name, false);
            List<string> actDependencies = new List<string>();
            if (null != preconActivity.Preconditions)
            {
                foreach (string s in preconActivity.Preconditions)
                {
                    actDependencies.Add(s);
                }
            }
            dependencyMap.Add(a.Name, actDependencies);
        }
        // now we have constructed our execution map and our dependency map
        // let's do something with those, like find those activities with
        // no dependencies and schedule those for execution.
        foreach (Activity a in this.Activities)
        {
            if (0 == dependencyMap[a.Name].Count)
            {
                Activity executeThis = this.Activities[a.Name];
                executeThis.Closed += currentlyExecutingActivity_Closed;
                markedToExecuteActivities[a.Name] = true;
                executionContext.ExecuteActivity(this.Activities[a.Name]);
                Console.WriteLine("Scheduled: {0}", a.Name);
            }
        }
        return ActivityExecutionStatus.Executing;
    }

    Basically, we first construct the execution tracking dictionaries, initializing everything to false. We then create the dictionary of dependencies. We then loop through the activities and see if there are any that have no dependencies (there has to be at least one; this would be a good point to raise an exception if there isn't). We record in the dictionary that each such activity has been marked to execute and then schedule it for execution (after hooking the Closed event so that we can do some more work later). So what happens when an activity closes?

    void currentlyExecutingActivity_Closed(object sender, ActivityExecutionStatusChangedEventArgs e)
    {
        e.Activity.Closed -= this.currentlyExecutingActivity_Closed;
        if (this.ExecutionStatus == ActivityExecutionStatus.Canceling)
        {
            ActivityExecutionContext context = sender as ActivityExecutionContext;
            context.CloseActivity();
        }
        else if (this.ExecutionStatus == ActivityExecutionStatus.Executing)
        {
            // set the Executed Dictionary
            executedActivities[e.Activity.Name] = true;
            if (executedActivities.ContainsValue(false) /* keep going */)
            {
                // find the activities that list the completed activity as a
                // precondition and remove it; then schedule any that now have
                // 0 preconditions (and have not already executed or been
                // marked to execute)
                foreach (Activity a in this.Activities)
                {
                    // filter out those activities executed or executing
                    if (!(executedActivities[a.Name] || markedToExecuteActivities[a.Name]))
                    {
                        if (dependencyMap[a.Name].Contains(e.Activity.Name))
                        {
                            // we found it, remove it
                            dependencyMap[a.Name].Remove(e.Activity.Name);
                            // if we now have no dependencies, let's schedule it
                            if (0 == dependencyMap[a.Name].Count)
                            {
                                a.Closed += currentlyExecutingActivity_Closed;
                                ActivityExecutionContext context = sender as ActivityExecutionContext;
                                markedToExecuteActivities[a.Name] = true;
                                context.ExecuteActivity(a);
                                Console.WriteLine("Scheduled: {0}", a.Name);
                            }
                        }
                    }
                }
            }
            else //close activity
            {
                ActivityExecutionContext context = sender as ActivityExecutionContext;
                context.CloseActivity();
            }
        }
    }

     

    There are a few lines of code here, but what's going on is pretty simple.

    • We remove the event handler
    • If we're still executing, mark the list of activities appropriately
    • Loop through and see if any of them depend on the activity that just completed
    • If they do, remove that entry from the dependency list and check whether we can run them (count == 0). If we can, schedule them; otherwise keep looping.
    • If all the activities have completed (there is no false in the Executed list) then we will close out this activity.
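    Stripped of the WF plumbing, the algorithm above boils down to the following self-contained sketch (hypothetical names, synchronous execution in place of the WF scheduler):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Self-contained sketch of precondition-driven scheduling: run each named
// step once all of the steps it depends on have completed.
class PreconditionScheduler
{
    // step name -> set of not-yet-satisfied preconditions
    private readonly Dictionary<string, HashSet<string>> pending =
        new Dictionary<string, HashSet<string>>();
    private readonly Dictionary<string, Action> bodies =
        new Dictionary<string, Action>();

    public List<string> ExecutionOrder { get; } = new List<string>();

    public void Add(string name, Action body, params string[] preconditions)
    {
        pending[name] = new HashSet<string>(preconditions);
        bodies[name] = body;
    }

    public void Run()
    {
        // Seed with every step that has no preconditions (mirrors Execute).
        var ready = new Queue<string>(
            pending.Where(kv => kv.Value.Count == 0).Select(kv => kv.Key));
        if (ready.Count == 0)
            throw new InvalidOperationException("No startable step: cycle or bad graph.");

        while (ready.Count > 0)
        {
            string current = ready.Dequeue();
            bodies[current]();
            ExecutionOrder.Add(current);
            // The "Closed handler": satisfy this precondition everywhere,
            // and schedule any step whose last precondition just cleared.
            foreach (var kv in pending)
                if (kv.Value.Remove(current) && kv.Value.Count == 0)
                    ready.Enqueue(kv.Key);
        }
        // Any step still holding preconditions here was unreachable (a cycle).
    }
}

class Program
{
    static void Main()
    {
        var s = new PreconditionScheduler();
        s.Add("one", () => { });
        s.Add("two", () => { });
        s.Add("three", () => { }, "one");
        s.Add("five", () => { }, "two");
        s.Add("four", () => { }, "one", "two"); // starts once 1 and 2 finish, without waiting on 3 or 5
        s.Run();
        Console.WriteLine(string.Join(" -> ", s.ExecutionOrder));
    }
}
```

    In the printed order, "four" always follows "one" and "two" but never waits on "three" or "five", which is exactly the dependency the while-polling and SynchronizationScope attempts struggled to express.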

    To actually use this activity, we place it in the workflow, place a number of the child activity types within it (again, with the attached property you could put nearly any activity in there), and specify the activities each one depends on. Since I haven't put a designer on it, I just use the SequenceDesigner. Here's what it looks like (this is like the graph I drew above, but it kicks off with the "one" activity executing first):

    image

     

    Where can we go from here

    • Validation: remember all that fun graph theory stuff, checking for cycles, completeness, and gaps? Yeah, we should probably wire some of that up here to make sure we can actually execute this thing
    • Analysis of this might be interesting, especially as the process gets more complex (identifying complex dependencies, places for optimization, capacity stuff)
    • A designer to actually do all of this automatically. Right now, it is left as an exercise to the developer to express the dependencies by way of the properties. It would be nice to have a designer that would figure that out for you, and also validate so you don't try to do the impossible.
    • Make this much more dynamic and pull in the preconditions and generate the context for the activities on the fly.  This would be cool if you had a standard "approval" activity that you wanted to have a more configurable execution pattern.  You could build the graph through the designer and then use that to drive the execution

    I'm going to hold off on posting the code, as I've got a few of these and I'd like to come up with some way to put them out there that would make it easy to get to them and use them. You should be able to pretty easily construct your own activity based on the code presented here.

  • mwinkle.blog

    Paste XML as Serializable Type

    • 3 Comments

    Every now and then, there's a really cool feature that's buried somewhere that just hits you and makes you say "Wow, that's insanely helpful, why didn't somebody think of this sooner." 

    I was playing around with the BizTalk Services SDK, specifically the different web programming bits and pieces, and stumbled upon the Paste As Serializable Visual Studio add-in (navigate to BizTalk Services SDK\Samples\Web\Tooling). 

    Let's say you're interested in programming against some services that return normal, plain XML. Nobody likes writing code to query XML; LINQ to XML is fun, but I'd really just like to mess around with some objects and not deal with how it looks over the wire or under the hood.

    This great little utility lets you select some snippet of XML (say I get it from here, because I am working on a Twitter mashup).  Copy the XML to the clipboard (here's a snippet of what the XML looks like).

    <statuses>
      <status>
        <created_at>Wed May 02 16:54:45 +0000 2007</created_at>
        <id>47434042</id>
        <text>has advil &amp; beineigts in the back of Steve and Don's talk in 4101B.  Anybody needing either is welcome to some, the pastries will go fast.</text>
        <user>
          <id>5440022</id>
          <name>Matt Winkler</name>
          <screen_name>mwinkle</screen_name>
          <location>Seattle</location>
          <description>workflow technical evangelist</description>
          <profile_image_url>http://assets2.twitter.com/system/user/profile_image/5440022/normal/Matt_Winkler-72dpi.jpg?1177350091</profile_image_url>
          <url>http://blogs.msdn.com/mwinkle</url>
          <protected>false</protected>
        </user>
      </status>

    Go into Visual Studio into a code file and open up the edit menu.

    image

    There, clearly indicated by the "this is just an alpha icon," is the option to "Paste XML as Serializable Type".  Here's what it outputs:

       [System.Xml.Serialization.XmlRootAttribute(Namespace = "", ElementName = "statuses")]
        public class StatusesClass
        {
    
            private System.Collections.Generic.List<StatusClass> statusField;
    
            [System.Xml.Serialization.XmlElementAttribute(Namespace = "", ElementName = "status")]
            public virtual System.Collections.Generic.List<StatusClass> Status
            {
                get
                {
                    return this.statusField;
                }
                set
                {
                    this.statusField = value;
                }
            }
    
            [System.Xml.Serialization.XmlRootAttribute(Namespace = "", ElementName = "status")]
            public class StatusClass
            {
    
                private string created_atField;
                private string idField;
                private string textField;
                private UserClass userField;
                [System.Xml.Serialization.XmlElementAttribute(Namespace = "", ElementName = "created_at")]
                public virtual string Created_at
                {
                    get
                    {
                        return this.created_atField;
                    }
                    set
                    {
                        this.created_atField = value;
                    }
                }
    

    ...

     

    Remainder of code truncated for the sake of the readers. 

    This lets me do some cool stuff like this:

    static void Main(string[] args)
    {
        WebHttpClient twc = new WebHttpClient();
        twc.UriTemplate = "http://twitter.com/statuses/user_timeline/mwinkle.xml";
        StatusesClass sc = twc.Get().GetBody<StatusesClass>();
        Console.WriteLine("Got {0}", sc.Status.Count);
        
    }

    And I get my result back in a nice typed object that I can then use elsewhere in my code (or at least debug).
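    WebHttpClient comes from the BizTalk Services SDK, but the generated types themselves only depend on XmlSerializer, so the same typed access works with plain .NET. Here's a minimal sketch (classes trimmed to two fields, fed from a string instead of the wire):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// The generated-type pattern with plain XmlSerializer (types trimmed for brevity).
[XmlRoot(Namespace = "", ElementName = "statuses")]
public class StatusesClass
{
    [XmlElement(Namespace = "", ElementName = "status")]
    public List<StatusClass> Status { get; set; }
}

public class StatusClass
{
    [XmlElement(ElementName = "id")]
    public string Id { get; set; }

    [XmlElement(ElementName = "text")]
    public string Text { get; set; }
}

class Program
{
    static void Main()
    {
        const string xml =
            "<statuses><status><id>47434042</id><text>hello</text></status></statuses>";
        var serializer = new XmlSerializer(typeof(StatusesClass));
        var sc = (StatusesClass)serializer.Deserialize(new StringReader(xml));
        Console.WriteLine("Got {0}", sc.Status.Count); // Got 1
    }
}
```

    Swapping the StringReader for the response stream of any HTTP client gives you the same typed objects without the SDK.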

     

    image

    Steve, thanks, this thing rocks!

  • mwinkle.blog

    Orcas Beta 1 Samples (WF, WCF)

    • 1 Comments

    Beta 1 samples have been posted.  In this release there are separate installs for WF and WCF (and the workflow services are in the WCF one, go figure!)

     

    From Laurence's blog:

    This is the only version of the new samples that works at this time. The version that comes with the VS Orcas Beta1 offline Help does not work.

    New samples in this release:

    WF Samples\Technologies\Rules And Conditions\Order Processing Policy

    WCF Technology Samples\Basic\Ajax

    WCF Technology Samples\Basic\Syndication

    WCF Technology Samples\Basic\WorkflowServices

    You can download the zip files through any of the samples.

    To setup and run the Orcas Beta1 samples:

    1. Setup:

    a. Check out the Setup Instructions for the Ajax, Syndication, and Workflow Services samples.

    i. Use the Setup scripts under the “OrcasSetup” dir in the downloaded WCF zip file. In contrast, the “Setup” dir includes the scripts necessary for the samples already released with WCF 3.0

    b. On Win2K3, if you see a plain text page when connecting to http://localhost/NetFx35Samples/service.svc, you need to run:

    "%SystemDrive%\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe" -i -enable

    "%WINDIR%\Microsoft.Net\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe" -i

    c. The WCF Samples setup script Setupvroot.bat will not run unless MSMQ is installed or the NetMsmqActivator service is disabled

    2. Ajax samples:

    a. For the Simple and Post Ajax service samples, you will need to completely refresh your session to be able to reload ClientPage.htm from one service to another as there is an issue with IE. Or simply rename ClientPage.htm in one of the samples

    3.  Workflow Services:

    a. WorkflowServiceUtility is a reference necessary to the CalculatorClient and DuplexWorkflowServices samples

    b. CalculatorClient is the client for both DurableService and StateMachineWorkflowService. Click on “C” to stop the session (it becomes red) before switching services

    c. The Conversations client is the window that has “Press enter to submit order”

    d. In DuplexWorkflowServices, only the ServiceHost and ClientHost projects need to be started

  • mwinkle.blog

    Hello World (WF Services) in Spanish

    • 0 Comments

    I noticed on Matias' blog that Ezequiel posted a hello world tutorial in Spanish based on the March CTP.  

  • mwinkle.blog

    Dynamically Generating an Operation Contract in Orcas using WF

    • 3 Comments

    This kicks off a set of posts where I'll be discussing some of the interesting features coming out in Orcas.

    I want to focus in this post on the Receive activity, and a nice little feature in the designer that lets you create a contract on the fly, without having to drop into code and write a decorated interface.  This allows us to divide the world into two approaches:

    • Design my contract first, and then create a workflow to implement the operations on the contract.
    • Design my workflow first, and have it figure out the contract for me (this is what I will focus on in more detail).

    Designing a Contract First

    This is what most WCF folks will be familiar with:

    [ServiceContract]
    public interface IOrderProcessing
    {
        [OperationContract]
        bool SubmitOrder(Order order);

        [OperationContract]
        Order[] GetOrders(int customerId);
    }
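
    The Order type used in this contract would be a data contract. A minimal sketch, purely for illustration (the members here are assumptions, not from the original sample):

    ```csharp
    using System.Runtime.Serialization;

    // Hypothetical data contract; the real sample's Order type is not shown.
    [DataContract]
    public class Order
    {
        [DataMember]
        public int OrderId;

        [DataMember]
        public string CustomerName;
    }
    ```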

    When I drop a receive activity onto a workflow, I can now import this contract:

    This will bring up a type chooser that lets me pick my service contract:

    This imports all of the details, and we can see the operation picker.

    If we look at the activity properties, we now see the parameters to the method.  The (ReturnValue) is the object that we need to return from the operation.  The order parameter is the message that is passed in when the method is called.  I can bind it to values in my workflow, or do whatever else I want with it.

     

    Designing a Workflow First

    The other approach we can take is to create the contract as we create the workflow.  That's right, we don't need to create the contract explicitly in code.  To do that, drop a receive activity onto the designer and double-click it.  Instead of selecting "Import Contract", select "Add Contract".  This will create a new contract with a basic operation.  By selecting the contract or the operation, we can name it something a little nicer.

    By selecting the operation, I can customize all of its behavior.  I can create parameters to be passed in, I can set the types of those parameters (as well as the return type of the operation). 

    It's worth pointing out that I can select any type that would be valid in a WCF contract.  The drop-down list displays the basic types, but by selecting "Browse Type" I am brought into a type picker where I can select custom types.  As you can see below, I have created a "CancelOrder" operation that takes in an order, the reason, and who authorized the cancellation.

    When I click ok, my activity has had new dependency properties added to it, as can be seen in the property grid for the activity.

     

    So what's happening here?

    In the workflow I created, I used a code-separation workflow, so I have an .xoml file which contains the workflow definition.  Let's take a quick peek at how the receive activity is defined (note: some of the XML is truncated; if you view this in an RSS reader or copy and paste it, you can see all the details. I'll work on updating the blog layout):

    <ns0:ReceiveActivity x:Name="receiveActivity2">
      <ns0:ReceiveActivity.ServiceOperationInfo>
        <ns0:OperationInfo PrincipalPermissionRole="administrators" Name="CancelOrder" ContractName="MyContract">
          <ns0:OperationInfo.Parameters>
            <ns0:OperationParameterInfo Attributes="Out, Retval" ParameterType="{x:Type p9:Boolean}" Name="(ReturnValue)" Position="-1" xmlns:p9="clr-namespace:System;Assembly=mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            <ns0:OperationParameterInfo Attributes="In" ParameterType="{x:Type p9:Order}" Name="order" Position="0" xmlns:p9="clr-namespace:WcfServiceLibrary1;Assembly=WcfServiceLibrary1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
            <ns0:OperationParameterInfo Attributes="In" ParameterType="{x:Type p9:String}" Name="reason" Position="1" xmlns:p9="clr-namespace:System;Assembly=mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            <ns0:OperationParameterInfo Attributes="In" ParameterType="{x:Type p9:String}" Name="authorizedBy" Position="2" xmlns:p9="clr-namespace:System;Assembly=mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          </ns0:OperationInfo.Parameters>
        </ns0:OperationInfo>
      </ns0:ReceiveActivity.ServiceOperationInfo>
    </ns0:ReceiveActivity>

    Here you can see that within the metadata of the activity, we have the definition for the contract and the operation that I defined.  When we spin up a WorkflowServiceHost to host this workflow as a service, the host will inspect the metadata for the workflow, look for all of the endpoints, and create them.  You can also see within the OperationInfo element that I am able to define the PrincipalPermissionRole, which specifies the role of allowed callers, taking advantage of the static security checks that I will talk about in another post.  So, the contract for the operations is defined declaratively in the XAML.  I didn't need to write the interface or the contract explicitly; I wrote a workflow, specified in the workflow how it will communicate, and then let the WorkflowServiceHost figure out the nitty-gritty details of how to create the endpoints.  The other important part to mention is that config plays a role in determining the transport channel and other such details.  Within the config, when we set up an endpoint, we need to specify that the contract is "MyContract", or whatever name we assigned when we created it.
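
    For instance, the endpoint entry in config might look something like this (the service name and binding here are assumptions for illustration; the contract attribute must match the name assigned in the designer):

    ```xml
    <service name="MyWorkflowService">
      <!-- "MyContract" refers to the contract name created in the
           Receive activity designer, not a CLR interface. -->
      <endpoint address=""
                binding="wsHttpContextBinding"
                contract="MyContract" />
    </service>
    ```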

    Summary

    We talked about the way that we can implement contracts that already exist in a receive activity, as well as how we can use the designer to actually create our contract while we are designing the workflow.  The WorkflowServiceHost does the heavy lifting here in order to enable this little bit of nifty-ness.

  • mwinkle.blog

    So Many Cool Things Going On

    • 0 Comments

    Just a quick summary:

    • Orcas Beta 1 is out
    • BPEL folks are looking for feedback
    •  labs.biztalk.net is live (Check out Clemens, Dennis and John's posts)
      • Maybe at MIX we can talk about how this might make things interesting?
      • I imagine that I will probably do more than a couple of samples based on this
    • I'll be at MIX next week and am looking forward to meeting up with anyone who is attending.  I'll be in the mashup's area and I'd love to see folks using WF to power some mashups.
  • mwinkle.blog

    Starting and Transacting

    • 0 Comments

    Two quick links before I run back to my day job.

    • Paul posts a Web Workflow Starter Kit.  Check it out for a good sample on hosting in ASP.NET, managing workflow data, and doing a task that a lot of people immediately think of when they start thinking workflow.
    • Jason posts a really cool "Developer Meets Server" screencast showing how WCF transactions can be flowed in and down to the Transactional NTFS capabilities.  Run, don't walk, to check this thing out here.
  • mwinkle.blog

    Two Cool Technologies, One Great Solution

    • 0 Comments

    My peer, David, has a great screencast posted on Channel9 that shows off a solution from FullArmor that incorporates WF and PowerShell and one really cool looking designer.

    On my list of cool things to check out when I have free time (currently June 2015) is PowerShell.  It's a great tool for devs to make their apps much more manageable, by both devs and our dear friends, the IT Pros.  David has built some samples that build right on top of these technologies to return collections of data in order to PowerShell-enable them.

    Check it out here

  • mwinkle.blog

    Orcas WF and WCF Samples

    • 1 Comments

    The first pass at a number of samples for WF and WCF in Orcas has been posted here.

    This has samples for all of the features discussed in my last post, as well as some of the cool rules stuff that Moustafa talks about here.

  • mwinkle.blog

    WCF and WF in "Orcas"

    • 15 Comments

    The wheels of evangelism never stop rolling.  Just a few months ago I was blogging that .NET 3.0 was released.  I've been busy since then, and now I can talk about some of that.  Today, the March CTP of Visual Studio "Orcas" was released to the web.  You can get your fresh hot bits here.  Samples will be coming shortly. Thom has a high level summary here.

    UPDATE: Wednesday, 2/28/2007 @ 11pm.  The readme file is posted here; a few minor corrections have been made to the caveats below.

    More updates... corrections to another caveat (a post-build event is required to get the config to be read).

    A Couple of Minor Caveats

    Since this is a CTP, it's possible that sometimes the wrong bits end up in the right place at the wrong time. Here are a few things to be aware of (not intended to be a comprehensive list):

    • Declarative Rules in Workflows:  There is an issue right now where the .rules file does not get hooked into the build process correctly.
      • Solution: Use code conditions, or load declarative rules for policy activities using the RulesFromFile activity available at the community site
    • WF Project templates are set to target the wrong version: As a result, trying to add assemblies that are 3.0.0.0 or greater will not be allowed.
      • Solution: Right click the project, select properties, and change the targeted version of the framework to 3.0.0.0 or 3.5.0.0
    • A ServiceHost may not read config settings because the app config does not get copied to the bin (update: only on Server 2003).  You will get an exception that "no application endpoints can be found"
      • Solution: Add the following post-build event: copy "$(ProjectDir)\app.config" "$(TargetName).config"
      • Solution: Alternatively, for the time being, configure the WorkflowServiceHost in code (using AddServiceEndpoint() and referencing the WorkflowRuntime property to configure any services on the workflow runtime)
      • This also means that a number of the workflow-enabled services samples will not work out of the box.  Replace the config-based approach with the code-based approach and you will be fine.  I will try to post modified versions of these to the community site shortly.
    • WorkflowServiceHost exception on closing: You will get an exception that "Application image file could not be loaded... System.BadImageFormatException:  An attempt was made to load the program with an incorrect format"
      • Solution: Use the typical "These are not the exceptions you are looking for" Jedi mind trick.  Catch the exception and move along in your application, as if there is nothing to see here.
    • Tools from the Windows SDK that you've come to know and love, like SvcConfigEditor and SvcTraceViewer are not available on the VPC. 
      • Solution: Copy these in from somewhere else and they will work fine. The SvcConfigEditor will even pick up the new bindings and behaviors to configure the services for some of the new functionality.

    The CTP is not something that is designed for you to go into production with; it's designed to let you explore and learn more about the technology.  There is no go-live license associated with it.  Since most of these issues have a workaround, this shouldn't prevent you from checking these things out (because they are some kind of neat).

    New Features In WF and WCF in "Orcas"

    Workflow Enabled Services

    We've been talking about this since we launched at PDC 2005.  There was a session at TechEd 2006 in the US and Beijing that mentioned bits and pieces of this.  One of the key focus areas is the unification of WCF and WF.  Not only have the product teams joined internally; the two technologies are very complementary, so complementary that everyone usually asks "so how do I use WCF services here?" when I show a workflow demo.  That's fixed now! 

    Workflow enabled services allow two key things:

    • Easily consume WCF services inside of a workflow
    • Expose a workflow as a WCF service

    This is accomplished by the following additions built on top of v1:

    • Messaging Activities (Send and Receive)
      • With designer support to import or create contracts
    • WorkflowServiceHost, a derivation of ServiceHost
    • Behavior extensions that handle message routing and instantiation of workflows exposed via services.
    • Channel extensions for managing conversation context.

    The Send and Receive activities live inside of the workflow that we define.  The cool part of the Receive activity is that we have a contract designer, so you don't have to dive in and create an interface for the contract; you can specify it right on the Receive activity, allowing you a "workflow-first" approach to building services. 

    Once we've built a workflow, we need a place to expose it as a service.  We use the WorkflowServiceHost, a subclass of ServiceHost, to host these workflow-enabled services.  The WorkflowServiceHost takes care of the nitty-gritty details of managing workflow instances, routing incoming messages to the appropriate workflow, and performing security checks as well.  This means that the code required to host a workflow as a WCF service is now reduced to a few lines.  Note that in the sample below we are not setting the endpoint info in code; the endpoint comes from config, so be aware of the config caveat mentioned above.

       1:  WorkflowServiceHost wsh = new WorkflowServiceHost(typeof(MyWorkflow));
       2:  wsh.Open();
       3:  Console.WriteLine("Press <Enter> to Exit");
       4:  Console.ReadLine();
       5:  wsh.Close();
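
    If you do need to supply the endpoint in code (for instance, to work around the config caveat above), a minimal sketch might look like this.  The contract name, binding, and address are assumptions for illustration, and ServiceHostBase.AddServiceEndpoint accepts the contract name as a string:

    ```csharp
    // Sketch only: the names below are placeholders, and the CTP bits may differ.
    WorkflowServiceHost wsh = new WorkflowServiceHost(typeof(MyWorkflow));

    // "MyContract" is the contract name assigned in the Receive activity
    // designer; the binding and address are illustrative.
    wsh.AddServiceEndpoint("MyContract",
                           new WSHttpContextBinding(),
                           "http://localhost:8080/OrderService");

    wsh.Open();
    ```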

    To support some of the more sophisticated behavior, such as routing messages to a running workflow, we introduce a new channel extension responsible for managing context.  In the simple case, this context just contains the workflowId, but in a more complicated case it can contain information similar to the correlation token in v1 that allows the message to be delivered to the right activity (think three receives in parallel, all listening on the same operation).  Out of the box there are the wsHttpContextBinding and the netTcpContextBinding, which implicitly support the idea of maintaining this context token.  You can also roll your own binding and attach a Context element into the binding definition.
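
    A hand-rolled binding with context support might be sketched in config like this (the element names here are from the shipped bits and may differ in the CTP; the binding name is made up):

    ```xml
    <bindings>
      <customBinding>
        <binding name="contextOverHttp">
          <!-- The context element adds the channel that flows the
               context token described above. -->
          <context />
          <textMessageEncoding />
          <httpTransport />
        </binding>
      </customBinding>
    </bindings>
    ```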

    The Send activity allows the consumption of a service, and relies on configuration to determine exactly how we will call that service.  If the service we are calling is another workflow, the Send activity and the Receive activity are aware of the context extensions and will take advantage of them. 

    With the Send and Receive activities, it gets a lot easier to do workflow-to-workflow communication, as well as more complicated messaging patterns. 

    Another nice feature of the work that was done to enable this is that we now have the ability to easily support durable services.  These are "normal" WCF services, written in code, that utilize an infrastructure similar to the workflow persistence store in order to durably store state between method calls.
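
    As a rough sketch of what a durable service might look like in code (the attribute names are from the shipped .NET 3.5 bits and may differ in the CTP; the IShoppingCart contract and all member names are made up for illustration):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.ServiceModel.Description;

    [Serializable]
    [DurableService]
    public class ShoppingCartService : IShoppingCart
    {
        // Instance state is persisted between calls by the durable
        // services infrastructure rather than held in memory.
        private List<string> items = new List<string>();

        [DurableOperation]
        public void AddItem(string item)
        {
            items.Add(item);
        }
    }
    ```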

    As you can imagine, I'll be blogging about this a lot more in the future.

    JSON / AJAX Support

    While there has been a lot of focus on the UI side of AJAX, there still remains the task of creating the sources for the UI to consume.  One can return POX (Plain Old XML) and then manipulate it in the JavaScript, but that can get messy.  JavaScript Object Notation (JSON) is a compact, text-based serialization of a JavaScript object.  This lets me do something like:

    var stuff = {"foo" : 78, "bar" : "Forty-two"};
    document.write("The meaning of life is " + stuff.bar);

    In WCF, we can now return JSON with a few switches of config.  The following config:

       1:  <service name="CustomerService">
       2:      <endpoint contract="ICustomers"
       3:        binding="webHttpBinding"
       4:        bindingConfiguration="jsonBinding"
       5:        address="" behaviorConfiguration="jsonBehavior" />
       6:  </service>
       7:   
       8:  <webHttpBinding>
       9:          <binding name="jsonBinding" messageEncoding="Json" />
      10:  </webHttpBinding>
      11:   
      12:  <behaviors>
      13:     <endpointBehaviors>
      14:          <behavior name ="jsonBehavior">
      15:            <webScriptEnable />
      16:          </behavior>
      17:      </endpointBehaviors>
      18:   </behaviors>

    will allow a function like this:

       1:  public Customer[] GetCustomers(SearchCriteria criteria)
       2:  {
       3:     // do some work here
       4:     return customerListing;
       5:  }

    to return JSON when called.  In JavaScript, I would then have an array of Customer objects to manipulate.  We can also serialize from JavaScript to JSON, so this provides a nice way to send parameters to a method.  So, in the above method, we can send in the complex SearchCriteria object from our JavaScript.  There is an extension to the behavior that creates a JavaScript proxy.  So, by referencing /js as the source of the script, you can get IntelliSense in the IDE, and we can call our services directly from our AJAX UI.

    We can also use the JSON support in other languages like Ruby to quickly call our service and manipulate the object that is returned.

    I think that's pretty cool.

    Syndication Support

    While we have the RSS Toolkit in V1, we wanted to make syndication part of the toolset out of the box.  This allows a developer to quickly return a feed from a service.  Think of using this as another way to expose your data for consumption.  We have introduced a SyndicationFeed object, an abstraction of the idea of a feed that you program against.  We then leave it up to config to determine whether that is an ATOM or RSS feed (and, would it be WCF if we didn't give you a way to implement a custom encoding as well?)  So this is cool if you just want to create a simple feed, but it also allows you to create a more complicated feed whose content is not just plain text.  For instance, the Digg feed has information about the submission, and the Flickr feed has info about the photos.  Your customer feed may want to have an extension that contains the customer info that you will allow your consumers to have access to.  The SyndicationFeed object allows you to create these extensions, and the work of encoding it to the specific format is then taken care of for you.  So, let's see some of that code (note: this is from the samples above):

       1:  public SyndicationFeed GetProcesses()
       2:  {
       3:      Process[] processes = Process.GetProcesses();
       4:   
       5:      //SyndicationFeed also has convenience constructors
       6:      //that take in common elements like Title and Content.
       7:      SyndicationFeed f = new SyndicationFeed();            
       8:   
       9:      //Create a title for the feed
      10:      f.Title = SyndicationContent.CreatePlaintextTextSyndicationContent("Currently running processes");
      11:      f.Links.Add(SyndicationLink.CreateSelfLink(OperationContext.Current.IncomingMessageHeaders.To));
      12:   
      13:      //Create a new RSS/Atom item for each running process
      14:      foreach (Process p in processes)
      15:      {
      16:          //SyndicationItem also has convenience constructors
      17:          //that take in common elements such as Title and Content
      18:          SyndicationItem i = new SyndicationItem();
      19:   
      20:          //Add an item title.
      21:          i.Title = SyndicationContent.CreatePlaintextTextSyndicationContent(p.ProcessName);
      22:   
      23:          //Add some HTML content in the summary
      24:          i.Summary = new TextSyndicationContent(String.Format("<b>{0}</b>", p.MainWindowTitle), TextSyndicationContentKind.Html);
      25:          
      26:          //Add some machine-readable XML in the item content.
      27:          i.Content = SyndicationContent.CreateXmlSyndicationContent(new ProcessData(p));
      28:   
      29:          f.Items.Add(i);
      30:      }
      31:   
      32:      return f;
      33:  }

    And the config associated with this would be:

     

       1:  <system.serviceModel>
       2:    <services>
       3:      <service name="ProcessInfo">
       4:         <endpoint address="rss"
       5:             behaviorConfiguration="rssBehavior" binding="webHttpBinding"
       6:             contract="HelloSyndication.IDiagnosticsService" />
       7:        <endpoint address="atom"
       8:             behaviorConfiguration="atomBehavior" binding="webHttpBinding"
       9:             contract="HelloSyndication.IDiagnosticsService" />    
      10:      </service>
      11:    </services>
      12:    <behaviors>
      13:      <endpointBehaviors>
      14:        <behavior name="rssBehavior">
      15:          <syndication version="Rss20"/>
      16:        </behavior>
      17:        <behavior name="atomBehavior">
      18:          <syndication version="Atom10"/>
      19:        </behavior>
      20:      </endpointBehaviors>
      21:    </behaviors>  
      22:  </system.serviceModel>

    This config will actually create an RSS and an ATOM endpoint.  The feed returned would have the process information embedded as follows (in this case, in ATOM):

       1:  <entry>
       2:    <id>fe1f1d2e-d676-417d-85bf-b7969dd07661</id>
       3:    <title type="text">devenv</title>
       4:     <summary type="html">&lt;b&gt;Conversations - Microsoft Visual Studio&lt;/b&gt;</summary>
       5:     <content type="text/xml">
       6:       <ProcessData xmlns="http://schemas.datacontract.org/2004/07/HelloSyndication" 
       7:              xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
       8:              <PeakVirtualMemorySize>552157184</PeakVirtualMemorySize>
       9:              <PeakWorkingSetSize>146124800</PeakWorkingSetSize>
      10:              <VirtualMemorySize>456237056</VirtualMemorySize>
      11:        </ProcessData>
      12:     </content>
      13:  </entry>

    We can also use the Syndication support to consume feeds!

       1:  SyndicationFeed feed = new SyndicationFeed();
       2:  feed.Load(new Uri("http://blogs.msdn.com/mwinkle/atom.xml"));
       3:  foreach (SyndicationItem item in feed.Items)
       4:  {
       5:     // process the feed here
       6:  }

    In the case where there have been extensions to the feed, we can access those as the raw XML or we can attempt to deserialize into an object.  This is accomplished in the reverse of the above:

       1:  foreach (SyndicationItem i in feed.Items)
       2:  {
       3:      XmlSyndicationContent content = i.Content as XmlSyndicationContent;
       4:      ProcessData pd = content.ReadContent<ProcessData>();
       5:   
       6:      Console.WriteLine(i.Title.Text);
       7:      Console.WriteLine(pd.ToString());
       8:  }
     

    HTTP Programming Support

    In order to enable both of the above scenarios (Syndication and JSON), there has been work done to create the webHttpBinding to make it easier to do POX and HTTP programming.

    Here's an example of how we can influence this behavior and return POX.  First the config:

       1:  <service name="GetCustomers">
       2:    <endpoint address="pox" 
       3:              binding="webHttpBinding" 
       4:              contract="Sample.IGetCustomers" />
       5:  </service>

    Now the code for the interface:

       1:  public interface IRestaurantOrdersService
       2:  {
       3:     [OperationContract(Name="GetOrdersByRestaurant")]
       4:     [HttpTransferContract(Method = "GET")]  
       5:     CustomerOrder[] GetOrdersByRestaurant();  
       6:  }

    The implementation of this interface does the work to get the CustomerOrder objects (a data contract defined elsewhere), and the returned XML is the data contract serialization of CustomerOrder (omitted here for brevity).  With parameters this gets more interesting, as these are things we can pass in via the query string or via a POST, allowing arbitrary clients that can form URLs and receive XML to consume our services.

    Partial Trust for WCF

    I'm not fully up to date on all of the details here, but there has been some work done to enable some of the WCF functionality to operate in a partial trust environment.  This is especially important for situations where you want to use WCF to expose a service in a hosted situation (like creating a service that generates an rss feed off of some of your data).  I'll follow up with more details on this one later.

    WCF Tooling

    You now get a WCF project template that also includes a self-hosting option (similar to the magic Visual Studio ASP.NET hosting).  This means that you can create a WCF project, hit F5, and have your service available.  This is another area where I will follow up later.

    Wrap Up

    So, what now? 

    • Grab the bits
    • Explore the new features
    • Give us feedback (through my blog or a Channel9 wiki I am putting together now)! What works, what doesn't, what you like, what you don't, etc.
    • Look forward to more posts, c9 videos and screencasts on the features in Orcas.