
    Improving ModelItem with Dynamic


    Hi all, I'm back, and I apologize for the delay; things have been crazy here and I've taken a little time off following shipping VS2010, so I'm getting back into it.  I'd like to talk about one of my favorite late additions to the WF4 designer programming model.  We added this change sometime in the RC/RTM timeframe in response to some feedback we had gotten about the verbosity of programming against ModelItem.

    If you recall from this post, ModelItem exists to serve as an INotifyPropertyChanged (INPC) aware proxy to the underlying instance being edited within the designer.  In exchange for this fairly flexible proxy layer, you get a pretty verbose way of interacting with the actual data.  Often there are fairly long incantations of myModelItem.Properties["foo"].Value.Properties["bar"].Value.GetCurrentValue() just to read the data (or to set a value).  From the beginning, we had decided to optimize for the WPF XAML consumer of the model item, so we chose to implement ICustomTypeDescriptor so that WPF binding would be pretty simple (the above results in a binding syntax of ModelItem.foo.bar).
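    To make the verbosity concrete, here is a small sketch against the public ModelItem API in System.Activities.Presentation.Model.  The Residence/StreetAddress shape matches the example used later in this post; the wrapper method is just an invented place to hang the code.

    using System;
    using System.Activities.Presentation.Model;

    static class ModelItemAccessSketch
    {
        // "root" would be handed to you by the designer (for example, the ModelItem
        // exposed on an ActivityDesigner); Residence/StreetAddress are illustrative names.
        public static void Touch(ModelItem root)
        {
            // Reading a nested value the verbose way:
            object street = root.Properties["Residence"].Value
                                .Properties["StreetAddress"].Value
                                .GetCurrentValue();
            Console.WriteLine(street);

            // Writing it back is just as wordy:
            root.Properties["Residence"].Value
                .Properties["StreetAddress"]
                .SetValue("One Microsoft Way");
        }
    }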

    One quick note: looking at the definition of ModelItem itself will show that only the INPC interface is implemented.  However, ModelItem is an abstract type, and the implementation subclass ModelItemImpl contains the definition and implementation of the following interfaces: ModelItem, IModelTreeItem, ICustomTypeDescriptor, and IDynamicMetaObjectProvider.

    As we worked with some of our customers, it became clear that they were frequently using ModelItem in code as well, so the programming model was becoming more of a problem (we had somewhat hoped that the majority of customers would be using XAML).  We initially went with the idea of creating a series of design-time peers for all of the common types in WF (a SequenceDesign, a ParallelDesign, etc.), each providing a programming surface identical to its peer but, under the covers, redirecting to an underlying ModelItem in its property getters and setters.  This has the advantage that we could create the appearance of a strongly typed API surface, but it has the downside of introducing an extra type for every activity we ship, and it requires customers to create two types as well, and not just for activities; you would want one for custom types too.  The other approach we kicked around was using the dynamic capabilities introduced in .NET 4.
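    To give a feel for that first approach, here is roughly what one of those design-time peers might have looked like.  Nothing like this ever shipped, and the members shown are invented for illustration; the point is only that every typed member redirects to the wrapped ModelItem.

    using System.Activities.Presentation.Model;

    // Hypothetical strongly typed peer for Sequence; edits still flow through the
    // designer's model tree because everything delegates to the wrapped ModelItem.
    public class SequenceDesign
    {
        private readonly ModelItem modelItem;

        public SequenceDesign(ModelItem modelItem)
        {
            this.modelItem = modelItem;
        }

        public string DisplayName
        {
            get { return (string)modelItem.Properties["DisplayName"].ComputedValue; }
            set { modelItem.Properties["DisplayName"].SetValue(value); }
        }

        // Collections stay as ModelItemCollections underneath the typed facade.
        public ModelItemCollection Activities
        {
            get { return modelItem.Properties["Activities"].Collection; }
        }
    }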

    It's important to note that these are ultimately complementary approaches; that is, it's possible in a future framework release (or in a codeplex project, if anyone is looking for a little fun on the side) to ship the strongly typed design-time peers.  From a timing perspective, it was not likely that we could introduce 30+ new types in the RC/RTM release, and we had questions about the long-term sustainability of requiring an additional type for every activity.  So, we went ahead with the plan for dynamic support.  In order to do this, we had to add an implementation of IDynamicMetaObjectProvider.  Bill Wagner has a good article here on the basics of doing this.  It requires us to implement one method, GetMetaObject.  In our case, we have an internal, nested ModelItemMetaObject type which inherits from System.Dynamic.DynamicMetaObject.
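    If you're curious what that plumbing looks like, here is a minimal, self-contained sketch of the same pattern.  This is not the ModelItemImpl source; it's a toy type (an in-memory name/value bag) that forwards dynamic gets and sets by property name, which is the general shape of what the designer does.

    using System;
    using System.Collections.Generic;
    using System.Dynamic;
    using System.Linq.Expressions;

    // A stand-in for ModelItemImpl: a simple name/value bag instead of a real model tree.
    public class SimpleDynamicModel : IDynamicMetaObjectProvider
    {
        private readonly Dictionary<string, object> values = new Dictionary<string, object>();

        // The one method IDynamicMetaObjectProvider requires.
        public DynamicMetaObject GetMetaObject(Expression parameter)
        {
            return new SimpleMetaObject(parameter, this);
        }

        // The two instance methods the meta-object forwards dynamic gets and sets to.
        public object GetPropertyValue(string name)
        {
            object value;
            return values.TryGetValue(name, out value) ? value : null;
        }

        public object SetPropertyValue(string name, object value)
        {
            values[name] = value;
            return value;
        }

        private sealed class SimpleMetaObject : DynamicMetaObject
        {
            public SimpleMetaObject(Expression expression, SimpleDynamicModel value)
                : base(expression, BindingRestrictions.Empty, value) { }

            public override DynamicMetaObject BindGetMember(GetMemberBinder binder)
            {
                // d.Foo  becomes  ((SimpleDynamicModel)d).GetPropertyValue("Foo")
                var self = Expression.Convert(Expression, LimitType);
                var call = Expression.Call(
                    self,
                    typeof(SimpleDynamicModel).GetMethod("GetPropertyValue"),
                    Expression.Constant(binder.Name));
                return new DynamicMetaObject(call, BindingRestrictions.GetTypeRestriction(Expression, LimitType));
            }

            public override DynamicMetaObject BindSetMember(SetMemberBinder binder, DynamicMetaObject value)
            {
                // d.Foo = x  becomes  ((SimpleDynamicModel)d).SetPropertyValue("Foo", (object)x)
                var self = Expression.Convert(Expression, LimitType);
                var call = Expression.Call(
                    self,
                    typeof(SimpleDynamicModel).GetMethod("SetPropertyValue"),
                    Expression.Constant(binder.Name),
                    Expression.Convert(value.Expression, typeof(object)));
                return new DynamicMetaObject(call, BindingRestrictions.GetTypeRestriction(Expression, LimitType));
            }
        }
    }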

    In our case, there are basically two important methods that we care about, BindGetMember and BindSetMember.  These wire up to two methods on ModelItemImpl to get and set based on the property name.  The net result of this is that the following code

    root.Properties["Residence"].
            Value.
            Properties["StreetAddress"].
            Value.GetCurrentValue()

    can now be written as:

    dynamic mi = root;
    Console.WriteLine(mi.Residence.StreetAddress);

    Now, as with anything, there are some tradeoffs.  The biggest one is that this is dynamic: I don't get compile-time checking, and I could have a typo that will result in an exception being thrown (that said, that problem exists in the verbose form as well).  This does let me write much more succinct code when programming against the model item tree, though, for both gets and sets.  As an added bonus, we found there is actually a decent perf improvement because WPF XAML data binding now leverages IDynamicMetaObjectProvider (mentioned in ScottGu's post here): the runtime only computes how to resolve the property once, as opposed to every time when it goes through the ICustomTypeDescriptor code path.
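    For completeness, here is a sketch of what the set side can look like under the same assumptions as the snippets above (a root ModelItem with the Residence/StreetAddress shape); the address literal is made up.

    dynamic mi = root;

    // Read, exactly as above:
    Console.WriteLine(mi.Residence.StreetAddress);

    // Write: the assignment is dispatched by property name through BindSetMember.
    mi.Residence.StreetAddress = "One Microsoft Way";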

     

    Enjoy!


    Thoughts on Flowchart


    Last night I saw that Maurice had a few tweets about flowchart that caught my attention, but this is the one that I want to talk about:

    Come to think of it I also really mis a parallel execution in a flowchart. But other than that flowcharts rock! #wf4 #dm

    I thought about replying on twitter, but it’s a bit of a longer topic.   When we were building the flowchart for VS2010, we considered a number of possible capabilities, but ultimately the capabilities customers were looking for fell into two categories:

    • Arbitrary rework patterns
    • Coordination patterns

    Arbitrary Rework

    We can describe the model of execution in the current flowchart as one that supports arbitrary rework.  You could also refer to this as GOTO.  One benefit (and downside) of breaking out of the tightly scoped, hierarchical execution model that we see with the Sequential style of workflows (and in most procedural languages) is that there is more freedom in defining the "next" step to be taken.  The simplest way to think about this is the following flowchart:

    [image: a simple flowchart in which a later step points back to an earlier one]

    Now, this isn't particularly interesting, and most folks who look at this will simply ask "why not a loop?", which in this case is a valid question.  As a process gets more sophisticated (or when humans are involved), the number of possible loops required gets rather tricky to manage in a sequential flow (consider the following rework scenario, which includes "next steps" that cross conditional boundaries and that exist in some branches but not others).

    [image: a larger flowchart with rework arrows that cross conditional branches]
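    To make the rework idea concrete in code, here is a minimal sketch of the simpler flowchart above, built with the .NET 4 Flowchart, FlowStep, and FlowDecision activities.  The node names, the three-attempt condition, and the console output are all invented; the point is just that the False branch of the decision jumps back to a node that has already run, which is the GOTO-style rework described here.

    using System;
    using System.Activities;
    using System.Activities.Expressions;
    using System.Activities.Statements;

    class ReworkFlowchartSketch
    {
        static void Main()
        {
            var attempts = new Variable<int>("attempts", 0);

            // "Do the work" step: writes a message and counts how many times it has run.
            var doWork = new FlowStep
            {
                Action = new Sequence
                {
                    Activities =
                    {
                        new WriteLine { Text = "Doing the work..." },
                        new Assign<int>
                        {
                            To = new OutArgument<int>(attempts),
                            Value = new InArgument<int>(ctx => attempts.Get(ctx) + 1)
                        }
                    }
                }
            };

            var done = new FlowStep { Action = new WriteLine { Text = "Approved, moving on." } };

            var approved = new FlowDecision
            {
                // Pretend approval succeeds on the third attempt.
                Condition = ExpressionServices.Convert(ctx => attempts.Get(ctx) >= 3),
                True = done,
                False = doWork   // the GOTO: jump back to an earlier node for rework
            };

            doWork.Next = approved;

            var flowchart = new Flowchart
            {
                Variables = { attempts },
                StartNode = doWork,
                Nodes = { doWork, approved, done }
            };

            WorkflowInvoker.Invoke(flowchart);
        }
    }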

    Now, mathematically, we can describe both of these flowcharts as a set of valid transitions, and it is likely that there exists a complete mapping from any flowchart into a purely procedural form.

    Coordination

    The second pattern we consistently saw was the use of a free-form style of execution in order to build complex coordination patterns.  The example I consistently return to (pointing back to a blog post I made back in the WF3 days) is the following:

    [image: a dependency graph of six tasks; 3 follows 1, 5 follows 2, 4 follows 1 and 2, and 6 follows 3, 4, and 5]


    Here I want to be able to coordinate a few different things, namely that 3 executes when 1 completes, 5 when 2 completes, 4 when 1 AND 2 complete, and 6 when 3 AND 4 AND 5 complete.  We've created a dependency graph that can't be handled with the procedural constructs that we have.  How could this happen?  Imagine we're trying to model the ranking process for a series of horse races.  Task 3 can happen when Race 1 completes, just as Task 5 can happen when Race 2 completes.  Task 4 represents some work that requires the data from both races.  When those three tasks (3, 4, and 5) complete, I can move forward and take some action (like bet on the next race). 
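    The shape of that dependency graph is easy to state in code.  Here is a small sketch using TPL tasks rather than activities (and more recent C# than anything available in the .NET 4 timeframe); the step names and output are invented, and the point is only to show the joins, not to suggest this is how you would build it in WF.

    using System;
    using System.Threading.Tasks;

    class DependencyGraphSketch
    {
        static Task Step(string name)
        {
            return Task.Run(() => Console.WriteLine("step " + name + " finished"));
        }

        static async Task Main()
        {
            // 1 and 2 start independently.
            Task t1 = Step("1");
            Task t2 = Step("2");

            // 3 after 1, 5 after 2, 4 after 1 AND 2.
            Task t3 = t1.ContinueWith(_ => Step("3")).Unwrap();
            Task t5 = t2.ContinueWith(_ => Step("5")).Unwrap();
            Task t4 = Task.WhenAll(t1, t2).ContinueWith(_ => Step("4")).Unwrap();

            // The AND join: 6 runs only after 3, 4, and 5 have all completed.
            await Task.WhenAll(t3, t4, t5);
            await Step("6");
        }
    }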

    This type of pattern can be very powerful, and it is often described by Petri nets.  There is a multitude of join semantics, from simple AND joins to conditional joins (ranging from a simple "count" to a condition based on data). 

    How’d we get to where we are today?

    Today, the flowchart in the .NET Framework only supports the former pattern, that of arbitrary rework.  How'd we get there?  While we found a lot of value in both patterns, when we tried to compose them we often got into places where it became very hard to have predictable execution semantics.  The basic crux of the design issue is the "re-arming" of a join.  If I decide to jump back to task 3 at some point, what does it mean for 3 to complete?  Do I execute 6 again right away, or do I wait for 4 and 5 to complete a second time?  What happens if I jump to 3 a second time?  Now, there certainly exist formal models for this, and things like Petri nets define a set of semantics for these cases.  What we found, though, was that any expression of the model that took these things into account required a pretty deep understanding of the process model, and we lost one of the key benefits we see from a flowchart: that it is a simple, approachable model.  This is not to say that we don't think the coordination pattern is useful, or that Petri nets don't ultimately hold the unified approach; we just did not have the time to implement a coordination pattern in VS 2010, and creating an approachable Petri-net-backed flowchart modeling style was something we'd need more time to get correct. 

    Where does this leave us?

    If you’re looking at this and saying, “but I want coordination (or a full blown petri-net)”, there are a couple of options:

    • Roll up your sleeves and write that activity.  In the blog post above, I outline how you could go about building the coordination pattern, and with the new activity model, this would be simpler.  The harder part is still writing a designer, and there is now some good guidance on free form designers in the source of the state machine on codeplex. 
    • Provide feedback on Connect.  When you do this, it opens a bug in our bug database that our team looks at daily.  Describe your scenario, what you're trying to do, and what you'd like to see.  This type of customer feedback is vital to helping us plan the right features for the next release.