• mwinkle.blog

    Q & A on Advanced Workflow Services talk


    Martin posted an interesting question here on my last post:


    The first thing that we need to do in order to enable this duplex messaging is to have the "client" workflow explicitly provide its context token to the service, so that the service can address the appropriate instance of the client workflow.

    Note, in the real world, you'll probably need to supply more than just the context token; you will need some address and binding information as well.


    Shouldn't we have this built into a custom binding (or an extra binding element)? That way, the (WF) context information is included with every call from the client, and the developer is not required to follow an (artificial) state machine.

    Note that by the time the service calls back, the endpoint (and the binding) of the client may have changed... So we may need dynamic name-endpoint resolution (sounds like DNS?)


    The question here is generally also asked as "wait, why do I need to explicitly provide a context token and change the contract to have this context thing?"  This is a common question, as changing the contract to reflect implementation details is generally a no-no.  There's one part I left out as well, so let me add that here:

    In the real world, one may also wish not to change the contract (or may not have the ability to).  In that case, we still need to explicitly provide the context token and endpoint information in order to allow the service to call back.  There are a few ways to do this, of varying complexity and implication:

    • Put this into the message header and have the other side extract this information and use it the same way.
      • There are two downsides to this approach:
        • It still requires coordination with the other side to agree upon where the context token is being placed.
        • The WF messaging activities don't give me an easy way to reach in and get to header information, but one could certainly look at some infrastructure-level extensions to manage this.  This idea of making duplex easier is one thing that Ed will be talking about in his Oslo workflow services talk at PDC.
    • Create a custom binding element.
      • There is one downside with this approach:
        • You're creating a custom channel, custom binding element, and all the other stuff that goes along with creating a channel.  This is very hard work.  If the answer is "you've got to write a channel to do it," we need to do a better job making it easier (see earlier point about Ed's talk).
      • If that's the behavior that you want, you are certainly welcome to go down that path; it would be great to hear about your experiences doing it!
      • The upside to this approach:
        • You're creating a layer of near-infinite extensibility, allowing you to handle everything from simple cases to complex dynamic endpoint resolution behavior, once you invest the cost to create the channel that would sit there and do that.

    This is also the same approach one could take with an implicit, or content-based, correlation scheme.  In that case, you create an intermediary that is responsible for translating message contents into the "addressing" information for the instance.  That intermediary can be a service, or it can be a channel, and once you put that intermediary in place, you are free to do work as sophisticated or as simple as you need.

  • mwinkle.blog

    Regarding Re-use of Context-aware Proxies


    Yesterday, following my "What's the context for this conversation" presentation, I was approached with the following question:

    I am sharing a singleton client that I want to use to interact with multiple workflow instances; how do I change the context for each of them?

    Completely unbeknownst to me, Wenlong, one of the product team's more prolific bloggers, addressed this very topic in his post here, conveniently posted yesterday :-)

  • mwinkle.blog

    Tracking Gem in SDK Documentation


    I am always amazed when I find something cool in the SDK; it is a great source for all sorts of different details regarding WF.  The gem today originated with a customer question:

    Columns 6 and 7 are EventArgTypeId and EventArg, which seem to be NULL.
    I would like to know if I can use these 2 fields (EventArgTypeId, EventArg) to track information about when these events occurred. What is the use of these fields? I did not find much information on the web.

    I had a good guess what the fields were for, but one of our support engineers answered by pointing out this page in the SDK, which describes:

    The SQL tracking service in Windows Workflow Foundation lets you add tracking information about workflows and their associated activities. The SqlTrackingQuery class provides high-level access to the data that is contained in the tracking database. However, you can also use direct queries against the SQL tracking service database views for more detailed information. These views map directly to the underlying SQL tracking service table schemas.

    This then goes on to describe all of the fields inside the many tables of the SQL tracking service.  This is a great way to figure out how to write custom queries against the views and what to expect in what columns.  I've talked previously about the tracking service and databases here.


    The answer to the original email question, by the way, is that these two fields are written to when the event raised contains event args we want to capture, say an exception.  If we want to track specific pieces of data, my blog post talks about where we can find those.

  • mwinkle.blog

    Hosting Workflows in a Web Service


    This started as a discussion over on the forums, but I ended up writing a fairly long-winded response, and for the sake of the whole internet, I'm going to place it here as well so it doesn't get lost in forum-based obscurity!

    The question related to hosting WF as a web service, but not in ASP.NET.  If you expose it as an asmx web service, either by selecting "publish as web service" or by rolling your own asmx web service project, the workflows are still going to be hosted in IIS, but in a different application.

    Anyway, my article is located here.

  • mwinkle.blog

    SOA&BP Conference Announcements, "Oslo"


    This morning at the SOA&BP Conference, we talked about Oslo for the first time.  For me, this is a big day, as it marks the point where the rest of the world knows what a lot of people have been and will continue to be working on.  Robert Wahbe, the VP of Connected Systems, mentioned in the keynote that Oslo can be best viewed as a series of investments that span a number of release cycles.

    What does this mean for me, a WF developer? (Note: these are my interpretations.)

    • A vehicle for further investments in WF and WCF.  There will be a ton of enhancements in order to enable new scenarios, take the idea of modeling processes in an executable workflow to the next level, and drive performance and functional stuff.
    • Moving WF to the next level, by making it a first-class citizen in this modeling world, and by elevating rules and other artifacts so that they can be modeled, managed, deployed, and monitored.
    • Getting a chance to look at what people are doing with v1, and what lessons we can learn from it.  In Orcas, the stuff that we did was purely additive.  This longer release gives us a chance to enhance and improve and address things that we couldn't do in the Orcas timeframe.  There's some really exciting work going on here that I'm looking forward to talking about more in the future.
    • Finally, it gives us a better way to tell the WF hosting story, in that we will have a way to manage, deploy, and execute WF and WCF in a host we will deliver, rather than requiring a "build your own" approach (which still remains an option for folks who have specific hosting requirements).

    Our marketing folks always get nervous when we start talking about "revolutionary" technology (although, maybe it would get us some more Apple 1984 like commercials :-) ).  I've always seen workflow as a very transformational technology.  I see the things that are coming in Oslo as a very natural, evolutionary step, in the process of what I believe has been, and will continue to be a revolutionary way of making us be more productive developers.

    Finally, given some of the past history people have had with version numbers, I would not get caught up in the version numbers mentioned in the press release.  As one of the marketing guys told me, "The quotes mean something," which, translated, means "The numbers are just placeholders indicating a major release beyond where we are currently at."

  • mwinkle.blog

    Getting Up and Running with Piggybank on HDInsight


    It’s been a fun couple of weeks launching HDInsight, and I’m going to be getting back into doing some more technical blogging.  There are a few easy topics off the bat that we’ve heard requested from customers.  The first one involves Piggybank, which is a user-contributed collection of useful Pig user defined functions (UDFs).

    This assumes your machine is set up with:

    • HDInsight (grab from WebPI here)
    • Java build tools (Ant and Ivy on your command path)

    Next, let’s build Piggybank by grabbing the Pig source code and checking out the 0.9 branch.
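    One way to do that, assuming git is installed and on your command path (a Subversion checkout of the same branch works just as well; branch-0.9 is the branch name in the Apache Pig repository):

```shell
git clone https://github.com/apache/pig.git
cd pig
git checkout branch-0.9
```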


    At this point, you should now have a pig directory; move to it and type ant in order to build.

    Next, navigate to .\contrib\piggybank\java, and again, type ant in order to build.  This will produce piggybank.jar.
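    Taken together, the two build steps look like this from a console (directory names per the checkout above):

```shell
cd pig
ant

cd contrib\piggybank\java
ant
```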

    Next, open your HDInsight console window and type pig.  This brings up Grunt, the interactive pig shell.

    At this point, you can now use the following in your script:


    REGISTER C:\Your\Path\To\piggybank.jar ;

    foo = FOREACH entry GENERATE org.apache.pig.piggybank.evaluation.string.UPPER(item_name);


    At this point, you can now take advantage of all of the functions in piggybank, and if you’re interested in contributing your own, details are here.
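    Putting the pieces together, a complete script might look like the following; the input file, its column layout, and the relation names here are illustrative:

```pig
-- register the jar built above (adjust the path to your build output)
REGISTER C:\Your\Path\To\piggybank.jar;

-- hypothetical comma-separated input with an id and a name column
entry = LOAD 'input/items.txt' USING PigStorage(',')
        AS (item_id:int, item_name:chararray);

-- apply piggybank's string UPPER function to each record
foo = FOREACH entry GENERATE org.apache.pig.piggybank.evaluation.string.UPPER(item_name);

DUMP foo;
```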

  • mwinkle.blog

    Writing a Map/Reduce Job for Hadoop using Windows PowerShell


    I had a little bit of time on my hands and wanted to whip up a quick sample using PowerShell for a Hadoop job.

    This uses the Hadoop streaming capability, which essentially allows for mappers and reducers to be written as arbitrary executables that operate on standard input and output.

    The .ps1 scripts are pretty simple; they operate over a set of airline data that looks like this:


    The schema here is a comma-separated set of US flights with delays.
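    For illustration, rows shaped like the following would work with the scripts below; the column positions (arrival delay first, carrier code second, departure delay fourth) are inferred from the mapper and reducer, and the values are invented:

```text
12,AA,1100,5
-3,DL,0915,0
8,AA,1430,22
```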

    The goal of my job is to pull out the airlines, the number of flights, and then some very basic (min and max) statistics on the arrival and departure delays. 

    My mapper:

    function Map-Airline
    {
        [Console]::Error.WriteLine("reporter:counter:powershell,invocations,1")
        $line = [Console]::ReadLine()
        while ($line -ne $null)
        {
            [Console]::WriteLine($line.Split(",")[1] + "`t" + $line)
            [Console]::Error.WriteLine("reporter:counter:powershell,record,1")
            $line = [Console]::ReadLine()
        }
    }

    Map-Airline

    My reducer:

    function Reduce-Airlines
    {
        $oldLine = "<initial invalid row value>"
        $count = 0
        $minArrDelay = 10000
        $maxArrDelay = 0
        $minDepDelay = 10000
        $maxDepDelay = 0

        $line = [Console]::ReadLine()

        while ($line -ne $null)
        {
            $key = $line.Split("`t")[0]
            if (($oldLine -ne $key) -and ($oldLine -ne "<initial invalid row value>"))
            {
                # New key: emit the totals for the previous group and reset the state.
                [Console]::WriteLine($oldLine + "`t" + $count + "," + $minArrDelay + "," + $maxArrDelay + "," + $minDepDelay + "," + $maxDepDelay)
                $count = 0
                $minArrDelay = 10000
                $maxArrDelay = 0
                $minDepDelay = 10000
                $maxDepDelay = 0
            }

            # Fold the current record into the running statistics.
            $flightRecord = $line.Split("`t")[1].Split(',')
            $arrDelay = [Int32]::Parse($flightRecord[0])
            $depDelay = [Int32]::Parse($flightRecord[3])
            if ($arrDelay -ne 0)
            {
                $minArrDelay = [Math]::Min($minArrDelay, $arrDelay)
            }
            if ($depDelay -ne 0)
            {
                $minDepDelay = [Math]::Min($minDepDelay, $depDelay)
            }
            $maxArrDelay = [Math]::Max($maxArrDelay, $arrDelay)
            $maxDepDelay = [Math]::Max($maxDepDelay, $depDelay)
            $count = $count + 1
            [Console]::Error.WriteLine("reporter:counter:powershell," + $key + ",1")

            $oldLine = $key
            $line = [Console]::ReadLine()
        }

        # Emit the final group.
        [Console]::WriteLine($oldLine + "`t" + $count + "," + $minArrDelay + "," + $maxArrDelay + "," + $minDepDelay + "," + $maxDepDelay)
    }

    Reduce-Airlines

    One thing to note on the reducer is that we use the $oldLine variable to keep tabs on when our group of results moves to the next one.  When using Java, your reduce function will be invoked once per group, so you can reset the state at the beginning of each group.  With streaming, you will never have a group split across reducers, but your executable will only be spun up once per reducer (and in the sample here, there is only one).  You can also see that I’m writing out to STDERR in order to get a few counters recorded as well.

    The next trick is to get these to execute.  The process spawned by the Streaming job does not know about .ps1 files; it’s basically just cmd.exe.  To get around that, we also create a small driver .cmd file for each script, and upload the scripts with the –file directive from the command line.

    AirlineMapperPs.cmd:

    @call c:\windows\system32\WindowsPowerShell\v1.0\powershell -file ..\..\jars\AirlineMapper.ps1

    AirlineReducerPs.cmd:

    @call c:\windows\system32\WindowsPowerShell\v1.0\powershell -file ..\..\jars\AirlineReducer.ps1

    The ..\..\jars directory is where the –file directive will place the files when the job executes.

    And now we execute:

    hadoop jar %HADOOP_HOME%\lib\hadoop-streaming.jar ^
        -mapper d:\dev\_full_path_\AirlineMapperPs.cmd ^
        -reducer d:\dev\_full_path_\AirlineReducerPs.cmd ^
        -input fixed_flights ^
        -output psMapReduce3 ^
        -file d:\dev\_full_path_\AirlineMapper.ps1 ^
        -file d:\dev\_full_path_\AirlineReducer.ps1

    And we get our results.

    There is still some work to be done here.  I’d like to make it a little easier to get these running (possibly wrapping submission in a script which takes care of the wrapping for me).  Also, on Azure, we either need to sign the scripts or log into each of the machines and allow script execution.  As I get that wrapped up, I’ll drop it alongside our other samples.  We’ll also work to make this easier to get up and running on Azure if that’s interesting for folks.

  • mwinkle.blog

    WF, WCF and CardSpace training materials posted


    We've put a fair amount of content out on the community site: 4 days' worth of presentations, labs, and demos, to be specific.  Check it out here.  It's about 50MB worth of downloads; the page shows 11.4kb in error, that's just the size of the license page.

    The content breaks down like this:

    • Demos
    • Labs
    • Presentations
      • Day 1
        • Overview on the technologies
      • Day 2 - WCF
        • Contracts
        • Bindings and Behaviors
        • Security, Reliability and Consistency
      • Day 3 - WF
        • Custom Activities
        • Hosting
        • Workflow and Communication
        • State Machine Workflows
      • Day 4 - Identity and CardSpace
        • Identity Metasystem
        • CardSpace

    Download the content and let us know what you think!

  • mwinkle.blog

    Dogs and Cats Living Together, Mass Hysteria (or, Source Available for the .NET Framework)


    The title quote brought to you courtesy of the original Ghostbusters film.  As Scott just announced on his blog, we're making the source available under the Microsoft Reference License (MS-RL).

    This is cool.  But the really cool part of this is that there will be a symbol server in the cloud that will let you dynamically, and auto-magically, download the symbols and source on demand from us. This means you can now keep stepping into code beyond when you get to DataAdapter.Update(), for instance, and trace all the way down the stack.  This is going to make it a lot easier to dive deep into debugging to see what is really going on when you hand off a bunch of parameters to a method in the .NET Framework.  I can think of a number of times this would have been incredibly helpful in tracking down those "oh, I should have set parameter x to something that could have been cast to a y" moments.

    I'll be doing a channel9 video today or tomorrow, any questions for the team, let me know!

  • mwinkle.blog

    WinFX Name Change


    S. Somasegar just announced the WinFX name change.  It's now the .NET Framework 3.0.  This is a name change for the technology, which still has all of the great parts: Windows Presentation Foundation, Windows Communication Foundation, Windows CardSpace, and Windows Workflow Foundation. I think that this makes it clear that WinFX is not a separate entity from .NET; the two have always been tied very closely together, and the naming now reflects this.

    We'll be rolling out a new version of www.windowsworkflow.net shortly, watch this space for more details!


  • mwinkle.blog

    .NET 3.5 Beta Exams


    Trika dropped me an email asking me to point out the nice fact that if you're interested in taking a beta certification exam for WPF, WCF or WF, they've extended the deadline by a few weeks.  This means you can take a beta version of the test (FOR FREE), and if you pass, it counts as if you passed the actual test, so you can get an early jump on the certification train this way.

    Check it out here.

  • mwinkle.blog

    Two Cool Technologies, One Great Solution


    My peer, David, has a great screencast posted on Channel9 that shows off a solution from FullArmor that incorporates WF and PowerShell and one really cool looking designer.

    On my list of cool things to check out when I have free time (currently June 2015) is PowerShell.  It's a great tool for devs to make their apps much more manageable, by both devs and our dear friends, the IT Pros.  David has built some samples that layer right on top of things to return collections of data, in order to PowerShell-enable them.

    Check it out here

  • mwinkle.blog

    Greetings from Barcelona


    I (and my entire team except for our boss) am here in beautiful Barcelona for TechEd Developer.  This is definitely one of my favorite events, because of the location, the amenities, and most importantly, the people. 

    I'm giving a number of talks throughout the week (see below), so if you're here, please drop on by.  If you don't make it to the talks, I've been known to frequent the bar on the first floor of the Hilton right next to the convention center :-)   There's also a great lineup of talks from other folks as well that I will probably be dropping into.  It will be a good week, and then on Thursday, following my last talk, I will be taking a bit of holiday until next Tuesday.


    Session list:

    SBP08-IS Windows Workflow Foundation (WF) Open Microphone Talk

    Matt Winkler

    • “Should I let a business analyst compose workflows?”
    • “Why is this class sealed?”
    • “How can I extend the designer to fit my scenario?”
    • “How would I build and use a repository of business rules in non-WF applications?”
    • “I think that WF is great.”
    • “I think that WF i...

    Wed Nov 7 15:45 - 17:00 Room 130

    SBP09-IS Windows Workflow Foundation Performance

    Matt Winkler

    A key to understanding performance of any system is an understanding of the tradeoffs one can make. In a more relaxed, casual environment come and join us for a conversation about performance. This chalk talk will be organized around a series of performance considerations for Windows Workflow Foundation and how those c...

    Thu Nov 8 17:30 - 18:45 Room 125, Thu Nov 8 10:45 - 12:00 Room 128

    SBP304 Implementing Workflow Enabled Services and Durable Services using .NET Framework 3.5

    Matt Winkler , Justin Smith

    Inside .NET Framework 3.5, there is new functionality allowing Windows Communication Foundation (WCF) services to be built and consumed from a Windows Workflow Foundation (WF) workflow. This session will introduce the feature, discussing motivation, scenarios, and reasons for using this feature, an architectural overvi...

    Tue Nov 6 17:00 - 18:15 Room 114

    SBP313 What is the Context of this Conversation? Enabling Long Running Conversations in Workflow Services

    Matt Winkler

    The .NET Framework 3.5 will introduce the functionality to call services from Windows Workflow Foundation (WF), and to expose workflows as a Windows Communication Foundation (WCF) service. A common pattern is to have a workflow serve as the coordinator between a number of other processes (including workflows). This tal...

    Wed Nov 7 10:45 - 12:00 Room 114

    Matt Winkler , David Aiken

    Come with your questions for Matt and David about the Dinner Now demo. If there is something that you need to do in .NET Framework 3.5 but don’t know how, come and ask!

    Tue Nov 6 10:45 - 12:00 Room 130

    WIN302 .NET Framework 3.5 End-to-End: Putting the Pieces Together - Part 1

    Matt Winkler , David Aiken

    Do you build .NET Applications? In this session, learn how to use the .NET Framework to build better end-to-end solutions using the DinnerNow.NET Sample application. From a Windows Presentation Foundation (WPF) client to a Windows Communication Foundation (WCF) service tier driven by Windows Workflow Foundation (WF), w...

    Mon Nov 5 17:45 - 19:00 Auditorium


    WIN303 .NET Framework 3.5 End-to-End: Putting the Pieces Together - Part 2

    Matt Winkler , David Aiken

    Do you build .NET Applications? In this session, learn how to use the .NET Framework to build better end-to-end solutions using the DinnerNow.NET Sample application. From a Windows Presentation Foundation (WPF) client to a Windows Communication Foundation (WCF) service tier driven by Windows Workflow Foundation (WF), w...

    Tue Nov 6 09:00 - 10:15 Auditorium

  • mwinkle.blog

    A Quick Update


    I was out of town for most of this week, so I am busy catching up on things here. 

    First, I would like to thank Jon for this post, pointing out some areas of improvement which can be made in the designer re-hosting sample that I pointed to.  Paul then let me know that Vihang's article on MSDN had been updated as well.  Vihang's article is a great introduction to a lot of the issues that you will encounter in designer rehosting.

    Other than that, I'm getting ready to head off to TechEd, where the WF team will be in full force.  If you're looking to find out anything about workflow, stop by our sessions or chalk talks. If your plate is already full with other things to do, stop on by the Connected Systems TLC in the developer section (I think it's "blue").  There will be plenty of product team members hanging around to talk about any workflow questions you might have.  I'll be the guy wearing the blue Microsoft shirt, so I should be pretty easy to find!

    Oh, and make sure to check out www.windowsworkflow.net on Monday morning, we've got a little bit of an update coming :-)

  • mwinkle.blog

    24 Workflow Screencasts Posted to wf.netfx3.com


    The other day, someone directed me to Mike Taulty's blog (http://mtaulty.com/blog/ ), where he has some fantastic posts about using Windows Workflow Foundation (check out his post on long running activities).  Mike has also put together a ton of screencasts (24 as of today) on using WF.

    These have been added to a new screencasts directory in the file area.  Dive in and check them out.  If you've got screencasts that you're doing, feel free to upload those as well.  Once they get approved, they will be added into this directory.

  • mwinkle.blog

    RC Madness


    Moustafa has the scoop: right before a three-day weekend here in the US, the RC of the .NET Framework 3.0 has been made available.


    And from what I hear, there is a Go-Live license for this as well!

  • mwinkle.blog

    Designer Rehosting Survey


    We're heads down working on the next version of the WF designer.  One of the areas we're considering investing in is designer re-hosting.  The V1 designer had this really cool feature where you could rehost the designer inside any winforms app.  I've been surprised by the number of customers I've talked to who are very interested in doing this, either to allow end users to edit processes, or simply to enable visualization of the workflow.

    We're really interested in why and how you are using designer rehosting (or, even more importantly, why you may have decided to not rehost the designer).

    We've got a survey posted here, and would appreciate a few minutes of your time to fill it out.

  • mwinkle.blog

    Orcas WF and WCF Samples


    The first pass at a number of samples for WF and WCF in Orcas has been posted here.

    This has samples for all of the features discussed in my last post, as well as some of the cool rules stuff that Moustafa talks about here.

  • mwinkle.blog

    Worst Presentation... Ever.


    I would like to take this moment to apologize to all of the attendees who were at our WIN302 session this afternoon here in Barcelona.  Moments before we were scheduled to begin, a very nasty power issue hit our room, causing the lights to go out, all of the equipment on stage to shut down, and all of the audio equipment to reset (replaced with a series of rather nasty-sounding "pops").  Our demo machine, which we had just spent the last few hours getting set "just so," was also a casualty of this.

    Following 10 minutes of working with stage crews and audio techs, and David frantically trying to get the demo machine back to a usable state, we decided to begin the talk.  It had been 5 minutes since someone last ran down onto the stage yelling into a walkie-talkie, so I figured we were in the clear.

    David was still working on the demo, so I began the talk, and quickly needed to fill time while David worked on the demo machine. 

    In short, by the time we got back to being ready, things were all jabberwockied up, and I was most certainly off my game, and as a result found myself rambling when I should have been focused, grasping for phrasing when I should have been driving the message, and stumbling in a talk where I had hoped to be knocking it out of the park.

    I want to apologize to the attendees, because you deserved a much better talk than the one you got (and David and I are going to make it up in part 2, tomorrow). 

    Reading through the feedback was pretty hard, this is a crowd that has very high expectations, and today did not meet that bar.

    Just when you think you have things all ready to go.

    How could we have done better?

    • A backup machine, set up in exactly the same fashion as the first, would still not have helped with the power issue.
    • I need a way to save the state of all of my open Visual Studio windows and script it out, so I can run one script that opens all of the instances and all of the right files (jumping to the right spot would be nice as well).
    • Not freaking out.  We had just gotten set up and ready to go, and the power thing really knocked me off kilter.

    So, we walk away and we learn something, and we'll be back to do it again tomorrow.  Everyone has these nightmare conference stories, but that still doesn't make things better.

  • mwinkle.blog

    Activities Survey


    So, we're getting close to .NET 3.5 getting out the door, and as always we're thinking about what comes next.  The activities team is looking at how to make activities better going forward, and they are very interested in what you have to say: how you have been using them and how you'd like to use them.

    This feedback will help us understand how to shape the future of things and how we should think about the pain points you are experiencing.

    Fill out the survey here!

  • mwinkle.blog

    Must-See Workflow Sample


    First, a quick welcome to the world of workflow blogging to Sergey, a developer on the WF team.  Sergey has just posted the Workflow Manager sample on wf.netfx3.com.  This is a sample application that allows you to inspect workflows that are running and change them using the dynamic update features.

    Go to the website, download the sample, and see what you can do with designer re-hosting, dynamic update, and the tracking service.

    This sample requires the June CTP of .NET 3.0.


  • mwinkle.blog

    New Screencasts for WF


    I just posted two screencasts over on channel9.

    I've got another one in the works and I will try to get these out with some regularity!

  • mwinkle.blog

    Testing from Microsoft Word 2007


    I’m pleased to find out that Word 2007 produced this! More details available here.

  • mwinkle.blog

    A Few Additional Treats @ PDC09


    One fun thing we will be doing this year is using the theater in our lounge/booth area to have a few chalk-style presentations.  These will be brief 30-minute demo / Q&A / conversation sessions with somebody from the product team.  These will dive a little deeper and be more interactive about topics we know you’re interested in.  We’d love for you to come and just ask a ton of questions:

    Thursday, 11:00-11:30 / Future Directions for State Machine Workflows  / Alan Ko

    Discuss options for using the State Machine modeling style on WF4, and see a demonstration of a WF4-based State Machine design experience.

    Thursday, 1-1:30 / Migrating WF 3.5 Workflows to WF 4 / Bob Schmidt

    Learn some tips and techniques for migrating your WF 3.5 workflows to the new WF4 runtime, and see a demonstration of a tool that helps automate this process.

    Thursday, 2:00-2:30 / WF Rehosting Deep Dive / Kushal Shah

    See how WF’s rehostable runtime, designer, and debugger can add powerful capabilities to your applications.

    I’m excited because we’re going to use this as an opportunity to show some cool stuff that we’re thinking about.  The usual disclaimers apply: some of the stuff we will show consists of prototypes and ideas we are looking for feedback on.  These are things that we think are important, and your feedback helps us understand what you really need to make them useful for you.

    Stop on by the booth if you’re interested in any of the above topics!

  • mwinkle.blog

    Tracing Rules Execution


    So, we've got some examples of using the WF Rules Engine outside the confines of a workflow (see here, and here).  One feature you get inside a workflow is that the rules engine will use the tracking service, if one has been provided, to log information regarding rule execution.  Moustafa shows how you can use System.Diagnostics tracing to get the same effect when using the rules outside a workflow.

    Take a look at this sample, especially if you are interested in incorporating the WF Rules Engine in a non-workflow application.  It would be nice to attach a config flag or other setting to this in order to enable rule tracing, which may be helpful if you are trying to debug a complex rules scenario.

    I'm working on putting some other rules samples together, if there are specific scenarios you are interested in, drop me a line and I'll see what we can do!
