A Tour of Various TPL Options


The Task Parallel Library (TPL) exposes various options that give you more control over how tasks get scheduled and executed:

  • Choose whether to optimize for fairness or for overheads when scheduling tasks.
  • Specially mark tasks known to be long-running to help the scheduler execute efficiently.
  • Create a task as attached or detached to the parent task.
  • Schedule continuations that run only if a task threw an exception or was cancelled.
  • Run tasks synchronously.
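The bullets above map onto TaskCreationOptions, TaskContinuationOptions, and Task.RunSynchronously. A minimal, self-contained sketch touching each one (the delegate bodies are placeholders):

```csharp
using System;
using System.Threading.Tasks;

class OptionsTour
{
    static void Main()
    {
        // Prefer FIFO fairness over the default low-overhead,
        // work-stealing scheduling.
        var fair = Task.Factory.StartNew(
            () => { },
            TaskCreationOptions.PreferFairness);

        // Hint that this task is long-running so the scheduler can give it
        // a dedicated thread rather than tying up a ThreadPool thread.
        var longRunning = Task.Factory.StartNew(
            () => { /* lengthy work */ },
            TaskCreationOptions.LongRunning);

        // An attached child: the parent does not complete until the child does.
        var parent = Task.Factory.StartNew(() =>
        {
            Task.Factory.StartNew(
                () => { /* child work */ },
                TaskCreationOptions.AttachedToParent);
        });

        // A continuation that fires only if its antecedent threw.
        parent.ContinueWith(
            t => Console.WriteLine(t.Exception),
            TaskContinuationOptions.OnlyOnFaulted);

        // Execute a task synchronously on the current thread.
        var inline = new Task(() => Console.WriteLine("ran inline"));
        inline.RunSynchronously();

        Task.WaitAll(fair, longRunning, parent);
        Console.WriteLine(inline.Status);
    }
}
```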

Joseph E. Hoag's article A Tour of Various TPL Options explains these options in detail, accompanied by examples of correct and incorrect uses.

(This paper and many more are available through the Parallel Computing Developer Center on MSDN at http://msdn.microsoft.com/en-us/concurrency/ee851578.aspx.)

Comments
  • Strange. Why is the default TaskContinuationOption "run the continuation regardless of the initial task's completion status"? Wouldn't NotOnFaulted or OnlyOnRanToCompletion be more reasonable defaults? After all, if my first Task loads data from the database, and my second Task number-crunches that data, there's no sense in running the second Task if the first one threw an exception.

  • Joe,

    Typically, the user needs to do something in all three cases: whether the task ran to completion, threw an exception, or was cancelled. In your example, you probably need to report to the user that there was an issue querying the database.

    So, often you'll have a switch in the continuation action that will do different things depending on the Status property of the task.
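    A sketch of that pattern (LoadFromDatabase, Process, and the report methods are hypothetical stand-ins):

```csharp
using System;
using System.Threading.Tasks;

class Example
{
    static int LoadFromDatabase() { return 42; }  // stand-in for the query
    static void Process(int data) { Console.WriteLine("crunching " + data); }
    static void ReportError(AggregateException e) { Console.WriteLine("query failed: " + e.InnerException.Message); }
    static void ReportCancelled() { Console.WriteLine("query cancelled"); }

    static void Main()
    {
        var query = Task.Factory.StartNew(() => LoadFromDatabase());

        // One continuation, fired regardless of outcome, that branches
        // on the antecedent's Status.
        var handle = query.ContinueWith(t =>
        {
            switch (t.Status)
            {
                case TaskStatus.RanToCompletion:
                    Process(t.Result);         // normal path
                    break;
                case TaskStatus.Faulted:
                    ReportError(t.Exception);  // e.g. tell the user the query failed
                    break;
                case TaskStatus.Canceled:
                    ReportCancelled();
                    break;
            }
        });

        handle.Wait();
    }
}
```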


  • I really don't understand.

    First of all, the "report to the user" exception handler should be at the top level, around the code that's calling Wait() -- it shouldn't be duplicated in every single Task you write.

    Second of all, if there's special exception handling to be done for a particular Task, shouldn't it go inside that Task, instead of being split out into the next Task in the sequence? Separating that out feels like you're violating encapsulation. If you, for example, don't want an SqlException to propagate outside your Task, but instead want to wrap it in your own library's exception type, then you should catch it inside your Task, not in the next Task in line.

    Third of all, if you're checking for errors at the beginning of every single Task (except the first one), isn't that pretty much just reverting back to the C programming model where you check all the return codes? I prefer structured exception handling. We have the technology.

    I can only think of one case where "run the continuation regardless of what happened to the previous tasks" would make sense, and that's when you're translating a finally block into Tasks. In all the other cases I can think of, if a prerequisite didn't complete, then there's no point in running the continuation; the only meaningful error handling it could do is "Did the previous task get an exception? It did? Okay, I'll start by throwing an exception." That's effectively what happens in the equivalent procedural code.

    I don't know. Maybe I'm thinking too narrowly. Our code is very procedural, and I'm trying to think in terms of breaking it into composable Tasks, and then plugging those together, but with some of them running in parallel. Since I'm thinking in terms of code that started out procedural, exception handling is very clear-cut: if the preceding code didn't run, then I won't run either. (There's some leeway in that for independent tasks that run in parallel, of course, but as soon as you hit code that depends on a prior result, you stop if that prior result didn't get calculated.) Are you thinking in different terms than that? I just can't visualize cases where your "manually check for exceptions" would do more good than harm.

  • Hi Joe-

    All good questions and comments.

    First, if you can handle the exceptions inside the body of the task, great, but that's not always possible.  For example, some tasks don't have delegates executing where you could even put a try/catch, such as tasks representing asynchronous I/O-based operations (e.g. tasks wrapping a Begin/End APM call).

    It's also often the case that you want the result or exception marshaled to a different context, for example:

    Task.Factory.StartNew(() =>
    {
       ... // do work in background, might throw
       return result;
    }).ContinueWith(completed =>
    {
       // runs on UI thread
       try { txtResult.Text = completed.Result; }
       catch (Exception exc) { MessageBox.Show(exc.ToString()); }
    }, TaskScheduler.FromCurrentSynchronizationContext());

    For cases where you've launched multiple asynchronous operations, it's typical to want to operate on their results or exceptions en masse, rather than dealing with each individually; that's where ContinueWhenAll applies.
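    For instance (a self-contained sketch, with one deliberately failing operation standing in for a real batch):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class Batch
{
    static void Main()
    {
        // Three operations; one of them fails.
        var ops = new[]
        {
            Task.Factory.StartNew(() => 1),
            Task.Factory.StartNew<int>(() => { throw new InvalidOperationException("boom"); }),
            Task.Factory.StartNew(() => 3)
        };

        // The continuation receives every antecedent, whatever its outcome,
        // so successes and failures can be summarized as a batch.
        var summary = Task.Factory.ContinueWhenAll(ops, completed =>
        {
            int failed = 0, succeeded = 0;
            foreach (var t in completed)
            {
                if (t.IsFaulted)
                {
                    var observed = t.Exception;  // observe so it isn't unhandled
                    failed++;
                }
                else if (t.Status == TaskStatus.RanToCompletion)
                {
                    succeeded++;
                }
            }
            Console.WriteLine(succeeded + " succeeded, " + failed + " failed");
        });

        summary.Wait();
    }
}
```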

    There are also a variety of important cases where you want to use continuations simply for the purpose of knowing about completion, rather than doing anything with the result/exception, for example keeping a count of the number of outstanding operations yet to complete.
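    A minimal sketch of that counting pattern:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Countdown
{
    static void Main()
    {
        var tasks = new Task[5];
        for (int i = 0; i < tasks.Length; i++)
            tasks[i] = Task.Factory.StartNew(() => Thread.Sleep(10));

        int outstanding = tasks.Length;
        var allDone = new ManualResetEvent(false);

        foreach (var t in tasks)
            t.ContinueWith(_ =>
            {
                // This continuation fires for success, fault, and cancellation
                // alike, so the count always reaches zero.
                if (Interlocked.Decrement(ref outstanding) == 0)
                    allDone.Set();
            });

        allDone.WaitOne();
        Console.WriteLine("outstanding = " + outstanding);
    }
}
```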

    And there are a good number of cases where the Task object is exposed from a shared construct and where you want anyone that registers a continuation with the task to understand its success or failure.  For example, consider an AsyncCache<T> type, where the Get method returns a Task<T> instead of a T, due to the value being generated asynchronously.  Multiple consumers may retrieve the same Task<T> instance, and then want to be notified when the data is available.  It's typical that they're unable to make meaningful forward progress until the data is available, in which case alerting them to the fact that it will never be available (due to exception) is also very valuable and something they need to know.
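    A rough sketch of such a cache (the type name and shape here are illustrative, not a shipped API):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Every consumer asking for the same key gets the same Task<TValue>, and can
// attach continuations that fire whether the value arrives or the producer
// throws, so nobody waits forever on a value that will never exist.
public class AsyncCache<TKey, TValue>
{
    private readonly Func<TKey, TValue> _produce;
    private readonly ConcurrentDictionary<TKey, Lazy<Task<TValue>>> _map =
        new ConcurrentDictionary<TKey, Lazy<Task<TValue>>>();

    public AsyncCache(Func<TKey, TValue> produce) { _produce = produce; }

    public Task<TValue> Get(TKey key)
    {
        // Lazy<T> ensures the producer runs at most once per key even
        // under concurrent first requests.
        return _map.GetOrAdd(
            key,
            k => new Lazy<Task<TValue>>(
                () => Task.Factory.StartNew(() => _produce(k)))).Value;
    }
}

class Demo
{
    static void Main()
    {
        var cache = new AsyncCache<string, int>(s => s.Length);
        Task<int> a = cache.Get("hello");
        Task<int> b = cache.Get("hello");
        Console.WriteLine(ReferenceEquals(a, b));  // same task for both consumers
        Console.WriteLine(a.Result);
    }
}
```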

    From an API perspective, it was also cleaner coming up with understandable options if "always" was the default.  For example, if the default were instead NotOnFaulted, what would it mean for someone to explicitly specify "NotOnCanceled"... does that replace the NotOnFaulted, or does it augment it?  Different people answer differently.  If, however, the default is "it always fires", it's clear that specifying "NotOnCanceled" overrides that.

    These are just a few examples, but I hope it helps to understand the reasoning.  Of course, you don't have to use the default... you can explicitly register your continuations with whatever TaskContinuationOption you want, which could also help to make the code more readable.
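    For example, explicitly opting in to the behavior you described (LoadData and the handlers are stand-ins):

```csharp
using System;
using System.Threading.Tasks;

class Explicit
{
    static void Main()
    {
        var load = Task.Factory.StartNew(() => "data");

        // Runs only if load succeeded; otherwise this continuation is cancelled.
        var crunch = load.ContinueWith(
            t => Console.WriteLine("crunching " + t.Result),
            TaskContinuationOptions.OnlyOnRanToCompletion);

        // Runs only if load threw.
        var report = load.ContinueWith(
            t => Console.WriteLine("failed: " + t.Exception.InnerException.Message),
            TaskContinuationOptions.OnlyOnFaulted);

        // Waiting on a continuation that never fired throws a cancellation
        // exception, which we swallow here.
        try { crunch.Wait(); } catch (AggregateException) { }
        try { report.Wait(); } catch (AggregateException) { }
    }
}
```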

  • Okay, I think I follow -- several of those points make a lot of sense. Thanks for the clarification!
