All about Async/Await, System.Threading.Tasks, System.Collections.Concurrent, System.Linq, and more…
One of the primary reasons we've released CTPs of Parallel Extensions is to solicit feedback on the design and functionality it provides. Does it provide all of the APIs you need to get your job done? Are there scenarios you wish the APIs supported but that you have to work around in clunky ways? And so forth. We've received some terrific feedback thus far: we've already made changes based on it, we're currently making changes based on it, and we'll continue making changes based on it moving forward.
We continually have discussions internally about additional support we could provide. Frequently these discussions result in our needing to know more about our customers' needs. There's a practically unlimited amount of functionality we could bake into Parallel Extensions, but each additional piece not only requires design, development, testing, and support, but it also can complicate other aspects of the design, potentially slow down other primary scenarios, and so forth. In order to focus our efforts, we need feedback.
Two such discussions occurred recently, and any feedback you provide would be useful in our deciding how to move forward.
The first is about shutting down a TaskManager instance. In the current implementation, we support only one kind of shutdown, which is triggered through a call to Dispose on TaskManager. Dispose is a synchronous invocation that blocks until all of the Tasks previously scheduled to the TaskManager have completed. However, there are other semantics we could potentially implement. For example, we could provide an asynchronous shutdown option that allowed you to call Shutdown without blocking; the TaskManager would then be disposed of only when all of the tasks scheduled to it had completed. Or, with a bit more internal reworking, we could support automatically canceling all of the tasks scheduled to the TaskManager. Again, that could be done synchronously (canceling all tasks and then waiting for them) or asynchronously (canceling all tasks and then cleaning up after the TaskManager only once any that were currently executing had completed). How useful would such capabilities be to you? In what scenarios would you find them useful or, more importantly, necessary? Would such capabilities be dangerous at all in your scenarios?
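To make the options concrete, here is a sketch of what the alternatives might look like in code. This is purely hypothetical: neither a Shutdown method nor a TaskManagerShutdownMode enumeration exists in the current CTP, where Dispose is the only shutdown mechanism.

```csharp
// Hypothetical sketch only: Shutdown and TaskManagerShutdownMode are
// invented names for the semantics discussed above; they are not in the CTP.
using System.Threading.Tasks;

class ShutdownSketch
{
    static void Demo()
    {
        TaskManager manager = new TaskManager();
        // ... schedule Tasks against the manager ...

        // Current CTP behavior: blocks until all scheduled tasks complete.
        manager.Dispose();

        // Possible alternative semantics:
        // manager.Shutdown(TaskManagerShutdownMode.Asynchronous);   // return now, dispose when all tasks finish
        // manager.Shutdown(TaskManagerShutdownMode.CancelAndWait);  // cancel pending tasks, wait for running ones
        // manager.Shutdown(TaskManagerShutdownMode.CancelAsync);    // cancel pending tasks, clean up later
    }
}
```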
The second is about scheduling order. One of the benefits that a work-stealing scheduler (the kind of scheduler employed by the Task Parallel Library) provides is a distribution of work across all cores, where each core prefers to schedule new work to and pull work from its local queue(s). This scheduling can be made extremely efficient, and it can be made to improve locality and the like by using LIFO ordering, meaning that the task most recently scheduled is the one that will execute first. This is separate from work scheduled to the scheduler from other threads (such as an application's main thread), which will typically still be scheduled in a generally FIFO order. The question, then, is whether there are scenarios where you always want that FIFO-ish order, regardless of where the work is scheduled from. Such an option would likely decrease performance in some key scenarios, but it would also be more fair in terms of the order in which work gets executed. Do you have any scenarios that would require such a PreferFairness option? We'd love to hear about them if you do.
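To illustrate why local queues favor LIFO while stealing tends toward FIFO, here is a deliberately simplified, lock-based sketch of a work-stealing deque. The real scheduler uses lock-free data structures; this is only meant to show the two ends of the queue, not the actual implementation.

```csharp
// Simplified sketch of a work-stealing deque (NOT the TPL's implementation).
// The owning worker pushes and pops at one end (LIFO, cache-friendly),
// while thieves take from the other end (roughly FIFO, more fair).
using System;
using System.Collections.Generic;

class WorkStealingDeque
{
    private readonly LinkedList<Action> _items = new LinkedList<Action>();

    // Owner thread: newly created work goes on the "hot" end.
    public void PushLocal(Action work) { lock (_items) _items.AddLast(work); }

    // Owner thread: take the most recently pushed item (LIFO).
    public Action PopLocal()
    {
        lock (_items)
        {
            if (_items.Count == 0) return null;
            Action work = _items.Last.Value;
            _items.RemoveLast();
            return work;
        }
    }

    // Thief thread: take the oldest item (FIFO), which improves fairness.
    public Action Steal()
    {
        lock (_items)
        {
            if (_items.Count == 0) return null;
            Action work = _items.First.Value;
            _items.RemoveFirst();
            return work;
        }
    }
}
```

A PreferFairness-style option would essentially bias all dequeues toward the Steal end, trading locality for execution order closer to arrival order.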
What are the implications for games and media, like HD motion pictures? Or both together?
Some sort of fairness is a must! It doesn't have to be strict FIFO; it should be OK to prefer "fresh" tasks as long as older tasks eventually get a chance and the possibility of starvation is taken into account. With the TPL, we should be able to easily create a lot of small tasks without having to worry about scheduling and associated problems like starvation and fairness (the TPL should choose the most performant, but fair enough, approach to process all the tasks we throw at it). On the other hand, we should also have the possibility to force strict FIFO order, in case our model relies on that fact (e.g. when a task manager is used as a work queue, though I would still prefer a dedicated WPF-like Dispatcher for that purpose).
Thomas, thanks for the feedback. Can you elaborate on the scenarios where this freshness is important? For these scenarios, are you creating the tasks from within other tasks or from outside of the target scheduler's threads?
art_scott, it really depends on particular algorithms. Could you elaborate?
I originally got very interested in PEL because I thought that it would support a true fork()/join(). I've used the library and I like it, but I think that fork()/join() would be a really great addition, especially given the number of people already familiar and comfortable with that paradigm. I also think that fork()/join() offers a much lower complexity level than having to handle delegates/lambdas.
A while back I wrote an application that needed to process a virtually unlimited amount of data. Essentially, the data was being processed as fast as it came in; the application ran continuously with virtually no hope of catching up. As the data was processed, its results became available for other parts of the system to use. If the data had been processed in LIFO order, then older data (if I understand this right) would never get processed, which would result in gaps and errors in the results. It was not essential that the data be processed in a strict FIFO manner, but it was important that ALL of the data got processed "near" each other. It sounds like this is an example where "fairness" is preferred.
The TPL is designed to efficiently split CPU-intensive work across multiple cores.
Is it planned to use the TPL to control concurrent processes and real-time tasks too?
I think the Concurrency and Coordination Runtime (CCR) is in many aspects similar to the TPL (TaskManager/Dispatcher, Task/Task, Future/Port), and it would be nice to merge both into one library.
I think a way to influence the scheduling of tasks is a must.
It's not only about FIFO or LIFO; a mechanism using a prioritization metric comes in handy most of the time...
My preferred implementation of such a metric mostly uses a combination of the PRIORITY, SIZE, and AGE of the task to decide what to execute next.
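One way to read that suggestion: combine the three signals into a single score and always run the lowest-scoring task next. The sketch below is hypothetical (the type name, fields, and weights are all invented for illustration, not part of Parallel Extensions); the key idea is that the age term grows without bound, so old low-priority tasks cannot starve.

```csharp
// Hypothetical scoring metric combining priority, size, and age.
// Lower score = runs sooner. The weights (10.0, 0.5) are arbitrary
// illustration values, not anything from the library.
using System;

class ScheduledItem
{
    public int Priority;            // smaller = more urgent
    public TimeSpan EstimatedSize;  // rough cost estimate
    public DateTime EnqueuedAt;

    public double Score(DateTime now)
    {
        double ageSeconds = (now - EnqueuedAt).TotalSeconds;
        return Priority * 10.0
             + EstimatedSize.TotalSeconds
             - ageSeconds * 0.5;    // aging term prevents starvation
    }
}
```

This also covers the dynamic-priority request mentioned in a later comment: because the score is recomputed at dequeue time, a task's effective priority rises the longer it waits.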
Don't know where to deposit this feedback: I would really like to see Parallel.Do return. It simply is useful.
Fair scheduling or even better a priority queue would be very useful.
Thanks for all of the feedback! This is great. For those folks asking for prioritization, can you provide some real-world examples of how you'd use it?
Stephan, the Do method has simply been renamed: it's now Parallel.Invoke rather than Parallel.Do.
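For anyone who missed the rename, usage is unchanged: pass the delegates to run, and the call returns once all of them have finished. (The example below uses the System.Threading.Tasks namespace as in recent releases; the namespace may differ in the CTP bits.)

```csharp
// Parallel.Invoke (the renamed Parallel.Do) runs the supplied delegates,
// potentially in parallel, and returns only when all of them have completed.
using System.Threading;
using System.Threading.Tasks;

class InvokeExample
{
    public static int Run()
    {
        int completed = 0;
        Parallel.Invoke(
            () => Interlocked.Increment(ref completed),
            () => Interlocked.Increment(ref completed),
            () => Interlocked.Increment(ref completed));
        return completed; // all three delegates have finished by this point
    }
}
```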
I think it would be very useful to have both sync and async methods to stop the TaskManager, and most importantly it should support stopping even when the TaskManager is not done with all of its tasks.
You want real-world examples of a need for fair scheduling or for prioritization? There are hundreds of examples.
For one, just about every manufacturing segment has worked on this problem. I know of many papers devoted to this from the chemical processing industry.
In my world I have tens of thousands of tasks that need to be farmed out to dozens of separate machines. I don't necessarily know or care what the tasks do. I do know they can each take from near-zero time to hours. I also have a variety of priorities that need to be honored. Some tasks require ordering (priority 1 must come before priority 2). Some must complete _before_ a specific time. That one is much trickier.
Recently I was asked to dynamically change a task's priority according to how much time it has been in the queue. In other words, the longer it's around, the more likely it is to be processed sooner rather than later.
Great, thanks for the examples.
I would love to see some kind of prioritization possibility in the framework library.
Right now I'm handling that myself, but I can't believe this wouldn't benefit a lot of people out there.
Here is an easy example:
I'm continuously querying data from a trading platform.
This data is then analyzed, and if certain criteria are matched by the data, actions are taken.
Before these actions are taken, an update is issued for the relevant subset of data, to be sure that, for example, the prices haven't changed since the last update.
This update has to be executed immediately and can therefore not be handled by the continuous update task queue.
Right now this is handled by a second TaskManager which handles the higher-prioritized updates.
The example is extremely simplified; if you are interested, I can give a detailed description of this case.