Should I expose synchronous wrappers for asynchronous methods?


In a previous post Should I expose asynchronous wrappers for synchronous methods?, I discussed “async over sync,” the notion of using synchronous functionality asynchronously and the benefits that doing so may or may not yield. The other direction of “sync over async” is also interesting to explore.

Avoid Exposing Synchronous Wrappers for Asynchronous Implementations

In my discussion of “async over sync,” I strongly suggested that if you have an API which internally is implemented synchronously, you should not expose an asynchronous counterpart that simply wraps the synchronous method in Task.Run. Rather, if a consumer of your library wants to use the synchronous method asynchronously, they can do so on their own. The post outlines a variety of benefits to this approach.

Similar guidance applies in the reverse direction as well: if you expose an asynchronous endpoint from your library, avoid exposing a synchronous method that just wraps the asynchronous implementation. Doing so hides from the consumer the true nature of the implementation, and it should be left up to the consumer how they want to consume the implementation. If the consumer chooses to block waiting for the asynchronous implementation to complete, that’s up to the caller, and they can do so with their eyes wide open.

This guidance is even more important for the “sync over async” case than it is for the “async over sync” case, because “sync over async” can lead to significant problems for the application, such as hangs.

Basic Wrapping

To understand why, let’s examine what it looks like to wrap an asynchronous method with a synchronous one. How this is done, of course, depends on the asynchronous pattern employed by the method.

If I have an APM implementation:

public IAsyncResult BeginFoo(…, AsyncCallback callback, object state);
public TResult EndFoo(IAsyncResult asyncResult);

then a simple synchronous wrapper might look like:

public TResult Foo(…)
{
    IAsyncResult ar = BeginFoo(…, null, null);
    return EndFoo(ar);
}

In the APM pattern, if the EndXx method is provided with an IAsyncResult from the BeginXx method, and the IAsyncResult’s IsCompleted returns false, the call to EndXx needs to block until the operation is completed. Hence, we call BeginXx and then immediately call EndXx to block until the asynchronous operation has completed, at which point EndXx will be able to properly return the result of the operation or propagate its exception.

A more modern asynchronous implementation would use the Task-based Asynchronous Pattern:

public Task<TResult> FooAsync(…);

and for it a simple synchronous wrapper might look like:

public TResult Foo(…)
{
    return FooAsync(…).Result;
}

Here we’re accessing the returned Task’s Result, which will wait for the result to be available before returning it.
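One caveat worth noting about Result: if the task faults, accessing Result (like calling Wait) wraps the operation’s exception in an AggregateException, whereas going through the task’s awaiter rethrows the original exception directly. A sketch of that alternative form of the wrapper (same hypothetical Foo/FooAsync as above):

```csharp
public TResult Foo(…)
{
    // Blocks just like .Result, but GetAwaiter().GetResult() rethrows the
    // task's original exception rather than wrapping it in AggregateException.
    return FooAsync(…).GetAwaiter().GetResult();
}
```

Either way, the thread is still blocked until the asynchronous operation completes; the difference is only in how a failure surfaces to the caller.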

Real-World Example

These seem like reasonable wrappers, and depending on the situation, they may be… but of course they may not be.

Let’s say for sake of example that BeginFoo/EndFoo and FooAsync are all good asynchronous implementations, using async I/O under the covers such that no threading resources are consumed for the vast majority of their execution; only at the very end of their execution do they need to do a small amount of processing in order to handle the results received from the asynchronous I/O, and they’ll do this processing internally by queuing the work to the ThreadPool. This is quite reasonable, and a very common phenomenon in asynchronous implementations. And, let’s say that someone has set an upper limit on the number of threads in the ThreadPool to 25 using ThreadPool.SetMaxThreads or via a configuration flag.

Now, you call the synchronous Foo wrapper method from a ThreadPool thread, e.g. Task.Run(()=>Foo()). What happens? Foo is invoked, it kicks off the asynchronous operation, and then immediately blocks the ThreadPool thread waiting for the operation to complete. When the async I/O eventually completes, it queues a work item to the ThreadPool in order to complete the processing, and that work item will be handled by one of the 24 free threads (1 of the threads is currently blocked in Foo). Great.

But now let’s say that instead of queuing one task to call Foo, you queue 25 tasks to call Foo. Each of the 25 threads in the ThreadPool will pick up a task, and invoke Foo. Each of those calls will start some asynchronous I/O and will block until the work completes. The async I/O for all of the 25 Foo calls will complete and result in the final processing work items getting queued to the pool. But the pool threads are now all blocked waiting for the calls to Foo to complete, which won’t happen until the queued work items get processed, which won’t happen until threads become available, which won’t happen until the calls to Foo complete. Deadlock!
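The starvation scenario above can be sketched in code. This is a deliberately broken demonstration, not something to run in production: Foo and FooAsync are the hypothetical methods from this post, and the whole point is that WaitAll never returns.

```csharp
// Cap the pool at 25 worker threads (and 25 I/O completion threads),
// mirroring the scenario described above.
ThreadPool.SetMaxThreads(25, 25);

var tasks = new Task[25];
for (int i = 0; i < tasks.Length; i++)
{
    // Each of the 25 pool threads picks up a work item and then blocks
    // inside the synchronous Foo wrapper...
    tasks[i] = Task.Run(() => Foo());
}

// ...but each Foo can only complete after its I/O-completion work item
// runs on the pool, and no pool thread is free to run it. Deadlock.
Task.WaitAll(tasks);
```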

If this seems far-fetched, know that this exact situation was quite common in .NET 1.x with a popular Framework method: HttpWebRequest.GetResponse. GetResponse was implemented almost exactly like Foo is in our example wrapper: it called the asynchronous BeginGetResponse, and then turned around and immediately waited on it using EndGetResponse. Further, in .NET 1.x, the maximum number of threads in the ThreadPool was, by default, a low number like 25. As a result, developers using HttpWebRequest.GetResponse would find themselves in deadlock territory; as a stop-gap measure, the implementation had a heuristic to check how many threads were available in the ThreadPool, and would throw an exception if it appeared a deadlock would result. In .NET 2.0, HttpWebRequest.GetResponse was fixed to be a truly synchronous implementation, rather than wrapping BeginGetResponse/EndGetResponse.


The previous example may seem esoteric, but there is a scheduler in most apps today that has an extremely limited number of threads and that can easily be deadlocked with a situation like this: the UI. The blog post Await, and UI, and deadlocks! Oh my! describes how you can deadlock the UI thread in a similar situation.

Imagine the following: 

private void button1_Click(object sender, EventArgs e)
{
    Delay(15);
}

private void Delay(int milliseconds)
{
    DelayAsync(milliseconds).Wait();
}

private async Task DelayAsync(int milliseconds)
{
    await Task.Delay(milliseconds);
}

While this may look innocent, invoking Delay from the UI thread like this is almost certainly going to deadlock. By default, await’ing a Task will post the remainder of the method’s invocation back to the SynchronizationContext from which the await was issued (even if that “remainder” is just the completing of the method). Here, the UI thread calls Delay, which calls DelayAsync, which awaits a Task that won’t complete for a few milliseconds. The UI thread then synchronously blocks in the call to Wait(), waiting for the Task returned from DelayAsync to complete. A few milliseconds later, the task returned from Task.Delay completes, and the await causes the remainder of DelayAsync’s execution to be posted back to the UI’s SynchronizationContext. That continuation needs to execute so that the Task returned from DelayAsync may complete, but that continuation can’t execute until the call to button1_Click returns, which won’t happen until the Task returned from DelayAsync completes, which won’t happen until the continuation executes. Deadlock!

Had Delay instead been invoked from a console app, or from a unit test harness that didn’t have a similarly constrictive SynchronizationContext, everything likely would have completed successfully.  This highlights the danger in such sync over async behavior: its success can be significantly impacted by the environment in which it’s used.  That’s a core reason why it should be left up to the caller to decide whether to do such blocking, as they are much more aware of the environment in which they’re operating than is the library they’re calling.

Async All the Way Down

The point here is that you need to be extremely careful when wrapping asynchronous APIs as synchronous ones, as if you’re not careful, you could run into real problems. If you ever find yourself thinking you need to call an asynchronous method from something that’s expecting a synchronous invocation (e.g. you’re implementing an interface which has a synchronous method on it, but in order to implement that interface you need to use functionality that’s only exposed asynchronously), first make sure it’s truly, truly necessary; while it may seem more expedient to wrap “sync over async” rather than to re-plumb this or that code path to be asynchronous from top to bottom, the refactoring is often the better long-term solution if it’s possible.

What if I really do need “sync over async”?

In some cases, for whatever reason, you may actually need to do “sync over async.” In such cases, there are some things you can do to ease the pain.

Test in Multiple Environments

Make sure you test your wrapper in a variety of environments: from a UI thread, from the ThreadPool, under stress on the ThreadPool with a low maximum set on the number of allowed threads, etc.

Avoid Unnecessary Marshaling

If at all possible, make sure the async implementation you’re calling doesn’t need the blocked thread in order to complete the operation (that way, you can just use normal blocking mechanisms to wait synchronously for the asynchronous work to complete elsewhere). In the case of async/await, this typically means making sure the asynchronous implementation you’re calling uses ConfigureAwait(false) at all of its await points; this prevents each await from trying to marshal back to the current SynchronizationContext. As a library implementer, it’s a best practice to always use ConfigureAwait(false) on all of your awaits unless you have a specific reason not to; this is good not only for helping to avoid these kinds of deadlock problems, but also for performance, as it avoids unnecessary marshaling costs.
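As a sketch of what that library practice looks like in code (ReadDataAsync and Process are hypothetical helpers, used only for illustration):

```csharp
public static async Task<int> FooAsync()
{
    // ConfigureAwait(false) tells this await not to capture and resume on
    // the caller's SynchronizationContext, so a caller that blocks on the
    // returned task can't deadlock waiting for its own context to free up.
    byte[] data = await ReadDataAsync().ConfigureAwait(false);

    // After the await, we may be running on a ThreadPool thread rather
    // than the caller's original context; that's fine for library code
    // that doesn't touch UI state.
    return Process(data);
}
```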

Offload to Another Thread

Consider offloading to a different thread, which is typically possible unless the method you’re invoking has some kind of thread affinity (e.g. it accesses UI controls). Let’s say you have methods like the following:

int Sync() // caller needs this to return synchronously
{
    return Library.FooAsync().Result;
}

// in a library; uses await without ConfigureAwait(false)
public static Task<int> FooAsync();

As described above, FooAsync is using await without a ConfigureAwait(false), and as you don’t own the code, you’re unable to fix that. Further, the Sync method you’re implementing is being called from the UI thread, or more generally from a context prone to deadlocking due to a limited number of participating threads (in the case of the UI, that limited number is one). Solution? Ensure that the await in the FooAsync method doesn’t find a context to marshal back to. The simplest way to do that is to invoke the asynchronous work from the ThreadPool, such as by wrapping the invocation in a Task.Run, e.g.

int Sync()
{
    return Task.Run(() => Library.FooAsync()).Result;
}

FooAsync will now be invoked on the ThreadPool, where there won’t be a SynchronizationContext, and the continuations used inside of FooAsync won’t be forced back to the thread that’s invoking Sync().

Consider a Nested Message Loop

If you do find that you need to invoke an asynchronous method and “block” waiting for it (to satisfy a synchronous contract), and if that method marshals back to the current thread, and if changing the method’s implementation won’t work (e.g. because you don’t have the ability to modify it), and if offloading won’t work (e.g. because the asynchronous method is thread affine), you’re in a tight spot.  One saving grace may be that there are multiple ways “blocking” behavior can be achieved.

The simplest and most general approach to blocking is just to wait on a synchronization primitive. This is typically what happens when you call an EndXx method on the IAsyncResult returned from the BeginXx method, with the EndXx method using the IAsyncResult’s AsyncWaitHandle to wait until the IAsyncResult transitions to a completed state. The primitive’s waiting implementation may itself have some smarts that could help in some limited situations. For example, if you Wait on a Task that is backed by a delegate (e.g. it was created with Task.Run, Task.Factory.StartNew, etc.) and that’s waiting to run, it’s possible that TPL will decide to execute the Task on the current thread as part of the Wait call, a behavior we often describe as inlining. Typically, however, such specialized behaviors only apply in corner cases.
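To make that primitive-based blocking concrete, here’s a rough sketch of how an EndXx method typically waits, using the hypothetical BeginFoo/EndFoo pair from earlier in this post (the result-retrieval details are elided, as they depend on the particular APM implementation):

```csharp
public TResult EndFoo(IAsyncResult asyncResult)
{
    // If the operation hasn't finished yet, block the calling thread on
    // the IAsyncResult's wait handle until it transitions to completed.
    if (!asyncResult.IsCompleted)
    {
        asyncResult.AsyncWaitHandle.WaitOne();
    }

    // ...then retrieve the stored result, or propagate the stored
    // exception if the operation failed.
    …
}
```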

If you’re blocking the UI thread, it’s likely that the UI’s message loop won’t be pumping all the messages necessary to keep the UI responsive and to eventually process the message that would allow the thing you’re waiting on to complete and to avoid the deadlock. For such cases, as a last ditch effort, you could consider “blocking” via a “nested message loop”.

A nested message loop is just what it sounds like. If, for example, you’re executing as part of a button click event handler, you’re being called from a message loop that’s dispatching the processing for a button click message it received. If that button click handler then itself synchronously spins up its own message loop, that inner loop is nested within the outer one. Here, for example, is a nested message loop in Windows Forms that spins waiting for a task to complete; the UI will remain responsive, since we’re forcing messages in the queue to be drained via the repeated calls to Application.DoEvents (warning: this is just for demonstrative purposes, and I’m not recommending you do this… see “Keeping your UI Responsive and the Dangers of Application.DoEvents” for more details):

static T WaitWithNestedMessageLoop<T>(this Task<T> task)
{
    while (!task.IsCompleted)
        Application.DoEvents();
    return task.Result;
}

Some UI frameworks have built-in support for nested message loops, in a more efficient and robust manner than the ad-hoc spinning with DoEvents I’ve done above. WPF, for example, uses the DispatcherFrame and Dispatcher.PushFrame constructs to process a nested message loop. In the following example, the nested loop will exit when the task completes and sets the DispatcherFrame’s Continue property to false.

static T WaitWithNestedMessageLoop<T>(this Task<T> task)
{
    var nested = new DispatcherFrame();
    task.ContinueWith(_ => nested.Continue = false, TaskScheduler.Default);
    Dispatcher.PushFrame(nested);
    return task.Result;
}

Even when a framework has a built-in notion of nested message loops, they’re far from an ideal solution.  It’s much better to asynchronously wait whenever possible, allowing the higher-level message loop to handle all of the processing.  This is just something you can consider if you’re truly in a bind.


Getting back to the original question for this blog post: should you expose synchronous wrappers for asynchronous methods? Please try hard not to; leave it up to the consumer of your method to do so if they must.  You should consider the possibility that someone will consume your functionality synchronously, so code accordingly, with as few dependencies on the threading environment as possible.  And at the very least, if you must expose a synchronous wrapper that will just block waiting for an asynchronous implementation to complete, please document it accordingly so that a consumer knows what they’re consuming and can plan accordingly. If you must consume an asynchronous method in a synchronous manner, do so with your eyes wide open, being careful to think through potentially problematic situations like deadlocks.

  • Hi Stephen,

    Will it be a good idea if we try to create parallel code flow of synchronous methods instead of wrapping it over asynchronous methods?

    I am not sure but it will enable developers to use any of the paradigm independently. Thoughts?



  • Pratik: I'm not quite sure what you're asking.  Can you elaborate?  What are examples of the two things you're looking to compare?  Thanks.

  • I think the most interesting part will be: "How should I write Unit-Test for async methods" - right now this can be a major pain - for example testing ViewModel-Code can be a show stopper because if you dispatch your UI-code to the calling thread (test-runner) and wait for it ... well you get it - deadlocking...

  • RE: "As a library implementer, it’s a best practice to always use ConfigureAwait(false) on all of your awaits, unless you have a specific reason not to"

    This crystalizes a worry I've been having with async all along: the default marshalling of continuations to the current synchronization context (e.g. UI thread). Is this really the best default? It seems from the above discussion that it could cause a lot of problems.

    Surely *most* code in an application should be library code, not UI code. If that is the case, then surely await should default to NOT doing any thread marshalling.

    In a related matter: Is there any way to configure continuations to run on a specific scheduler? I suppose you just have to await your first operation, let the continuation run on whatever thread it would normally, then manually run the next bit on a specific scheduler (and await it to carry on with the next step as before). I guess it doesn't really matter a lot unless you need to switch schedulers multiple times in a single flow of code.

  • Hi Stephen, I really like your articles,

    Have you looked at my issue which I posted in the AsyncCTP Forum,

    I try to use a similar pattern like :

    I think during the adaptation of first pieces of Async coding maybe programmers have some hassles but it could be a golden change.

    I moved to TPL at the same time with AyncCTP programming,

    I saw Different strategies doing this, but here I have problems adapting myself and the codes to.

    If you help on this it would be really appreciated

  • James:

    Regarding the default marshaling behavior... This was something we discussed at length, since it's true that most code at the application/UI-layer will want automatic marshaling back to the original context and most business/data-access logic won't require it.  We ended up marshaling by default for several reasons.  One is that async/await allows you to write a method like you normally would if writing synchronous code, and synchronous code remains on one thread for the duration of its processing; by automatically marshaling, we get closer to that semantic.  Another reason is that it helps most developers fall into the "pit of success"; almost all code in the UI layer will need this, and most code in other layers isn't noticeably harmed by this... it's only if you explicitly try to block waiting for an async operation to complete that you potentially get into problems, and in general we don't want people to do that from their UI thread, i.e. developers should await rather than Wait() from their UI thread.

    Regarding running continuations on a specific scheduler... If you want all continuations in the method to run on a particular scheduler or context, you can invoke that method such that it sees that scheduler or context as current, e.g. by scheduling the invocation of the method to a particular TaskScheduler, or using SynchronizationContext.SetSynchronizationContext to set a context as current.  If you want to hop back and forth between schedulers, you can write a simple custom awaiter with an IsCompleted that returns false and an OnCompleted that runs the supplied Action continuation in the right place, e.g. by scheduling it to a particular TaskScheduler or by Post'ing it to a particular SynchronizationContext.


    I replied to your thread in the forums.

  • Thanks for the detailed response, Stephen

  • I have a sync method which wraps in async method. So taking example from above:

    int Sync() // caller needs this to return synchronously
    {
        Task<int> rslt = Library.FooAsync();
        return rslt.Result;
    }

    // in a library
    public static async Task<int> FooAsync();

    Sync method is called by thread. Let’s say it gets called on click of button on .aspx page. Would calling of Sync() method put the ASP.Net thread back to thread pool when it reaches on statement Library.FooAsync() within Sync method

  • @Rajeev: The thread calling Sync() will block in the call to Result, and it will remain blocked (unable to do other work) until the async operation completes, at which point the thread will continue executing and return to the caller of Sync.  Note, too, that if FooAsync uses the current synchronization context to post back continuations (which it will do by default), this is likely to deadlock your ASP.NET request.

  • @Stephen, Thanks for quick reply. Just a follow up question what if I have following in my sync method ( being called by thread)

    int Sync()
    {
        Task<int> t = Library.FooAsync();
        t.ContinueWith((rslt) => { Response.Write(rslt.Result); });
        t.Wait();
    }



    Would my thread go back to threadpool when it encounters t.wait()? Or is it just await operator ( like await myAsyncMethod() ) that determines if thread goes back to threadPool? Ideally, I would be doing all my calls async but I am struggling with some legacy interfaces / code.

    Thanks again for your time!!

  • @Rajeev: No, when you call t.Wait(), you're still blocking the thread until the Task returned from FooAsync completes.

  • Hi Stephen,

    I am watching many videos from your team and reading blogs, searching for answer for proper async / await usage. However, all of your examples start with an event, which you mark as async void (fire & forget), and all works flawlessly.

    What if we have a WPF / MVVM application, where due to some user action a ViewModel property's setter is fired (through binding). How can we invoke an async method in such case, that should not consume thread pool thread (its a database call)? I do not care if it is fire & forget, I just don't want to consume new thread using Task.Run( () => AsyncMethod())?

  • I am the author of FalconUDP - a .NET library for sending small messages frequently used by .NET games. Attempting to port the library to NETFX_CORE is proving difficult as the existing API is all synchronous - I use a non-blocking Socket and calls to send and receive UDP datagrams return immediately - which is very imporant as the consumers of the library  (games call it from their update loop).

    However the only way to send UDP datagrams in WinRT is using the DatagramSocket and and sending data is only possible using async methods..... I do not think this is necessary as sending UDP datagrams is just an in-memory operation - it is connectionless and the system takes care of actually sending it out over the network interface once you given it the bytes to send, i.e. there is nothing to await!

    So I now have to either 1) break my existing API surface and expose async method that will always run sync anyway, or 2) keep the sync API and wrap up the async calls - your blog strongly suggests I shouldn't do!

    Please advise.

  • @markmnl: Are you sure the WinRT methods are actually completing synchronously?  Regardless, what I wrote in this post is guidance, and I do call out that there could be circumstances where it's difficult to avoid such sync-over-async wrapping, e.g. when you have existing surface area you're implementing.  If you must, you must, and then you'll just want to ensure you have appropriate testing to make sure you're not introducing deadlocks and the like, e.g. if the WinRT implementation didn't complete synchronously and tried to invoke its completion event back on the origin thread (I don't know its implementation).

  • @Stephen: You wrote:

    "If you must, you must, and then you'll just want to ensure you have appropriate testing to make sure you're not introducing deadlocks and the like"

    Actually it will not work because markmnl is writing a library and has no control over his users' code. If any code calls markmnl's library from a pool thread it could deadlock because of thread starvation. And this is becoming increasingly likely seeing how pervasive the .NET thread pool has become.

    I now have the feeling the .NET asynchronous paradigm is flawed, in the same way that exception handling is flawed in C++. What I mean is that safe asynchronous code is almost infeasible to achieve in a non-trivial program, in the same way that exception safe C++ code is almost infeasible to achieve in a non-trivial program. It requires a global understanding of the program and therefore does not scale with the number of code lines and the number of third-party libraries.

    The problem comes from the reliance on the thread pool, which acts as a single shared limited resource.

    The only way to make a safe asynchronous program is to never call Task.Wait (or Task.Result or other similar methods) but there could be circumstances where it's difficult to avoid such sync-over-async wrapping, as you admit.

    Am I too pessimistic?
