Building Async Coordination Primitives, Part 5: AsyncSemaphore

In my last few posts, I covered building an AsyncManualResetEvent, an AsyncAutoResetEvent, an AsyncCountdownEvent, and an AsyncBarrier.  In this post, I’ll cover building an AsyncSemaphore class.

Semaphores have a wide range of applicability.  They’re great for throttling, for protecting access to a limited set of resources, and more.  There are two public semaphore types in .NET: Semaphore (which wraps the Win32 equivalent) and SemaphoreSlim (which provides a lightweight counterpart built around Monitors).  Here we’ll build a simple async version, with the following shape:

public class AsyncSemaphore
{
    public AsyncSemaphore(int initialCount);
    public Task WaitAsync();
    public void Release();
}

The member variables we’ll need look almost exactly the same as what was needed for the AsyncAutoResetEvent, and for primarily the same reasons.  We need to be able to wake individual waiters, so we maintain a queue of TaskCompletionSource<bool> instances, one per waiter.  We need to keep track of the semaphore’s current count, so that we know how many waits can complete immediately before we need to start blocking.  And as we’ll see, there’s the potential for a fast path, so we maintain an already completed task that we can use repeatedly when the opportunity arises.

private static readonly Task s_completed = Task.FromResult(true);
private readonly Queue<TaskCompletionSource<bool>> m_waiters = new Queue<TaskCompletionSource<bool>>();
private int m_currentCount;

The constructor will just initialize the count based on the caller’s request:

public AsyncSemaphore(int initialCount)
{
    if (initialCount < 0) throw new ArgumentOutOfRangeException("initialCount");
    m_currentCount = initialCount;
}

Our WaitAsync method will also be reminiscent of the WaitAsync we wrote for AsyncAutoResetEvent.  We take a lock so as to ensure all of the work we do happens atomically and in a synchronized manner with the Release method we’ll write shortly.  Then, there are two possible paths.  If the current count is above zero, there’s still room left in the semaphore, so this wait operation can complete immediately and synchronously; in that case, we decrement the count and return the cached completed task (this means that waiting on our semaphore in an uncontended manner requires no allocations).  If, however, the current count is 0, then we create a new TaskCompletionSource<bool> for this waiter, add it to the queue, and return its Task to the caller.

public Task WaitAsync()
{
    lock (m_waiters)
    {
        if (m_currentCount > 0)
        {
            --m_currentCount;
            return s_completed;
        }
        else
        {
            var waiter = new TaskCompletionSource<bool>();
            m_waiters.Enqueue(waiter);
            return waiter.Task;
        }
    }
}

The Release side of things will also look very familiar.  If there are any waiters in the queue, we dequeue one and complete its TaskCompletionSource<bool>, but we do so outside of the lock, for the same reason we did so with AsyncAutoResetEvent.  If there aren’t any waiters, then we simply increment the count.

public void Release()
{
    TaskCompletionSource<bool> toRelease = null;
    lock (m_waiters)
    {
        if (m_waiters.Count > 0)
            toRelease = m_waiters.Dequeue();
        else
            ++m_currentCount;
    }
    if (toRelease != null)
        toRelease.SetResult(true);
}
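
To make the usage pattern concrete, here’s a minimal sketch of using this AsyncSemaphore to throttle concurrent downloads.  The Downloader type, the HttpClient usage, and the count of three are illustrative choices for this sketch, not part of the type itself:

using System.Net.Http;
using System.Threading.Tasks;

public class Downloader
{
    // Allow at most three downloads in flight at a time
    // (three is an arbitrary choice for this sketch).
    private readonly AsyncSemaphore m_throttle = new AsyncSemaphore(3);
    private readonly HttpClient m_client = new HttpClient();

    public async Task<string> DownloadThrottledAsync(string url)
    {
        await m_throttle.WaitAsync();   // asynchronously acquire a slot without blocking a thread
        try
        {
            return await m_client.GetStringAsync(url);
        }
        finally
        {
            m_throttle.Release();       // hand the slot to the next waiter, if any
        }
    }
}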

Next time, we’ll take a look at how to use such an AsyncSemaphore to implement a scoped mutual exclusion locking mechanism.

Comments

  • When using locks in situations like these, could you also replace them with a ConcurrentExclusiveSchedulerPair?

  • Hi JesperEC-

    (You beat me to it... I'm planning to discuss this in an upcoming post ;)

    Yes, though they can each lead to a different style of programming, and which you choose for a given situation will depend on that situation's particular needs.  There is also one important behavioral difference I'll discuss in an upcoming post.  Again, neither is necessarily better; it'll just depend on your particular needs.

  • Upcoming coverage: Nice!

    ConcurrentExclusiveSchedulerPair seems like locks turned inside out to me, and I figured the tradeoff was lightweight but with kernel time vs. user mode but with more allocations. I look forward to reading more on this.

  • Hi,

    Nice post!

    I was wondering if this article was written before the SemaphoreSlim class in .NET Framework 4.5 was introduced?

    I just realized that SemaphoreSlim now offers a WaitAsync method, so I'd like to know whether I should use this custom AsyncSemaphore in my code or whether I can use the SemaphoreSlim class instead. Is the behavior the same?

  • @Darkey: Yes, this post was written before WaitAsync was added to SemaphoreSlim.  In general you should use SemaphoreSlim instead of the AsyncSemaphore in this post.
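
Following up on that last exchange, here’s a minimal sketch of using SemaphoreSlim’s built-in async support for the same throttling pattern.  It assumes .NET 4.5 or later; the ThrottledWorker type and the count of three are arbitrary choices for illustration:

using System;
using System.Threading;
using System.Threading.Tasks;

public class ThrottledWorker
{
    // SemaphoreSlim with an initial and maximum count of three (arbitrary for this sketch).
    private readonly SemaphoreSlim m_throttle = new SemaphoreSlim(3, 3);

    public async Task RunThrottledAsync(Func<Task> work)
    {
        await m_throttle.WaitAsync();   // asynchronously acquire a slot
        try
        {
            await work();
        }
        finally
        {
            m_throttle.Release();       // release the slot for the next waiter
        }
    }
}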
