The synchronization primitives are handed off to the runtime through the IHostSyncManager interface.  We’ve already provided this to the runtime via our GetHostManager callback on IHostControl.  Our IHostSyncManager is implemented on our IHostTaskManager, and for the most part we just create objects and hand them off.  For example, CreateCrst looks like:

 

      *ppCrst = (IHostCrst *)new CHostCriticalSection(this);
      if(NULL == *ppCrst)
      {
            return(E_OUTOFMEMORY);
      }
      (*ppCrst)->AddRef();
      return(S_OK);

 

None of the other creation methods are much different.  Our SetCLRSyncManager implementation does nothing; a more sophisticated host could hold on to the ICLRSyncManager it receives and use it to help perform deadlock detection.

 

CoopFiber implements four synchronization primitives: auto events, manual events, critical sections, and a semaphore.  All of these, with the exception of the critical section, are just thin wrappers over the equivalent OS APIs.  For example, look at CHostManualEvent::Wait:

 

      DWORD result;
      if(option & WAIT_ALERTABLE)
      {
            // alertable wait so queued APCs can run
            result = WaitForSingleObjectEx(m_hEvent, dwMilliseconds, TRUE);
      }
      else
      {
            result = WaitForSingleObject(m_hEvent, dwMilliseconds);
      }

      switch(result)
      {
            case WAIT_OBJECT_0:
                  return(S_OK);
            case WAIT_ABANDONED:
                  return(HOST_E_ABANDONED);
            case WAIT_IO_COMPLETION:
                  // an APC interrupted the wait; record that the task
                  // was alerted and report the interruption
                  CHostTask::GetCurrentTask()->FlagSet(TASK_FLAG_ALERTED, false);
                  return(HOST_E_INTERRUPTED);
            case WAIT_TIMEOUT:
                  return(HOST_E_TIMEOUT);
            default:
                  _ASSERTE(!"Shouldn't reach here");
                  return(E_FAIL);
      }

 

This implementation is nearly identical to the Join implementation we saw last time, and the auto event and semaphore implementations are nearly identical as well.  More sophisticated hosts could perform deadlock detection underneath these events (for reader/writer locks and monitors built on top of them), or they could choose to schedule other fibers on the waiting threads.  For this simple implementation we just block the current thread.

 

The critical section is the only synchronization primitive that doesn’t simply wrap the OS API.  That’s because a critical section is owned by a specific task (unlike the events and the semaphore), so it must be aware of fibers in addition to threads.  Otherwise one fiber could acquire the critical section, be switched out, and another fiber running on the same thread could incorrectly re-acquire it.

 

The essence of acquiring the critical section lives in TryEnter, which Enter builds on:

 

 

      *pbSucceeded = FALSE;

      if(m_holderTask == NULL)
      {
            CritSecHolder cs(&m_critSec);

            // no one holds the critical section; the event is signaled
            // only while the critical section is free
            if(m_holderTask == NULL && WaitForSingleObject(m_hEvent, 0) == WAIT_OBJECT_0)
            {
                  // we've acquired the crit section
                  m_holderTask = curTask;
                  m_dwEnterCount = 1;
                  *pbSucceeded = TRUE;
                  return(S_OK);
            }
      }
      else if(m_holderTask == curTask)
      {
            CritSecHolder cs(&m_critSec);

            // we already hold the critical section; just bump the count
            if(m_holderTask == curTask)
            {
                  m_dwEnterCount++;
                  *pbSucceeded = TRUE;
                  return(S_OK);
            }
      }

      // couldn't acquire; *pbSucceeded remains FALSE
      return(S_OK);

 

There are a couple of points to note.  First, we use an internal OS critical section to protect our own state, so we don’t have to worry about races with other threads entering at the same time.  Second, we use an event to block the task when the critical section cannot be acquired.  A more sophisticated fiber scheduler would want to schedule another fiber at that point rather than block the current thread.

 

Those are the core synchronization primitives we use in CoopFiber.  By limiting the scope of what CoopFiber does, we were able to rely on the OS APIs in most circumstances.  Next time I’ll start discussing the managed / unmanaged fiber API interface.