In his blog, Eric Eilebrecht explains why, when writing multithreaded applications today, we should stick to the weak ECMA memory model instead of the CLR's much stronger one.
In principle, I have no issue with using a weaker model than the CLR memory model, but my main concern is: at what cost are we prepared to endorse a weaker model?
Writing correct lock-free or low-lock data structures is challenging, to say the least, even when relying on a strong memory model such as the CLR's. The complexity can increase severalfold when targeting a weaker memory model.
Consciously, or more often instinctively, we write code based on a set of assumptions (requirements, APIs, memory models and hardware architecture). These assumptions, however, do change, and they are not always documented or easily transferable to other developers.
Imagine the following simplified implementation of a lock-free stack:
public class LockFreeStack<T> where T : class
{
    private LinkedListNode<T> _head;

    public void Push(T item)
    {
        var newNode = new LinkedListNode<T>();
        newNode.Item = item;
        do
        {
            newNode.NextNode = _head;
            // attempt: _head = newNode;
        } while (!SyncHelper.CompareAndSwap<LinkedListNode<T>>(
            ref _head, newNode.NextNode, newNode));
    }

    public T Pop()
    {
        LinkedListNode<T> node;
        do
        {
            node = _head;
            if (node == null)
                return null;
            // attempt: _head = node.NextNode;
        } while (!SyncHelper.CompareAndSwap<LinkedListNode<T>>(
            ref _head, node, node.NextNode));
        return node.Item;
    }
}
It makes a number of assumptions. For instance, it assumes that you are not interested in knowing the length of the stack. Adding a Count property to this class would require a massive rethink of its lock-free algorithm (it may add a second write to a shared memory location on every Pop and Push), and might even require a full rewrite of the class. So when assumptions change, it is not always trivial to refactor and repurpose the code; the utmost care must be taken to avoid introducing issues and bugs.
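To illustrate the cost of that extra shared write, here is a sketch (my own, not the original author's code; the names and the use of a plain Interlocked-based counter are assumptions) of what a counted variant might look like. Note that every Push and Pop now touches a second piece of shared state, and that the counter can only ever be a snapshot, since the head and the counter are not updated atomically together:

```csharp
using System.Threading;

public class CountedLockFreeStack<T> where T : class
{
    private class Node { public T Item; public Node Next; }

    private Node _head;
    private int _count; // extra shared state: every Push/Pop now writes here too

    public void Push(T item)
    {
        var node = new Node { Item = item };
        do
        {
            node.Next = _head;
        } while (Interlocked.CompareExchange(ref _head, node, node.Next) != node.Next);
        Interlocked.Increment(ref _count); // second shared write: a new contention point
    }

    public T Pop()
    {
        Node node;
        do
        {
            node = _head;
            if (node == null) return null;
        } while (Interlocked.CompareExchange(ref _head, node.Next, node) != node);
        Interlocked.Decrement(ref _count);
        return node.Item;
    }

    // Only an approximation: the count may be stale by the time the caller reads it.
    public int Count => Volatile.Read(ref _count);
}
```

Even this naive version changes the performance characteristics of the stack, which is exactly why bolting a Count onto the original design is not a trivial refactoring.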
In my experience, it is often simpler to rewrite the code and use as much testing as possible on multiprocessor machines.
I didn't understand what the first reason was. Can you clarify? I get that writing lock-free code is hard, and making even minor functionality changes often requires a complete rewrite. I don't get how the memory model impacts that - you're going to have to think about all the same things regardless of which model you assume (unless it's the "sequential consistency" model, which doesn't really exist).
Your second point illustrates why the CLR can never change its default memory model to be weaker. (Though an opt-in model, via something like an attribute on weak-model-aware methods, could maybe be done.)
My main point in the blog you referenced is that you're going to have to think about all the same ordering issues no matter what. The only difference is whether you think "those writes won't be reordered, because the memory model guarantees that two writes to the same memory location will never be reordered," or whether you think instead, "those writes won't be reordered because that variable is volatile," or maybe "they won't be reordered because I put a Thread.MemoryBarrier between them."
No matter what, you still have to think about it.
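The three phrasings above can be made concrete. Here is a small sketch (the class and member names are mine, for illustration) of the same publish/consume pattern written once with a volatile flag and once with Thread.MemoryBarrier; in both cases the point is the same: the write to _data must not be reordered past the write to the flag.

```csharp
using System.Threading;

public class Publication
{
    private int _data;
    private volatile bool _readyVolatile; // volatile: the JIT/CPU may not move the _data write past this flag
    private bool _readyPlain;

    public void PublishWithVolatile(int value)
    {
        _data = value;
        _readyVolatile = true;  // volatile write: release semantics keep _data visible first
    }

    public void PublishWithBarrier(int value)
    {
        _data = value;
        Thread.MemoryBarrier(); // explicit full fence between the two writes
        _readyPlain = true;
    }

    public bool TryRead(out int value)
    {
        if (_readyVolatile)     // volatile read: later loads may not move before it
        {
            value = _data;      // guaranteed to see the value written before the flag
            return true;
        }
        value = 0;
        return false;
    }
}
```

Which spelling you choose is a matter of what the code should document; the ordering requirement itself does not go away.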
What I like about code that assumes a weak model is that (a) there's less to keep track of while you're doing this thinking (because weaker models have fewer special cases), and (b) you can read the code and see where the author thought ordering mattered and where he didn't. This tends to lead to more correct, maintainable code - and it has the nice side-effect of allowing you to more easily port the code to other CLI implementations (though I'll grant that most folks may never do that).