In this article, Vance Morrison describes some of the issues involved in writing managed multithreaded code that avoids the use of locks.  In particular, he discusses the impact of memory models on lock-free (or low-lock) programming.

For managed code running on the CLR, there are two models that matter:

  1. The ECMA CLI memory model
  2. The Microsoft CLR memory model

The ECMA model is very "weak," meaning that there are many possibilities for reordering of memory operations.  The CLR model incorporates the ECMA model, but strengthens it with additional rules that prevent reorderings.  According to Chris Brumme, the CLR memory model was strengthened because it was thought that a stronger model was easier to code to.
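To illustrate the difference, consider two plain stores.  This is a hypothetical fragment (the names are my own); the comment about the CLR's behavior reflects Brumme's description, under which writes are not reordered with respect to one another:

```csharp
using System;

// Hypothetical illustration of store-store reordering.
public static class StoreOrderDemo
{
    public static int X, Y;   // plain, non-volatile fields

    public static void Writer()
    {
        X = 1;   // ECMA CLI model: this store may become visible to another
                 // thread AFTER the store to Y below.
        Y = 1;   // CLR model (as Brumme describes it): stores are not reordered
                 // with each other, so a thread that observes Y == 1 will
                 // also observe X == 1.
    }
}
```

Single-threaded, the two models are indistinguishable; the difference only matters to a second thread reading `X` and `Y` concurrently.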

So, when you're writing your own managed code, you should code to the strong CLR model, to make your life easier, right?  I'm not sure.

The process of writing correct lock-free code is inherently difficult.  You have to think very carefully about every single memory operation in your algorithm, and how each might interact with operations on other threads.  You must meticulously identify the required orderings.  Then you must consult the rules of the underlying memory model to find a way to enforce each ordering requirement.
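As a concrete example of that analysis, here is a hypothetical one-shot flag publication (the names are my own invention).  The comments enumerate the orderings that must hold; as written, with no volatile fields and no barriers, the ECMA model guarantees neither of them:

```csharp
// Sketch: one-shot publication of a value via a flag.
// Two orderings must be enforced for correctness:
//   (1) the write to _value must become visible before the write to _ready
//   (2) the read of _ready must happen before the read of _value
public class OneShotPublisher
{
    private int _value;
    private bool _ready;   // deliberately NOT volatile in this sketch

    public void Publish(int value)
    {
        _value = value;   // requirement (1): must be visible first...
        _ready = true;    // ...before this flag is seen as true
    }

    public bool TryConsume(out int value)
    {
        if (_ready)            // requirement (2): this read must come first...
        {
            value = _value;    // ...before this read
            return true;
        }
        value = 0;
        return false;
    }
}
```

On a single thread the code behaves correctly; the missing orderings only bite when `Publish` and `TryConsume` run on different threads.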

In a weak memory model, such as the ECMA standard, most ordering requirements will not be automatically satisfied by the rules of the model; you will have to do something explicit (insert a memory barrier, mark a variable as volatile, etc.) to enforce your requirements.  In a strong model, many requirements will be automatically satisfied, but some will not, and you still need to think about each one and make a decision about what needs to be done (or not done).
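For instance, under the ECMA model a flag-publication pattern can be made correct by marking the flag volatile: in the CLI specification, a volatile write has release semantics and a volatile read has acquire semantics.  A minimal sketch (the names are hypothetical):

```csharp
// Sketch: flag publication made correct under the ECMA model
// by marking the flag volatile.
public class VolatilePublisher
{
    private int _value;
    private volatile bool _ready;   // volatile supplies the required ordering

    public void Publish(int value)
    {
        _value = value;
        _ready = true;    // volatile write (release semantics): the write to
                          // _value cannot move below this store
    }

    public bool TryConsume(out int value)
    {
        if (_ready)       // volatile read (acquire semantics): the read of
        {                 // _value below cannot move above this load
            value = _value;
            return true;
        }
        value = 0;
        return false;
    }
}
```

An explicit barrier such as `Thread.MemoryBarrier()` between the two operations is another way to enforce the same orderings, at the cost of a stronger (full) fence than the pattern strictly requires.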

In my experience, programming to a strong model is not appreciably easier than programming to a weak one.  If anything, it's a bit harder, because there are so many more rules to keep in mind.  Add to that the fact that the CLR model is not standardized, and is not portable to other CLI implementations (such as our own Compact Framework), and it's hard for me to see why anyone would want to target this model.

So what is the value of the CLR memory model?

The key is in the statement above, that "many requirements will be automatically satisfied."  If, in the course of your analysis, you should happen to miss something (don't worry, it happens to everybody), you have a chance of being "saved" by the underlying strong memory model, and your code will run correctly. The CLR memory model acts as a sort of "safety net," just in case you make a mistake.

In my opinion, this is a good thing to have.  Lock-free/low-lock programming is so hard to get right that it's good to know the runtime may be able to save me from a mistake.

My advice: code to the ECMA model for portability, and take comfort in the fact that you may be able to get away with some mistakes.