Memory is usually a shared resource on multithreaded systems; therefore, access to it must be regulated and fully specified. This specification is often called a “Memory Model”.
Optimisations performed by compilers and the emergence of multi-core processors are some of the factors testing the agility of today’s memory models. The simplest such model is Sequential Consistency, in which all reads and writes from all threads are effectively queued and performed sequentially. Although it is simple, it hampers scalability and performance. Read this great MSDN magazine article on memory models.
I don’t want to bore you by revisiting the basic concepts of multiprocessor machines. However, let me remind you that processor buffers and caches heavily constrain memory models. Caches can effectively move reads and writes in time. For instance, the value for a memory location present in a processor cache can be a copy of an earlier value for that location in main memory, so reading the cached value effectively moves the read earlier in time.
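To see why this matters in practice, here is a minimal sketch (the class and field names are mine) of the classic stale-read scenario: without `volatile`, the worker thread is allowed to keep reading a cached (earlier) value of the flag and spin forever.

```csharp
using System;
using System.Threading;

class StaleReadDemo
{
    // Without 'volatile', the JIT is free to hoist the read of _done out of
    // the loop, so the worker thread could spin on a stale value forever.
    private static volatile bool _done;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            while (!_done) { /* spin until the main thread signals */ }
            Console.WriteLine("Observed the write.");
        });
        worker.Start();

        Thread.Sleep(100);
        _done = true;   // a volatile write: guaranteed to become visible
        worker.Join();  // to the spinning thread
    }
}
```

Declaring the flag `volatile` is one fix; taking a lock around the flag or issuing an explicit memory barrier would serve equally well.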
Here I have tried to summarise the memory model rules for CLR 2.0 (.NET 2.0, 3.0 and 3.5) as I know them (Joe also covers the same topic here):
1. Reads and writes cannot be introduced
2. Reads cannot move before entering a lock and writes cannot move after exiting a lock
3. No reads or writes can move before a volatile read or after a volatile write
4. All writes have the effect of a volatile write
5. Duplicate adjacent reads/writes from/to the same location from the same thread can be reduced to one read/write
6. Reads cannot move before a write to the same location from the same thread
7. Reads and writes cannot move after a full-barrier (such as Thread.MemoryBarrier).
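Rules 2 and 4 are what make the canonical double-checked locking pattern safe on CLR 2.0 without marking the field `volatile`. A sketch (the class name is mine):

```csharp
using System;

public sealed class Singleton
{
    private static readonly object _sync = new object();
    private static Singleton _instance;

    public static Singleton Instance
    {
        get
        {
            if (_instance == null)      // lock-free fast path
            {
                lock (_sync)            // rule 2: reads cannot move before
                {                       // entering the lock
                    if (_instance == null)
                    {
                        // Rule 4: because every write has the effect of a
                        // volatile (release) write, the reference cannot be
                        // published before the constructor's writes complete.
                        _instance = new Singleton();
                    }
                }
            }
            return _instance;
        }
    }
}
```

Under the weaker ECMA model, by contrast, the field would need to be `volatile` (or the assignment guarded by a barrier) for this pattern to be correct.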
So what happened to the following rules?
- “Writes cannot move past other writes from the same thread” is enforced by rule 4
- “Data dependence among reads and writes is never violated” is enforced by rules 4 and 6
- “All writes have release semantics” is enforced by rule 4
- “Reads or writes from a given thread to a given location cannot pass a write from the same thread to the same location” is enforced by rule 4
Now that we know the rules, wouldn’t it be nice to have a tool that could automatically check our .NET code to ensure that any valid reordering keeps the semantics intact, and suggest possible additions of barriers and volatile operations? Well, there is such a tool, but as far as I understand, it relies heavily on the weaker memory model defined by the ECMA CLI spec, and it is not complete. Also, if you are interested in knowing more about memory model verification, take a look at this.
The other interesting aspect of CLR 2.0 is that it guarantees that read and write access to a properly aligned (the default behaviour) memory location no larger than the native word size (4-byte aligned on a 32-bit processor, 8-byte aligned on a 64-bit processor) is atomic. This is defined in section 12.6 of the ECMA CLI Spec. Therefore you can be certain that at least 4-byte assignments in CLR 2.0 are atomic.
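A `long` is 8 bytes, so on a 32-bit processor a plain assignment to it can tear (each 4-byte half written separately). A sketch of the safe alternative, using the `Interlocked` class (the field name is mine):

```csharp
using System;
using System.Threading;

class AtomicityDemo
{
    // 8 bytes: plain reads and writes of this field may tear on a 32-bit CLR.
    private static long _counter;

    static void Main()
    {
        // Interlocked.Increment is an atomic read-modify-write on any bitness.
        Interlocked.Increment(ref _counter);

        // Interlocked.Read is the atomic way to read a 64-bit value on 32-bit.
        long snapshot = Interlocked.Read(ref _counter);
        Console.WriteLine(snapshot);
    }
}
```

On a 64-bit processor the plain assignment would happen to be atomic, but relying on `Interlocked` keeps the code correct on both.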
Why do you need to know this? Well, writing good concurrent code, particularly lock-free or low-lock data structures, relies heavily on developers having a deep understanding of the memory model. Also, memory models can change, so if you stick to a weaker memory model today, such as the ECMA memory model, it is more likely that your application stays forward-compatible.