A reader wrote in recently and asked me why the non-caching version of the following code was faster than the caching version.  This goes against the intuition of any longtime C/C++ developer.  The reader found Mark's blog post, which confirmed his findings, but wanted to know *why* it was true…

 

//caching example
void m(Object[] ar) {
    int len = ar.Length; // cache length
    for (int i = 0; i < len; i++) {
        // do something with ar[i]
    }
}

//non-caching example
void m(Object[] ar) {
    for (int i = 0; i < ar.Length; i++) {
        // do something with ar[i]
    }
}

 

The time for a loop like this is often noticeably affected by the array bounds check that happens each time you index into the array.  This is an important feature of managed code, as it greatly reduces the risk of a common class of security vulnerabilities known as buffer overflows.
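For example (a minimal sketch; the array size and names here are arbitrary), indexing past the end of a managed array fails safely with an exception instead of silently reading adjacent memory:

```csharp
using System;

class BoundsCheckDemo {
    static void Main() {
        Object[] ar = new Object[3];
        try {
            // Valid indexes are 0..2, so this access fails the
            // runtime's bounds check.
            Console.WriteLine(ar[3]);
        } catch (IndexOutOfRangeException) {
            Console.WriteLine("caught by the bounds check -- no overflow");
        }
    }
}
```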

 

But if you look at the code carefully, you will see that the checks are redundant.  The loop starts at 0 (the low bound of the array) and stops before the end of the array, so there is no need to check those conditions again on every iteration.  If the engine recognizes the pattern, it can check just once, at the top of the loop, that 0 and ar.Length are within the bounds of the array.  This process is known as hoisting: we hoist, or pull, the range checks up out of the loop and check them once rather than many times.
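Conceptually (this is only a sketch of the transformation, not the actual code the JIT emits), the non-caching loop behaves as if it were compiled like this:

```csharp
//non-caching example, after the JIT removes the redundant checks (sketch)
void m(Object[] ar) {
    // The loop condition i < ar.Length, together with i starting at 0,
    // already guarantees that every ar[i] access is in range, so no
    // per-iteration bounds check is needed inside the loop body.
    for (int i = 0; i < ar.Length; i++) {
        // do something with ar[i] -- accessed without a bounds check
    }
}
```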

 

So the reason the "non-caching" sample is faster is that the engine recognizes its pattern, but not the pattern in the caching example.  Could we teach the JIT about the other pattern?  Sure, and maybe we will someday, but every new special case we add increases JIT compilation time, so we have to strike a balance.

 

The moral here is not "learn all the JIT tricks," because those change from release to release.  The lesson is more along the lines of: measure, measure, measure when you are doing perf work -- and don't try to outsmart the system ;-)
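If you want to see the difference for yourself, a simple Stopwatch harness is enough.  This is a sketch: the method names, array size, and iteration counts are arbitrary, the loop bodies do a trivial null count just so there is real work to time, and the exact numbers will vary by machine and runtime version.

```csharp
using System;
using System.Diagnostics;

class LoopTiming {
    const int Iterations = 10000;

    static void Main() {
        Object[] ar = new Object[10000];

        // Warm up both methods so JIT compilation cost isn't measured.
        CachedLength(ar);
        UncachedLength(ar);

        var sw = Stopwatch.StartNew();
        for (int n = 0; n < Iterations; n++) CachedLength(ar);
        Console.WriteLine("cached:   {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int n = 0; n < Iterations; n++) UncachedLength(ar);
        Console.WriteLine("uncached: {0} ms", sw.ElapsedMilliseconds);
    }

    static int CachedLength(Object[] ar) {
        int count = 0;
        int len = ar.Length; // cache the length
        for (int i = 0; i < len; i++) {
            if (ar[i] != null) count++;
        }
        return count;
    }

    static int UncachedLength(Object[] ar) {
        int count = 0;
        for (int i = 0; i < ar.Length; i++) {
            if (ar[i] != null) count++;
        }
        return count;
    }
}
```

Run each measurement several times and in a release build; timing a debug build, or a single short run, will tell you very little.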

 

Please check out the managed code section of the book Improving .NET Application Performance and Scalability, to which our local perf expert contributed heavily.

 

Hope that was helpful!