NGen images are not guaranteed to be used by the runtime. There is a tradeoff in NGen's design that is constantly being evaluated: the robustness of images is balanced against the performance gained when images are used. At one extreme, NGen images are very robust and will always be used; in that case, however, a huge number of the minor (and not-so-minor) perf enhancements are not possible. At the other extreme, NGen images are highly tuned for performance but become increasingly fragile in the face of a changing environment. To be clear, “fragile” in this case means “might not be used by the runtime”. When we have made design decisions trending toward the latter option, the justification has been the following:
1) NGen is all about performance.
2) We always have the JIT to fall back to if NGen images become unusable.
With Whidbey, we also have a third mitigating factor:
3) The NGen tool provides new features (such as “ngen update”) that begin to help with image invalidation by regenerating all of an application’s images that might have been invalidated.
One particular case where native images can be fragile involves the managed security system, particularly link and inheritance demands. These demands are typically evaluated when the JIT tries to compile a method that calls another method with a link demand (or, alternatively, when we build a type that inherits from another type with an inheritance demand). Evaluating the demand triggers the managed security system to make a decision based on the current policy state of the machine and the evidence on the assembly being executed.
As with many runtime features, NGen introduces some interesting complications to this system. Evaluating these security demands at the point where they are needed can be expensive, both in terms of actually performing the evaluation and in lost optimization opportunities. For example, a call to a method with a LinkDemand could otherwise have been inlined; evaluating the LinkDemand at runtime means the call cannot be inlined. It seems reasonable, then, that at NGen time the evidence on the assembly could be evaluated and the result of the evaluation recorded into the native image. At run time the security system would not have to be invoked to process the demand; instead the code would execute as if the demand had never existed. Essentially, we’re burning an assumption about the security state into the native image.
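The idea can be made concrete with a small sketch. The Python below is purely illustrative: the function names, the policy/image data structures, and the zone-to-permission mapping are all invented for this example and do not correspond to actual CLR APIs. It models the NGen-time decision: if current policy grants the demanded permission, record that assumption in the image and emit a direct (inlinable) call; otherwise, emit a runtime security check.

```python
# Illustrative sketch only: names and structures are invented, not CLR APIs.
# Models NGen-time handling of a call site protected by a link demand.

def grants(policy, assembly_zone, permission):
    """Does current policy grant `permission` to assemblies in `assembly_zone`?"""
    return permission in policy.get(assembly_zone, set())

def ngen_compile_call(policy, assembly_zone, link_demand, image):
    """Compile one call site into `image`, recording any security assumption."""
    if link_demand is None:
        return "direct-call"                      # no demand; call can be inlined
    if grants(policy, assembly_zone, link_demand):
        image["required_permissions"].add(link_demand)
        return "direct-call"                      # demand assumed satisfied at NGen time
    return "runtime-demand"                       # emit a runtime security check instead

# A.dll is in the Intranet zone when NGen runs, so the demand succeeds and the
# assumption is burned into the image.
policy = {"Intranet": {"IntranetPermission"}, "Internet": set()}
image = {"required_permissions": set()}
print(ngen_compile_call(policy, "Intranet", "IntranetPermission", image))
# prints: direct-call
print(sorted(image["required_permissions"]))
# prints: ['IntranetPermission']
```

The key point of the sketch is that the security decision leaves a visible trace: every assumption made at NGen time ends up in the image's list of required permissions.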
But what if the security policy state changes after the assembly is NGen’ed? Consider an assembly, A.dll, which calls a method in another assembly that is protected with a link demand for a permission in the “Intranet” permission set. A.dll is in the “Intranet” zone and is NGen’ed, so NGen creates code that executes as if the link demand succeeded. Some time later, the security policy is changed such that A.dll now belongs to the “Internet” zone, which does not grant the permission A.dll previously had. Now, if you run the code in A.dll, you would not want the NGen image code to assume the link demand succeeded, because that would violate the current security policy.
To correct this problem, NGEN has to keep a list of all the “security assumptions” made in the code in a native image. Essentially this can be considered a list of permissions the native image requires to run. When the CLR loader tries to load a native image for an assembly, one of the steps it must take is to make sure the assembly currently has all the permissions in that list. If any one of these permission checks fails, then the loader simply rejects the native image and falls back to JIT compilation of the code in the assembly.
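The loader's check can be sketched the same way (again, illustrative only; the names and data structures are invented and do not reflect the actual CLR implementation): the native image is usable only if the assembly is still granted every permission recorded as an assumption when the image was generated.

```python
# Illustrative sketch only: invented names, not CLR APIs. Models the loader's
# validation of a native image's recorded security assumptions.

def grants(policy, assembly_zone, permission):
    """Does current policy grant `permission` to assemblies in `assembly_zone`?"""
    return permission in policy.get(assembly_zone, set())

def try_load_native_image(policy, assembly_zone, image):
    """Return 'native-image' if every recorded assumption still holds, else 'jit'."""
    for permission in image["required_permissions"]:
        if not grants(policy, assembly_zone, permission):
            return "jit"            # assumption violated: reject image, fall back to JIT
    return "native-image"

# A.dll's image was generated while it was in the Intranet zone, so the image
# requires IntranetPermission. Moving A.dll to the Internet zone invalidates it.
image = {"required_permissions": {"IntranetPermission"}}
policy = {"Intranet": {"IntranetPermission"}, "Internet": set()}
print(try_load_native_image(policy, "Intranet", image))   # prints: native-image
print(try_load_native_image(policy, "Internet", image))   # prints: jit
```

Note that rejection is silent from the program's point of view: the code still runs correctly under the new policy, just without the native image's performance benefits.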
What’s the point here? There are plenty of cases where NGen images work in partial trust. But realize that changing the security state of the system can be a factor in invalidating native images. With Whidbey, running the “ngen.exe update” command after making security policy changes is insurance against native images becoming unusable.