Java and .NET have several things in common: their runtimes can both execute code written in a machine-independent "assembly language". As we know, this code is represented in a binary format: bytecode in the Java world, and IL in .NET. The general idea of a "bytecode" is pretty old; UCSD Pascal had a similar concept called p-code back in the 1970s, and Smalltalk later built on the same idea.

Usually, these generic instructions are not executed directly; rather, they are translated into machine code on the fly, in a process called JIT compilation. The acronym stands for Just-In-Time, and the work it does is similar to the back-end phase of a C/C++ compiler. This is not a new idea either; if I remember correctly, Smalltalk-80 was the first to implement it in a successful manner.

As a side note, in the case of .NET these instructions were specifically designed to enable a simpler language but also a potentially faster JIT process. For example, IL has a single, virtual "add" instruction which adds whatever two numeric operands are on the stack, irrespective of their types. The runtime can still perform the right optimization, since the argument types can be deduced from the metadata anyway. This contrasts somewhat with the Java approach, where you have several flavors of "add", one for each numeric type (iadd, ladd, fadd, dadd).
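To make the contrast concrete, here is a small, made-up Java example. Compiling it and running `javap -c` on the class shows that javac emits a differently-typed add instruction per method, whereas a C# compiler would emit the same `add` IL opcode in both cases:

```java
public class AddDemo {
    // javac compiles this body with the iadd bytecode (32-bit integer add)
    static int addInts(int a, int b) {
        return a + b;
    }

    // ...and this one with dadd (double-precision add):
    // the operand type is baked into the instruction itself
    static double addDoubles(double a, double b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(addInts(2, 3));
        System.out.println(addDoubles(1.5, 2.25));
    }
}
```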

On the other hand, the fact that the JIT relies so heavily on metadata also implies that it is pretty hard to design a ".NET/IL processor", since such a chip would have to understand both the .NET metadata and the IL at the same time. OK, maybe not impossible, but certainly hard. Java bytecode, in contrast, was initially designed to be run on a processor, not in a JIT environment.

But JIT compilation brings challenges of its own. First, as soon as a process exits, all the optimization information gathered during that run is lost, so you have to JIT again and again, each time the process starts. Second, every start loses some time compiling the IL/bytecode.

A natural idea is to cache the compiled images on disk, so that on the next start you just load them and run from there. Even more than that, there is an optimization called Pre-JIT that allows the CLR (starting with version 1.0) to pre-compile a .NET assembly ahead of time and persist the generated machine-dependent code in a native image. Pre-JIT helps, for example, to get better load times for GUI-style apps.
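For the curious, on the .NET Framework this pre-compilation is exposed through the `ngen.exe` tool. A minimal sketch, assuming a Windows machine with the Framework installed and an assembly called MyApp.exe (a made-up name):

```shell
# generate and cache a native image for the assembly (and its dependencies)
ngen install MyApp.exe

# remove the cached native image again
ngen uninstall MyApp.exe
```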

I wonder why no Java virtual machine has something similar these days.

[update: fixing the link about UCSD Pascal]