In a prior post, "Protecting against Pointer Subterfuge (Kinda!)", I described the algorithm we used to encode and decode long-lived pointers in memory to make them harder to exploit after a buffer overrun.

A couple of days after the post, I received an email from Steven Alexander about some research ("Defeating compiler-level buffer overflow protection") he had performed on the entropy used to encode the pointers. He pointed out that we might have a weakness: there was a reasonable chance the upper byte of the 32-bit random number could be zero. Steven sent me a link to his paper, and I suggested to a couple of kernel developers here that we tweak the algorithm. Based on the feedback from Steven, we have now changed the implementation in Windows Vista.

The entropy for the cookie was derived from:

  • The higher portion of the system tick count (100-nanosecond tick count since Jan 01, 1601)
  • The lower portion of the system tick count
  • The interrupt time (number of ticks spent during interrupts)
  • The number of system calls since system boot.

The entropy for the cookie is now derived from:

  • The higher portion of the system tick count (100-nanosecond tick count since Jan 01, 1601)
  • The lower portion of the system tick count
  • The interrupt time (number of ticks spent during interrupts)
  • The number of system calls since system boot
  • The (ULONG)rdtsc CPU value
  • The memory manager page fault count
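The post lists the entropy sources but not how they are mixed together; a simple XOR combination is one plausible way to fold them into a single 32-bit cookie. The sketch below is an illustration under that assumption (the function name and parameters are mine, not Windows APIs):

```c
#include <stdint.h>

/* Hypothetical sketch: XOR-mix the six entropy sources described in the
   post into a single 32-bit cookie. The actual Windows mixing function
   is not documented here; this only illustrates the idea. */
uint32_t derive_cookie(uint32_t tick_high,        /* high part of tick count  */
                       uint32_t tick_low,         /* low part of tick count   */
                       uint32_t interrupt_time,   /* ticks during interrupts  */
                       uint32_t syscall_count,    /* syscalls since boot      */
                       uint32_t rdtsc_low,        /* (ULONG)rdtsc, Vista only */
                       uint32_t page_fault_count) /* MM page faults, Vista    */
{
    uint32_t cookie = tick_high;
    cookie ^= tick_low;
    cookie ^= interrupt_time;
    cookie ^= syscall_count;
    cookie ^= rdtsc_low;
    cookie ^= page_fault_count;
    return cookie;
}
```

The value of the two Vista additions is that rdtsc and the page fault count change far more rapidly than the tick count, so even an attacker who can estimate boot time gains little predictive power over the cookie.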

This result is then rotated right on encode by Cookie % (sizeof (ULONG_PTR) * 8) bits. In pseudocode on a 32-bit CPU, encoding looks like this:

Ptrenc = (Ptrclear ^ Cookie) >>> (Cookie % 32)

The reason for the rotation is to make it harder to target partial pointer overwrites, because target bits are in an unknown position in the encoded pointer.
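The XOR-then-rotate scheme above can be sketched in portable C. This is not the Windows implementation, just a minimal illustration of the pseudocode for a 32-bit pointer; the helper names are mine:

```c
#include <stdint.h>

/* Rotate a 32-bit value right/left by n bits (n taken mod 32).
   Masking with 31 avoids the undefined behavior of shifting by 32. */
static uint32_t ror32(uint32_t v, unsigned n)
{
    n &= 31;
    return (v >> n) | (v << ((32u - n) & 31));
}

static uint32_t rol32(uint32_t v, unsigned n)
{
    n &= 31;
    return (v << n) | (v >> ((32u - n) & 31));
}

/* Encode: XOR with the cookie, then rotate right by (cookie % 32),
   matching the pseudocode  Ptrenc = (Ptrclear ^ Cookie) >>> (Cookie % 32). */
uint32_t encode_ptr(uint32_t ptr, uint32_t cookie)
{
    return ror32(ptr ^ cookie, cookie % 32);
}

/* Decode: undo the steps in reverse order — rotate left, then XOR. */
uint32_t decode_ptr(uint32_t enc, uint32_t cookie)
{
    return rol32(enc, cookie % 32) ^ cookie;
}
```

Note that decode must invert the operations in the opposite order: rotate first, then XOR. An attacker who overwrites, say, the low byte of the encoded pointer cannot know which bits of the real pointer that byte corresponds to without knowing the cookie.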