Corrupted Heap Termination Redux

Hi, Michael here.

In a previous post I explained how to use HeapSetInformation correctly. In short, there's an option when calling this function that will terminate your application if the heap manager detects some form of heap corruption, or an operation that could cause heap corruption.

I would recommend you read the previous post before continuing.
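
If you just want the punch line, enabling the defense is a single call made as early as possible in process startup. Here's a minimal sketch (the error handling is illustrative):

```cpp
// Minimal sketch: opt the process into heap terminate-on-corruption
// as early as possible in startup.
#include <windows.h>
#include <stdio.h>

int main()
{
    // A NULL heap handle applies the setting to the whole process;
    // this information class takes no extra data. Once enabled, the
    // setting cannot be turned off.
    if (!HeapSetInformation(NULL,
                            HeapEnableTerminationOnCorruption,
                            NULL,
                            0))
    {
        printf("HeapSetInformation failed: %lu\n", GetLastError());
        return 1;
    }

    // ... rest of the application ...
    return 0;
}
```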

You guessed it, the number one email I got after this post was, "So, what sort of corruption will terminate my app?"

So for all those who emailed me, here's a list:

  • Corruption of an uncommitted range (a region inside a heap segment that is reserved but not committed)
  • Heap header corruption, for example an invalid heap header checksum. This can affect a single header or multiple headers.
  • A walk of the large virtual blocks shows corruption. (All blocks above roughly 512 KB on x86 and 1 MB on 64-bit Windows are not allocated from segments; they are direct virtual allocations. The heap just holds a list of them, along with some metadata to ensure consistency with the rest of the heap. The blocks are chained in a doubly linked list, so corruption can be detected by walking the list.)
  • Buffer overrun: the size stored in the next block's header does not match the expected size of the current block.
  • Buffer underrun: same idea, but the size stored in the previous block's header does not match the expected size of the current block.
  • Attempting to free an already-freed block (a double-free bug; see the sketch after this list)
  • Attempting to free a block that is not 8-byte aligned
  • Passing a bogus heap handle; it could be an invalid heap or a handle to a different heap
  • Corruption of the free block list. This is a bit of a catch-all, covering cases such as writing to a block after it has been freed, or overrunning a previous block and stepping on a free list entry.
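
To make one of these concrete, here's a deliberately buggy sketch of the double-free case. With termination on corruption enabled, the heap manager can detect the second free and fail fast rather than let the process limp along in a corrupt (and exploitable) state:

```cpp
// Deliberately broken code, for illustration only: a double-free that
// the terminate-on-corruption setting is designed to catch.
#include <windows.h>

int main()
{
    HeapSetInformation(NULL, HeapEnableTerminationOnCorruption, NULL, 0);

    HANDLE heap = GetProcessHeap();
    void *p = HeapAlloc(heap, 0, 64);

    HeapFree(heap, 0, p);
    // Second free of the same block: the heap manager can detect this
    // and terminate the process immediately.
    HeapFree(heap, 0, p);

    return 0; // not reached if the corruption is detected
}
```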

But there is one huge and critically important caveat to this defense: it only works if you use the Windows heap manager. You might be surprised to learn that many applications implement their own heap functionality, often for legacy reasons rooted in the historically poor performance of operating-system heap managers. (A great deal of performance work went into the Windows Vista and Windows Server 2008 heap managers, but that work is beyond the scope of this post.) Another common scenario is to allocate a huge block of memory from the operating system and then perform custom allocation within that block, as the sketch below illustrates. If you do this, you will get no benefit from the heap corruption termination capability, and you will still be subject to repeatable heap-based attacks.
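
Here's a hypothetical sketch of that pattern (PoolAlloc and the bump-pointer scheme are invented for this example). Nothing inside the privately managed region is visible to the Windows heap manager, so nothing in it can trigger termination:

```cpp
// Hypothetical custom sub-allocator: one big VirtualAlloc'd region
// carved up by a private bump-pointer allocator, with no block
// headers, no checksums, and no corruption checks of any kind.
#include <windows.h>

static BYTE  *g_pool;   // base of the privately managed region
static SIZE_T g_offset; // next free byte

void *PoolAlloc(SIZE_T size)
{
    void *p = g_pool + g_offset;
    g_offset += (size + 7) & ~(SIZE_T)7; // 8-byte align; no bounds check
    return p;
}

int main()
{
    g_pool = (BYTE *)VirtualAlloc(NULL, 1 << 20,
                                  MEM_RESERVE | MEM_COMMIT,
                                  PAGE_READWRITE);

    char *a = (char *)PoolAlloc(100);
    (void)a;
    // Overrunning 'a' tramples the neighboring sub-allocation, and the
    // Windows heap manager never sees it -- nothing terminates.
    return 0;
}
```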

Another downside of not using the native Windows heap manager (or of layering your own sub-allocation mechanism on top of it) is that you cannot take advantage of Windows leak-detection tools, because you are either not using the Windows heap the way it's meant to be used or not using it at all.

With all this said, I realize that moving off a custom heap onto the Windows heap is never an easy task, but if you want to take advantage of this defense, you should add "Move off our custom heap" to your list of development tasks.

Comments
  • I just added a post over on the SDL blog about heap corruption and process termination as well as some…

  • One reason people allocate a block of memory from the OS and then do custom sub-allocations out of it is to implement a fixed block size (aka small block) allocator. If you know you're going to be doing many allocations that are all the same fixed size, this can be more efficient, in theory.
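
For readers who haven't seen one, here's a hypothetical sketch of the fixed block size allocator the commenter describes: a singly linked free list threaded through equal-sized slots, giving O(1) allocate and free. The names and sizes are invented for illustration.

```cpp
// Hypothetical fixed block size ("small block") allocator: a singly
// linked free list threaded through equal-sized slots.
#include <windows.h>

#define BLOCK_SIZE  32      // every allocation is exactly this size
#define BLOCK_COUNT 1024

static void *g_freeList;

void FixedPoolInit(void)
{
    alignas(void *) static BYTE pool[BLOCK_SIZE * BLOCK_COUNT];
    // Push every slot onto the free list.
    for (int i = 0; i < BLOCK_COUNT; i++) {
        void **slot = (void **)(pool + i * BLOCK_SIZE);
        *slot = g_freeList;
        g_freeList = slot;
    }
}

void *FixedAlloc(void)   // pop the free list head
{
    void *p = g_freeList;
    if (p) g_freeList = *(void **)p;
    return p;
}

void FixedFree(void *p)  // push the block back on the free list
{
    *(void **)p = g_freeList;
    g_freeList = p;
}
```

Note that this illustrates the post's point: a caller bug such as passing the same slot to FixedFree twice silently corrupts this private free list, and the Windows heap checks described above never run.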
