August, 2004

  • The Old New Thing

    Why is the virtual address space 4GB anyway?

    • 58 Comments

    The size of the address space is capped by the number of unique pointer values. For a 32-bit processor, a 32-bit value can represent 2³² distinct values. If you allow each such value to address a different byte of memory, you get 2³² bytes, which equals four gigabytes.

    If you were willing to forgo the flat memory model and deal with selectors, then you could combine a 16-bit selector value with a 32-bit offset to form a combined 48-bit pointer value. This creates a theoretical maximum of 2⁴⁸ distinct pointer values, which, if you allowed each such value to address a different byte of memory, yields 256TB of memory.

    This theoretical maximum cannot be achieved on the Pentium class of processors, however. One reason is that the lower bits of the selector value encode information about the type of selector. As a result, of the 65536 possible selector values, only 8191 are usable to access user-mode data. This drops you to 32TB.
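
    (A quick back-of-the-envelope check of those figures, sketched in C; the variable names are mine:)

    ULONGLONG maxTheoretical = 1ULL << 48;  // 2^48 bytes = 256TB
    ULONGLONG maxUsable = 8191ULL << 32;    // 8191 selectors x 4GB each,
                                            // about 2^45 bytes = 32TB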

    The real limitation on the address space using the selector:offset model is that each selector merely describes a subset of a flat 32-bit address space. So even if you could get to use all 8191 selectors, they would all just be views on the same underlying 32-bit address space.

    (Besides, I seriously doubt people would be willing to return to the exciting days of segmented programming.)

    In 64-bit Windows, the 2GB limit is gone; the user-mode virtual address space is now a stunning 8 terabytes. Even if you allocated a megabyte of address space per second, it would take you three months to run out. (Notice however that you can set /LARGEADDRESSAWARE:NO on your 64-bit program to tell the operating system to force the program to live below the 2GB boundary. It's unclear why you would ever want to do this, though, since you're missing out on the 64-bit address space while still paying for it in pointer size. It's like paying extra for cable television and then not watching.)
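
    (Checking the three-month figure, as a sketch:)

    ULONGLONG addressSpace = 8ULL << 40;  // 8TB of user-mode address space
    ULONGLONG perSecond = 1ULL << 20;     // allocating 1MB per second
    ULONGLONG seconds = addressSpace / perSecond;  // 8,388,608 seconds
    ULONGLONG days = seconds / 86400;     // about 97 days, roughly three months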

    Armed with what you have learned so far, maybe you can respond to this request that came in from a customer:

    One of our boot.ini files has a /7GB switch. Our consultant told us that we should set it to 1GB less than the system memory. Since we have 8GB, 8GB - 1GB = 7GB. The consultant said that setting this value allows an application to allocate more than 2GB of memory. We would like Microsoft to comment on this analysis.
  • The Old New Thing

    Myth: The /3GB switch expands the user-mode address space of all programs

    • 46 Comments

    Only programs marked as /LARGEADDRESSAWARE are affected.

    For compatibility reasons, only programs that explicitly indicate that they are prepared to handle a virtual address space larger than 2GB will get the larger virtual address space. Unmarked programs get the normal 2GB virtual address space, and the address space between 2GB and 3GB goes unused.

    Why?

    Because far too many programs assume that the high bit of user-mode virtual addresses is always clear, often unwittingly. MSDN has a page listing some of the ways programs make this assumption. One such assumption you may be making is taking the midpoint between two pointers by using the formula (a+b)/2. As I noted in a previous exercise, this is subject to integer overflow and consequently can result in an erroneous pointer computation. Consequently, you can't just take an existing program that you didn't write, mark it /LARGEADDRESSAWARE, and declare your job done. You have to check with the authors of that program that they verified that their code does not make any 2GB assumptions. (And the fact that the authors didn't mark their program as 3GB-compatible strongly suggests that no such verification has occurred. If it had, they would have marked the program /LARGEADDRESSAWARE!)
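
    To make the midpoint example concrete, here is a sketch of the buggy pattern and an overflow-safe alternative (the function names are mine, for illustration only):

    // If a and b are both above the 2GB boundary, the sum
    // (UINT_PTR)a + (UINT_PTR)b overflows a 32-bit value, and the
    // computed "midpoint" lands nowhere near the buffer.
    BYTE *MidpointWrong(BYTE *a, BYTE *b)
    {
      return (BYTE*)(((UINT_PTR)a + (UINT_PTR)b) / 2); // can overflow
    }

    // Overflow-safe alternative (assumes a <= b): the difference
    // b - a always fits, so no intermediate value can overflow.
    BYTE *MidpointSafe(BYTE *a, BYTE *b)
    {
      return a + (b - a) / 2;
    }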

    Marking your program /LARGEADDRESSAWARE indicates to the operating system, "Go ahead and give this program access to that extra gigabyte of user-mode address space," and as a result, addresses in the third gigabyte become possible return values from memory allocation functions. If you set the "Top down" flag in the memory manager allocation preferences mask (search for "top down"), you can instruct the memory manager to allocate high-address memory first, thereby forcing your program to deal with those addresses sooner than it normally would. This is very handy when testing your program in a /3GB configuration since it forces the troublesome memory addresses to be used sooner than normal.
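
    A related per-allocation technique (a sketch, not the allocation preferences mask itself): the MEM_TOP_DOWN flag to VirtualAlloc requests the highest available address for just that one allocation.

    // Request the highest available address for this one allocation;
    // in a /3GB configuration with /LARGEADDRESSAWARE, the returned
    // pointer will typically have its high bit set.
    void *p = VirtualAlloc(NULL, 0x10000,
                           MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN,
                           PAGE_READWRITE);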

    Exercise: Find the bug in the following function. Hint: What's today's topic?

    #define BUFFER_SIZE 32768
    BOOL  IsPointerInsideBuffer(const BYTE *p, const BYTE *buffer)
    {
      return p >= buffer && p - buffer < BUFFER_SIZE;
    }
    
  • The Old New Thing

    Why do some structures end with an array of size 1?

    • 41 Comments

    Some Windows structures are variable-sized, beginning with a fixed header, followed by a variable-sized array. When these structures are declared, they often declare an array of size 1 where the variable-sized array should be. For example:

    typedef struct _TOKEN_GROUPS {
        DWORD GroupCount;
        SID_AND_ATTRIBUTES Groups[ANYSIZE_ARRAY];
    } TOKEN_GROUPS, *PTOKEN_GROUPS;
    

    If you look in the header files, you'll see that ANYSIZE_ARRAY is #define'd to 1, so this declares a structure with a trailing array of size one.

    With this declaration, you would allocate memory for one such variable-sized TOKEN_GROUPS structure like this:

    PTOKEN_GROUPS TokenGroups =
       malloc(FIELD_OFFSET(TOKEN_GROUPS, Groups[NumberOfGroups]));
    
    and you would initialize the structure like this:
    TokenGroups->GroupCount = NumberOfGroups;
    for (DWORD Index = 0; Index < NumberOfGroups; Index++) {
      TokenGroups->Groups[Index] = ...;
    }
    

    Many people think it should have been declared like this:

    typedef struct _TOKEN_GROUPS {
        DWORD GroupCount;
    } TOKEN_GROUPS, *PTOKEN_GROUPS;
    

    (In this article, code that is wrong or hypothetical will be italicized.)

    The code that does the allocation would then go like this:

    PTOKEN_GROUPS TokenGroups =
       malloc(sizeof(TOKEN_GROUPS) +
              NumberOfGroups * sizeof(SID_AND_ATTRIBUTES));
    

    This alternative has two disadvantages, one cosmetic and one fatal.

    First, the cosmetic disadvantage: It makes it harder to access the variable-sized data. Initializing the TOKEN_GROUPS just allocated would go like this:

    TokenGroups->GroupCount = NumberOfGroups;
    for (DWORD Index = 0; Index < NumberOfGroups; Index++) {
      ((SID_AND_ATTRIBUTES *)(TokenGroups + 1))[Index] = ...;
    }
    

    The real disadvantage is fatal. The above code crashes on 64-bit Windows. The SID_AND_ATTRIBUTES structure looks like this:

    typedef struct _SID_AND_ATTRIBUTES {
        PSID Sid;
        DWORD Attributes;
    } SID_AND_ATTRIBUTES, *PSID_AND_ATTRIBUTES;
    

    Observe that the first member of this structure is a pointer, PSID. The SID_AND_ATTRIBUTES structure requires pointer alignment, which on 64-bit Windows is 8-byte alignment. On the other hand, the proposed TOKEN_GROUPS structure consists of just a DWORD and therefore requires only 4-byte alignment. sizeof(TOKEN_GROUPS) is four.

    I hope you see where this is going.

    Under the proposed structure definition, the array of SID_AND_ATTRIBUTES structures will not be placed on an 8-byte boundary but only on a 4-byte boundary. The necessary padding between the GroupCount and the first SID_AND_ATTRIBUTES is missing. The attempt to access the elements of the array will crash with a STATUS_DATATYPE_MISALIGNMENT exception.
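
    You can watch the compiler insert that padding with the real declaration; a small sketch (the value asserted assumes 64-bit Windows):

    #include <stddef.h>

    // With the actual ANYSIZE_ARRAY declaration, the array member forces
    // GroupCount to be padded out to the 8-byte alignment required by
    // SID_AND_ATTRIBUTES, so the array begins at offset 8, not 4.
    C_ASSERT(offsetof(TOKEN_GROUPS, Groups) == 8);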

    Okay, you may say, then why not use a zero-length array instead of a 1-length array?

    Because time travel has yet to be perfected.

    Zero-length trailing arrays (more precisely, the flexible array members that standard C eventually provided) did not become legal Standard C until 1999. Since Windows was around long before then, it could not take advantage of that functionality in the C language.
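
    For comparison, had C99 been available at the time, a flexible array member could have expressed the same idea directly. A hypothetical sketch (TOKEN_GROUPS99 is a made-up name, not the real header):

    #include <stddef.h>
    #include <stdlib.h>

    // Hypothetical C99 declaration: the flexible array member still
    // forces the correct alignment and padding before the array.
    typedef struct _TOKEN_GROUPS99 {
        DWORD GroupCount;
        SID_AND_ATTRIBUTES Groups[];  // C99 flexible array member
    } TOKEN_GROUPS99;

    // Allocation looks just like the ANYSIZE_ARRAY version:
    TOKEN_GROUPS99 *TokenGroups =
       malloc(offsetof(TOKEN_GROUPS99, Groups) +
              NumberOfGroups * sizeof(SID_AND_ATTRIBUTES));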

  • The Old New Thing

    Myth: Without /3GB a single program can't allocate more than 2GB of virtual memory

    • 40 Comments

    Virtual memory is not virtual address space (part 2).

    This myth is being perpetuated even as I write this series of articles.

    The user-mode virtual address space is normally 2GB, but that doesn't limit you to 2GB of virtual memory. You can allocate memory without it being mapped into your virtual address space. (Those who grew up with Expanded Memory or other forms of bank-switched memory are well-familiar with this technique.)

    // Create a 4GB pagefile-backed memory object: the maximum size is
    // passed as two DWORDs, high = 1 and low = 0, i.e. 1 * 2^32 = 4GB.
    HANDLE h = CreateFileMapping(INVALID_HANDLE_VALUE, 0,
                                 PAGE_READWRITE, 1, 0, NULL);
    

    Provided you have enough physical memory and/or swap file space, that 4GB memory allocation will succeed.

    Of course, you can't map it all into memory at once on a 32-bit machine, but you can do it in pieces. Let's read a byte from this memory.

    BYTE ReadByte(HANDLE h, DWORD offset)
    {
     SYSTEM_INFO si;
     GetSystemInfo(&si);
     // Views must be mapped at a multiple of the allocation granularity,
     // so round the desired offset down to the nearest boundary.
     DWORD chunkOffset = offset % si.dwAllocationGranularity;
     DWORD chunkStart = offset - chunkOffset;
     LPBYTE pb = (LPBYTE)MapViewOfFile(h, FILE_MAP_READ, 0,
          chunkStart, chunkOffset + sizeof(BYTE));
     BYTE b = pb[chunkOffset];
     UnmapViewOfFile(pb);
     return b;
    }
    

    Of course, in a real program, you would have error checking and probably a caching layer in order to avoid spending all your time mapping and unmapping instead of actually doing work.
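
    Putting the two fragments together, a hypothetical usage (error checking omitted, as in the original):

    // Allocate 4GB of pagefile-backed memory, read one byte from an
    // offset well past 2GB, and clean up, without ever mapping more
    // than one small view into the address space at a time.
    HANDLE h = CreateFileMapping(INVALID_HANDLE_VALUE, 0,
                                 PAGE_READWRITE, 1, 0, NULL);
    BYTE b = ReadByte(h, 0xC0000000);
    CloseHandle(h);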

    The point is that virtual address space is not virtual memory. As we have seen earlier, you can map the same memory to multiple addresses, so the one-to-one mapping between virtual memory and virtual address space has already been violated. Here we showed that just because you allocated memory doesn't mean that it has to occupy any space in your virtual address space at all.

    [Updated: 10:37am, fix minor typos reported in comments.]

  • The Old New Thing

    Myth: You need /3GB if you have more than 2GB of physical memory

    • 38 Comments

    Physical memory is not virtual address space.

    In my opinion, this is another non sequitur. I'm not sure what logical process led to this myth. It can't be a misapprehension of a one-to-one mapping between physical memory and virtual memory, because that mapping is blatantly not one-to-one. You typically have far more virtual memory than physical memory. Free physical memory doesn't have any manifestation in any virtual address space. And shared memory has manifestations in multiple virtual address spaces yet corresponds to the same physical page.

    Though this brings up a historical note.

    In Windows/386, the kernel just so happened to map all physical memory into the kernel-mode virtual address space. There was a function _MapPhysToLinear. You gave it a physical memory range and it returned the base of a range of linear addresses that could be used to access that physical memory. Some driver developers discovered that the kernel mapped all of physical memory and just handed out pointers into that single mapping. As a result, they called _MapPhysToLinear(0, 0x1000) and whenever they wanted to access physical memory in the future, they just added the address to the return value from that single call. In other words, they assumed that

     _MapPhysToLinear(p, x) = _MapPhysToLinear(0, x) + p 

    In Windows 95, the memory manager was completely rewritten and the above coincidence was no longer true. To conserve kernel-mode virtual address space, physical memory was now mapped linearly only as necessary.

    Of course, the drivers that relied on the old behavior were now broken because the undocumented behavior they relied upon was no longer present.

    As a result, when it starts up, Windows 95 looks around to see if any drivers known to rely on this undocumented behavior are loaded. (Windows 3.1 didn't support dynamically-loaded kernel drivers, so looking at boot time was sufficient.) If any are found, it goes ahead and maps all of physical memory into the kernel-mode virtual address space to keep those drivers happy. This wastes virtual address space but keeps your machine running.

    I can already hear people saying, "Microsoft shouldn't have made those buggy drivers work. They should have just let the computer crash in order to put pressure on the authors of those drivers to fix their bugs." This assumes, of course, that the cause of the crash could be traced back to the buggy driver in the first place. A very common manifestation of a stray pointer in kernel mode is memory corruption, which means that the component that crashes is rarely the one that caused the problem in the first place.

    For example, nearly all Windows 95 bluescreen crashes in VMM(01) are caused by memory corruption. VMM(01) is the non-swappable part of the Windows 95 kernel which is where the memory manager lives. If a driver corrupts the kernel-mode heap, a bluescreen in the memory manager is typically how the corruption manifests itself.

  • The Old New Thing

    Summary of the recent spate of /3GB articles

    • 36 Comments

    A table of contents now that the whole thing is over. I hope.

    I'm not sure how successful this series has been, though, for it appears that even people who have read the articles continue to confuse virtual address space with physical address space. (Or maybe this person is merely mocking a faulty argument? I can't tell for sure.)

  • The Old New Thing

    Writing your own menu-like window

    • 32 Comments

    Hereby incorporating by reference the "FakeMenu" sample in the Platform SDK. It's in the winui\shell\fakemenu directory.

    For those who don't have the Platform SDK, what are you doing writing Win32 programs without the Platform SDK? Download it if it didn't come with your development tools.

    If for some reason you don't want the Platform SDK yet you want to write Win32 programs (I bet you're the sort of person who throws away the manual as soon as you buy something), you can look at the version that Chris Becke has stashed away on this page.

  • The Old New Thing

    The oft-misunderstood /3GB switch

    • 32 Comments

    It's simple to explain what it does, but people often misunderstand.

    The /3GB switch changes the way the 4GB virtual address space is split up. Instead of splitting it as 2GB of user mode virtual address space and 2GB of kernel mode virtual address space, the split is 3GB of user mode virtual address space and 1GB of kernel mode virtual address space.

    That's all.

    And yet people think it does more than that.

    I think the problem is that people think that "virtual address space" means something other than just "virtual address space".

    The term "address space" refers to how a numerical value (known as an "address") is interpreted when it is used to access some type of resource. There is a physical address space; each address in the physical address space refers to a byte in a memory chip somewhere. (Note for pedants: Yes, it's actually spread out over several memory chips, but that's not important here.) There is an I/O address space; each address in the I/O address space allows the CPU to communicate with a hardware device.

    And then there is the virtual address space. When people say "address space", they usually mean "virtual address space".

    The virtual address space is the set of possible pointer values (addresses) that can be used at a single moment by the processor. In other words, if you have an address like 0x12345678, the virtual address space determines what you get if you try to access that memory. The contents of the virtual address space change over time, for example, as you allocate and free memory. They also vary based on context: each process has its own virtual address space.

    Saying that 2GB (or 3GB) of virtual address space is available to user mode means that at any given moment in time, out of the 4 billion virtual addresses available in a 32-bit value, 2 billion (or 3 billion) of them are potentially usable by user-mode code.

    Over the next few entries, I'll talk about the various consequences and misinterpretations of the /3GB switch.

  • The Old New Thing

    Why .shared sections are a security hole

    • 30 Comments

    Many people will recommend using shared data sections as a way to share data between multiple instances of an application. This sounds like a great idea, but in fact it's a security hole.

    Proper shared memory objects created by the CreateFileMapping function can be secured. They have security descriptors that let you specify which users are allowed to have what level of access. By contrast, anybody who loads your EXE or DLL gets access to your shared memory section.
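
    For contrast, a minimal sketch of the securable alternative (the object name is made up for illustration; error checking omitted):

    // A named, pagefile-backed section. Passing NULL for the security
    // attributes gives the object the creating user's default DACL;
    // you can instead supply an explicit security descriptor to grant
    // exactly the access you intend.
    HANDLE hMap = CreateFileMapping(INVALID_HANDLE_VALUE,
                                    NULL,            // default security
                                    PAGE_READWRITE,
                                    0, sizeof(int),  // one shared integer
                                    TEXT("MyAppSharedCounter"));
    int *pShared = (int*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                       0, 0, sizeof(int));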

    Allow me to demonstrate with an intentionally insecure program.

    Take the scratch program and make the following changes:

    #pragma comment(linker, "/SECTION:.shared,RWS")
    #pragma data_seg(".shared")
    int g_iShared = 0;
    #pragma data_seg()
    
    void CALLBACK TimerProc(HWND hwnd, UINT, UINT_PTR, DWORD)
    {
      int iNew = g_iShared + 1;
      if (iNew == 10) iNew = 0;
      g_iShared = iNew;
      InvalidateRect(hwnd, NULL, TRUE);
    }
    
    BOOL
    OnCreate(HWND hwnd, LPCREATESTRUCT lpcs)
    {
        SetTimer(hwnd, 1, 1000, TimerProc);
        return TRUE;
    }
    
    void
    PaintContent(HWND hwnd, PAINTSTRUCT *pps)
    {
      TCHAR sz[2];
      wsprintf(sz, TEXT("%d"), g_iShared);
      TextOut(pps->hdc, 0, 0, sz, 1);
    }
    

    Go ahead and run this program. It counts from 0 to 9 over and over again. Since the TimerProc function never lets g_iShared go above 9, the wsprintf is safe from buffer overflow.

    Or is it?

    Run this program. Then use the runas utility to run a second copy of this program under a different user. For extra fun, make one of the users an administrator and another a non-administrator.

    Notice that the counter counts up at double speed. That's to be expected since the counter is shared.

    Okay, now close one of the copies and relaunch it under a debugger. (It's more fun if you let the administrator's copy run free and run the non-administrator's copy under a debugger.) Let both programs run, then break into the debugger and change the value of the variable g_iShared to something really big, say, 1000000.

    Now, depending on how intrusive your debugger is, you might or might not see the crash. Some debuggers are "helpful" and "unshare" shared memory sections when you change their values from the debugger. Helpful for debugging (maybe), bad for my demonstration (definitely).

    Here's how I did it with the built-in ntsd debugger. I opened a command prompt, which runs as myself (and I am not an administrator). I then used the runas utility to run the scratch program as administrator. It is the administrator's copy of the scratch program that I'm going to cause to crash even though I am just a boring normal non-administrative user.

    From the normal command prompt, I typed "ntsd scratch" to run the scratch program under the debugger. From the debugger prompt, I typed "u TimerProc" to disassemble the TimerProc function, looking for

    01001143 a300300001       mov     [scratch!g_iShared (01003000)],eax
    
    (note: your numbers may differ). I then typed "g 1001143" to instruct the debugger to execute normally until that instruction is reached. When the debugger broke, I typed "r eax=12341234;t" to change the value of the eax register to 0x12341234 and then trace one instruction. That one-instruction trace wrote the out-of-range value into shared memory, and one second later, the administrator version of the program crashed with a buffer overflow.

    What happened?

    Since the memory is shared, all running copies of the scratch program have access to it. All I did was use the debugger to run a copy of the scratch program and change the value of the shared memory variable. Since the variable is shared, the value also changes in the administrator's copy of the program, which then causes the wsprintf buffer to overflow, thereby crashing the administrator's copy of the program.

    A denial of service is bad enough, but you can really do fun things if a program keeps anything of value in shared memory. If there is a pointer, you can corrupt the pointer. If there is a string, you can remove the null terminator and cause it to become "impossibly" long, resulting in a potential buffer overflow if somebody copies it without checking the length.

    And if there is a C++ object with a vtable, then you have just hit the mother lode! What you do is redirect the vtable to a bogus vtable (which you construct in the shared memory section), and put a function pointer entry in that vtable that points into some code that you generated (also into the shared memory section) that takes over the machine. (If NX is enabled, then the attack is much harder but still possible in principle.)

    Even if you can't trigger a buffer overflow by messing with variables in shared memory, you can still cause the program to behave erratically. Just scribbling random numbers all over the shared memory section will certainly induce "interesting" behavior in the program under attack.

    Moral of the story: Avoid shared memory sections. Since you can't attach an ACL to the section, anybody who can load your EXE or DLL can modify your variables and cause havoc in another instance of the program that is running at a higher security level.

  • The Old New Thing

    Why can't you treat a FILETIME as an __int64?

    • 27 Comments

    The FILETIME structure represents a 64-bit value in two parts:

    typedef struct _FILETIME {
      DWORD dwLowDateTime;
      DWORD dwHighDateTime;
    } FILETIME, *PFILETIME;
    

    You may be tempted to take the entire FILETIME structure and access it directly as if it were an __int64. After all, its memory layout exactly matches that of a 64-bit (little-endian) integer. Some people have written sample code that does exactly this:

    pi = (__int64*)&ft; // WRONG
    (*pi) += (__int64)num*datepart; // WRONG
    

    Why is this wrong?

    Alignment.

    Since a FILETIME is a structure containing two DWORDs, it requires only 4-byte alignment, since that is sufficient to put each DWORD on a valid DWORD boundary. There is no need for the first DWORD to reside on an 8-byte boundary. And in fact, you've probably already used a structure where it doesn't: The WIN32_FIND_DATA structure.

    typedef struct _WIN32_FIND_DATA {
        DWORD dwFileAttributes;
        FILETIME ftCreationTime;
        FILETIME ftLastAccessTime;
        FILETIME ftLastWriteTime;
        DWORD nFileSizeHigh;
        DWORD nFileSizeLow;
        DWORD dwReserved0;
        DWORD dwReserved1;
        TCHAR  cFileName[ MAX_PATH ];
        TCHAR  cAlternateFileName[ 14 ];
    } WIN32_FIND_DATA, *PWIN32_FIND_DATA, *LPWIN32_FIND_DATA;
    

    Observe that the three FILETIME structures appear at offsets 4, 12, and 20 from the beginning of the structure. They have been thrown off 8-byte alignment by the dwFileAttributes member.

    Casting a FILETIME to an __int64 therefore can (and in the WIN32_FIND_DATA case, will) create a misaligned pointer. Accessing a misaligned pointer will raise a STATUS_DATATYPE_MISALIGNMENT exception on architectures which require alignment.

    Even if you are on a forgiving platform that performs automatic alignment fixups, you can still run into trouble. More on this and other consequences of alignment in the next few entries.
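
    The supported approach is to copy the two halves into a properly-aligned 64-bit container such as a ULARGE_INTEGER, do the arithmetic there, and copy the result back. A sketch, reusing the num and datepart values from the wrong code above:

    ULARGE_INTEGER uli;
    uli.LowPart  = ft.dwLowDateTime;   // copy out of the FILETIME...
    uli.HighPart = ft.dwHighDateTime;
    uli.QuadPart += (ULONGLONG)num * datepart;  // ...do 64-bit math...
    ft.dwLowDateTime  = uli.LowPart;   // ...and copy back in.
    ft.dwHighDateTime = uli.HighPart;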

    Exercise: Why are the LARGE_INTEGER and ULARGE_INTEGER structures not affected?
