November, 2007

  • The Old New Thing

    Why are INI files deprecated in favor of the registry?


    Welcome, Slashdot readers. Remember, this Web site is for entertainment purposes only.

    Why are INI files deprecated in favor of the registry? There were many problems with INI files.

    • INI files don't support Unicode. Even though there are Unicode variants of the private profile functions, they end up just writing ANSI text to the INI file. (There is a whacked-out way you can create a Unicode INI file, but you have to step outside the API in order to do it.) This wasn't an issue in 16-bit Windows since 16-bit Windows didn't support Unicode either!
    • INI file security is not granular enough. Since it's just a file, any permissions you set are at the file level, not the key level. You can't say, "Anybody can modify this section, but that section can be modified only by administrators." This wasn't an issue in 16-bit Windows since 16-bit Windows didn't do security.
    • Multiple writers to an INI file can result in data loss. Consider two threads that are trying to update an INI file. If they are running simultaneously, you can get this:
      Thread 1              Thread 2
      Read INI file
                            Read INI file
      Write INI file + X
                            Write INI file + Y
      Notice that thread 2's update to the INI file accidentally deleted the change made by thread 1. This wasn't a problem in 16-bit Windows since 16-bit Windows was co-operatively multi-tasked. As long as you didn't yield the CPU between the read and the write, you were safe because nobody else could run until you yielded.
    • INI files can suffer a denial of service. A program can open an INI file in exclusive mode and lock out everybody else. This is bad if the INI file was being used to hold security information, since it prevents anybody from seeing what those security settings are. This was also a problem in 16-bit Windows, but since there was no security in 16-bit Windows, a program that wanted to launch a denial of service attack on an INI file could just delete it!
    • INI files contain only strings. If you wanted to store binary data, you had to encode it somehow as a string.
    • Parsing an INI file is comparatively slow. Each time you read or write a value in an INI file, the file has to be loaded into memory and parsed. If you write three strings to an INI file, that INI file gets loaded and parsed three times and written out to disk three times. In 16-bit Windows, three consecutive INI file operations would result in only one parse and one write, because the operating system was co-operatively multi-tasked. When you accessed an INI file, it was parsed into memory and cached. The cache was flushed when you finally yielded the CPU to another process.
    • Many programs open INI files and read them directly. This means that the INI file format is locked and cannot be extended. Even if you wanted to add security to INI files, you can't. What's more, many programs that parsed INI files were buggy, so in practice you couldn't store a string longer than about 70 characters in an INI file or you'd cause some other program to crash.
    • INI files are limited to 32KB in size.
    • The default location for INI files was the Windows directory! This definitely was bad for Windows NT since only administrators have write permission there.
    • INI files contain only two levels of structure. An INI file consists of sections, and each section consists of strings. You can't put sections inside other sections.
    • [Added 9am] Central administration of INI files is difficult. Since they can be anywhere in the system, a network administrator can't write a script that asks, "Is everybody using the latest version of Firefox?" They also can't deploy scripts that say "Set everybody's Firefox settings to XYZ and deny write access so they can't change them."
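For reference, the two-level structure described above (sections containing string key/value pairs) is simple enough to parse by hand. A minimal sketch in portable C++ — not the Windows profile-API implementation, just an illustration of how little structure the format holds:

```cpp
#include <map>
#include <sstream>
#include <string>

// Parse INI text into section -> (key -> value). Everything is a string;
// binary data would have to be encoded, and sections cannot nest --
// exactly the limitations described above.
std::map<std::string, std::map<std::string, std::string>>
ParseIni(const std::string& text)
{
    std::map<std::string, std::map<std::string, std::string>> result;
    std::istringstream stream(text);
    std::string line, section;
    while (std::getline(stream, line)) {
        if (line.empty() || line[0] == ';') continue;       // blank or comment
        if (line.front() == '[' && line.back() == ']') {
            section = line.substr(1, line.size() - 2);      // [Section]
        } else if (auto eq = line.find('='); eq != std::string::npos) {
            result[section][line.substr(0, eq)] = line.substr(eq + 1);
        }
    }
    return result;
}
```

For example, `ParseIni("[Settings]\nColor=Blue\n")["Settings"]["Color"]` yields `"Blue"` — and there is nowhere to hang per-key security, timestamps, or anything else.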

    The registry tried to address these concerns. You might argue whether these were valid concerns to begin with, but the Windows NT folks sure thought they were.

    Commenter TC notes that the pendulum has swung back to text configuration files, but this time, they're XML. This reopens many of the problems that INI files had, but you have the major advantage that nobody writes to XML configuration files; they only read from them. XML configuration files are not used to store user settings; they just contain information about the program itself. Let's look at those issues again.

    • XML files support Unicode.
    • XML file security is not granular enough. But since the XML configuration file is read-only, the primary objection is sidestepped. (But if you want only administrators to have permission to read specific parts of the XML, then you're in trouble.)
    • Since XML configuration files are read-only, you don't have to worry about multiple writers.
    • XML configuration files can suffer a denial of service. You can still open them exclusively and lock out other processes.
    • XML files contain only strings. If you want to store binary data, you have to encode it somehow.
    • Parsing an XML file is comparatively slow. But since they're read-only, you can safely cache the parsed result, so you only need to parse once.
    • Programs parse XML files manually, but the XML format is already locked, so you couldn't extend it anyway even if you wanted to. Hopefully, those programs use a standard-conforming XML parser instead of rolling their own, but I wouldn't be surprised if people wrote their own custom XML parser that chokes on, say, processing instructions or strings longer than 70 characters.
    • XML files do not have a size limit.
    • XML files do not have a default location.
    • XML files have complex structure. Elements can contain other elements.

    XML manages to sidestep many of the problems that INI files have, but only if you promise only to read from them (and only if everybody agrees to use a standard-conforming parser), and if you don't require security granularity beyond the file level. Once you write to them, then a lot of the INI file problems return.
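    The "parse once and cache, since nobody writes to the file" point from the list above has a standard C++ idiom: a function-local static. A sketch — the Config type and LoadAndParse function are placeholders, not a real XML API:

```cpp
#include <string>

struct Config {
    std::string appName;   // pretend this value came from the XML file
};

// Placeholder; a real program would read and parse the XML file here.
static Config LoadAndParse()
{
    return Config{"MyApp"};
}

// Because the configuration file is read-only, the parsed result can be
// cached safely. A function-local static is initialized exactly once,
// thread-safely, on first call (C++11 "magic statics").
const Config& GetConfig()
{
    static const Config cache = LoadAndParse();
    return cache;
}
```

Every caller after the first gets the cached object; the file is parsed exactly once per process, which sidesteps the "parsing is comparatively slow" objection.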

  • The Old New Thing

    VirtualLock only locks your memory into the working set


    When you lock memory with VirtualLock it locks the memory into your process's working set. It doesn't mean that the memory will never be paged out. It just means that the memory won't be paged out as long as there is a thread executing in your process, because a process's working set need be present in memory only when the process is actually executing.

    (Earlier versions of the MSDN documentation used to say this more clearly. At some point, the text changed to say that it locked the memory physically rather than locking it into the working set. I don't know who changed it, but it was a step backwards.)

    The working set is the set of pages that the memory manager will keep in memory while your program is running, because it is the set of pages that the memory manager predicts your program accesses frequently, so keeping those pages in memory when your program is executing keeps your program running without taking a ridiculous number of page faults. (Of course, if the prediction is wrong, then you get a ridiculous number of page faults anyway.)

    Now look at the contrapositive: If all the threads in your process are blocked, then the working set rules do not apply since the working set is needed only when your process is executing, which it isn't. If all your threads are blocked, then the entire working set is eligible for being paged out.

    I've seen people use VirtualLock expecting that it prevents the memory from being written to the page file. But as you see from the discussion above, there is no such guarantee. (Besides, if the user hibernates the computer, all the pages that aren't in the page file are going to get written to the hibernation file, so they'll end up on disk one way or another.)

    Even if you've magically managed to prevent the data from being written to the page file, you're still vulnerable to another process calling ReadProcessMemory to suck the data out of your address space.

    If you really want to lock memory, you can grant your process the SeLockMemoryPrivilege privilege and use the AWE functions to allocate non-pageable memory. Mind you, this is generally considered to be an anti-social thing to do on a paging system. The AWE functions were designed for large database programs that want to manage their paging manually. And they still won't prevent somebody from using ReadProcessMemory to suck the data out of your address space.

    If you have relatively small chunks of sensitive data, the solution I've seen recommended is to use CryptProtectData and CryptUnprotectData. The encryption keys used by these functions are generated pseudo-randomly at boot time and are kept in kernel mode. (Therefore, nobody can ReadProcessMemory them, and they won't get captured by a user-mode crash dump.) Indeed, this is the mechanism that many components in Windows Server 2003 use to reduce the exposure of sensitive information in the page file and hibernation file.

    Follow-up: I've been informed by the memory manager folks that the working set interpretation was overly conservative and that in practice, the memory that has been virtually locked won't be written to the pagefile. Of course, the other concerns still apply, so you still have to worry about the hibernation file and another process sucking the data out via ReadProcessMemory.

  • The Old New Thing

    You just have to accept that the file system can change


    A customer who is writing some sort of code library wants to know how they should implement a function that determines whether a file exists. The usual way of doing this is by calling GetFileAttributes, but what they've found is that sometimes GetFileAttributes will report that a file exists, but when they get around to accessing the file, they get the error ERROR_DELETE_PENDING.

    The lesser question is what ERROR_DELETE_PENDING means. It means that somebody opened the file with FILE_SHARE_DELETE sharing, meaning that they don't mind if somebody deletes the file while they have it open. If the file is indeed deleted, then it goes into "delete pending" mode, at which point the file deletion physically occurs when the last handle is closed. But while it's in the "delete pending" state, you can't do much with it. The file is in limbo.

    You just have to be prepared for this sort of thing to happen. In a pre-emptively multi-tasking operating system, the file system can change at any time. If you want to prevent something from changing in the file system, you have to open a handle that denies whatever operation you want to prevent from happening. (For example, you can prevent a file from being deleted by opening it and not specifying FILE_SHARE_DELETE in your sharing mode.)

    The customer wanted to know how their "Does the file exist?" library function should behave. Should it try to open the file to see if it is in delete-pending state? If so, what should the function return? Should it say that the file exists? That it doesn't exist? Should they have their function return one of three values (Exists, Doesn't Exist, and Is In Funky Delete State) instead of a boolean?

    The answer is that any work you do to try to protect users from this weird state is not going to solve the problem because the file system can change at any time. If a program calls "Does the file exist?" and the file does exist, you will return true, and then during the execution of your return statement, your thread gets pre-empted and somebody else comes in and puts the file into the delete-pending state. Now what? Your library didn't protect the program from anything. It can still get the delete-pending error.
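    The robust pattern, then, is not to ask "Does the file exist?" at all, but to attempt the operation and handle failure. A sketch in portable C++ (standard streams standing in for the Win32 calls; TryReadFile is an invented helper name):

```cpp
#include <fstream>
#include <iterator>
#include <optional>
#include <string>

// Instead of GetFileAttributes-then-open (a race), open directly and let
// the failure path cover "doesn't exist", "delete pending", "access
// denied", and anything else the file system does between your check
// and your access.
std::optional<std::string> TryReadFile(const std::string& path)
{
    std::ifstream file(path, std::ios::binary);
    if (!file) return std::nullopt;   // any failure: report it, don't pre-check
    return std::string(std::istreambuf_iterator<char>(file),
                       std::istreambuf_iterator<char>());
}
```

The caller must be prepared for the failure case anyway, so the existence pre-check buys nothing.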

    Trying to do something to avoid the delete-pending state doesn't accomplish anything since the file can get into that state after you returned to the caller saying "It's all clear." In one of my messages, I wrote that it's like fixing a race condition by writing

    // check several times to try to avoid race condition where
    // g_fReady is set before g_Value is set
    if (g_fReady && g_fReady && g_fReady && g_fReady && g_fReady &&
        g_fReady && g_fReady && g_fReady && g_fReady && g_fReady &&
        g_fReady && g_fReady && g_fReady) { return g_Value; }

    The compiler folks saw this message and got a good chuckle out of it. One of them facetiously suggested that they add code to the compiler to detect this coding style and not optimize it away.
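    The actual fix for that race, for what it's worth, is a proper happens-before relationship rather than repetition. A sketch with C++11 atomics (one way to do it; the function names are invented):

```cpp
#include <atomic>

int g_Value;                           // written before g_fReady is set
std::atomic<bool> g_fReady{false};

void Producer(int value)
{
    g_Value = value;
    // Release store: everything written above becomes visible to any
    // thread that observes g_fReady == true with an acquire load.
    g_fReady.store(true, std::memory_order_release);
}

bool TryConsume(int* out)
{
    // One acquire load is enough; thirteen plain reads are not.
    if (!g_fReady.load(std::memory_order_acquire)) return false;
    *out = g_Value;
    return true;
}
```

The `&&` chain in the joke provides no ordering at all: the compiler may collapse it to a single read, and the CPU may reorder the unsynchronized stores regardless.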

  • The Old New Thing

    Why do we even have the DefWindowProc function?


    Some time ago, I looked at two ways of reimplementing the dialog procedure (method 1, method 2). Commenter "8" wondered why we have a DefWindowProc function at all. Couldn't window procedures have followed the dialog box model, where they simply return FALSE to indicate that they want default processing to occur? Then there would be no need to export the DefWindowProc function.

    This overlooks one key pattern for derived classes: Using the base class as a subroutine. That pattern is what prompted people to ask for dialog procedures that acted like window procedures. If you use the "Return FALSE to get default behavior" pattern, window procedures would go something like this:

     BOOL DialogLikeWndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
     {
      switch (uMsg) {
      ... handle messages and return TRUE ...
      }
      // We didn't have any special processing; do the default thing
      return FALSE;
     }

    Similarly, subclassing in this hypothetical world would go like this:

     BOOL DialogLikeSubclass(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
     {
      switch (uMsg) {
      ... handle messages and return TRUE ...
      }
      // We didn't have any special processing; let the base class try
      return CallDialogLikeWindowProc(PrevDialogLikeWndProc, hwnd, uMsg, wParam, lParam);
     }

    This works as long as what you want to do is override the base class behavior entirely. But what if you just want to augment it? Calling the previous window procedure is analogous to calling the base class implementation from a derived class, and doing so is quite common in object-oriented programming, where you want the derived class to behave "mostly" like the base class. Consider, for example, the case where we want to allow the user to drag a window by grabbing anywhere in the client area:

     LRESULT CALLBACK CaptionDragWndProc(
         HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
     {
      LRESULT lres;
      switch (uMsg) {
      case WM_NCHITTEST:
       lres = DefWindowProc(hwnd, uMsg, wParam, lParam);
       if (lres == HTCLIENT) lres = HTCAPTION;
       return lres;
      }
      return DefWindowProc(hwnd, uMsg, wParam, lParam);
     }

    We want our hit-testing to behave just like normal, with the only exception that clicks in the client area should be treated as clicks on the caption. With the DefWindowProc model, we can do this by calling DefWindowProc to do the default processing and then modifying the result on the back end. If we had used the dialog-box-like model, there would have been no way to call the "default handler" as a subroutine in order to make it do the heavy lifting. We would be forced to do all the work or none of it.
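    In plain C++ class terms, DefWindowProc plays the role of the base-class implementation, and the hit-test tweak above is just "call the base, then adjust its answer." A hand-rolled sketch (the class names and the 100×100 default are invented for illustration):

```cpp
// Stand-ins for the HTCLIENT/HTCAPTION hit-test results.
enum HitTest { HitClient, HitCaption, HitNowhere };

struct BaseWindow {
    virtual ~BaseWindow() = default;
    virtual HitTest HitTestAt(int x, int y)
    {
        // "Default" behavior: everything inside a 100x100 box is client area.
        return (x >= 0 && x < 100 && y >= 0 && y < 100) ? HitClient
                                                        : HitNowhere;
    }
};

struct CaptionDragWindow : BaseWindow {
    HitTest HitTestAt(int x, int y) override
    {
        HitTest result = BaseWindow::HitTestAt(x, y); // base does the heavy lifting
        if (result == HitClient) result = HitCaption; // then adjust the answer
        return result;
    }
};
```

The return-FALSE model only lets the derived class replace the base behavior wholesale; calling the base as a subroutine is what lets it augment the behavior instead.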

    Another avenue that an explicit DefWindowProc function opens up is modifying messages before they reach the default handler. For example, suppose you have a read-only edit control, but you want it to look like a normal edit control instead of getting the static look. You can do this by modifying the message that you pass to DefWindowProc:

      if (GET_WM_CTLCOLOR_HWND(wParam, lParam) == m_hwndEdit)
       // give it the "edit" look
       return DefWindowProc(hwnd, WM_CTLCOLOREDIT, wParam, lParam);

    Another common operation is changing one color attribute of an edit control while leaving the others intact. For this, you can use DefWindowProc as a subroutine and then tweak the one attribute you want to customize.

      if (GET_WM_CTLCOLOR_HWND(wParam, lParam) == m_hwndDanger) {
       // Start with the default color attributes
       LRESULT lres = DefWindowProc(hwnd, uMsg, wParam, lParam);
       // Change text color to red; leave everything else the same
       SetTextColor(GET_WM_CTLCOLOR_HDC(wParam, lParam), RGB(255,0,0));
       return lres;
      }

    Getting these types of operations to work with the dialog box model would be a significantly trickier undertaking.

  • The Old New Thing

    Is DEP on or off on Windows XP Service Pack 2?


    Last time, we traced an IP_ON_HEAP failure to a shell extension that used an older version of ATL which was not DEP-friendly. But that led to a follow-up question:

    Why aren't we seeing this same crash in the main program as in the shell extension? That program uses the same version of ATL, but it doesn't crash.

    The reason is given in this chart. Notice that the default configuration is OptIn, which means that DEP is off for all processes by default, but is on for all Windows system components. That same part of the page describes how you can change to OptOut so that the default is to turn on DEP for all processes except for the ones you put on the exception list. There's more information on this excerpt from the "Changes to Functionality in Microsoft Windows XP Service Pack 2" document.

    The program that comes with the shell extension is not part of Windows, so DEP is disabled by default. But Explorer is part of Windows, so DEP is enabled for Explorer by default. That's why only Explorer encounters this problem.

    (This little saga does illustrate the double-edged sword of extensibility. If you make your system extensible, you allow other people to add features to it. On the other hand, you also allow other people to add bugs to it.)

    The saga of the DEP exception is not over, however, because it turns out I've been lying to you. More information tomorrow.

  • The Old New Thing

    Psychic debugging: IP on heap


    Somebody asked the shell team to look at this crash in a context menu shell extension.

    IP_ON_HEAP:  003996d0
    ChildEBP RetAddr
    00b2e1d8 68f79ca6 0x3996d0
    00b2e1f4 7713a7bd ATL::CWindowImplBaseT<
                               ATL::CWindow,ATL::CWinTraits<2147483648,0> >
    00b2e220 77134be0 USER32!InternalCallWinProc+0x23
    00b2e298 7713a967 USER32!UserCallWinProcCheckWow+0xe0
    eax=68f79c63 ebx=00000000 ecx=00cade10 edx=7770df14 esi=002796d0 edi=000603cc 
    eip=002796d0 esp=00cade4c ebp=00cade90 iopl=0         nv up ei pl nz na pe nc 
    cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00010206 
    002796d0 c744240444bafb68 mov     dword ptr [esp+4],68fbba44

    You should be able to determine the cause instantly.

    I replied,

    This shell extension is using a non-DEP-aware version of ATL. They need to upgrade to ATL 8 or disable DEP.

    This was totally obvious to me, but the person who asked the question met it with stunned amazement. I guess the person forgot that older versions of ATL are notorious DEP violators. You see a DEP violation, you see that it's coming from ATL, and bingo, you have your answer. When DEP was first introduced, the base team sent out mail to the entire Windows division saying, "Okay, folks, we're turning it on. You're going to see a lot of application compatibility problems, especially this ATL one."

    Psychic powers sometimes just means having a good memory.

    Even if you forgot that information, it's still totally obvious once you look at the scenario and understand what it's trying to do.

    The fault is IP_ON_HEAP which is precisely what DEP protects against. The next question is why IP ended up on the heap. Was it a mistake or intentional?

    Look at the circumstances surrounding the faulting instruction again. The faulting instruction is the window procedure for a window, and the action is storing a constant into the stack. The symbols of the caller tell us that it's some code in ATL, and you can even go look up the source code yourself:

    template <class TBase, class TWinTraits>
    LRESULT CALLBACK CWindowImplBaseT< TBase, TWinTraits >
      ::StartWindowProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
    {
        CWindowImplBaseT< TBase, TWinTraits >* pThis =
                  (CWindowImplBaseT< TBase, TWinTraits >*)
                  _AtlWinModule.ExtractCreateWndData();
        pThis->m_hWnd = hWnd;
        pThis->m_thunk.Init(pThis->GetWindowProc(), pThis);
        WNDPROC pProc = pThis->m_thunk.GetWNDPROC();
        ::SetWindowLongPtr(hWnd, GWLP_WNDPROC, (LONG_PTR)pProc);
        return pProc(hWnd, uMsg, wParam, lParam);
    }
    Is pProc corrupted and we're jumping to a random address on the heap? Or was this intentional?

    ATL is clearly generating code on the fly (the window procedure thunk), and it is during execution of the thunk that we encounter the DEP exception.

    Now, you didn't need to have the ATL source code to realize that this is what's going on. It is a very common pattern in framework libraries to put a C++ wrapper around window procedures. Since C++ functions have a hidden this parameter, the wrappers need to sneak that parameter in somehow, and one common technique is to generate some code on the fly that sets up the hidden this parameter before calling the C++ function. The value at [esp+4] is the window handle, something that can be recovered from the this pointer, so it's a handy thing to replace with this before jumping to the real C++ function.
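    If generating machine code at run time sounds exotic: the other common way frameworks smuggle the hidden this pointer through a C-style callback is a lookup table keyed by the window handle. A simplified, portable sketch (using void* in place of HWND; the names are invented):

```cpp
#include <unordered_map>

class Window;

// Associates each "handle" with its C++ object, so a plain C-style
// callback can recover the hidden this pointer without generated code.
static std::unordered_map<void*, Window*> g_windowMap;

class Window {
public:
    explicit Window(void* handle) : m_handle(handle)
    {
        g_windowMap[handle] = this;
    }
    ~Window() { g_windowMap.erase(m_handle); }
    int OnMessage(int msg) { return msg * 2; }  // placeholder message handler
private:
    void* m_handle;
};

// The C-style callback: look up the object, then forward the message to it.
int WindowCallback(void* handle, int msg)
{
    Window* pThis = g_windowMap[handle];
    return pThis ? pThis->OnMessage(msg) : 0;
}
```

The ATL thunk exists precisely to avoid this per-message lookup: the generated code bakes the this pointer in, at the cost of executable heap memory — which is what DEP objects to.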

    The address being stored as the this parameter is 68fbba44, which is inside the DLL in question. (You can tell this because the return address, which points to the ATL thunk code, is at 68f79ca6 which is in the same neighborhood as the mystery pointer.) Therefore, this is almost certainly an ATL thunk for a static C++ object.

    In other words, this is extremely unlikely to be a jump to a random address. The code at the address looks too good. It's probably jumping there intentionally, and the fact that it's coming from a window procedure thunk confirms it.

    But our tale is not over yet. The plot thickens. We'll continue next time.

  • The Old New Thing

    The forgotten common controls: The GetEffectiveClientRect function


    The GetEffectiveClientRect function is another one in the category of functions that everybody tries to pretend doesn't exist. It's not as bad as MenuHelp, but it's still pretty awful.

    The idea behind the GetEffectiveClientRect function is that you have a frame window with a bunch of optional gadgets, such as a status bar or toolbar. The important thing is that these optional gadgets all reside at the borders of the window. In our examples, the toolbar goes at the top and the status bar goes at the bottom. You might also have gadgets on the left and right such as a navigation tree or a preview pane. They can also be stacked up against the border, such as an address bar and a toolbar. The important thing is that all the gadgets go around the border.

    The first parameter to the GetEffectiveClientRect function is the window whose effective client rectangle you wish to compute; no surprises there. The second parameter is a pointer to the rectangle that receives the result; again, hardly surprising. It's that third parameter, the array of integers, that is the weird one.

    The first two integers in the array are ignored. The remainder of the array consists of pairs of nonzero integers; the array is terminated by a pair consisting of zeroes. Of each pair, only the second integer is used; it is the control identifier of a child window of the window you passed in. If that child window is visible (in a special sense I'll explain later), then its window rectangle is subtracted from the parent window's client rectangle. After all the rectangles of visible children are subtracted away, what remains is the effective client rectangle.

    For example, suppose your window's client rectangle is 100×100 and there is a toolbar at (0, 0)–(100, 20) and a status bar at (0, 90)–(100, 100), both visible. The GetEffectiveClientRect function starts with the full client rectangle (0, 0)–(100, 100), subtracts the two rectangles corresponding to the toolbar and status bar, resulting in (0, 20)–(100, 90).

    (0, 0)                        (100, 0)
                toolbar
    (0, 20)                       (100, 20)
            effective client
    (0, 90)                       (100, 90)
               status bar
    (0, 100)                      (100, 100)

    If the control IDs for the toolbar and status bar are 100 and 101, respectively, then the array you need to pass would be { *, *, ¤, 100, ¤, 101, 0, 0 } where * can be anything and ¤ can be any nonzero value.

    Continuing from the above example, if the status bar were hidden, then the effective client rectangle would be (0, 20)–(100, 100) because hidden windows are ignored when computing the effective client rectangle.

    Okay, first question: What is that special sense of visible I mentioned above? I didn't write simply visible because IsWindowVisible reports a window as visible only if the window and all its parents are visible. But all that GetEffectiveClientRect cares about is whether the window is visible in the sense that the WS_VISIBLE style is set. In other words, that the window would be visible if its parent is.

    Why does the GetEffectiveClientRect use this strange definition of visible? Because it wants to make it possible for you to get the effective client rectangle of a window while it is still hidden, the result being the effective client rectangle you would get once the window becomes visible. This is valuable because it allows you to do your calculations "behind the scenes" while the window is still hidden (for example, in your WM_CREATE handler).

    Second question: Why is the integer array so crazy? What's with all the ignored values and the "must be nonzero" values? Why can't it just be the array { 100, 101, 0 }?

    The format of the integer array is the same as the one used by the ShowHideMenuCtl function. The intent was that you could use the same array for both functions. The two functions do work well together: The ShowHideMenuCtl function does the work of letting the user toggle the toolbar and status bar on and off, and GetEffectiveClientRect lets you compute the client rectangle that results.

    That said, the GetEffectiveClientRect function is largely ignored nowadays. It doesn't do anything you couldn't already do yourself, and when you write your own version, you don't need to deal with that crazy integer array.
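    The write-your-own version really is that simple. A sketch in portable C++ — Rect and Gadget are invented stand-ins for RECT and the gadget child windows, and it assumes, as GetEffectiveClientRect does, that every gadget is docked flush against one edge of the client area:

```cpp
#include <vector>

struct Rect { int left, top, right, bottom; };

struct Gadget {
    Rect rc;        // window rectangle, in parent client coordinates
    bool visible;   // corresponds to the WS_VISIBLE check described above
};

// Subtract each visible border gadget from the client rectangle.
Rect EffectiveClientRect(Rect client, const std::vector<Gadget>& gadgets)
{
    for (const Gadget& g : gadgets) {
        if (!g.visible) continue;
        if (g.rc.top == client.top && g.rc.bottom < client.bottom)
            client.top = g.rc.bottom;                  // docked at the top
        else if (g.rc.bottom == client.bottom && g.rc.top > client.top)
            client.bottom = g.rc.top;                  // docked at the bottom
        else if (g.rc.left == client.left && g.rc.right < client.right)
            client.left = g.rc.right;                  // docked at the left
        else if (g.rc.right == client.right && g.rc.left > client.left)
            client.right = g.rc.left;                  // docked at the right
    }
    return client;
}
```

With the numbers from the example above — a 100×100 client, toolbar (0, 0)–(100, 20) and status bar (0, 90)–(100, 100) — this yields (0, 20)–(100, 90), and no crazy integer array is required.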

  • The Old New Thing

    What to do when the steering column is stuck and the ignition won't turn


    One evening, I parked pointing downhill and like the book says, I turned my wheels to the right before parking. But I turned them a bit too far, because when I returned to the car and inserted the key into the ignition, the key wouldn't turn.

    The wrong thing to do is to force the key until it breaks.

    Here's the right thing to do:

    Take the steering wheel and turn it left and right. One direction will have a little more play than the other. Grab the wheel with your left hand and apply pressure turning it in the direction with more play while using your right hand to turn the key in the ignition. (Or, if you're like me and can't figure it out, just wobble the wheel back and forth until the key unsticks.)

    Note: I scheduled this item independently of its partner. By an amazing coincidence, both items warn against forcing something until it breaks.

  • The Old New Thing

    Hidden gotcha: The command processor's AutoRun setting


    If you type cmd /? at a command prompt, the command processor will spit out pages upon pages of strange geeky text. I'm not sure why the command processor folks decided to write documentation this way rather than the more traditional manner of putting it into MSDN or the online help. Maybe because that way they don't have to deal with annoying people like "editors" telling them that their documentation contains grammatical errors or is hard to understand.

    Anyway, buried deep in the text is this little gem:

    If /D was NOT specified on the command line, then when CMD.EXE starts, it
    looks for the following REG_SZ/REG_EXPAND_SZ registry variables, and if
    either or both are present, they are executed first.
        HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor\AutoRun
        HKEY_CURRENT_USER\Software\Microsoft\Command Processor\AutoRun

    I sure hope there is some legitimate use for this setting, because the only time I see anybody mention it is when it caused them massive grief.

    I must be losing my mind, but I can't even write a stupid for command to parse the output of a command.

    C:\test>for /f "usebackq delims=" %i in (`dir /ahd/b`) do @echo %i

    When I run this command, I get

    System Volume Information

    Yet when I type the command manually, I get completely different output!

    C:\test>dir /ahd/b

    Have I gone completely bonkers?

    The original problem was actually much more bizarro because the command whose output the customer was trying to parse merely printed a strange error message, yet running the command manually generated the expected output.

    After an hour and a half of head-scratching, somebody suggested taking a look at the command processor's AutoRun setting, and lo and behold, it was set!

    C:\test>reg query "HKCU\Software\Microsoft\Command Processor" /v AutoRun
    HKEY_CURRENT_USER\Software\Microsoft\Command Processor
        AutoRun     REG_SZ  cd\

    The customer had no idea how that setting got there, but it explained everything. When the command processor ran the dir /ahd/b command as a child process (in order to parse its output), it first ran the AutoRun command, which changed the current directory to the drive's root. As a result, the dir /ahd/b produced a listing of the hidden subdirectories of the root directory rather than the hidden subdirectories of the C:\test directory.

    In the original formulation of the problem, the command the customer was trying to run looked for its configuration files in the current directory, and the cd\ in the AutoRun meant that the program looked for its configuration files in the root directory instead of the C:\test directory. Thus came the error message ("Configuration file not found") and the plea for help that was titled, "Why can't the XYZ command find a configuration file that's right there in front of it?"

    Like I said, I'm sure there must be some valid reason for the AutoRun setting, but I haven't yet found one. All I've seen is the havoc it plays.
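    One practical note, grounded in the cmd /? text quoted above: the /D switch suppresses AutoRun. If you suspect AutoRun is interfering, you can compare the two forms directly (illustrative):

```bat
rem With AutoRun suppressed; compare against a plain "dir /ahd/b".
rem If the output differs, the AutoRun setting is the culprit.
cmd /d /c dir /ahd/b
```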

  • The Old New Thing

    The importance of the FORMAT_MESSAGE_IGNORE_INSERTS flag


    You can use the FormatMessage function with the FORMAT_MESSAGE_FROM_SYSTEM flag to indicate that the message number you passed is an error code and that the message should be looked up in the system message table. This is a specific case of the more general case where you are not in control of the message, and when you are not in control of the message, you had better pass the FORMAT_MESSAGE_IGNORE_INSERTS flag.

    Let's look at what happens when you don't.

     #include <windows.h>
     #include <stdio.h>
     #include <tchar.h>

     int __cdecl main(int argc, char **argv)
     {
      TCHAR buffer[1024];
      DWORD dwError = ERROR_BAD_EXE_FORMAT;
      DWORD dwFlags = FORMAT_MESSAGE_FROM_SYSTEM;
      DWORD dwResult = FormatMessage(dwFlags, NULL, dwError,
                                     0, buffer, 1024, NULL);
      if (dwResult) {
       _tprintf(_T("Message is \"%s\"\n"), buffer);
      } else {
       _tprintf(_T("Failed! Error code %d\n"), GetLastError());
      }
      return 0;
     }
    If you run this program, you'll get

    Failed! Error code 87

    Error 87 is ERROR_INVALID_PARAMETER. What went wrong? Let's pass the FORMAT_MESSAGE_IGNORE_INSERTS flag to see what the message was. Change the value of dwFlags to

     FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS

    and run the program again. This time you get

     Message is "%1 is not a valid Win32 application.
     "

    Aha, now we see the problem. The message corresponding to ERROR_BAD_EXE_FORMAT contains an insertion %1. If you don't pass the FORMAT_MESSAGE_IGNORE_INSERTS flag, the FormatMessage function will insert the first parameter in the argument list (or argument array). But we didn't pass an argument list, so the function fails.

    Actually, we got lucky. If we had passed an argument list or argument array, the function would have inserted the corresponding string, even if the argument list we passed didn't have a string in the first position.

    If you are not in control of the format string, then you must pass FORMAT_MESSAGE_IGNORE_INSERTS to prevent the %1 from causing trouble. If somebody was being particularly evil, they might decide to give you a format string that contains a %9, which is almost certainly more insertions than you provided. The result is a buffer overflow and probably a crash.

    This may have been obvious to some people, in the same way that you shouldn't pass a string outside your control as the format string to the printf function, but I felt it worth mentioning.
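    The printf analogy is worth spelling out, since it is exactly the same bug class. A sketch in C (FormatName is an invented helper; the input here is benign, but the comments show where a hostile one strikes):

```c
#include <stdio.h>

/* Build a message around a string supplied by someone else. */
void FormatName(char* out, size_t cap, const char* untrusted)
{
    /* Wrong: snprintf(out, cap, untrusted) -- a stray %s or %n in the
       input makes printf read (or write!) arguments that were never
       passed, just like a %9 passed to FormatMessage. */

    /* Right: the format string stays under our control; the outside
       data is only ever an argument. */
    snprintf(out, cap, "file: %s", untrusted);
}
```

The rule is the same in both APIs: data you don't control goes in the argument list, never in the format string.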
