• The Old New Thing

    Why is there the message '!Do not use this registry key' in the registry?


Under Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders, there is a message to registry snoopers: The first value is called "!Do not use this registry key" and the associated data is the message "Use the SHGetFolderPath or SHGetKnownFolderPath function instead."

    I added that message.

    The long and sad story of the Shell Folders key explains that the registry key exists only to retain backward compatibility with four programs written in 1994. There's also a TechNet version of the article which is just as sad but not as long.

    One customer saw this message and complained, "That registry key and that TechNet article explain how to obtain the current locations of those special folders, but they don't explain how to change them. This type of woefully inadequate documentation only makes the problem worse."

    Hey, wow, a little message in a registry key and a magazine article are now "documentation"! The TechNet article is historical background. And the registry key is just a gentle nudge. Neither is documentation. It's not like I'm going to put a complete copy of the documentation into a registry key. Documentation lives in places like MSDN.

But it seems that some people need more than a nudge; they need a shove. Let's see, we're told that the functions for obtaining the locations of known folders are SHGetFolderPath and its more modern counterpart SHGetKnownFolderPath. I wonder what the names of the functions for modifying those locations might be?

    Man that's a tough one. I'll let you puzzle that out for a while.

Okay, here, I'll tell you: The corresponding functions go by the completely unobvious names SHSetFolderPath and SHSetKnownFolderPath.

    Sorry you had to use your brain. I'll let you get back to programming now.

  • The Old New Thing

    How does the C runtime know whether to use the static-linking or dynamic-linking version of the header file?


In response to a description of what happens when you get dllimport wrong, nksingh asks, "This seems like a problem for the CRT. As far as I know, VC gives you the option of statically or dynamically linking the CRT. But it seems like the headers will have to make a choice to support one thing better than the other. Conditional compilation would work, but then people would have to remember to include a #define somewhere. Is this dllimport vs. static linking thing something the compiler could figure out on its own if you're doing Link-time codegen?"

    Let's start from the beginning.

    Yes, this would be a problem for the CRT since it wouldn't know whether to declare the functions as normal static functions or as dllimport-style functions, and the headers have to make a choice which way to go.

    And if you look at the headers, you can see that it is indeed done via conditional compilation.

    _CRTIMP int __cdecl fflush(FILE * _File);

    This magic _CRTIMP symbol is defined in crtdefs.h like so:

    /* Define _CRTIMP */
    #ifndef _CRTIMP
    #ifdef _DLL
    #define _CRTIMP __declspec(dllimport)
    #else  /* _DLL */
    #define _CRTIMP
    #endif  /* _DLL */
    #endif  /* _CRTIMP */

    Conditional compilation decides whether _CRTIMP expands to __declspec(dllimport) or to nothing at all, depending on whether the _DLL symbol is defined.

    And yet nobody bothers writing #define _DLL before they #include <stdio.h>. There must be something else going on.

    In fact, we can run some experiments to see what's going on.

    #ifdef _DLL
    #error "_DLL is defined"
    #else
    #error "_DLL is not defined"
    #endif

    Save this as dummy.c and run a few tests.

    C:\tests> cl /MT dummy.c
    dummy.c(4) : fatal error C1189: #error :  "_DLL is not defined"
    C:\tests> cl /MD dummy.c
    dummy.c(2) : fatal error C1189: #error :  "_DLL is defined"

    Well how's about that. The compiler uses the /MT and /MD flags to decide whether or not to define the preprocessor symbol _DLL, which is the secret signal it passes to the crtdefs.h header file to control the conditional compilation.

    The compiler has to use this technique instead of deferring the decision to link-time code generation because it cannot assume that everybody has enabled link-time code generation. (Indeed, we explicitly did not in our sample command lines.)

    If link-time code generation were enabled, then is this something that could be deferred until that point?

    In principle yes, because link-time code generation in theory could just make the .obj file a copy of the source file (and all the header files) and do all the actual compiling at link time. This is a sort of extreme way of doing it, but I guess it could've been done that way.

    On the other hand, it also means that the compiler folks would have to come up with a new nonstandard extension that means "This function might be a normal static function or it might be a dllimport function. I haven't decided yet; I'll tell you later."

    Seeing as how the CRT already has to solve the problem in the case where there is no link-time code generation, it doesn't seem worth the effort to add a feature to link-time-code generation that you don't actually need. It would be a feature for which the only client is the C runtime library itself, for which the C runtime library already requires a separate solution when link-time code generation is disabled, and for which that separate solution still works when link-time code generation is enabled.

    No engineering purpose is served by writing code just for the sake of writing code.

  • The Old New Thing

    You can extend the PROPSHEETPAGE structure with your own bonus data


    ... for when regular strength lParam just isn't enough.

    A little-known and even less-used feature of the shell property sheet is that you can hang custom data off the end of the PROPSHEETPAGE structure, and the shell will carry it around for you. Mind you, the shell carries it around by means of memcpy and destroys it by just freeing the underlying memory, so whatever you stick on the end needs to be plain old data. (Though you do get an opportunity to "construct" and "destruct" if you register a PropSheetPageProc callback, during which you are permitted to modify your bonus data and the lParam field of the PROPSHEETPAGE.)

    Here's a program that illustrates this technique. It doesn't do much interesting, mind you, but maybe that's a good thing: Makes for fewer distractions.

    #include <windows.h>
    #include <prsht.h>

    HINSTANCE g_hinst;

    struct ITEMPROPSHEETPAGE : public PROPSHEETPAGE
    {
     int cWidgets;
     TCHAR szItemName[100];
    };

    ITEMPROPSHEETPAGE is a custom structure that appends our bonus data (an integer and a string) to the standard PROPSHEETPAGE. This is the structure that our property sheet page will use.

    INT_PTR CALLBACK DlgProc(HWND hwnd, UINT uiMsg,
                             WPARAM wParam, LPARAM lParam)
    {
     switch (uiMsg) {
     case WM_INITDIALOG:
      {
       ITEMPROPSHEETPAGE *ppsp = (ITEMPROPSHEETPAGE*)lParam;
       SetDlgItemText(hwnd, 100, ppsp->szItemName);
       SetDlgItemInt(hwnd, 101, ppsp->cWidgets, FALSE);
      }
      return TRUE;
     }
     return FALSE;
    }

    The lParam passed to WM_INITDIALOG is a pointer to the shell-managed copy of the PROPSHEETPAGE structure. Since we associated this dialog procedure with an ITEMPROPSHEETPAGE structure, we can cast the pointer to the larger structure to get at our bonus data (which the shell happily memcpy'd from our copy into the shell-managed copy).

    HPROPSHEETPAGE CreateItemPropertySheetPage(
        int cWidgets, PCTSTR pszItemName)
    {
     ITEMPROPSHEETPAGE psp;
     ZeroMemory(&psp, sizeof(psp));
     psp.dwSize = sizeof(psp);
     psp.hInstance = g_hinst;
     psp.pszTemplate = MAKEINTRESOURCE(1);
     psp.pfnDlgProc = DlgProc;
     psp.cWidgets = cWidgets;
     lstrcpyn(psp.szItemName, pszItemName, 100);
     return CreatePropertySheetPage(&psp);
    }

    It is here that we associate the DlgProc with the ITEMPROPSHEETPAGE. Just to highlight that the pointer passed to the DlgProc is a copy of the ITEMPROPSHEETPAGE used to create the property sheet page, I created a separate function to create the property sheet page, so that the ITEMPROPSHEETPAGE on the stack goes out of scope, making it clear that the copy passed to the DlgProc is not the one we passed to CreatePropertySheetPage.

    Note that you must set the dwSize of the base PROPSHEETPAGE to the size of the PROPSHEETPAGE plus the size of your bonus data. In other words, set it to the size of your ITEMPROPSHEETPAGE.

    int WINAPI WinMain(HINSTANCE hinst, HINSTANCE hinstPrev,
                       LPSTR pszCmdLine, int nShowCmd)
    {
     g_hinst = hinst;
     HPROPSHEETPAGE hpage =
       CreateItemPropertySheetPage(42, TEXT("Elmo"));
     if (hpage) {
      PROPSHEETHEADER psh = { 0 };
      psh.dwSize = sizeof(psh);
      psh.dwFlags = PSH_DEFAULT;
      psh.hInstance = hinst;
      psh.pszCaption = TEXT("Item Properties");
      psh.nPages = 1;
      psh.phpage = &hpage;
      PropertySheet(&psh);
     }
     return 0;
    }

    Here is where we display the property sheet. It looks just like any other code that displays a property sheet. All the magic happens in the way we created the HPROPSHEETPAGE.

    If you prefer to use the PSH_PROPSHEETPAGE flag, then the above code could have been written like this:

    int WINAPI WinMain(HINSTANCE hinst, HINSTANCE hinstPrev,
                       LPSTR pszCmdLine, int nShowCmd)
    {
     g_hinst = hinst;
     ITEMPROPSHEETPAGE psp;
     ZeroMemory(&psp, sizeof(psp));
     psp.dwSize = sizeof(psp);
     psp.hInstance = g_hinst;
     psp.pszTemplate = MAKEINTRESOURCE(1);
     psp.pfnDlgProc = DlgProc;
     psp.cWidgets = 42;
     lstrcpyn(psp.szItemName, TEXT("Elmo"), 100);
     PROPSHEETHEADER psh = { 0 };
     psh.dwSize = sizeof(psh);
     psh.dwFlags = PSH_PROPSHEETPAGE;
     psh.hInstance = hinst;
     psh.pszCaption = TEXT("Item Properties");
     psh.nPages = 1;
     psh.ppsp = &psp;
     PropertySheet(&psh);
     return 0;
    }

    If you want to create a property sheet with more than one page, then you would pass an array of ITEMPROPSHEETPAGEs. Note that passing an array requires all the pages in the array to use the same custom structure (because that's how arrays work; all the elements of an array are the same type).

    Finally, here's the dialog template. Pretty anticlimactic.

    1 DIALOG 0, 0, 227, 60
    STYLE WS_CHILD | WS_DISABLED | WS_CAPTION
    CAPTION "General"
    FONT 8, "MS Shell Dlg"
    BEGIN
        LTEXT "Name:",-1,7,11,42,14
        LTEXT "",100,56,11,164,14
        LTEXT "Widgets:",-1,7,38,42,14
        LTEXT "",101,56,38,164,14
    END

    And there you have it. Tacking custom data onto the end of a PROPSHEETPAGE, an alternative to trying to cram everything into a single lParam.

    Exercise: Observe that the size of the PROPSHEETPAGE structure has changed over time. For example, the original PROPSHEETPAGE ends at the pcRefParent. In Windows 2000, there are two more fields, the pszHeaderTitle and pszHeaderSubTitle. Windows XP added yet another field, the hActCtx. Consider a program written for Windows 95 that uses this technique. How does the shell know that the cWidgets is really bonus data and not a pszHeaderTitle?

  • The Old New Thing

    What does the "l" in lstrcmp stand for?


    If you ask Michael Kaplan, he'd probably say that it stands for lame.

    In his article, Michael presents a nice chart of the various L-functions and their sort-of counterparts. There are other L-functions not on his list, not because he missed them, but because they don't have anything to do with characters or encodings. On the other hand, those other functions help shed light on the history of the L-functions. Those other functions are lopen, lcreat, lread, lwrite, lclose, and llseek. These are all L-version sort-of counterparts to open, creat, read, write, close, and lseek. Note that we've already uncovered the answer to the unasked question "Why does llseek have two L's?" The first L is a prefix (whose meaning we will soon discover), and the second L comes from the function it's sort-of acting as the counterpart to.

    But what does the L stand for? Once you find those other L-functions, you'll see next door the H-functions hread and hwrite. As we learned a while back, being lucky is simply observing things you weren't planning to observe. We weren't expecting to find the H-functions, but there they were, and they blow the lid off the story.

    The H prefix in hread and hwrite stands for huge. Those two functions operated on so-called huge pointers, which is 16-bit jargon for pointers to memory blocks larger than 64KB. To increment your average 16:16 pointer by one byte, you increment the bottom 16 bits. But when the bottom 16 bits contain the value 0xFFFF, the increment rolls over, and where do you put the carry? If the pointer is a huge pointer, the convention is that the byte that comes after S:0xFFFF is (S+__AHINCR):0x0000, where __AHINCR is a special value exported by the Windows kernel. If you allocate memory larger than 64KB, the GlobalAlloc function breaks your allocation into 64KB chunks and arranges them so that incrementing the selector by __AHINCR takes you from one chunk to the next.

    Working backwards, then, the L prefix therefore stands for long. These functions explicitly accept far pointers, which makes them useful for 16-bit Windows programs since they are independent of the program's memory model. Unlike the L-functions, the standard library functions like strcpy and read operate on pointers whose size matches the memory model. If you write your program in the so-called medium memory model, then all data pointers default to near (i.e., they are 16-bit offsets into the default data segment), and all the C runtime functions operate on near pointers. This is a problem if you need to, say, read some data off the disk into a block of memory you allocated with GlobalAlloc: That memory is expressible only as a far pointer, but the read function accepts a near pointer.

    To the rescue comes the lread function, which you can use to read from the disk into your far pointer.

    How did Windows decide which C runtime functions should have corresponding L-functions? They were the functions that Windows itself used internally, and which were exported as a courtesy.

    Okay, now let's go back to the Lame part. Michael Kaplan notes that the lstrcmp and lstrcmpi functions actually are sort-of counterparts to strcoll and strcolli. So why weren't these functions called lstrcoll and lstrcolli instead?

    Because back when lstrcmp and lstrcmpi were being named, the strcoll and strcolli functions hadn't been invented yet! It's like asking, "Why did the parents of General Sir Michael Jackson give him the same name as the pop singer?" or "Why didn't they use the Space Shuttle to rescue the Apollo 13 astronauts?"

  • The Old New Thing

    What's up with the mysterious inc bp in function prologues of 16-bit code?


    A little while ago, we learned about the EBP chain. The EBP chain in 32-bit code is pretty straightforward because there is only one type of function call. But in 16-bit code there are two types of function calls, the near call and the far call.

    A near call pushes a 16-bit return address on the stack before branching to the function entry point, which must reside in the same code segment as the caller. The function then uses a ret instruction (a near return) when it wants to return to the caller, indicating that the CPU should resume execution at the specified address within the same code segment.

    By comparison, a far call pushes both the segment (or selector if in protected mode) and the offset of the return address on the stack (two 16-bit values), and the function being called is expected to use a retf instruction (a far return) to indicate that the CPU should pop two 16-bit values off the stack to determine where execution should resume.

    When Windows was first introduced, it ran on an 8086 with 384KB of RAM. This posed a challenge because the 8086 processor had no memory manager, had no CPU privilege levels, and had no concept of task switching. And in order to squeeze into 384KB of RAM, Windows needed to be able to load code from disk on demand and discard it when memory pressure required it.

    One of the really tricky parts of the real-mode memory manager was fixing up all the function pointers when code was loaded and unloaded. When you unloaded a function, you had to make sure that any existing code in memory that called that function didn't actually call it, because the function wasn't there. If you had a memory manager, you could mark the segment or page not present, but there is no such luxury on the 8086.

    There are multiple parts to the solution, but the part that leads to the answer to the title question is the way the memory manager patched up all the stacks in the system. After all, if you discarded a function, you had to make sure that any reference to that function as a return address on somebody's stack got fixed up before the code tried to execute that retf instruction and found itself returning to a function that didn't exist.

    And that's where the mysterious inc bp came from.

    The first rule of stack frames in real-mode Windows is that you must have a bp-based stack frame. FPO was not permitted. (Fortunately, FPO was also not very tempting because the 16-bit instruction set made it cumbersome to access stack memory by means other than the bp register, so the easiest way to do something was also the right way.) In other words, the first rule required that every stack have a valid bp chain at all times.

    The second rule of stack frames in real-mode Windows is that if you are going to return with a retf, then you must increment the bp register before you push it (and must therefore perform the corresponding decrement after you pop it). This second rule means that code which walks the bp chain can find the next function up the stack. If bp is even, then the function will use a near return, so it looks at the 16-bit value stored on the stack after the bp and doesn't change the cs register. On the other hand, if the bp is odd, then it knows to look at both the 16-bit offset and the 16-bit segment that were pushed on the stack.

    Okay, so let's put it all together: When code got discarded, the kernel walked all the stacks in the system (which it could now do due to these two rules), and if it saw that a return address corresponded to a function that got discarded, it patched the return address to point to a chunk of code which called back into the memory manager to reload the function, re-patch all the return addresses so they now point to the new address where the function got loaded (probably different from where the function was when it was discarded), and then jumped back to the original code as if nothing had happened.

    I continue to be amazed at how much Windows 1.0 managed to accomplish given that it had so little to work with. It even used an LRU algorithm to choose which functions to discard by implementing a software version of the "accessed bit", something that modern CPUs manage in hardware.

  • The Old New Thing

    Raymond's highly scientific predictions for the 2011 NCAA men's basketball tournament


    Once again, it's time for Raymond to come up with an absurd, arbitrary criterion for filling out his NCAA bracket.

    This year, I look at the strength of the school's football team, on the theory that a school with a strong football team and a strong basketball team has clearly invested a lot in its athletics program. My ranking of football teams is about as scientific as my ranking of basketball teams:

    • If the school ended last season with a BCS ranking, I used that.
    • If a school wasn't ranked but received votes in the AP ranking, then I gave it a rank of 30 (and if two such schools faced each other, I looked at who got more votes).
    • If a school still isn't ranked, then I looked to see if it had been ranked at any time earlier in the season; if so, then I gave it a rank of 40.
    • If a school still isn't ranked, but it appeared on the equally-scientific ESPN Fan Rankings, then I gave it a rank of 50.
    • If a school still isn't ranked, but it has a Division I FBS football team, then I gave it a rank of 80. If two such schools faced each other, then I gave what appeared to be the weaker school a rank of 90.
    • If a school still isn't ranked, but it has a Division I FCS football team, then I gave it a rank of 100. If two such schools faced each other, then I gave what appeared to be the weaker school a rank of 101. (Why 101 instead of 110? Who cares!)
    • If a school still isn't ranked, but it has a football team in some other division, then I gave it a rank of 150.
    • If a school still isn't ranked because its football team is new, then I gave it a rank of 200.
    • If a school still isn't ranked because it doesn't have a football team, but it had one in the past, then I gave it a rank of 300.
    • If a school still isn't ranked because it never had a football team, then I gave it a rank of 400.

    (As a special case, USC received its rank of 22 from two years ago, because it was forced to sit out the 2010 season as part of its punishment for "several major rules violations." Now that's what I call dedication to athletics!)

    I made up all these rules on the fly, which is why the spacing is so uneven and why they were not necessarily applied fairly across the board, but that's what makes it highly scientific.

    As before, once the field has been narrowed to eight teams, the results are determined by a coin flip.


    • Correct predictions are in green.
    • Incorrect predictions are in red.
    • (!) marks upsets correctly predicted.
    • (*) marks upsets predicted but did not take place.
    • (x) marks actual upsets not predicted.

    Opening Round Games

    Texas-San Antonio(200) Alabama State
    Alabama State(80)
    UAB(90) Clemson
    UNC-Asheville(400) Arkansas-Little Rock
    Arkansas-Little Rock(300)
    USC(22*) USC

    East bracket

    1Ohio State(6) Ohio State
    Ohio State
    Ohio State Ohio State
    16Alabama State(80)
    8George Mason(400) Villanova
    (100) (*)
    5Kentucky(80) Kentucky
    West Virginia
    (30) (*)
    4West Virginia(30) West Virginia
    6Syracuse(80) Syracuse
    (80) (x)
    11Indiana State(90)
    3Xavier(300) Xavier
    (300) (x)
    7Washington(30) Washington
    2North Carolina(50) North Carolina
    15Long Island(400)

    West bracket

    1Duke(90) Duke
    (80) (x)
    Arizona Arizona
    8Michigan(80) Michigan
    5Texas(40) Texas
    4Arizona(40) Arizona
    6Connecticut(30) Connecticut
    (12) (*)
    3Cincinnati(80) Missouri
    (12) (*)
    7Temple(80) Penn State
    (40) (*)
    San Diego State
    10Penn State(40)
    2San Diego State(30) San Diego State
    15Northern Colorado(200)

    Southeast bracket

    1Pittsburgh(80) Pittsburgh
    (80) (x)
    Wisconsin Michigan State
    16Arkansas-Little Rock(300)
    8Butler(100) Butler
    9Old Dominion(101)
    5Wisconsin(4) Wisconsin
    4Kansas State(40) Kansas State
    13Utah State(80)
    6BYU(39) BYU
    Michigan State
    3St. John's(200) St. John's
    (200) (x)
    7UCLA(80) Michigan State
    (7) (*)
    Michigan State
    10Michigan State(7)
    2Florida(30) Florida

    Southwest bracket

    1Kansas(90) Kansas
    (80) (*)
    Illinois Texas A&M
    16Boston University(300)
    8UNLV(90) Illinois
    (80) (!)
    5Louisville(82) Louisville
    (82) (x)
    12Morehead State(100)
    4Vanderbilt(90) Vanderbilt
    (90) (x)
    6Purdue(90) Purdue
    Texas A&M
    11Saint Peter's(300)
    3Georgetown(100) USC
    7Texas A&M(18) Texas A&M
    (18) (x)
    Texas A&M
    10Florida State(23)
    2Notre Dame(30) Notre Dame


    Ohio State Ohio State Michigan State
    Michigan State Michigan State
    Texas A&M
  • The Old New Thing

    Why can't Explorer decide what size a file is?


    If you open Explorer and highlight a file whose size is a few kilobytes, you can find some file sizes where the Explorer Size column shows a size different from the value shown in the Details pane. What's the deal? Why can't Explorer decide what size a file is?

    The two displays use different algorithms.

    The values in the Size column are always given in kilobytes, regardless of the actual file size. File is 15 bytes? Show it in kilobytes. File is 2 gigabytes? Show it in kilobytes.

    The value shown in the Size column is rounded up to the nearest kilobyte. Your 15-byte file shows up as 1KB. This has been the behavior since Explorer was first introduced back in Windows 95. Why? I don't know; the reasons may have been lost to the mists of time. Though I suspect one of the reasons is that you don't want a file to show up as 0KB unless it really is an empty file.

    On the other hand, the value shown in the Details pane uses adaptive units: For a tiny file, it'll show bytes, but for a large file, it'll show megabytes or gigabytes or whatever. And the value is shown to three significant digits.

    The result is that a file which is, say, 19465 bytes in size (19.0088 kilobytes) shows up in the Size column as 20KB, since the Size column rounds up. On the other hand, the Details pane shows 19.0KB since it displays the value to three significant digits.

    It looks like Explorer can't make up its mind, and perhaps it can't, but the reason is that the two places on the screen which show the size round in different ways.

  • The Old New Thing

    The old DEBUG program can load COM files bigger than 64KB, but that doesn't mean they actually load as a program


    Some time ago, I described why a corrupted binary sometimes results in the error "Program too big to fit in memory". Commenter Neil was under the impression that nonrelocatable program files could be larger than 64KB and used the DEBUG command to verify this assertion.

    While it's true that DEBUG can load files bigger than 64KB, that doesn't mean that they will load as a program. If DEBUG decides that you didn't give it a program (the file extension is not EXE or COM),¹ then it treats the file on the command line as a data file and loads it into memory in its entirety, provided it fits in memory in its entirety. When it does this, the BX register contains the upper 16 bits of the file size, and CX contains the lower 16 bits. This is also the format that is used when writing files back out: Use the n command to set the name of the output file and set BX:CX to the file size.

    Even though DEBUG has been obsolete for over a decade, it is still useful for exactly this purpose: You can use it as a hex editor for files less than around 512KB.

    But don't deceive yourself into thinking that you created a COM file that is bigger than 64KB.

    ¹There is another extension which has special meaning to DEBUG, but it's not relevant to the discussion.

  • The Old New Thing

    Why does my TIME_ZONE_INFORMATION have the wrong DST cutover date?


    Public Service Announcement: Daylight Saving Time begins in most parts of the United States this weekend. Other parts of the world may change on a different day from the United States.

    A customer reported that they were getting incorrect values from the GetTimeZoneInformationForYear function.

    I have a program that calls GetTimeZoneInformationForYear, and it looks like it's returning incorrect DST transition dates. For example, GetTimeZoneInformationForYear(2010, NULL, &tzi) is returning March 2nd as the tzi.DaylightDate value, instead of the expected March 14th date. The current time zone is Pacific Time.

    The value returned by GetTimeZoneInformationForYear (and GetTimeZoneInformation) is correct; you're just reading it wrong.

    As called out in the documentation for the TIME_ZONE_INFORMATION structure, the wDay field in the StandardDate and DaylightDate changes meaning depending on whether the wYear is zero or nonzero.

    If the wYear is nonzero, then the wDay has its usual meaning.

    But if the wYear is zero (and it is for most time zones), then the wDay encodes the week number of the cutover rather than the day number.

    In other words, that 2 does not mean "March 2nd". It means "the second week in March".

  • The Old New Thing

    How do I create a topmost window that is never covered by other topmost windows?


    We already know that you can't create a window that is always on top, even in the presence of other windows marked always-on-top. An application of the What if two programs did this? rule demonstrates that it's not possible, because whatever trick you use to be on-top-of-always-on-top, another program can use the same trick, and now you have two on-top-of-always-on-top windows, and what happens?

    A customer who failed to understand this principle asked for a way to establish their window as "super-awesome topmost". They even discovered the answer to the "and what happens?" rhetorical question posed above.

    We have overridden the OnLostFocus and OnPaint methods to re-assert the TopLevel and TopMost window properties, as well as calling BringToFront and Activate. The result is that our application and other applications end up fighting back and forth because both applications are applying similar logic. We tried installing a global hook and swallowing paint and focus events for all applications aside from our own (thereby preventing the other applications from having the opportunity to take TopMost ahead of us), but we found that this causes the other applications to crash. We're thinking of setting a timer and re-asserting TopMost when the timer fires. Is there a better way?

    This is like saying, "Sometimes I'm in a hurry, and I want to make sure I am the next person to get served at the deli counter. To do this, I find whoever has the lowest number, knock them unconscious, and steal their ticket. But sometimes somebody else comes in who's also in a hurry. That person knocks me unconscious and steals my ticket. My plan is to set my watch alarm to wake me up periodically, and each time it wakes me up, I find the person with the lowest number, knock them unconscious, and steal their ticket. Is there a better way?"

    The better way is to stop knocking each other unconscious and stealing each other's tickets.

    The customer (via the customer liaison) provided context for their question.

    This is not a general-purpose application. This application will be run on dedicated machines which operate giant monitors in retail stores. There are already other applications running on the computer which rotate through advertisements and other display information.

    The customer is writing another application which will also run on the machine. Most of the time, the application does nothing, but every so often, their application needs to come to the front and display its message, regardless of whatever the other applications are displaying. (For example, there might be a limited-time in-store promotion that needs to appear on top of the regular advertisements.)

    Unfortunately, all of these different programs were written by different vendors, and there is no coordination among them for who gets control of the screen. We were hoping that there was some way we could mark our window as "super topmost" so that when it came into conflict with another application running on the machine, it would win and the other application would lose.

    I'm thinking of recommending that the vendors all come up with some way of coordinating access to the screen so they can negotiate among themselves and not get into focus fights. (Easier said than done, since all the different applications running on the machine come from different vendors...)

    Since there is no coordination among the various applications, you're basically stuck playing a game of walls and ladders, hoping that your ladder is taller than everybody else's wall. The customer has pretty much found the tallest ladder which the window manager provides. There is no "super topmost" flag.

    Sure, you can try moving to another level of the system, like say creating a custom desktop, but all that does is give you a taller ladder. And then one of the other applications is going to say, "I need to display a store-wide page (manager to the deli please, manager to the deli), overriding all other messages, even if it's a limited-time in-store promotion." And they'll try something nastier, like enumerating all the windows in the system and calling ShowWindow(SW_HIDE).

    And then another application will say, "I need to display an important store-wide security announcement (Will the owner of a white Honda Civic, license plate 037-MSN, please return to your vehicle), overriding all other messages, even if it's a store-wide page." And it'll try something nastier, like setting their program as the screen saver, disabling the mouse and keyboard devices, and then invoking the screen saver on the secure desktop.

    And then another application will say, "I need to display a critical store-wide announcement (Fire in the automotive department. Everybody evacuate the building immediately), overriding all other messages, even if it's an important store-wide security announcement." And it'll try something nastier, like enumerating all the processes on the system, attaching to each one with debug privilege, and suspending all the threads.

    Stop the madness. The only sane way out is to have the programs coöperate to determine who is in control of the screen at any particular time.

    In response to my hypothetical game of walls and ladders, one of my colleagues wrote, "Note to self: Do not get into a walls-and-ladders contest with Raymond."
