October, 2004

  • The Old New Thing

    The procedure entry point SHCreateThreadRef could not be located...


    Some people smarter than me have been working on this problem, and here's what they figured out. First, I'll just reprint their analysis (so that people who have the problem and are searching for a solution can find it), and then we can discuss it.

    Update (18 October 2005): The official version of this document is now available. Please use that version instead of following the steps below, which were based on a preliminary version of the instructions.

    If you receive an error message: "Explorer.EXE - Entry Point Not Found - The procedure entry point SHCreateThreadRef could not be located in the dynamic link library SHLWAPI.dll", this information will help resolve the issue.

    This appears to have been caused by the following sequence of events:

    1. You installed Windows XP Service Pack 2
    2. The installation of Service Pack 2 failed due to a computer crash during the installation which resulted in the automatic Service Pack Recovery process. On next boot, you should have received an error message telling you that the install failed, and that you need to go to the control panel and uninstall SP2 and then try re-installing it. This message may have been dismissed accidentally or by another individual using your computer. In any event, the Service Pack recovery process was not completed by uninstalling the service pack from the Add/Remove Programs control panel, and the system is consequently in a partially installed state which is not stable.
    3. You then installed the latest security update for Windows XP, MS04-038, KB834707. Because your system is still partially SP2, the SP2 version of this fix was downloaded and installed by Windows Update or Automatic Updates. However, the operating system files on the system are the original versions due to the SP Recovery process. This results in mismatched files causing this error.

    To recover the system, carefully perform the following steps:

    1. Boot normally and attempt to log in to your desktop. At this point you should get the error message listed above.
    2. Press Control+Alt+Delete at the same time to start the Task Manager. (If you are using classic logon, click the "Task Manager" button.) You may get additional error messages but Task Manager will eventually start.
    3. On the menu bar, select File and then New Task (Run).
    4. Type in control appwiz.cpl into the new task box and hit OK. You may get additional errors that can be ignored.
    5. The Add/Remove Control Panel should now be running. You can close Task Manager.
    6. Near the bottom of the list, find the entry titled "Windows XP Hotfix – KB834707".
    7. Click on it and click the "Remove" button. It will take some time to complete. Once the "Finish" button is visible, click on it and reboot your system. If you get messages about additional software or hotfixes installed, you can safely ignore them.

    Do NOT stop now! Your system is still in the "failed SP2 install" state. You MUST complete the SP2 uninstall, and then re-install SP2.

    1. Start the system and log in.
    2. Click on Start and then Control Panel.
    3. Click on the Add/Remove programs item.
    4. Near the bottom of the list, find the entry titled "Windows XP Service Pack 2".
    5. Click on it and remove Service Pack 2. You may get a warning about software you have installed after SP2. Make a note of it as you may need to reinstall some of them after the uninstall operation.
    6. After Service Pack 2 has been successfully removed, you should visit http://www.microsoft.com/sp2install for instructions on installing Service Pack 2. You can get SP2 from http://windowsupdate.microsoft.com.
    7. After Service Pack 2 has been successfully re-installed, you should re-visit Windows Update to get the proper version of the latest critical security updates.


    Q: I don't believe I am in the "partially installed SP2" state. Is there any way to check that?

    A: After step 7, your system should be able to log in. There are several ways to check.

    1. Open the file c:\windows\svcpack.log, and scroll to the very bottom of the file. About 10 lines from the end, you should see:
      0.xxx: Executing script \SystemRoot\sprecovr.txt
      0.xxx: In cleanup mode. System will not be rebooted.
      If you have these lines in svcpack.log, and you did not uninstall Service Pack 2 in Add/Remove Programs, you definitely have a machine in this partially installed state.
    2. Click on the Start button, then Run, and type winver, then click OK. If the version is "Version 5.1 (Build 2600.xpsp_sp2_rtm.040803-2158 : Service Pack 2)" then you have the correct SP2 install. If, however, it has a number that is less than 040803 after the xpsp2, such as "Build 2600.xpsp2.030422-1633 : Service Pack 2", then you definitely have a machine in the partially installed state. [Corrected typo in version numbering, 11am.]

    Notice that the default answer to every dialog box is "Cancel". The Service Pack setup program detected a problem and gave instructions on how to fix it, and yet people just ignored the message.

    The result of not fixing the problem is that you are left with a machine that is neither Service Pack 1 nor Service Pack 2 but instead is a Frankenstein monster of some files from Service Pack 1 and some files from Service Pack 2. This is hardly a good situation. Half the files are mismatched with the other half.

    There was enough of Service Pack 2 on your machine that Windows Update downloaded the Service Pack 2 version of the security patch and tried to install it. This made a bad situation worse.

    What's the moral of the story?

    First, that users ignore all error messages. Even error messages that tell how to fix the error! Second, that ignoring some error messages often leads to worse error messages. And third, that once you get into this situation, you're off in uncharted territory. And there are only dragons there.

    (These are the types of problems that are nearly impossible to debug: It will never happen in the test lab, because the test lab people know that when an error message tells you, "The install failed; you need to do XYZ to fix it," you should do it! All you can work from are descriptions from people who are having the problem, and a lot of creativity to try to guess "What happened that they aren't telling me?")

  • The Old New Thing

    Where did windows minimize to before the taskbar was invented?


    Before Explorer was introduced in Windows 95, the Windows desktop was a very different place.

    The icons on your desktop did not represent files; rather, when you minimized a program, it turned into an icon on the desktop. To open a minimized program, you had to hunt for its icon, possibly minimizing other programs to get them out of the way, and then double-click it. (You could also Alt+Tab to the program.)

    Explorer changed the desktop model so that icons on your desktop represent objects (files, folders) rather than programs. The job of managing programs fell to the new taskbar.

    But where did the windows go when you minimized them?

    Under the old model, when a window was minimized, it displayed as an icon, the icon had a particular position on the screen, and the program drew the icon in response to paint messages. (Of course, most programs deferred to DefWindowProc which just drew the icon.) In other words, the window never went away; it just changed its appearance.

    But with the taskbar, the window really does go away when you minimize it. Its only presence is in the taskbar. The subject of how to handle windows when they were minimized went through several iterations, because it seemed that no matter what we did, some program somewhere didn't like it.

    The first try was very simple: When a window was minimized, the Windows 95 window manager set it to hidden. That didn't play well with many applications, which cared about the distinction between minimized (and visible) and hidden (and not visible).

    Next, the Windows 95 window manager minimized the window just like the old days, but put the minimized window at coordinates (-32000, -32000). This didn't work because some programs freaked out if they found their coordinates were negative.

    So the Windows 95 window manager tried putting minimized windows at coordinates (32000, 32000). This still didn't work because some programs freaked out if they found their coordinates were positive and too large!

    Finally the Windows 95 window manager tried coordinates (3000, 3000). This seemed to keep everybody happy. Not negative, not too large, but large enough that it wouldn't show up on the screen (at least not at screen resolutions that were readily available in 1995).

    If you have a triple-monitor Windows 98 machine lying around, you can try this: Set the resolution of each monitor to 1024x768 and place them corner-to-corner. At the bottom right corner of the third monitor, you will see all your minimized windows parked out in the boonies.

    (Windows NT stuck with the -32000 coordinates and didn't pick up the compatibility fixes for some reason. I guess they figured that by the time Windows NT became popular, all those broken programs would have been fixed. In other words: Let Windows 95 do your dirty work!)

    [Raymond is currently on vacation; this message was pre-recorded.]

  • The Old New Thing

    Accessing the current module's HINSTANCE from a static library


    If you're writing a static library, you may have need to access the HINSTANCE of the module that you have been linked into. You could require that the module that links you in pass the HINSTANCE to a special initialization function, but odds are that people will forget to do this.

    If you are using a Microsoft linker, you can take advantage of a pseudovariable which the linker provides.


    The pseudovariable __ImageBase represents the DOS header of the module, which happens to be what a Win32 module begins with. In other words, it's the base address of the module. And the module base address is the same as its HINSTANCE.

    So there's your HINSTANCE.

    [Raymond is currently on vacation; this message was pre-recorded.]

  • The Old New Thing

    People lie on surveys and focus groups, often unwittingly


    Philip Su's discussion of early versions of Microsoft Money triggered a topic that had been sitting in the back of my mind for a while: That people lie on surveys and focus groups, often unwittingly. I can think of three types of lies offhand. (I'm not counting malicious lying; that is, intentional lying for the purpose of undermining the results of the survey or focus group.)

    First, people lie about the reasons why they do things.

    The majority of consumers who buy computers claim that personal finance management is one of the top three reasons they are purchasing a PC. They've been claiming this for more than a decade. But only somewhere around 2% of consumers end up using a personal finance manager.

    This is one of those unconscious lies. People claim that they want a computer to do their personal finances, to organize their recipes, to mail-merge their Christmas card labels.

    They are lying.

    Those are the things people wish they would use their computer for. That's before the reality hits them of how much work it is to track every expenditure, transcribe every recipe, type in every address. In reality, they end up using the computer to play video games, surf the web, and email jokes to each other.

    Just because people say they would do something doesn't mean they will. That leads to the second class of focus group lying: The polite lie, also known as "say what the sponsor wants to hear".

    The following story is true, but the names have been changed.

    A company conducted focus groups for their Product X, which had as its main competitor Product Q. They asked people who were using Product Q, "Why do you use Product Q instead of Product X?" The respondents gave their reasons: "Because Product Q has feature F," "Because Product Q performs G faster," "Because Product Q lets me do activity H." They added, "If Product X did all that and was cheaper, we'd switch to it."

    Armed with this valuable insight, the company expended time, effort, and money in adding feature F to Product X, making Product X do G faster, and adding the ability to do activity H. They lowered the price and sat back and waited for the customers to beat a path to their door.

    But the customers didn't come.

    Why not?

    Because the customers were lying. In reality, they had no intention of switching from Product Q to Product X at all. They grew up with Product Q, they were used to the way Product Q worked, they simply liked Product Q. Product Q had what in the hot bubble-days was called "mindshare", but what in older days was called "brand loyalty" or just "inertia".

    When asked to justify why they preferred Product Q, the people in the focus group couldn't say, "I don't know; I just like it." That would be perceived as an "unhelpful" answer, and besides it would be subconsciously admitting that they were being manipulated by Product Q's marketing! Instead, they made up reasons to justify their preference to themselves and consequently to the sponsor of the focus group.

    Result: Company wastes tremendous effort on the wrong thing.

    (Closely related to this is the phenomenon of saying—and even believing—"I'd pay ten bucks for that!" Yet when the opportunity arises to buy it for $10, you decline. I do this myself.)

    The third example of lying that occurred to me is the one where you don't even realize that you are contradicting yourself. My favorite example of this was a poll on the subject of congestion charging on highways in the United States. The idea behind congestion charging is to create a toll road and vary the cost of driving on the road depending on how heavy traffic is. Respondents were asked two questions:

    1. "If congestion charging were implemented in your area, do you think it would reduce traffic congestion?"
    2. "If congestion charging were implemented in your area, would you be less likely to drive during peak traffic hours?"

    Surprisingly, most people answered "No" to the first question and "Yes" to the second. But if you stop and think about it, if people avoid driving during peak traffic hours, then congestion would be reduced because there are fewer cars on the road. An answer of "Yes" to the second question logically implies an answer of "Yes" to the first question.

    (One may be able to explain this by arguing that, "Well, sure congestion charging would be effective for influencing my driving behavior, but I don't see how it would affect enough other people to make it worthwhile. I'm special." Sort of how most people rate themselves as above-average drivers.)

    What I believe happened was that people reacted by saying to themselves, "I am opposed to congestion charging," and concluding, "Therefore, I must do what I can to prevent it from happening." Proclaiming on surveys that it would never work is one way of accomplishing this.

    When I shared my brilliant theories with some of my colleagues, one of them, a program manager on the Office team, added his own observation (which I have edited slightly):

    A variation of two of the above observations that often shows up in the usability lab:

    A user has spent an hour battling with the software. At some point the user's expectation of how the software should behave (the "user model") diverged from the actual behavior. Consequently, the user couldn't predict what will happen next and is therefore having a horrible time making any progress on the task. (Usually, this is the fault of the software design unintentionally misleading the user—which is why we test things.) After many painful attempts, the user finally succeeds, gets hints, or is flat-out told how the feature works. Often, the user stares mutely at the monitor for five seconds, then says: "I suppose that makes sense."

    It's an odd combination of people wanting to give a helpful answer with people wanting to feel special. In this case, the user wants to say something nice about the software that any outside observer could clearly tell was broken. Additionally, the users (subconsciously) don't want to admit that they were wrong and don't understand the software.

    Usability participants also have a tendency to say "I'm being stupid" when those of us on the other side of the one-way glass are screaming "No you're not, the software is broken!" That's an interesting contrast—in some cases, pleading ignorance is a defense. In other cases, pleading mastery is. At the end of the day, you must ignore what the user said and base any conclusions on what they did.

    I'm sure there are other ways people subconsciously lie on surveys and focus groups, but those are the ones that came to mind.

    [Insignificant typos fixed, October 13.]

  • The Old New Thing

    The macros for declaring and implementing COM interfaces


    There are two ways of declaring COM interfaces, the hard way and the easy way.

    The easy way is to use an IDL file and let the MIDL compiler generate your COM interface for you. If you let MIDL do the work, then you also get __uuidof support at no extra charge, which is a very nice bonus.

    The hard way is to do it all by hand. If you choose this route, your interface will look something like this:

    #undef  INTERFACE
    #define INTERFACE   ISample2
    DECLARE_INTERFACE_(ISample2, ISample)
    {
        BEGIN_INTERFACE

        // *** IUnknown methods ***
        STDMETHOD(QueryInterface)(THIS_ REFIID riid, void **ppv) PURE;
        STDMETHOD_(ULONG, AddRef)(THIS) PURE;
        STDMETHOD_(ULONG, Release)(THIS) PURE;

        // *** ISample methods ***
        STDMETHOD(Method1)(THIS) PURE;
        STDMETHOD_(int, Method2)(THIS) PURE;

        // *** ISample2 methods ***
        STDMETHOD(Method3)(THIS_ int iParameter) PURE;
        STDMETHOD_(int, Method4)(THIS_ int iParameter) PURE;

        END_INTERFACE
    };

    What are the rules?

    • You must set the INTERFACE macro to the name of the interface being declared. Note that you need to #undef any previous value before you #define the new one.
    • You must use the DECLARE_INTERFACE and DECLARE_INTERFACE_ macros to generate the preliminary bookkeeping for an interface. Use DECLARE_INTERFACE for interfaces that have no base class and DECLARE_INTERFACE_ for interfaces that derive from some other interface. In our example, we derive the ISample2 interface from ISample. Note: In practice, you will never find the plain DECLARE_INTERFACE macro because all interfaces derive from IUnknown if nothing else.
    • You must list all the methods of the base interfaces in exactly the same order that they are listed by that base interface; the methods that you are adding in the new interface must go last.
    • You must use the STDMETHOD or STDMETHOD_ macros to declare the methods. Use STDMETHOD if the return value is HRESULT and STDMETHOD_ if the return value is some other type.
    • If your method has no parameters, then the argument list must be (THIS). Otherwise, you must insert THIS_ immediately after the open-parenthesis of the parameter list.
    • After the parameter list and before the semicolon, you must say PURE.
    • Inside the curly braces, you must say BEGIN_INTERFACE and END_INTERFACE.

    There is a reason for each of these rules. They have to do with being able to use the same header for both C and C++ declarations and with interoperability with different compilers and platforms.

    • You must set the INTERFACE macro because its value is used by the THIS and THIS_ macros later.
    • You must use one of the DECLARE_INTERFACE* macros to ensure that the correct prologue is emitted for both C and C++. For C, a vtable structure is declared, whereas for C++ the compiler handles the vtable automatically; on the other hand, since C++ has inheritance, the macros need to specify the base class so that upcasting will work.
    • You must list the base class methods in exactly the same order as in the original declarations so that the C vtable structure for your derived class matches the structure for the base class to the extent that they overlap. This is required to preserve the COM rule that a derived interface can be used as a base interface.
    • You must use the STDMETHOD and STDMETHOD_ macros to ensure that the correct calling conventions are declared for the function prototypes. For C, the macro creates a function pointer in the vtable; for C++, the macro creates a virtual function.
    • The THIS and THIS_ macros are used so that the C declaration explicitly declares the "this" parameter which in C++ is implied. Different versions are needed depending on the number of parameters so that a spurious trailing comma is not generated in the zero-parameter case.
    • The word PURE ensures that the C++ virtual function is pure, because one of the defining characteristics of COM interfaces is that all methods are pure virtual.
    • The BEGIN_INTERFACE and END_INTERFACE macros emit compiler-specific goo which the compiler vendor provides in order to ensure that the generated interface matches the COM vtable layout rules. Different compilers have historically required different goo, though the need for goo is gradually disappearing over time.

    And you wonder why I called it "the hard way".

    Similar rules apply when you are implementing an interface. Use the STDMETHODIMP and STDMETHODIMP_ macros to declare your implementations so that they get the proper calling convention attached to them. We'll see examples of this next time.

  • The Old New Thing

    Let WMI do the heavy lifting of determining system information


    Windows Management Instrumentation is a scriptable interface to configuration information. This saves you the trouble of having to figure it out yourself.

    For example, here's a little program that enumerates all the CPUs in your system and prints some basic information about them.

    var locator = WScript.CreateObject("WbemScripting.SWbemLocator");
    var services = locator.ConnectServer();
    var cpus = new Enumerator(services.ExecQuery("SELECT * FROM Win32_Processor"));
    while (!cpus.atEnd()) {
      var cpu = cpus.item();
      WScript.StdOut.WriteLine("cpu.ProcessorType=" + cpu.ProcessorType);
      WScript.StdOut.WriteLine("cpu.CurrentClockSpeed=" + cpu.CurrentClockSpeed);
      WScript.StdOut.WriteLine("cpu.MaxClockSpeed=" + cpu.MaxClockSpeed);
      WScript.StdOut.WriteLine("cpu.Manufacturer=" + cpu.Manufacturer);
      cpus.moveNext();
    }

    Save this program as cpus.js and run it via cscript cpus.js.

    There's a whole lot of other information kept inside WMI. You can get lost amidst all the classes that exist. The Scripting Guys have their own tool called WMI Scriptomatic which lets you cruise around the WMI namespace. (The Scripting Guys also wrote Tweakomatic which comes with hilarious documentation.)

    Added 11am: It appears that people have misunderstood the point of this entry. The point here is not to show how to print the results to the screen. (I just did that to prove it actually worked.) The point is that you can let WMI do the hard work of actually digging up the information instead of having to hunt it down yourself. Want BIOS information? Try Win32_BIOS. Change the query to "SELECT * FROM Win32_BIOS" and you can get the manufacturer from the Manufacturer property. Plenty more examples in MSDN.

  • The Old New Thing

    Implementing higher-order clicks


    Another question people ask is "How do I do triple-click or higher?" Once you see the algorithm for double-clicks, extending it to higher order clicks should be fairly natural. The first thing you probably should do is to remove the CS_DBLCLKS style from your class because you want to do multiple-click management manually.

    Next, you can simply reimplement the same algorithm that the window manager uses, but take it to a higher order than just two. Let's do that. Start with a clean scratch program and add the following:

    int g_cClicks = 0;
    RECT g_rcClick;
    DWORD g_tmLastClick;

    void OnLButtonDown(HWND hwnd, BOOL fDoubleClick,
                       int x, int y, UINT keyFlags)
    {
      POINT pt = { x, y };
      DWORD tmClick = GetMessageTime();

      if (!PtInRect(&g_rcClick, pt) ||
          tmClick - g_tmLastClick > GetDoubleClickTime()) {
        g_cClicks = 0;
      }
      g_cClicks++;

      g_tmLastClick = tmClick;
      SetRect(&g_rcClick, x, y, x, y);
      InflateRect(&g_rcClick,
                  GetSystemMetrics(SM_CXDOUBLECLK) / 2,
                  GetSystemMetrics(SM_CYDOUBLECLK) / 2);

      TCHAR sz[20];
      wnsprintf(sz, 20, TEXT("%d"), g_cClicks);
      SetWindowText(hwnd, sz);
    }

    void ResetClicks(HWND hwnd)
    {
      g_cClicks = 0;
      SetWindowText(hwnd, TEXT("Scratch"));
    }

    void OnActivate(HWND hwnd, UINT state, HWND, BOOL)
    {
      ResetClicks(hwnd);
    }

    void OnRButtonDown(HWND hwnd, BOOL fDoubleClick,
                       int x, int y, UINT keyFlags)
    {
      ResetClicks(hwnd);
    }

        // Add to the window procedure:
        HANDLE_MSG(hwnd, WM_LBUTTONDOWN, OnLButtonDown);
        HANDLE_MSG(hwnd, WM_ACTIVATE, OnActivate);
        HANDLE_MSG(hwnd, WM_RBUTTONDOWN, OnRButtonDown);

    [Boundary test for double-click time corrected 10:36am.]

    (Our scratch program doesn't use the CS_DBLCLKS style, so we didn't need to remove it - it wasn't there to begin with.)

    The basic idea here is simple: When a click occurs, we see if it is in the "double-click zone" and has occurred within the double-click time. If not, then we reset the consecutive click count.

    (Note that the SM_CXDOUBLECLK and SM_CYDOUBLECLK values are the width of the entire rectangle, so we cut it in half when inflating so that the rectangle extends halfway in either direction. Yes, this means that a pixel is lost if the double-click width is odd, but Windows has been careful always to set the value to an even number.)

    Next, we record the coordinates and time of the current click so the next click can compare against it.

    Finally, we react to the click by putting the consecutive click number in the title bar.

    There are some subtleties in this code. First, notice that setting g_cClicks to zero forces the next click to be treated as the first click in a series: regardless of whether it matches the other criteria, all that will happen is that the click count increments to 1.

    Next, notice that the way we test whether the clicks occurred within the double click time was done in a manner that is not sensitive to timer tick rollover. If we had written

          tmClick > g_tmLastClick + GetDoubleClickTime()) {

    then we would fail to detect multiple clicks properly near the timer tick rollover. (Make sure you understand this.)

    Third, notice that we reset the click count when the window gains or loses activation. That way, if the user clicks, then switches away, then switches back, and then clicks again, that is not treated as a double-click. We do the same if the user clicks the right mouse button in between. (You may notice that few programs bother with quite this much subtlety.)

    Exercise: Suppose your program isn't interested in anything beyond triple-clicks. How would you change this program in a manner consistent with the way the window manager stops at double-clicks?

  • The Old New Thing

    Logical consequences of the way Windows converts single-clicks into double-clicks


    First, I'm going to refer you to the MSDN documentation on mouse clicks, since that's the starting point. I'm going to assume that you know the mechanics of how single-clicks are converted to double-clicks.

    Okay, now that you've read it, let's talk about some logical consequences of that article and what it means for the way you design your user interface.

    First, some people design their double-click action to be something unrelated to the single-click action. They want to know if they can suppress the initial WM_LBUTTONDOWN of the double-click sequence.

    Of course, you realize that that would require clairvoyance. When the mouse button goes down for the first time, the window manager doesn't know whether another click will come or not. (Heck, often the user doesn't know either!) So it spits out a WM_LBUTTONDOWN and waits for more.

    Now suppose you're a program that nevertheless wants to continue with the dubious design of having the double-click action be unrelated to the single-click action. What do you do?

    Well, one thing you could do is to do nothing on receipt of the WM_LBUTTONDOWN message aside from set a timer to fire in GetDoubleClickTime() milliseconds. [Corrected 10am.] If you get a WM_LBUTTONDBLCLK message within that time, then it was a double-click after all. If you don't, then it must have been a single-click, so you can do your single-click action (although a bit late).

    This "wait a tick" technique is also necessary if you don't have a double-click action, but the second click causes trouble in conjunction with the first click. Why is this necessary? Because many users double-click everything. Here are some examples of where the "delayed action to avoid the second click" can be seen:

    • The context menu that appears for taskbar notification icons. If the context menu appeared immediately upon the first click, then the second click would dismiss the context menu, leaving the user confused. "I clicked and something happened and then it went away." (Users don't say "I double-clicked"; they just say that they clicked. Double-click is the only thing they know how to do, so they just call it "click". For the same reason you don't say "I drove my blue car" if you have only one car.)
    • If Explorer is in one-click mode, it waits to see if there is a second click, and if so, it ignores it. Otherwise, when people double-click, they launch two copies of the program. Furthermore, if you suppress the second click but don't wait a tick, then the program they launched gets stuck behind the Explorer window, since the user clicked on Explorer after launching the program.
    • The XP style Start button ignores the second click. Otherwise, when people double-click the Start button, the first click would open the Start menu and the second click would dismiss it! (This is sometimes known as "debouncing".)

    Let's demonstrate how you might implement click delay. Start with the scratch program and add the following:

    void CALLBACK DelayedSingleClick(HWND hwnd, UINT,
                                     UINT_PTR id, DWORD)
    {
        KillTimer(hwnd, id);
        MessageBeep(MB_OK);       // placeholder single-click action
    }

    void OnLButtonDown(HWND hwnd, BOOL fDoubleClick,
                       int x, int y, UINT keyFlags)
    {
        if (fDoubleClick) {
            KillTimer(hwnd, 1);
            MessageBeep(MB_ICONHAND); // placeholder double-click action
        } else {
            SetTimer(hwnd, 1, GetDoubleClickTime(),
                     DelayedSingleClick);
        }
    }

        // Add to the window procedure:
        HANDLE_MSG(hwnd, WM_LBUTTONDOWN, OnLButtonDown);
        HANDLE_MSG(hwnd, WM_LBUTTONDBLCLK, OnLButtonDown);

    Also, since we're messing with double clicks, we should turn them on:

        wc.style = CS_DBLCLKS;

    When you run this program, click and double-click in the client area. Notice that the program doesn't react to the single click until after your double-click timeout has elapsed, because it's waiting to see if you are going to continue to click a second time (and therefore double-click instead of single-click).

    Next time, we'll look at clicks beyond two.

  • The Old New Thing

    What's the atom returned by RegisterClass useful for?


    The RegisterClass and RegisterClassEx functions return an ATOM. What is that ATOM good for?

    The names of all registered window classes are kept in an atom table internal to USER32. The value returned by the class registration functions is that atom. You can also retrieve the atom for a window class by asking a window of that class for its class atom via GetClassWord(hwnd, GCW_ATOM).

    The atom can be converted to an integer atom via the MAKEINTATOM macro, which then can be used by functions that accept class names in the form of strings or atoms. The most common case is the lpClassName parameter to the CreateWindow macro and the CreateWindowEx function. Less commonly, you can also use it as the lpClassName parameter for the GetClassInfo and GetClassInfoEx functions. (Though why you would do this I can't figure out. In order to have the atom to pass to GetClassInfo in the first place, you must have registered the class (since that's what returns the atom), in which case why are you asking for information about a class that you registered?)

    To convert a class name to a class atom, you can create a dummy window of that class and then do the aforementioned GetClassWord(hwnd, GCW_ATOM). Or you can take advantage of the fact that the return value from the GetClassInfoEx function is the atom for the class, cast to a BOOL. This lets you do the conversion without having to create a dummy window. (Beware, however, that GetClassInfoEx's return value is not the atom on Windows 95-derived operating systems.)
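    A sketch of that trick (the helper name GetClassAtom is my own, and, as noted, it only works on Windows NT-derived systems):

    ```c
    #include <windows.h>

    // Sketch: convert a class name to its atom without creating a window.
    // Relies on GetClassInfoEx returning the class atom (as its BOOL return
    // value) on NT-derived systems; the helper name is made up.
    ATOM GetClassAtom(HINSTANCE hinst, LPCTSTR pszClassName)
    {
        WNDCLASSEX wcex;
        wcex.cbSize = sizeof(wcex);
        return (ATOM)GetClassInfoEx(hinst, pszClassName, &wcex);
    }
    ```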

    But what good is the atom?

    Not much, really. Sure, it saves you from having to pass a string to functions like CreateWindow, but all it did was replace a string with an integer you now have to save in a global variable for later use. What used to be a string that you could hard-code is now an atom that you have to keep track of. Unclear that you actually won anything there.

    I guess you could use it to check quickly whether a window belongs to a particular class. You get the atom for that class (via GetClassInfo, say) and then get the atom for the window and compare them. But you can't cache the class atom since the class might get unregistered and then re-registered (which will give it a new atom number). And you can't prefetch the class atom since the class may not yet be registered at the point you prefetch it. (And as noted above, you can't cache the prefetched value anyway.) So this case is pretty much a non-starter anyway; you may as well use the GetClassName function and compare the resulting class name against the class you're looking for.
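    A sketch of the name-comparison approach (the helper name IsWindowOfClass is made up):

    ```c
    #include <windows.h>

    // Sketch: test whether a window belongs to a class by comparing names.
    // Class names are limited to 256 characters; the comparison is done
    // case-insensitively, matching how class-name lookup behaves.
    BOOL IsWindowOfClass(HWND hwnd, LPCTSTR pszClassName)
    {
        TCHAR szClass[256];
        if (!GetClassName(hwnd, szClass, 256)) return FALSE;
        return lstrcmpi(szClass, pszClassName) == 0;
    }
    ```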

    In other words, window class atoms are an anachronism. Like replacement dialog box classes, it's one of those generalities of the Win32 API that never really got off the ground, but which must be carried forward for backwards compatibility.

    But at least now you know what they are.

    [Typos fixed October 12.]

  • The Old New Thing

    Why didn't the desktop window shrink to exclude the taskbar?


    The taskbar created all sorts of interesting problems, since the work area was not equal to the entire screen dimensions. (Multiple monitors created similar problems.) "Why didn't the gui return the usable workspace as the root window (excluding the taskbar)?"

    That would have made things even worse.

    Lots of programs want to cover the entire screen. Games, for example, are very keen on covering the entire screen. Slideshow programs also want to cover the entire screen. (This includes both slideshows for digital pictures as well as business presentations.) Screen savers of course must cover the entire screen.

    If the desktop window didn't include the taskbar, then those programs would leave a taskbar visible while they did their thing. This is particularly dangerous for screen savers, since a user could just click on the taskbar to switch to another program without going through the screen saver's password lock!

    And if the taskbar were docked at the top or left edge of the screen, this would have resulted in the desktop window not beginning at coordinates (0,0), which would no doubt have caused widespread havoc. (Alternatively, one could have changed the coordinate system so that (0, 0) was no longer the top left corner of the screen, but that would have broken so many programs it wouldn't have been funny.)
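    Instead, programs that want to stay clear of the taskbar ask for the work area explicitly. A minimal sketch (the helper name is made up; this queries only the primary monitor's work area):

    ```c
    #include <windows.h>

    // Sketch: retrieve the primary monitor's work area, which excludes the
    // taskbar, and size a window to fill exactly that region.
    BOOL SizeWindowToWorkArea(HWND hwnd)
    {
        RECT rc;
        if (!SystemParametersInfo(SPI_GETWORKAREA, 0, &rc, 0)) return FALSE;
        return MoveWindow(hwnd, rc.left, rc.top,
                          rc.right - rc.left, rc.bottom - rc.top, TRUE);
    }
    ```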

    [Raymond is currently on vacation; this message was pre-recorded.]
