September, 2012

  • The Old New Thing

    How do I invoke a verb on an IShellItemArray?


    A customer wanted to invoke a command on multiple items at once.

    I have an IShellItemArray, and I want to invoke a verb with that array as the parameter. I know that I can invoke a verb on a single IShellItem by the code below, but how do I pass an entire array?

    void InvokeVerbOnItem(__in IShellItem *psi,
                          __in_opt PCWSTR pszVerb)
    {
     PIDLIST_ABSOLUTE pidl;
     HRESULT hr = SHGetIDListFromObject(psi, &pidl);
     if (SUCCEEDED(hr)) {
      SHELLEXECUTEINFO sei = { sizeof(sei) };
      sei.fMask = SEE_MASK_UNICODE |
                  SEE_MASK_INVOKEIDLIST;
      sei.lpIDList = pidl;
      sei.lpVerb = pszVerb;
      sei.nShow = SW_SHOWNORMAL;
      ShellExecuteEx(&sei);
      CoTaskMemFree(pidl);
     }
    }

    The function Invoke­Verb­On­Item invokes the command by extracting the pidl, then asking Shell­Execute­Ex to invoke the command on the pidl. A limitation of Shell­Execute* is that it can invoke on only one pidl. What if you want to invoke it on a bunch of pidls at once? (Doing it all at once gives the target program the opportunity to optimize the multi-target invoke.)

    As noted in the documentation, passing the SEE_MASK_INVOKE­ID­LIST flag tells Shell­Execute­Ex to "use the IContextMenu interface of the selected item's shortcut menu handler."

    So if you are frustrated by the limitations of the middle man, then cut out the middle man!

    void InvokeVerbOnItemArray(__in IShellItemArray *psia,
                               __in_opt PCWSTR pszVerb)
    {
     IContextMenu *pcm;
     HRESULT hr = psia->BindToHandler(nullptr, BHID_SFUIObject,
                                      IID_PPV_ARGS(&pcm));
     if (SUCCEEDED(hr)) {
      // ... context menu invoke incorporated by reference ...
      pcm->Release();
     }
    }

    If you think about it, the original Invoke­Verb­On­Item function could've avoided the middle man too. It converted an IShellItem (a live object which encapsulates an IShell­Folder and a child pidl) into an absolute pidl (a dead object), then passed it to Shell­Execute­Ex, which had to reanimate the object back into an IShell­Folder and child pidl so it could call Get­UI­Object­Of.

  • The Old New Thing

    Does the CopyFile function verify that the data reached its final destination successfully?


    A customer had a question about data integrity when copying files.

    I am using the File.Copy to copy files from one server to another. If the call succeeds, am I guaranteed that the data was copied successfully? Does the File.Copy method internally perform a file checksum or something like that to ensure that the data was written correctly?

    The File.Copy method uses the Win32 Copy­File function internally, so let's look at Copy­File.

    Copy­File just issues Read­File calls from the source file and Write­File calls to the destination file. (Note: Simplification for purposes of discussion.) It's not clear what you are hoping to checksum. If you want Copy­File to checksum the bytes when they return from Read­File, and checksum the bytes as they are passed to Write­File, and then compare them at the end of the operation, then that tells you nothing, since they are the same bytes in the same memory.

    while (...) {
     ReadFile(sourceFile, buffer, bufferSize);
     readChecksum.checksum(buffer, bufferSize);
     writeChecksum.checksum(buffer, bufferSize);
     WriteFile(destinationFile, buffer, bufferSize);
    }

    The read­Checksum and write­Checksum are identical because they operate on the same bytes. (In fact, the compiler might even optimize the code by merging the calculations together.) The only way something could go awry is if you have flaky memory chips that change memory values spontaneously.

    Maybe the question was whether Copy­File goes back and reads the file it just wrote out to calculate the checksum. But that's not possible in general, because you might not have read access on the destination file. I guess you could have it do a checksum if the destination were readable, and skip it if not, but then that results in a bunch of weird behavior:

    • It generates spurious security audits when it tries to read from the destination and gets ERROR_ACCESS_DENIED.
    • It means that Copy­File sometimes does a checksum and sometimes doesn't, which removes the value of any checksum work since you're never sure if it actually happened.
    • It doubles the network traffic for a file copy operation, leading to weird workarounds from network administrators like "Deny read access on files in order to speed up file copies."

    Even if you get past those issues, you have an even bigger problem: How do you know that reading the file back will really tell you whether the file was physically copied successfully? If you just read the data back, it may end up being read out of the disk cache, in which case you're not actually verifying physical media. You're just comparing cached data to cached data.

    But if you open the file with caching disabled, this has the side effect of purging the cache for that file, which means that the system has thrown away a bunch of data that could have been useful. (For example, if another process starts reading the file at the same time.) And, of course, you're forcing access to the physical media, which is slowing down I/O for everybody else.

    But wait, there's also the problem of caching controllers. Even when you tell the hard drive, "Now read this data from the physical media," it may decide to return the data from an onboard cache instead. You would have to issue a "No really, flush the data and read it back" command to the controller to ensure that it's really reading from physical media.

    And even if you verify that, there's no guarantee that the moment you declare "The file was copied successfully!" the drive platter won't spontaneously develop a bad sector and corrupt the data you just declared victory over.

    This is one of those "How far do you really want to go?" type of questions. You can re-read and re-validate as much as you want at copy time, and you still won't know that the file data is valid when you finally get around to using it.

    Sometimes, you're better off just trusting the system to have done what it says it did.

    If you really want to do some sort of copy verification, you'd be better off saving the checksum somewhere and having the ultimate consumer of the data validate the checksum and raise an integrity error if it discovers corruption.

  • The Old New Thing

    The day I stole Joe Belfiore's mouse


    He's now the head demo-monkey/cheerleader for Windows Phone, but back in the old days, Joe Belfiore was the head demo-monkey/cheerleader for the Windows 95 user interface design. A team-wide meeting was held to show off the new interface that they had developed. Wow look, we have a Start menu (though it wasn't known by that name yet), a taskbar (though it wasn't known by that name yet), shortcuts, a Close button (in the upper right corner), property sheets, all that good stuff.

    At that time, I was still developing my thermonuclear skills and in particular was cultivating the skill of asking challenging questions during the Q&A that comes at the end of these sorts of meetings. I adopted the role of the mouse skeptic and asked, "I noticed that property sheets don't show up in the Alt+Tab list. How do I switch to a property sheet without a mouse? And more generally, how well will this new interface work for keyboard-based users?"

    The answer (which managed to remain true all the way through the Windows 95 project) was "To get back to a property sheet, go back to wherever you launched it from and launch it again. And we have not abandoned the existing rules for keyboard access. There will be keyboard equivalents for all mouse-based actions."

    To make sure he stuck to his word, I snuck into his office and stole his mouse.

    I assume he survived, though for all I know, he just went and ordered a new one.

    Inspiration for today's entry: Seven days using only keyboard shortcuts: No mouse, no trackpad, no problem?

  • The Old New Thing

    How do you deal with an input stream that may or may not contain Unicode data?


    Dewi Morgan reinterpreted a question from a Suggestion Box of times past as "How do you deal with an input stream that may or may not contain Unicode data?" A related question from Dave wondered how applications that use CP_ACP to store data could ensure that the data is interpreted in the same code page by the recipient. "If I send a .txt file to a person in China, do they just go through code pages until it seems to display correctly?"

    These questions are additional manifestations of Keep your eye on the code page.

    When you store data, you need to have some sort of agreement (either explicit or implicit) with the code that reads the data as to how the data should be interpreted. Are they four-byte sign-magnitude integers stored in big-endian format? Are they two-byte ones-complement signed integers stored in little-endian format? Or maybe they are IEEE floating-point data stored in 80-bit format. If there is no agreement between the two parties, then confusion will ensue.

    That your data consists of text does not exempt you from this requirement. Is the text encoded in UTF-16LE? Or maybe it's UTF-8. Or perhaps it's in some other 8-bit character set. If the two sides don't agree, then there will be confusion.

    In the case of files encoded in CP_ACP, you have a problem if the source and destination have different values for CP_ACP. That text file you generate on a US-English system (where CP_ACP is 1252) may not make sense when decoded on a Chinese-Simplified system (where CP_ACP is 936). It so happens that all Windows 8-bit code pages agree on code points 0 through 127, so if you restrict yourself to that set, you are safe. The Windows shell team was not so careful, and they slipped some characters into a header file which are illegal when decoded in code page 932 (the CP_ACP used in Japan). The systems in Japan do not cycle through all the code pages looking for one that decodes without errors; they just use their local value of CP_ACP, and if the file makes no sense, then I guess it makes no sense.

    If you are in the unfortunate situation of having to consume data where the encoding is unspecified, you will find yourself forced to guess. And if you guess wrong, the result can be embarrassing.

    Bonus chatter: I remember one case where a customer asked, "We need to convert a string of chars into a string of wchars. What code page should we pass to the Multi­Byte­To­Wide­Char function?"

    I replied, "What code page is your char string in?"

    There was no response. I guess they realized that once they answered that question, they had their answer.

  • The Old New Thing

    Raymond learns about some of the things people do to get banned on Xbox LIVE


    I still enjoy dropping in on Why Was I Banned? every so often, but not being a l33t Xbox haxxor, I don't understand a lot of the terminology. Fortunately, some of my colleagues were kind enough to explain them to me. (And now I'm explaining them to you so that you don't have to look as stupid asking them.)

    A modded lobby is a pre-game lobby (a server you connect to in order to find other people to play with or against) that has been modified (modded) with carefully-crafted parameters so that they grant people who visit them various advantages. For example, the reward for winning the game could be some absurd number of experience points. Sometimes the reward is granted merely for visiting the lobby; you don't need to actually play the game.

    A glitch lobby is a modded lobby that takes advantage of a bug (glitch) in the software. An infection lobby is a modded lobby that modifies (infects) your character so that the modification persists even after you leave the modded lobby and return to regular play.

    I mused that it would be interesting (if possibly ultimately a bad idea) to create a separate universe for all the modded accounts. You aren't banned from Xbox entirely, but your account has been moved permanently to the mod universe. You're allowed to play games only against other modded accounts. Soon, you will realize that other people are much better at modding than you, and the result is that the gameplay is totally unfair and not fun at all.

    And if you complain that the mod universe is totally unfair and no fun at all, then everybody laughs at you and you earn the IRONY badge.

    (At least until somebody comes up with a mod that removes the IRONY badge.)

  • The Old New Thing

    IShellFolder::BindToObject is a high-traffic method; don't do any heavy lifting


    A customer observed that once the user visited their shell extension, Explorer ran really slowly. (Actually, it took a while just to get to this point, but I'll skip the early investigations because they aren't relevant to the story.) Some investigation showed that Explorer's tree view was calling into the shell extension, which was in turn hanging the shell for several seconds at a time.

    Explorer was calling into the shell extension because the node was in the folder tree view, and Explorer was doing a little bookkeeping to synchronize the folder state with the view. The node referred to a server that was no longer available, so when Explorer asked the shell extension, "Hey, do you have any translucent froodads for me?" the shell extension went off and tried to contact the server (30 second timeout) before returning with the answer, "Um, sorry, I'm not sure what you're talking about."

    The problem was in the shell extension's IShell­Folder::Bind­To­Object method. The Bind­To­Object method is how you navigate from a parent to a child object, but this is supposed to be a lightweight navigation. Don't try to validate that the child still exists. Just bind to the child as if it existed. Only when the client tries to do something interesting should you go check whether the object actually exists.

    You can see this in the shell, for example. Suppose you generate a pidl to a network server. Meanwhile, the network server goes down. If you try to bind to that pidl, the bind will succeed (and quickly). Only if you then ask a question like Enum­Objects will you find out, "Oh, wait, this server doesn't actually exist."

    (Validating existence on bind doesn't really buy you much anyway, because the server might go down after the bind succeeds but before the Enum­Objects call, so clients have to be prepared anyway for the possibility of a successful bind but a failed enumeration.)

    In the shell, binding is a common operation since it's a prerequisite for talking about objects. As long as the pidl is valid, you should succeed the bind. Try not to hit the disk and definitely don't hit the network. There's plenty of time to do that later. Because the bind may not even have been done with the intention of communicating with the target; the client may have bound to the pidl just to be able to talk about the target. (As in this case, where the shell wasn't interested in the target per se; it just wanted to know if the target had a translucent froodad.)

  • The Old New Thing

    WM_CTLCOLOR vs GetFileVersionInfoSize: Just because somebody else screwed up doesn't mean you're allowed to screw up too


    In a discussion of the now-vestigial lpdwHandle parameter to the Get­File­Version­Info­Size function, Neil asks, "Weren't there sufficient API differences (e.g. WM_CTLCOLOR) between Win16 and Win32 to justify changing the definitions to eliminate the superfluous handle?"

    The goal of Win32 was to provide as much backward compatibility with existing 16-bit source code as can be practically achieved. Not all of the changes were successful in achieving this goal, but just because one person fails to meet that goal doesn't mean that everybody else should abandon the goal, too.

    The Win32 porting tool PORTTOOL.EXE scanned for things which had changed and inserted comments saying things like

    • "No Win32 API equivalent" -- these were for the 25 functions which were very tightly coupled to the 16-bit environment, like selector management functions.
    • "Replaced by OtherFunction" -- these were used for the 38 functions which no longer existed in Win32, but for which a corresponding function did exist; the parameters were different, though, so a simple search-and-replace was not sufficient.
    • "Replaced by XYZ system" -- these were for functions that used an interface that was completely redesigned: the 16 old sound functions that buzzed your tinny PC speaker being replaced by the new multimedia system, and the 8 profiling functions.
    • "This function is now obsolete" -- these were for the 16 functions that no longer had any effect, like Global­LRU­Newest and Limit­EMS­Pages.
    • "wParam/lParam repacking" -- these were for the 21 messages that packed their parameters differently.
    • Special remarks for eight functions whose parameters changed meaning and therefore required special attention.
    • A special comment just for window procedures.

    If you add it up, you'll see that this makes for a total of 117 breaking changes. And a lot of these changes were in rarely-used parts of Windows like the selector-management stuff, the PC speaker stuff, the profiling stuff, and the serial port functions. The number of breaking changes that affected typical developers was more like a few dozen.

    Not bad for a total rewrite of an operating system.

    If somebody said, "Hey, you should port to this new operating system. Here's a list of 117 things you need to change," you're far more likely to respond, "Okay, I guess I can do that," than if somebody said, "Here's a list of 3,000 things you need to change." Especially if some of the changes were not absolutely necessary, but were added merely to annoy you. (I would argue that the handling of many GDI functions like Move­To fell into the added merely to annoy you category, but at least a simple macro smooths over most of the problems.)

    One of the messages that required special treatment was WM_COMMAND. In 16-bit Windows, the parameters were as follows:

    WPARAM  int id
    LPARAM  HWND hwndCtl (low word)
            int nCode (high word)

    Observe that this message violated the rule that handle-sized things go in the WPARAM. As a result, this parameter packing method could not be maintained in Win32. If it had been packed as

    WPARAM  HWND hwndCtl
    LPARAM  int id (low word)
            int nCode (high word)

    then the message would have ported cleanly to Win32. But Win32 handles are 32-bit values, so there's no room for both an HWND and an integer in a 32-bit LPARAM; as a result, the message had to be repacked in Win32.

    The WM_CTL­COLOR message was an extra special case of a message that required changes, because it was the only one that changed in a way that required more than just mechanical twiddling of the way the parameters were packaged. Instead, it got split out into several messages, one for each type of control.

    In 16-bit Windows, the parameters to the WM_CTL­COLOR message were as follows:

    WPARAM  HDC hdc
    LPARAM  HWND hwndCtl (low word)
            int type (high word)

    The problem with this message was that it had two handle-sized values. One of them went into the WPARAM, like all good handle-sized parameters, but the second one was forced to share a bunk bed with the type code in the LPARAM. This arrangement didn't survive in Win32 because handles expanded to 32-bit values, but unlike WM_COMMAND, there was nowhere to put the now-ousted type, since both the WPARAM and LPARAM were full with the two handles. Solution: Encode the type code in the message number. The WM_CTL­COLOR message became a collection of messages, all related by the formula

    WM_CTLCOLORtype = WM_CTLCOLORMSGBOX + CTLCOLOR_type

    The WM_CTL­COLOR message was the bad boy in the compatibility contest, falling pretty badly on its face. (How many metaphors can I mix in one article?)

    But just because there's somebody who screwed up doesn't mean that you're allowed to screw up too. If there was a parameter that didn't do anything any more, just declare it a reserved parameter. That way, you didn't have to go onto the "wall of shame" of functions that didn't port cleanly. The Get­File­Version­Info­Size function kept its vestigial lpdwHandle parameter, Win­Main kept its vestigial hPrev­Instance parameter, and Co­Initialize kept its vestigial lpReserved parameter.

    This also explains why significant effort was made in the 32-bit to 64-bit transition not to make breaking changes just because you can. As much as practical, porting issues were designed in such a way that they could be detected at compile time. Introducing gratuitous changes in behavior makes the porting process harder than it needs to be.

  • The Old New Thing

    Rogue feature: Docking a folder at the edge of the screen


    Starting in Windows 2000 and continuing through Windows Vista, you could drag a folder out of Explorer and slam it into the edge of the screen. When you let go, it docked itself to that edge of the screen like a toolbar. A customer noticed that this stopped working in Windows 7 and asked, "Was this feature dropped in Windows 7, and is there a way to turn it back on?"

    Yes, the feature was dropped in Windows 7, and there is no way to turn it back on because the code to implement it was deleted from the product. (Well, okay, you could "turn it back on" by working with your support representative to file a Design Change Request with the Windows Sustained Engineering team and asking them to restore the code. But they'll probably cackle with glee as they click REQUEST DENIED. They will also probably add a buzzing sound just for extra oomph.)

    The introduction of this feature took place further back in history than I have permission to access the Windows source code history database, so I can't explain how it was introduced, but I can guess, and then the person who removed the feature confirmed that my guess was correct.

    First of all, very few people were actually using the feature. And of the people who activated it, most of them did so by mistake and couldn't figure out how to undo it. (Sound familiar?) The feature was creating far more trouble than benefit, and by that calculation alone, it was a strong candidate for removal. Furthermore, the design team was interested in a new way to use the edges of the screen. Nobody could figure out how the docking feature actually got added. We strongly suspect that it was another rogue feature added by a specific developer who had a history of slipping in rogue features.

  • The Old New Thing

    One for the "They have to say that because of me": Ground rules at the Point Defiance Zoo


    The ground rules for the Point Defiance Zoo and Aquarium in Tacoma include the usual things you might expect. "No pets." "Do not feed the animals." "No smoking." But then there's a rule that clearly is one about which somebody somewhere in the world can say "They have to say that because of me":

    • Remain clothed at all times.
  • The Old New Thing

    Why can't I use Magnifier in Full Screen or Lens mode?


    A customer liaison asked why their customer's Windows 7 machines could run Magnifier only in Docked mode. Full Screen and Lens mode were disabled. The customer liaison was unable to reproduce the problem on a physical machine, but was able to reproduce it in a virtual machine.

    Full Screen and Lens mode require that desktop composition be enabled. Windows will enable desktop composition by default if it thinks your video card is capable of handling it. (Finding the minimum hardware requirements for desktop composition is left as an exercise.)

    This was visible in the screen shots provided by the customer liaison. In the screen shot where Full Screen and Lens modes were enabled, the Aero theme was being used, whereas in the screen shot where they were disabled, the theme was Windows 7 Basic. The Windows 7 Basic theme is used when desktop composition is disabled.

    A quick way to check whether desktop composition is enabled is to hit Alt+Tab and see whether windows get the Aero Peek effect when you select them. Aero Peek is a feature that is provided by the desktop compositor.
