March, 2011

  • The Old New Thing

    Microspeak: Cadence


    Originally, the term cadence meant the rate at which a regular event recurs, possibly with variations, but with an overall cycle that repeats. For example, the cadence for team meetings might be "Every Monday, with a longer meeting on the last Monday of each month."

    Project X is on a six-month release cadence, whereas Project Y takes two to three years between releases.

    Q: What was the cadence of email requests you sent out to drive contributions?

    A: We started with an announcement in September, with two follow-up messages in the next month.

    In what I suspect is a case of I want to use this cool word other people are using, even though I don't know exactly what it means, the term has been applied more broadly to mean schedule or timeline, even for nonrecurring events. Sample usage: "What is our cadence for making this available outside the United States?"

  • The Old New Thing

    What's the difference between FreeResource and, say, DestroyAcceleratorTable?


    MaxMax asks a number of resource-related questions, starting with "How do you Unlock a Lock­Resource?" and culminating in "What are the differences between Free­Resource and Destroy­Accelerator­Table, Destroy­Object, etc.? It would be much easier to use a single function instead of a collection of five."

    It helps if you understand the history of resources, because the functions were designed back when resources were managed very differently from how they are today. The usage pattern is still the same:

    • Find­Resource
    • Load­Resource
    • Lock­Resource
    • use the resource
    • Unlock­Resource
    • Free­Resource

    You unlock a resource by calling, um, Unlock­Resource.
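    In Win32, that pattern looks roughly like this (a minimal sketch; the resource name and type are hypothetical placeholders):

```c
#include <windows.h>

// Sketch of the classic usage pattern. The resource name and type
// below are hypothetical; substitute whatever you are looking up.
void UseRawResourceData(HINSTANCE hinst)
{
    HRSRC hrsrc = FindResource(hinst, TEXT("MYDATA"), RT_RCDATA);
    if (hrsrc) {
        HGLOBAL hglob = LoadResource(hinst, hrsrc); // Win32: no copy made
        DWORD cb = SizeofResource(hinst, hrsrc);
        void *data = LockResource(hglob); // pointer into the mapped image
        if (data) {
            // ... use cb bytes of raw resource data ...
        }
        UnlockResource(hglob); // no-ops in Win32, retained for
        FreeResource(hglob);   // compatibility with 16-bit Windows
    }
}
```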

    Although the usage pattern is the same, the mechanism under the covers is completely different. In 16-bit Windows, loading a resource entailed allocating a chunk of memory, then filling that memory block from the disk image. In Win32, resources are mapped into the address space as part of the image; there is no memory allocation and no explicit loading.

    The next thing to understand is that resources are just blobs of binary data. They are not live objects. It's not like there's an HBITMAP sitting in there just waiting to be found.

    Think of resource data as a set of blueprints. If you call Find­Resource + Load­Resource + Lock­Resource, you wind up with the blueprints for a radio, but that's not the same as actually having a radio. To do that, you need to hand the radio blueprints to somebody who knows how to read electronic schematic diagrams and who knows how to solder wires together in order to convert the potential radio into an actual radio.

    If you've been following the sporadic series on the format of resources, you'll know that these schematic diagrams can often be quite complicated. The Load­Bitmap function first does the Find­Resource + Load­Resource + Lock­Resource dance to locate the bitmap blueprint, but then it needs to actually make the bitmap, which is done by parsing the raw resource data and trying to make sense of it, calling functions like Create­Bitmap and Set­DI­Bits to convert the blueprint into an actual bitmap.

    That's why, if you use these helper functions like Load­Accelerators to convert the blueprint into an object, you need to use the corresponding cleanup function like Destroy­Accelerator­Table when you want to destroy the object. You have to use the correct cleanup function, of course. You can't destroy a bitmap with Destroy­Accelerator­Table any more than you can put a radio in the clothing drop bin.

    Just as the radio guy hands back the original blueprints along with a brand-new radio, you return the blueprints to the library; but if you want to destroy the radio, you have to take it to the electronics recycling facility.
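    In code, the pairings look like this (a sketch; the resource names are hypothetical):

```c
#include <windows.h>

// Each helper that converts a resource blueprint into a live object
// has its own matching cleanup function. The resource names here are
// hypothetical.
void DemonstratePairings(HINSTANCE hinst)
{
    HACCEL hAccel = LoadAccelerators(hinst, TEXT("MainAccel"));
    HBITMAP hBitmap = LoadBitmap(hinst, TEXT("MainBitmap"));
    HMENU hMenu = LoadMenu(hinst, TEXT("MainMenu"));

    // ... use the objects ...

    DestroyAcceleratorTable(hAccel); // not DeleteObject
    DeleteObject(hBitmap);           // bitmaps are GDI objects
    DestroyMenu(hMenu);              // menus have their own destructor
}
```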

  • The Old New Thing

    News flash: Companies change their product to appeal to their customers


    There was some apparent uproar because there was an industry which "changed the flavoring of their product depending on which market segment they were trying to appeal to."

    Well duh, don't all industries do this?

    The reason why this even remotely qualified as news didn't appear until the last five words of the article!

  • The Old New Thing

    The window manager needs a message pump in order to call you back unexpectedly


    There are a bunch of different ways of asking the window manager to call you when something interesting happens. Some of them are in response to things that you explicitly asked for right now. The enumeration functions are classic examples of this. If you call EnumWindows and pass a callback function, then that callback is called directly from the enumerator.

    On the other hand, there is a much larger class of things that are in response either to things that happen on another thread, or in response to things that happen on your thread, but not as a direct result of an immediate request. For example, if you use the SendMessageCallback function, and the window manager needs to trigger your callback, the window manager needs a foot in the door of your thread in order to get control. It can't just interrupt code arbitrarily; that way lies madness. So we're looking for some way the window manager can regain control of the CPU at a time when the program is in a stable, re-entrant state.

    That foot in the door for the window manager is the message pump. That's the one place the window manager can be reasonably confident the program will call into periodically. This solves the first problem: How do I get control of the CPU?

    Furthermore, it's a known quantity for programs that when you call GetMessage or PeekMessage, incoming sent messages are dispatched, so your program had better be in a stable, re-entrant state when you call those functions. That solves the second problem: How do I get control of the CPU when the program is in a stable state?

    Take-away: When you register a callback with the window manager, you need to pump messages. Otherwise, the window manager has no way of calling you back.
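    Concretely, the foot in the door is the standard message loop; even a thread with no visible windows needs one if it has registered for callbacks:

```c
#include <windows.h>

// The canonical message pump. Inside GetMessage (and PeekMessage),
// the window manager dispatches incoming sent messages and pending
// callbacks, so this loop is its regular chance to call you back.
void RunMessagePump(void)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}
```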

    Related: The alertable wait is the non-GUI analog to pumping messages.
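    For example, a worker thread can make itself available for APC-based callbacks with an alertable wait (a sketch):

```c
#include <windows.h>

// Non-GUI analog of pumping messages: waiting alertably gives queued
// APCs (the callbacks) a chance to run on this thread.
void WaitAlertably(HANDLE hEvent)
{
    for (;;) {
        DWORD result = WaitForSingleObjectEx(hEvent, INFINITE, TRUE);
        if (result == WAIT_OBJECT_0) break; // the event was signaled
        // WAIT_IO_COMPLETION: one or more APCs ran; resume waiting
    }
}
```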

  • The Old New Thing

    If you're waiting for I/O to complete, it helps if you actually have an I/O to begin with


    We saw earlier the importance of waiting for I/O to complete before freeing the data structures associated with that I/O. On the other hand, before you start waiting, you have to make sure that you have something to wait for.

    A customer reported a hang in Get­Overlapped­Result waiting for an I/O to cancel, and the I/O team was brought in to investigate. They looked at the I/O stack and found that the I/O the customer was waiting for was no longer active. The I/O people considered a few possibilities.

    • The I/O was active at one point, but when it completed, a driver bug prevented the completion event from being signaled.
    • The I/O was active at one point, and the I/O completed, but the program inadvertently called Reset­Event on the handle, negating the Set­Event performed by the I/O subsystem.
    • The I/O was never active in the first place.

    These possibilities are in increasing order of likelihood (and, perhaps not coincidentally, decreasing order of relevance to the I/O team).

    A closer investigation of the customer's code showed a code path in which the Read­File call was bypassed. When the bypass code path rejoined the mainline code path, the code continued its work for a while, and then if it decided that it was tired of waiting for the read to complete, it performed a Cancel­Io followed by a Get­Overlapped­Result to wait for the cancellation to complete.

    If you never issue the I/O, then a wait for the I/O to complete will wait forever, since you're waiting for something that will never happen.

    Okay, so maybe this was a dope-slap type of bug. But here's something perhaps a little less self-evident:

    // there is a flaw in this code - see discussion
    // assume operating on a FILE_FLAG_OVERLAPPED file
    if (ReadFile(h, ..., &overlapped)) {
     // I/O completed synchronously, as we learned earlier
    } else {
     // I/O under way
     ... do stuff ...
     // okay, let's wait for that I/O
     GetOverlappedResult(h, &overlapped, &dwRead, TRUE);
    }

    The Get­Overlapped­Result call can hang here because the comment "I/O under way" is overly optimistic: the I/O may never even have gotten started. If it never started, then it will never complete either. You cannot assume that a FALSE return from Read­File implies that the I/O is under way. You also have to check that Get­Last­Error() returns ERROR_IO_PENDING. Otherwise, the I/O failed to start, and you shouldn't wait for it.

    // assume operating on a FILE_FLAG_OVERLAPPED file
    if (ReadFile(h, ..., &overlapped)) {
     // I/O completed synchronously, as we learned earlier
    } else if (GetLastError() == ERROR_IO_PENDING) {
     // I/O under way
     ... do stuff ...
     // okay, let's wait for that I/O
     GetOverlappedResult(h, &overlapped, &dwRead, TRUE);
    } else {
     // I/O failed - don't wait because there's nothing to wait for!
    }

  • The Old New Thing

    Charlie Sheen v Muammar Gaddafi: Whose line is it anyway?


    I got seven out of ten right.

  • The Old New Thing

    Although the x64 calling convention reserves spill space for parameters, you don't have to use them as such


    Although the x64 calling convention reserves space on the stack as spill locations for the first four parameters (passed in registers), there is no requirement that the spill locations actually be used for spilling. They're just 32 bytes of memory available for scratch use by the function being called.

    We have a test program that works okay when optimizations are disabled, but when compiled with full optimizations, everything appears to be wrong right off the bat. It doesn't get the correct values for argc and argv:

    int __cdecl
    wmain( int argc, WCHAR** argv ) { ... }

    With optimizations disabled, the code is generated correctly:

            mov         [rsp+10h],rdx  // argv
            mov         [rsp+8],ecx    // argc
            sub         rsp,158h       // local variables
            mov         [rsp+130h],0FFFFFFFFFFFFFFFEh

    But when we compile with optimizations, everything is completely messed up:

            mov         rax,rsp 
            push        rsi  
            push        rdi  
            push        r13  
            sub         rsp,0E0h 
            mov         qword ptr [rsp+78h],0FFFFFFFFFFFFFFFEh 
            mov         [rax+8],rbx    // ??? should be ecx (argc)
        mov         [rax+10h],rbp  // ??? should be rdx (argv)

    When compiler optimizations are disabled, the Visual C++ x64 compiler will spill all register parameters into their corresponding slots. This has the nice side effect that debugging is a little easier, but really it's just because you disabled optimizations, so the compiler generates simple, straightforward code, making no attempt to be clever.

    When optimizations are enabled, then the compiler becomes more aggressive about removing redundant operations and using memory for multiple purposes when variable lifetimes don't overlap. If it finds that it doesn't need to save argc into memory (maybe it puts it into a register), then the spill slot for argc can be used for something else; in this case, it's being used to preserve the value of rbx.

    You see the same thing even in x86 code, where the memory used to pass parameters can be re-used for other purposes once the value of the parameter is no longer needed in memory. (The compiler might load the value into a register and use the value from the register for the remainder of the function, at which point the memory used to hold the parameter becomes unused and can be redeployed for some other purpose.)

    Whatever problem you're having with your test program, there is nothing obviously wrong with the code generation provided in the purported defect report. The problem lies elsewhere. (And it's probably somewhere in your program. Don't immediately assume that the reason for your problem is a compiler bug.)

    Bonus chatter: In a (sadly rare) follow-up, the customer confessed that the problem was indeed in their program. They put a function call inside an assert, and in the nondebug build, they disabled assertions (by passing /DNDEBUG to the compiler), which means that in the nondebug build, the function was never called.
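    Distilled to its essence, the bug looks like this (a sketch, not the customer's actual code):

```c
#include <assert.h>

/* Distilled sketch of the customer's bug, not their actual code. */
static int g_initialized = 0;

static int Initialize(void)
{
    g_initialized = 1;  /* the important side effect */
    return 1;           /* nonzero = success */
}

static int RunStartup(void)
{
    /* With NDEBUG defined (cl /DNDEBUG), assert() expands to nothing,
       so Initialize() is never called and g_initialized stays 0. */
    assert(Initialize());
    return g_initialized;
}
```

    In a debug build, RunStartup() returns 1; compile the same code with /DNDEBUG and it returns 0, because the call to Initialize() vanishes along with the assertion.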

    Extra reading: Challenges of debugging optimized x64 code. That .frame /r command is a real time-saver.

  • The Old New Thing

    No, not that M, the other M, the one called Max


    Code names are rampant at Microsoft. One of the purposes of a code name is to impress upon the people who work with the project that the name is only temporary, and that the final name will come from the marketing folks (who sometimes pull through with a catchy name like Zune, and who sometimes drop the ball with a dud like Bob and who sometimes cough up monstrosities like Microsoft WinFX Software Development Kit for Microsoft® Pre-Release Windows Operating System Code-Named "Longhorn", Beta 1 Web Setup).

    What I find amusing are the projects that change their code names. I mean, the code name is already a placeholder; why replace a placeholder with another placeholder?

    One such example is the experimental project released under the code name Max. The project founders originally named it M. Just the letter M. Not to be confused with this thing code named M or this other thing code named M.

    In response to a complaint from upper management about single-letter code names, the name was changed to Milkshake, and the team members even made a cute little mascot figure, with a straw coming out the top of his head like a milkshake.

    I'm not sure why the name changed a second time. Perhaps those upper level managers didn't think Milkshake was a dignified-enough name. For whatever reason, the name changed yet again, this time to Max. (Wikipedia claims that the project was named after the pet dog of one of the team members; I have been unable to confirm this. Because I haven't bothered trying.)

    There's no real punch line here, sorry. Just one example of the naming history of a project that went by many names.

    Bonus chatter: Apparently the upper management folks who complained about the single-letter code name M were asleep when another product was code-named Q (now known as Windows Home Server).
