History

  • The Old New Thing

    Some trivia about the //build/ 2011 conference

    • 12 Comments

    Registration for //build/ 2013 opens tomorrow. I have no idea what's in store this year, but I figure I'd whet your appetite by sharing some additional useless information about //build/ 2011.

    The internal code name for the prototype tablets handed out at //build/ 2011 was Nike. I think we did a good job of keeping the code name from public view, but one person messed up and accidentally let it slip to Mary-Jo Foley when they said that the contact email for people having tax problems related to the device is nikedistⓐmicrosoft.com.

The advance crew spent an entire week preparing those devices. One of the first steps was unloading the devices from the pallets. This was done in a disassembly line: The boxes were opened, the devices were fished out, and then removed from their protective sleeves. At the end of this phase, you had one neat stack of boxes and one neat stack of devices.

    The advance crew also configured the hall so they would be ready to start once Redmond sent down the final bits of the Developer Preview build. The hall was divided into sections, and each section consisted of eight long tables. Four of the tables were arranged in a square, and the other four tables were placed outside the square, one parallel to each side, forming four lanes.


    Along the inner tables, there were docking stations, each with power, wired access to a private network, and a USB thumb drive. Along the outer tables, there were desk organizers like this one, ready to hold several devices in a vertical position, and next to the organizer was a power strip with power cables at the ready.

    In this phase of the preparation, the person working the station would take a device, pop it into a docking station, and power it on with the magic sequence to boot from USB. The USB stick copied itself to a RAM drive, then ran scripts to reformat the hard drive and copy all the setup files from the private network onto the hard drive, then it installed the build onto the machine, installed Visual Studio, installed the sample applications, flashed the firmware, and otherwise prepared the machine for unboxing. (Not necessarily in that order; I didn't write the scripts, so I don't know what they did exactly. But I figure these were the basic steps.) Once the setup files were copied from the private network, the rest of the installation could proceed autonomously. It didn't need any further access to the USB stick or the network. Everything it needed was on the RAM drive or the hard drive.

The scripts changed the screen color based on which step of the process the device was in, so that the person working the station could glance over all the devices to see which ones needed attention. Once all the files were copied from the network, each device was unplugged from the docking station and moved to the vertical desk organizer. There, it was hooked up to a power cable and left to finish the installation. Moving the device to the second table freed up the docking station to accept another device.

    Assuming everything went well, the screen turned green to indicate that installation was complete, and the device was unplugged, powered down, and placed in the stack of devices that were ready for quality control.

    The devices that passed quality control then needed to be boxed up so they could be handed out to the conference attendees. Another assembly line formed: The devices were placed back in the protective sleeves, nestled snugly in their boxes, and the boxes closed back up.

Now, I'm describing this all as if everything ran perfectly smoothly. Of course, problems arose, some minor and some serious, and the process got tweaked as the days progressed in order to make things more efficient or to address a problem that was discovered.

    For example, the devices were labeled preview devices, but shortly before the conference was set to begin, the manufacturer registered their objection to the term, since preview implies that the device will actually turn into a retail product. They insisted that the devices be called prototype devices. This meant that mere days before the conference opened, a rush print job of 5000 stickers had to be shipped down to the convention center in order to cover the word preview with the word prototype. A new step was added to the assembly line: place sticker over offending word.

    Another example of problem-solving on the fly: The SIM chip for the wireless data plan was preinstalled in the device. The chip came on a punch-out card, and the manufacturer decided to leave the card shell in the box. Okay, I guess, except that the card shell had the SIM card's account number printed on it. Since the reassembly process didn't match up the devices with the original boxes, you had all these devices with unmatched card shells. In theory, somebody might call the service provider and give the account number on the shell rather than the number on the SIM card. To fix this, a new step was added to the assembly line: Remove the card shells. All the previously-assembled boxes had to be unpacked so the shells could be removed. (At some point, somebody discovered that you could extract the shells without removing the foam padding if you held the box at just the right angle and shook it, so that saved a few seconds.)

Now about the devices themselves: They were a very limited run of custom hardware, and they were not cheap. I think the manufacturing cost was in the high $2000s per unit, and that doesn't count all the sunk costs. I found it amusing when people wrote, "What do you mean a free tablet? Obviously they baked that into the cost of the conference registration, so you paid for it anyway." Conference registration was $2,095 (or $1,595 if you registered early), which came nowhere near covering the cost of the device.

    Some people whined that Microsoft should have made these devices available to the general public for purchase. First of all, these are developer prototypes, not consumer-quality devices. They are suitable for developing Windows 8 software but aren't ready for prime time. (For one thing, they run hot. More on that later.) Second of all, there aren't any to sell. We gave them all away! It's not like there's a factory sitting there waiting for orders. It was a one-shot production run. When they ran out, they ran out.¹

Third, these devices, by virtue of being prototypes, had a high infant mortality rate. I don't know the exact numbers, but I'm guessing that maybe a quarter of them ended up not being viable. One of the things that the advance crew had to do was burn in the devices to try to catch the dead devices. I remember the team being very worried that the hardware helpdesk at the conference would be overwhelmed by machines that slipped through the on-site testing. Luckily, that didn't happen. (Perhaps they were too successful, because everybody ended up assuming that pumping out these puppies was a piece of cake!)

Doing a little back-of-the-envelope calculation, let's say that the machines cost around $2,750 to produce and that a quarter of them failed burn-in. Add on top of that a 25% buffer for administrative overhead, and you're looking at a cost per working device of over $4,500. I doubt there would be many people interested in buying one at that price.

    Especially since you could buy something very similar for around $1100 to $1400. It won't have the hardware customizations, but it'll be close.

The hardware glitches that occurred during the keynote never appeared during rehearsals in Redmond. But when rehearsing in Anaheim, the hardware started flaking out like crazy and eventually self-destructing. (And like I said, those devices weren't cheap!) One of my colleagues got a call from Los Angeles: "When you come down here, bring as many extra Nikes as you can. We're burning through them like mad!" My colleague ended up pissing off everybody in the airport security line behind her when she got to the X-ray machine and unloaded nine devices onto the conveyer belt. "Great, I just put tens of thousands of dollars worth of top-secret hardware on an airport X-ray machine. I hope nothing happens to them."

Why did the devices start failing during rehearsals in Anaheim, when they ran just fine in Redmond? Because in Anaheim, the devices were being run at full brightness all the time (so they would show up better on camera), and they were driving giant video displays, and they were sitting under hot stage lights for hours on end. On top of that, I'm told that the HDMI protocol is bi-directional, so it's possible that the giant video displays at the convention center were feeding data back into the devices in a way that they couldn't handle. Put all that together, and you can see why the devices would start overheating.

What made it worse was that in order to cram all the extra doodads and sensors into the device, the intestines had to be rearranged, and the touch processor chip ended up being placed directly over the HDMI processor chip. That meant that when the HDMI chip overheated, it caused the touch processor to overheat, too. If you watched the keynote carefully, you'd see that shortly before the machine on stage blew up, the touch sensor flipped out and generated phantom touches all over the screen. That was the clue that the machine was about to die from overheating and that it would be in the presenter's best interest to switch to another machine quickly. (The problem, of course, is that the presenter is looking out into the audience giving the talk, not staring at the device's screen the whole time. As a result, this helpful early warning signal typically goes unnoticed by the very person who can do the most about it.)

    The day before the conference officially began, Jensen Harris did a preview presentation to the media. One of the glitches that hit during his presentation was that the system started hallucinating an invisible hand that kept swiping the Word Hunt sample game back onto the screen. Jensen quipped, "This is our new auto-Word Hunt feature. We want to make sure you always have Word Hunt when you need it. We've moved beyond touch. Now you don't even need to touch your PC to get access to Word Hunt."

Jensen's phenomenal calm in the face of adversity also manifested itself during his keynote presentation. You in the audience never noticed it, but at one point, one of the demo applications hit a bug and hung. Jensen spotted the problem before it became obvious and smoothly transitioned to another device and continued. What's more, while he was talking, he went back to the first device and surreptitiously called up Task Manager, killed the hung application, and prepared the device for the next demo. All this without skipping a beat.

    We are all in awe of Jensen.

    When he stopped by the booth, Jensen said to me, "I don't know how you can stand it, Raymond. Now I can't walk down the hallway without a dozen people coming up to me and wanting to say something or shake my hand or get my autograph!" (One of the rare times we are both in the same room.)

    Welcome to nerd celebrity, Jensen. You just have to smile and be polite.

    Bonus chatter: What happened to the devices that failed quality control? A good number of them were rejected for cosmetic reasons (scuff marks, mostly). As a thank-you gift to the advance crew for all their hard work, everybody was given their choice of a scuffed-up device to take home. The remaining devices that were rejected for purely cosmetic reasons were taken back to Redmond and distributed to the product team to be used for internal testing purposes.

    ¹ My group had one of these scuffed-up devices that we used for internal testing. Somebody dropped it, and a huge spiderweb crack covered the left third of the screen, so you had to squint to see what was on the screen through the cracks. We couldn't order a replacement because there was nowhere to order replacements from. We just had to continue testing with a device that had a badly cracked screen.

  • The Old New Thing

    What's the story of the onestop.mid file in the Media directory?

    • 45 Comments

    If you look in your C:\Windows\Media folder, you'll find a MIDI file called onestop. What's the story behind this odd little MIDI file? Aaron Margosis considers this file a security risk because "if an attacker can cause that file to be played, it will cause lasting mental pain and anguish to everybody within earshot."

Despite Wikipedia's claims[citation needed], the file is not an Easter Egg. The file was added in Windows XP with the comment "Add cool MIDI files to replace bad old ones." So as bad as onestop is, the old ones must have been even worse!

    Okay, but why were they added?

    For product support.

    The product support team wants at least one MIDI file present on the system by default for troubleshooting purposes. That way, problems with MIDI playback can be diagnosed without making the customer go to a Web page and download a MIDI file. When asked why the song is so awful, the developer who added the file explained, "Believe it or not, OneStop is 'less bad' than the ones that it replaced. (Dance of the Sugar Plum Fairy, etc.)" Another reason for replacing the old MIDI file is that the new one exercises more instruments.

    The song was composed by David Yackley.

    On the other hand, we lost clock.avi.

  • The Old New Thing

    For the Nitpickers: Enhanced-mode Windows 3.0 didn't exactly run a copy of standard-mode Windows inside the virtual machine

    • 45 Comments

    Generally speaking, Enhanced-mode Windows 3.0 ran a copy of standard-mode Windows inside the virtual machine. This statement isn't exactly true, but it's true enough.

    Commenter Nitpicker objected, "Why are you threatening us with the Nitpicker's Corner for asking about this issue instead of explaining it once and linking it everywhere?"

Okay, first of all, as far as I can tell, you're the first person to ask about the issue. So you can't say "Everybody who asks about the issue is threatened with the Nitpicker's Corner" because up until you made your comment, nobody ever asked. Okay, well, technically you can say it, because every statement quantified over the empty set is true. But it was equally true, at the time you made your comment, that "Everybody who asks about the issue is awarded a new car." So it is not a meaningfully true statement.

    I haven't bothered explaining the issue because the issue has never been central to the main point of whatever article happens to bring it up. The statement is true enough for the purpose of discussion, and the various little corners in which the statement breaks down have no bearing on the original topic. Nitpickers would point out that you can't combine velocities by simple addition because of the laws of Special Relativity. Even when the situation under discussion takes place at non-relativistic speeds.

    As for the suggestion, "Explain it once and link it everywhere," you're assuming that I can even explain it once, that doing so is less work than just saying "not exactly true, but true enough," and that I would enjoy explaining it in the first place.

    If you don't like it, you can ask for your money back.

    Okay, I went back and dug through the old Windows 3.0 source code to answer this question. It took me about four hours to study it all, try to understand what the code was doing, and then distill the conclusions into this article. Writing up the results took another two hours. That's six hours I could've spent doing something enjoyable.

    The 16-bit Windows kernel was actually three kernels. One if you were using an 8086 processor, another if you were using an 80286 processor, and a third if you were using an 80386 processor. The 8086 kernel was a completely separate beast, but the 80286 and 80386 kernels shared a lot of code in common. The major difference between the 80286 and 80386 kernels was in how they managed memory, because the descriptor tables on the 80386 were a different format from the descriptor tables on the 80286. The 80386 memory manager could also take advantage of the new 32-bit registers.

But the differences between the 80286 and 80386 kernels were not based on whether you were running Standard or Enhanced mode. If you were running on an 80386 processor, then you got the 80386 kernel, regardless of whether you were using Standard or Enhanced mode Windows. And since Enhanced mode Windows required an 80386 processor, the behavioral changes between Standard and Enhanced mode were restricted to the 80386 kernel.

    The 80386 kernel was designed to run as a DPMI client. It asked the DPMI host to take it into protected mode, then used the DPMI interface to do things like allocate selectors and allocate memory. If you ran Windows in Standard mode, then the DPMI host was a custom-built DOS extender that was created just for Standard mode Windows. If you ran Windows in Enhanced mode, then the DPMI host was the 32-bit virtual machine manager. Abstracting to the DPMI interface allowed a single 80386 kernel to run in both Standard and Enhanced modes.

    And in fact if you ran Enhanced mode Windows with paging disabled, then the code running in the 80386 kernel was pretty much the same code that ran if you had run the 80386 kernel under Standard mode Windows.

    One obvious place where the behavior changed was in the code to manage MS-DOS applications, because Enhanced mode Windows could multi-task MS-DOS applications, and Standard mode Windows could not.

Another place where the behavior changed was in the code to allocate more selectors: The attempt to retry after extending the local descriptor table was skipped if you were running under the Standard mode DOS extender, because the Standard mode DOS extender didn't support extending the local descriptor table.

    And another difference is that the Windows idle loop in Enhanced mode would issue a special call to release its time slice to any multi-tasking MS-DOS applications. (If you were running in Standard mode, there were no multi-tasking MS-DOS applications, so there was nobody to release your time slice to.)

One more special thing the 80386 kernel did was register with the virtual machine manager so that it could display an appropriate message when you pressed Ctrl+Alt+Del. For example, you saw this message if you hit Ctrl+Alt+Del while there was a hung Windows application:

    Contoso Deluxe Music Composer


    This Windows application has stopped responding to the system.

    *  Press ESC to cancel and return to Windows.
    *  Press ENTER to close this application that is not responding.
       You will lose any unsaved information in this application.
    *  Press CTRL+ALT+DEL again to restart your computer. You will
       lose any unsaved information in all applications.

    But all these differences are minor in the grand scheme of things. The window manager behaved the same in Standard mode and Enhanced mode. GDI behaved the same in Standard mode and Enhanced mode. Printer drivers behaved the same in Standard mode and Enhanced mode. Only the low-level kernel bits had to change behavior between Standard mode and Enhanced mode, and as you can see, even those behavior changes were relatively minor.

    That's why I said it was "true enough" that what was running inside the virtual machine was a copy of Standard-mode Windows.

  • The Old New Thing

    Why was WHEEL_DELTA chosen to be 120 instead of a much more convenient value like 100 or even 10?

    • 35 Comments

    We saw some time ago that the nominal mouse wheel amount for one click (known as a "detent") is specified by the constant WHEEL_DELTA, which has the value 120.

    Why 120? Why not a much more convenient number like 100, or even 10?

    Because the value 120 made it easier to create higher-resolution mouse wheels.

    As noted in the documentation:

    The delta was set to 120 to allow Microsoft or other vendors to build finer-resolution wheels (a freely-rotating wheel with no notches) to send more messages per rotation, but with a smaller value in each message.

Suppose the original wheel mouse had nine clicks around its circumference. Click nine times, and you've made a full revolution. (I have no idea how many actual clicks there were, but the actual number doesn't matter.) Whatever the number, each click of the wheel on the original mouse resulted in 120 wheel units.

    Now, suppose you wanted to build a double-resolution wheel, say one with eighteen clicks around the circumference instead of just nine. If you reported 120 wheel units for each click, then your mouse would feel "slippery", because it scrolled twice as fast as the original mouse. The solution: Have each click of your double-resolution mouse report 60 wheel units instead of 120.

    That's why the number chosen was 120. The number 120 has a lot more useful factors than 100. The number 100 = 2² × 5² can be evenly divided by the small integers 2, 4, 5, and 10. On the other hand, the number 120 = 2³ × 3 × 5 can be evenly divided by 2, 3, 4, 5, 6, 8, and 10.

If you wanted to build a triple-resolution mouse, and the WHEEL_DELTA value were 100, then you would have difficulty reporting each click, because you couldn't just report 33 for each one. (After three clicks, you would have reported only 99 units, and applications which waited for a full WHEEL_DELTA would still be waiting.) Your mouse driver would have to report 33, 33, 34, 33, 33, 34, 33, 33, 34, and so on. And then it gets messy if the user changes scrolling direction.

On the other hand, with WHEEL_DELTA at 120, the triple-resolution mouse can simply report 40 units per click.

    Okay, so why 120 instead of just 12?

    As noted in the documentation, the value was chosen so that it would be possible to build a mouse with no clicks at all. The wheel simply spun smoothly, and you could stop it at any point. Such a wheel would report one wheel unit for every one-third of one degree of rotation. If the detent were only 12 units, then the wheel would report one unit for every 3 1/3 degrees of rotation, which wouldn't be as smooth.

    I don't know if anybody has developed such a mouse, but at least the possibility is still there. (There are free-spinning mouse wheels, but I don't know whether they are normal WHEEL_DELTA wheels just without the mechanical detents, or whether they really do report fine rotational information.)

    Bonus reading: The History of the Scroll Wheel, written by its inventor, Eric Michelman.

    Mouse wheel trivia: The code name for the mouse wheel project was Magellan. The code name still lingers in error messages that pop up from the original wheel mouse driver.

  • The Old New Thing

    A brief history of the GetEnvironmentStrings functions

    • 24 Comments

    The Get­Environment­Strings function has a long and troubled history.

    The first bit of confusion is that the day it was introduced in Windows NT 3.1, it was exported funny. The UNICODE version was exported under the name Get­Environment­StringsW, but the ANSI version was exported under the name Get­Environment­Strings without the usual A suffix.

    A mistake we have been living with for over two decades.

    This is why the winbase.h header file contains these confusing lines:

    WINBASEAPI
    LPCH
    WINAPI
    GetEnvironmentStrings(
        VOID
        );
    
    WINBASEAPI
    LPWCH
    WINAPI
    GetEnvironmentStringsW(
        VOID
        );
    
    #ifdef UNICODE
    #define GetEnvironmentStrings  GetEnvironmentStringsW
    #else
    #define GetEnvironmentStringsA  GetEnvironmentStrings
    #endif // !UNICODE
    

    It's trying to clean up a mess that was created long ago, and it only partly succeeds. This is why your IDE may get confused when you try to call the Get­Environment­Strings function and send you to the wrong definition. It's having trouble untangling the macros whose job is to try to untangle the original mistake.

    The kernel folks tried to clean this up as quickly as they could, by exporting new functions with the names Get­Environment­StringsW and Get­Environment­StringsA, like they should have been in the first place, but for compatibility purposes, they still have to export the weird unsuffixed Get­Environment­Strings function. And then to avoid all the "gotcha!"s from people looking for proof of nefarious intent, they kept the mistake in the public header files to make their actions visible to all.

    Though personally, I would have tidied things up differently:

    WINBASEAPI
    LPCH
    WINAPI
    GetEnvironmentStrings(
        VOID
        );
    
    WINBASEAPI
    LPCH
    WINAPI
    GetEnvironmentStringsA(
        VOID
        );
    
    WINBASEAPI
    LPWCH
    WINAPI
    GetEnvironmentStringsW(
        VOID
        );
    
    #ifdef UNICODE
    #define GetEnvironmentStrings  GetEnvironmentStringsW
    #else
    #define GetEnvironmentStrings  GetEnvironmentStringsA
    #endif // !UNICODE
    

    I would have left the declaration of the mistaken Get­Environment­Strings function in the header file, but redirected the symbolic name to the preferred suffixed version.

    But then again, maybe my version would have confused IDEs even more than the current mechanism does.

    The other unfortunate note in the history of the Get­Environment­Strings function is the odd way it handled the Unicode environment. Back in the old days, the Get­Environment­Strings function returned a raw pointer to the environment block. The result was that if some other code modified the environment, your pointer became invalid, and there was nothing you could do about it. As I noted, the function was subsequently changed so that both the ANSI and Unicode versions return snapshots of the environment strings, so that the environment strings you received wouldn't get spontaneously corrupted by another thread.

  • The Old New Thing

    Why do BackupRead and BackupWrite require synchronous file handles?

    • 24 Comments

The Backup­Read and Backup­Write functions require that the handle you provide be synchronous. (In other words, that it not be opened with FILE_FLAG_OVERLAPPED.)

    A customer submitted the following question:

We have been using asynchronous file handles with the Backup­Read function. Every so often, the call to Backup­Read will fail, but we discovered that as a workaround, we can just retry the operation, and it will succeed the second time. This solution has been working for years.

Lately, we've been seeing crashes when trying to back up files, and the stack traces in the crash dumps appear to be corrupted. The issue appears to happen only on certain networks, and the problem goes away if we switch to a synchronous handle.

    Do you have any insight into this issue? Why were the Backup­Read and Backup­Write functions designed to require synchronous handles?

    The Backup­Read and Backup­Write functions have historically issued I/O against the handles provided on the assumption that they are synchronous. As we saw a while ago, doing so against an asynchronous handle means that you're playing a risky game: If the I/O completes synchronously, then nobody gets hurt, but if the I/O goes asynchronous, then the temporary OVERLAPPED structure on the stack will be updated by the kernel when the I/O completes, which could very well be after the function that created it has already returned. The result: A stack smash. (Related: Looking at the world through kernel-colored glasses.)

    This oversight in the code (blindly assuming that the handle is a synchronous handle) was not detected until 10 years after the API was originally designed and implemented. During that time, backup applications managed to develop very tight dependencies on the undocumented behavior of the backup functions. The backup folks tried fixing the bug but found that it ended up introducing massive compatibility issues. On top of that, there was no real business case for extending the Backup­Read and Backup­Write functions to accept asynchronous handles.

    As a result, there was no practical reason for changing the function's behavior. Instead, the requirement that the handle be synchronous was added to the documentation, along with additional text explaining that if you pass an asynchronous handle, you will get "subtle errors that are very difficult to debug."

    In other words, the requirement that the handles be synchronous exists for backward compatibility.

  • The Old New Thing

    Why was Pinball removed from Windows Vista?

    • 115 Comments

    Windows XP was the last client version of Windows to include the Pinball game that had been part of Windows since Windows 95. There is apparently speculation that this was done for legal reasons.

    No, that's not why.

One of the things I did in Windows XP was port several million lines of code from 32-bit to 64-bit Windows so that we could ship Windows XP 64-bit Edition. But one of the programs that ran into trouble was Pinball. The 64-bit version of Pinball had a pretty nasty bug where the ball would simply pass through other objects like a ghost. In particular, when you started the game, the ball would be delivered to the launcher, and then it would slowly fall towards the bottom of the screen, through the plunger, and out the bottom of the table.

    Games tended to be really short.

    Two of us tried to debug the program to figure out what was going on, but given that this was code written several years earlier by an outside company, and that nobody at Microsoft ever understood how the code worked (much less still understood it), and that most of the code was completely uncommented, we simply couldn't figure out why the collision detector was not working. Heck, we couldn't even find the collision detector!

    We had several million lines of code still to port, so we couldn't afford to spend days studying the code trying to figure out what obscure floating point rounding error was causing collision detection to fail. We just made the executive decision right there to drop Pinball from the product.

    If it makes you feel better, I am saddened by this as much as you are. I really enjoyed playing that game. It was the location of the one Windows XP feature I am most proud of.

    Update: Hey everybody asking that the source code be released: The source code was licensed from another company. If you want the source code, you have to go ask them.

  • The Old New Thing

    The QuickCD PowerToy, a brief look back

    • 27 Comments

    One of the original Windows 95 PowerToys was a tool called QuickCD. Though that wasn't its original name.

    The original name of the QuickCD PowerToy was FlexiCD. You'd think that it was short for "Flexible CD Player", but you'd be wrong. FlexiCD was actually named after its author, whose name is Felix, but who uses the "Flexi" anagram as a whimsical nickname. We still called him Felix, but he would occasionally use the Flexi nickname to sign off an email message, or use it whenever he had to create a userid for a Web site (if Web sites which required user registration existed in 1994).

    You can still see remnants of FlexiCD in the documentation. The last sample INF file on this page was taken from the QuickCD installer.

  • The Old New Thing

    Have you found any TheDailyWTF-worthy code during the development of Windows 95?

    • 25 Comments

    Mott555 is interested in some sloppy/ugly code or strange workarounds or code comments during the development of Windows 95, like "anything TheDailyWTF-worthy."

I discovered that a particular program churned the hard drive a lot when you opened it. I decided to hook up the debugger to see what the problem was. What I discovered was code that went roughly like this, in pseudo-code:

    int TryToCallFunctionX(a, b, c)
    {
      for each file in (SystemDirectory,
                        WindowsDirectory,
                        ProgramFilesDirectory(RecursiveSearch),
                        KitchenSink,
                        Uncle.GetKitchenSink)
      {
        hInstance = LoadLibrary(file);
        if (hInstance == nullptr) continue;
        for each name in ("FunctionX",
                          "__imp__FunctionX",
                          "FunctionX@12",
                          "__imp__FunctionX@12")
        {
          fn = GetProcAddress(hInstance, name);
          if (fn != nullptr) {
            int result = fn(a, b, c);
            FreeLibrary(hInstance);
            return result;
          }
        }
        FreeLibrary(hInstance);
      }
      return 0;
    }
    

    The code enumerated every file in the system directory, the Windows directory, the Program Files directory, and possibly also the kitchen sink and their uncle's kitchen sink. It tried to load each one as a library and checked whether it had an export called FunctionX. For good measure, it also tried __imp__FunctionX, FunctionX@12, and __imp__FunctionX@12. If it found a match, it called the function.
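    The decorated forms in that list come from 32-bit calling-convention name decoration: the @12 suffix is the total size of the arguments in bytes (three 4-byte parameters under __stdcall on Win32), and the __imp__ prefix names the import thunk. As an illustrative sketch only (the exact decoration rules belong to the compiler and linker, and exports are often undecorated via a .def file), the suffix can be derived like this:

    ```c
    #include <stdio.h>

    /* Illustrative only: form a Win32 __stdcall-style decorated name.
       The "@12" in "FunctionX@12" is the argument bytes: three
       4-byte parameters on 32-bit Windows. */
    static void decorate_stdcall(const char *name, int nargs,
                                 char *out, size_t cap)
    {
        snprintf(out, cap, "%s@%d", name, nargs * 4);
    }

    int main(void)
    {
        char buf[64];
        decorate_stdcall("FunctionX", 3, buf, sizeof buf);
        printf("%s\n", buf); /* prints FunctionX@12 */
        return 0;
    }
    ```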

    As it happens, every single call to GetProcAddress failed. The function they were trying to call was an internal function in the window manager that wasn't exported. I guess they figured, "Hm, I can't find it in user32. Maybe it moved to some other DLL," and went through every DLL they could think of.

    I called out this rather dubious programming technique, and word got back to the development team for that program. They came back and admitted, "Yeah, we were hoping to call that function, but couldn't find it, and the code you found is stuff we added during debugging. We have no intention of actually shipping that code."

    Well, yeah, but still, what possessed you to try such a crazy technique, even if only for debugging?

  • The Old New Thing

    Why are there both FIND and FINDSTR programs, with unrelated feature sets?

    • 35 Comments

    Jonathan wonders why we have both find and findstr, and furthermore, why the two programs have unrelated feature sets. The find program supports UTF-16, which findstr doesn't; on the other hand, the findstr program supports regular expressions, which find does not.

    The reason why their feature sets are unrelated is that the two programs are unrelated.

    The find program came first. As I noted in the article, the find program dates back to 1982. When it was ported to Windows NT, Unicode support was added. But nobody bothered to add any features to it. It was intended to be a straight port of the old MS-DOS program.

    Meanwhile, one of my colleagues over on the MS-DOS team missed having a grep program, so he wrote his own. Developers often write these little tools to make their lives easier. This was purely a side project, not an official part of any version of MS-DOS or Windows. When he moved to the Windows 95 team, he brought his little box of tools with him, and he ported some of them to Win32 in his spare time because, well, that's what programmers do. (This was back in the days when programmers loved to program anything in their spare time.)

    And that's where things stood for a long time. The official find program just searched for fixed strings, but could do so in Unicode. Meanwhile, my colleague's little side project supported regular expressions but not Unicode.

    And then one day, the Windows 2000 Resource Kit team said, "Hey, that's a pretty cool program you've got there. Mind if we include it in the Resource Kit?"

    "Sure, why not," my colleague replied. "It's useful to me, maybe it'll be useful to somebody else."

    So in it went, under the name qgrep.

    Next, the Windows Resource Kit folks said, "You know, it's kind of annoying that you have to go install the Resource Kit just to get these useful tools. Wouldn't it be great if we put the most useful ones in the core Windows product?" I don't know what sort of cajoling was necessary, but they convinced the Windows team to add a handful of Resource Kit programs to Windows. Along the way, qgrep somehow changed its name to findstr. (Other Resource Kit programs kept their names, like where and diskraid.)

    So there you have it. You can think of the find and findstr programs as examples of parallel evolution.
