• The Old New Thing

    As long as your file names meet operating system requirements, you can use whatever you like; the rest is up to you


    A customer had a question about the MSDN documentation on rules for legal file names:

    My employees keep naming documents with hyphens in the name. For example, they might name a file Budget-2012-Final.xlsx. It is my position that hyphens should not be used in this way, and the document should be named Budget 2012 Final.xlsx. Please advise on the use of hyphens within file names.

    Hyphens inside file names are legal, and you can use as many as you like, subject to the other rules for file names.

    If you are having an argument with your employees about file naming conventions, that's something you just need to work out among yourselves. Whatever you decide, the file system will be there for you.

  • The Old New Thing

    Obtaining information about the user's wallpaper on multiple monitors


    Today we're going to dump information about the user's wallpaper settings on multiple monitors.

    The idea is simple. You use the IDesktopWallpaper interface on the DesktopWallpaper object to get information about the desktop wallpaper. It will tell you the wallpaper positioning information, whether a single image is being used for all monitors, where those monitors are, and which image is on which monitor.

    #define UNICODE
    #define _UNICODE
    #define STRICT
    #include <windows.h>
    #include <shlobj.h>
    #include <atlbase.h>
    #include <atlalloc.h>
    #include <stdio.h> // horrors! mixing C and C++!

    int __cdecl wmain(int, wchar_t **)
    {
     CCoInitialize init;

     // Create the DesktopWallpaper object.
     CComPtr<IDesktopWallpaper> spdw;
     CoCreateInstance(CLSID_DesktopWallpaper, nullptr, CLSCTX_ALL,
                      IID_PPV_ARGS(&spdw));

     // See if there is a single wallpaper on all monitors.
     CComHeapPtr<wchar_t> spszCommonWallpaper;
     HRESULT hr = spdw->GetWallpaper(nullptr, &spszCommonWallpaper);
     switch (hr) {
     case S_OK:
      printf("Same wallpaper on all monitors: %ls\n",
             static_cast<wchar_t *>(spszCommonWallpaper));
      break;
     case S_FALSE:
      printf("Different wallpaper on each monitor\n");
      break;
     default:
      printf("Mysterious error: 0x%08x\n", hr);
      break;
     }

     // Get the number of monitors.
     UINT count;
     spdw->GetMonitorDevicePathCount(&count);
     printf("There are %d monitors\n", count);

     // Print information about each monitor.
     for (UINT i = 0; i < count; i++) {
      // Get the device path for the monitor.
      CComHeapPtr<wchar_t> spszId;
      spdw->GetMonitorDevicePathAt(i, &spszId);
      printf("path[%d] = %ls\n",
             i, static_cast<wchar_t *>(spszId));

      // Get the monitor location.
      RECT rc;
      spdw->GetMonitorRECT(spszId, &rc);
      printf("rect = (%d, %d, %d, %d)\n",
             rc.left, rc.top, rc.right, rc.bottom);

      // Get the wallpaper on that monitor.
      CComHeapPtr<wchar_t> spszWallpaper;
      hr = spdw->GetWallpaper(spszId, &spszWallpaper);
      printf("image = %ls\n",
             static_cast<wchar_t *>(spszWallpaper));
     }
     return 0;
    }

    The program proceeds in a few basic steps.

    We create the DesktopWallpaper object. That object will give us the answers to our questions.

    Our first question is, "Is the same wallpaper being shown on all monitors?" To determine that, we call IDesktopWallpaper::GetWallpaper and specify nullptr as the monitor ID. The call succeeds with S_OK if the same wallpaper is shown on all monitors (in which case the shared wallpaper is returned). It succeeds with S_FALSE if each monitor has a different wallpaper.

    To get information about the wallpaper on each monitor, we iterate through them, first asking for the monitor device path, since that is how the DesktopWallpaper object identifies monitors. For each monitor, we ask for its location and the wallpaper for that monitor. Note that if the monitor is not displaying a wallpaper at all, the GetWallpaper method succeeds but returns an empty string.

    And that's it. You can juice up this program by asking for wallpaper positioning information, and if you are feeling really adventuresome, you can use the SetWallpaper method to change the wallpaper.

  • The Old New Thing

    Why does GetFileVersionInfo map the whole image into memory instead of just parsing out the pieces it needs?


    Commenter acq responds (with expletive deleted), "the whole file is mapped into the process' memory only for version info that's certainly only a few kilobytes to be read?" Why not map only the parts that are needed? "I don't understand the necessity to map the whole file except that it was easier to write that code without thinking too much."

    That was exactly the reason. But not because it was to avoid thinking. It was to make things more secure.

    Back in the old days, the GetFileVersionInfo function did exactly what acq suggested: It parsed the executable file format manually, looking for the file version information. (In other words, the original authors did it the hard way.) And it was the source of security vulnerabilities, because malformed executables would cause the parser to "behave erratically".

    This is a common problem: Parsing is hard, and parsing bugs are so common that there's an entire category of software testing focused on throwing malformed data at parsers to try to trip them up. The general solution for this sort of thing is to establish one "standard parser" and make everybody use that one rather than rolling their own. That way, the security efforts can be focused on making that one standard parser resilient to malformed data. Otherwise, you have a whole bunch of parsers all over the place, and a bad guy can just shop around looking for the buggiest one.

    And it so happens that there is already a standard parser for resources. It's known as the loader.

    The function GetFileVersionInfo therefore got out of the file parsing business (it wasn't profitable anyway) and subcontracted the work to the loader.

    Pre-emptive xpclient rant: "Removing the icon extractor for 16-bit DLLs was a mistake of the highest order, even worse than Component Based Servicing."

  • The Old New Thing

    Get your hex wrench ready, because here comes the Ikea bicycle


    Ikea säljer elcyklar ("Ikea is selling electric bicycles"). Click through for the two-image slide show.

    Ikea selling electric bicycles

    Forget furniture. Ikea is now launching, that's right, an electric bicycle.

    It goes under the name People-Friendly and costs around 6000 SEK ($900 USD).

    But only in Älmhult, Småland.

    People-Friendly has already received three design awards, including the IF Design Award, according to Ikea's press release.

    What distinguishes it from other electric bicycles is that the battery is hidden in the frame. That makes it look like a regular bicycle, and it also lowers the center of gravity, making the bicycle more stable.

    Performance is for the most part like other electric bicycles: It handles 6–7 Swedish miles (60–70 km, 35–45 US miles) on a charge, which takes 5–6 hours. The weight is 25 kg (55 pounds). The frame is aluminum and the engine is in front.

    Only in Småland

    The 5995 SEK cost of the bicycle may sound like a lot, but it's inexpensive for an electric bicycle.

    The biggest problem with the People-Friendly is that you can't buy it at regular Ikea stores.

    So far, the bicycle is sold only at the bargain department of the Älmhult Ikea.

    "Here is where we test new products. And this is a test product. We want to see how much interest there is and be sure that we can take care of the product, even after the purchase," says Daniela Rogosic, press officer for Ikea Sweden.

    She cannot say when it will begin being sold at general Ikea stores, but she confirms that interest has been strong for the bicycle during the month it has been available.

    Do you have to assemble it yourself like the furniture?

    "Yes, you put it together yourself in the classic Ikea way," says Daniela Rogosic.

    Fact sheet

    • Price: Around 7200 SEK ($1100 USD) in Austria
    • Material: Aluminum and steel (front fork)
    • Gears: 3
    • Weight: 25 kg
    • Battery: 36 V
    • Range: 60–70 km
    • Charge time: 5–6 hours
    • Top speed: N/A
    • Engine: 36 V, forward

    On the Web site for the Älmhult bargain department, it describes the bicycle as a three-speed, available in both men's and women's styles. Limit one per customer.

  • The Old New Thing

    It rather involved being on the other side of this airtight hatchway: Denial of service by high CPU usage


    We received the following security vulnerability report:

    Windows is vulnerable to a denial of service attack that consumes 100% CPU.

    1. Use the following procedure to create a file that is enchanted by magic pixie dust: [...]
    2. Rename the file to TEST.EXE.
    3. Execute as many copies of the program as you have CPU cores.

    Observe that CPU usage climbs to 100% and never goes down. This is a clear demonstration that Windows is vulnerable to a denial of service attack from magic pixie dust.

    The magic pixie dust is a red herring. This vulnerability report is basically saying "If you are allowed to run arbitrary programs, then it is possible to run a program that consumes all the available CPU."

    Well, duh.

    This is another case of if I can run an arbitrary program, then I can do arbitrary things, also known as MS07-052: Code execution results in code execution. (Or in the lingo of Internet memes, "High CPU is high.")

    Now, of course, if the magic pixie dust somehow allows a user to do things like access resources they do not have access to, or to circumvent resource usage quotas, then there would be a serious problem here, and if the high CPU usage could be triggered remotely, then there is a potential for a denial-of-service attack. But there was nothing of the sort. Here's a much less complicated version of magic pixie dust:

    int __cdecl main(int, char **) { for (;;) { } /*NOTREACHED*/ }
  • The Old New Thing

    How to take down the entire Internet with this one weird trick, according to Crisis


    According to the television documentary Crisis which aired on NBC last Sunday, a cyberattack took over the entire Internet.

    Timecode 13:00: "Anything connected to the Internet. Banking systems, power grid, air traffic control, emergency services. The virus has spread into them all."

    And the show includes an amazing journalistic scoop: A screen shot of the attack being launched! Timecode 11:40:

    Threads Progress Remaining Speed

    0:000> u eip-30 eip+20
    notepad+0x5cfc:
    01005cfc 0001            add     [ecx],al
    01005cfe 3bc7            cmp     eax,edi
    01005d00 7407            jz      notepad+0x5d09 (01005d09)
    01005d02 50              push    eax
    01005d03 ff15dc100001    call    dword ptr [notepad+0x10dc (010010dc)]
    01005d09 8b45fc          mov     eax,[ebp-0x4]
    01005d0c 57              push    edi
    01005d0d 57              push    edi
    01005d0e 68c50000        push    0xc5

    That's right, my friends. This elite virus that shut down the Internet was an upload of Notepad!

  • The Old New Thing

    Cargo-cult registry settings and the people who swear by them


    Two customers (so far) wanted to know how to increase the duration of taskbar balloon notifications on Windows Vista. (By the way, I gave the answer some time ago.)

    They claimed that on Windows XP, they were using the registry key HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\TrayNotify, setting the value BalloonTip to a REG_DWORD specifying the number of seconds the balloon should appear. They wanted to know if this still worked in Vista.

    Heck, it didn't work even in Windows XP!

    That undocumented registry key actually controls whether the Windows XP taskbar should show the "To see the hidden icons, click this button" tip. It has nothing to do with how long the balloon stays on the screen.

    A quick Web search suggests that that particular setting has reached cult status, with everybody saying that the setting controls balloon duration, and nobody actually testing it. It's just a matter of faith.

    Even the sometimes-suggested trick of putting the registry key name in MSDN so searches can find it and direct users to the correct method wouldn't have helped, because this was the wrong registry key to begin with.

    (Remember, the answer is in the linked article.)

  • The Old New Thing

    Only senior executives can send email to the All Employees distribution list, but mistakes still happen


    Some time ago, a senior executive sent email to the All Employees distribution list at Microsoft announcing that a particular product was now available for dogfood. The message included a brief introduction to the product and instructions on how to install it.

    A few hours later, a second message appeared in reply to the announcement. The second message came from a different senior executive, and it went

    I got your note and tried it out. Looks good so far.

    Oopsie. The second senior executive intended to reply just to the first senior executive, but hit the Reply All button by mistake. This would normally have been caught by the You do not have permission to send mail to All Employees rule, but since the mistake was made by a senior executive, that rule did not apply, and the message went out to the entire company.

    People got a good chuckle out of this. At least he didn't say anything embarrassing.

    Bonus chatter: I'd have thought that these extra-large distribution lists would be marked Nobody can send to this distribution list, and then when somebody needed to send a message to the entire company, the email admins would create a one-day-only rule which allowed a specific individual to send one message.

  • The Old New Thing

    Find the index of the smallest element in a JavaScript array


    Today's Little Program isn't even a program. It's just a function.

    The problem statement is as follows: Given a nonempty JavaScript array of numbers, find the index of the smallest value. (If the smallest value appears more than once, then any such index is acceptable.)

    One solution is simply to do the operation manually, simulating how you would perform the operation with paper and pencil: You start by saying that the first element is the winner, and then you go through the rest of the elements. If the next element is smaller than the one you have, you declare that element the new provisional winner.

    function indexOfSmallest(a) {
     var lowest = 0;
     for (var i = 1; i < a.length; i++) {
      if (a[i] < a[lowest]) lowest = i;
     }
     return lowest;
    }

    Another solution is to use the reduce intrinsic to run the loop, so you merely have to provide the business logic of the initial guess and the if statement.

    function indexOfSmallest(a) {
     return a.reduce(function(lowest, next, index) {
                       return next < a[lowest] ? index : lowest; },
                     0);
    }

    A third solution is to use JavaScript intrinsics to find the smallest element and then convert the element to its index.

    function indexOfSmallest(a) {
     return a.indexOf(Math.min.apply(Math, a));
    }

    Which one is fastest?

    Okay, well, first, before you decide which one is fastest, you need to make sure they are all correct. One thing you discover is that the min/indexOf technique fails once the array gets really, really large, or at least it does in Internet Explorer and Firefox. (In my case, Internet Explorer and Firefox gave up at around 250,000 and 500,000 elements, respectively.) That's because you start hitting engine limits on the number of parameters you can pass to a single function. Invoking apply on an array of 250,000 elements is the equivalent of calling min with 250,000 function parameters.
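    The article sidesteps the limit by capping the array size, but just for illustration, here is one way around it (my sketch, not from the original): feed apply in fixed-size chunks so no single call receives too many arguments.

    ```javascript
    // Hypothetical variant, not from the article: apply Math.min to
    // fixed-size chunks so we stay below engine argument-count limits,
    // then convert the winning value back to an index as before.
    function indexOfSmallestChunked(a) {
     var CHUNK = 65536; // comfortably below typical engine limits
     var best = a[0];
     for (var i = 0; i < a.length; i += CHUNK) {
      var m = Math.min.apply(Math, a.slice(i, i + CHUNK));
      if (m < best) best = m;
     }
     return a.indexOf(best);
    }
    ```

    This keeps the two-pass min/indexOf structure but trades one giant call for a handful of bounded ones, so it works even on arrays of millions of elements.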

    So we'll limit ourselves to arrays of length at most 250,000.

    Before I share the results, I want you to guess which algorithm you think will be the fastest and which will be the slowest.

    Still waiting.

    I expected the manual version to come in last place, because, after all, it's doing everything manually. I expected the reduce version to be slightly faster, because it offloads some of the work into an intrinsic (though the function call overhead may have negated any of that improvement). I expected the min/indexOf version to be fastest because nearly all the work is being done in intrinsics, and the cost of making two passes over the data would be made up by the improved performance of the intrinsics.

    Here are the timings of the three versions with arrays of different size, running on random data. I've normalized run times so that the results are independent of CPU speed.

    Relative running time per array element
    Elements manual reduce min/indexOf
    Internet Explorer 9
    100,000 1.000 2.155 2.739
    200,000 1.014 2.324 3.099
    250,000 1.023 2.200 2.330
    Internet Explorer 10
    100,000 1.000 4.057 4.302
    200,000 1.028 4.057 4.642
    250,000 1.019 4.091 4.068

    Are you surprised? I sure was!

    Not only did I have it completely backwards, but the margin of victory for the manual version was way beyond what I imagined.

    (This shows that the only way to know your program's performance characteristics for sure is to sit down and measure it.)

    What I think is going on is that the JavaScript optimizer can do a really good job of optimizing the manual code since it is very simple. There are no function calls, the loop body is just one line, it's all right out there in the open. The versions that use intrinsics end up hiding some of the information from the optimizer. (After all, the optimizer cannot predict ahead of time whether somebody has overridden the default implementation of Array.prototype.reduce or Math.min, so it cannot blindly inline the calls.) The result is that the manual version can run over twice as fast on IE9 and over four times as fast on IE10.
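    The point about overriding intrinsics is easy to demonstrate (this snippet is mine, not from the article): the built-ins are ordinary mutable properties, so the engine cannot assume a call to Math.min reaches the real implementation.

    ```javascript
    // Intrinsics are plain mutable properties, so the optimizer cannot
    // blindly inline them: somebody might have replaced them at runtime.
    var originalMin = Math.min;
    Math.min = function () { return 42; }; // sabotage the intrinsic
    var sabotaged = Math.min(1, 2, 3);     // 42, not 1
    Math.min = originalMin;                // restore the real one
    var restored = Math.min(1, 2, 3);      // back to 1
    ```

    Any code path that might observe such a replacement forces the engine to dispatch through the property lookup rather than inline the operation.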

    I got it wrong because I thought of JavaScript too much like an interpreted language. In a purely interpreted language, the overhead of the interpreter is roughly proportional to the number of things you ask it to do, as opposed to how hard it was to do any of those things. It's like a fixed service fee imposed on every transaction, regardless of whether the transaction was for $100 or 50 cents. You therefore try to make one big purchase (call a complex intrinsic) instead of a lot of small ones (read an array element, compare two values, increment a variable, copy one variable to another).

    Bonus chatter: I ran the test on Firefox, too, because I happened to have it handy.

    Relative running time per array element
    Elements manual reduce min/indexOf
    Firefox 16
    100,000 1.000 21.598 3.958
    200,000 0.848 21.701 2.515
    250,000 0.839 21.788 2.090

    The same data collected on Firefox 16 (which sounds ridiculously old because Firefox will be on version 523 by the time this article reaches the head of the queue) shows a different profile, although the winner is the same. The manual loop and the min/indexOf get more efficient as the array size increases. This suggests that there is fixed overhead that becomes gradually less significant as you increase the size of the data set.

    One thing that jumps out is that the reduce method way underperforms the others. My guess is that setting up the function call (in order to transition between the intrinsic and the script) is very expensive, and that implementors of the JavaScript engines haven't spent any time optimizing this case because reduce is not used much in real-world code.

    Update: I exaggerated my naïveté to make for a better narrative. As I point out in the preface to my book, my stories may not be completely true, but they are true enough. Of course I know that JavaScript is jitted nowadays, and that changes the calculus. (Also, the hidden array copy.)

  • The Old New Thing

    Why is the debugger telling me I crashed because my DLL was unloaded, when I see it loaded right here happily executing code?


    A customer was puzzled by what appeared to be contradictory information coming from the debugger.

    We have Windows Error Reporting failures that tell us that we are executing code in our DLL which has been unloaded. Here's a sample stack:

    Child-SP          RetAddr           Call Site
    00000037`7995e8b0 00007ffb`fe64b08e ntdll!RtlDispatchException+0x197
    00000037`7995ef80 000007f6`e5d5390c ntdll!KiUserExceptionDispatch+0x2e
    00000037`7995f5b8 00007ffb`fc977640 <Unloaded_contoso.dll>+0x3390c
    00000037`7995f5c0 00007ffb`fc978296 RPCRT4!NDRSRundownContextHandle+0x18
    00000037`7995f610 00007ffb`fc9780ed RPCRT4!DestroyContextHandlesForGuard+0xea
    00000037`7995f650 00007ffb`fc9b5ff4 RPCRT4!ASSOCIATION_HANDLE::~ASSOCIATION_HANDLE+0x39
    00000037`7995f680 00007ffb`fc9b5f7c RPCRT4!LRPC_SASSOCIATION::`scalar deleting destructor'+0x14
    00000037`7995f6b0 00007ffb`fc978b25 RPCRT4!LRPC_SCALL_BROKEN_FLOW::FreeObject+0x14
    00000037`7995f6e0 00007ffb`fc982e44 RPCRT4!LRPC_SASSOCIATION::MessageReceivedWithClosePending+0x6d
    00000037`7995f730 00007ffb`fc9825be RPCRT4!LRPC_ADDRESS::ProcessIO+0x794
    00000037`7995f870 00007ffb`fe5ead64 RPCRT4!LrpcIoComplete+0xae
    00000037`7995f910 00007ffb`fe5e928a ntdll!TppAlpcpExecuteCallback+0x204
    00000037`7995f980 00007ffb`fc350ce5 ntdll!TppWorkerThread+0x70a
    00000037`7995fd00 00007ffb`fe60f009 KERNEL32!BaseThreadInitThunk+0xd
    00000037`7995fd30 00000000`00000000 ntdll!RtlUserThreadStart+0x1d

    But if we ask the debugger what modules are loaded, our DLL is right there, loaded as happy as can be:

    0:000> lm
    start             end                 module name
    000007f6`e6000000 000007f6`e6050000   contoso    (deferred)

    In fact, we can view other threads in the process, and they are happily running code in our DLL. What's going on here?

    All the information you need to solve this problem is given right there in the problem report. You just have to put the pieces together.

    Let's take a closer look at that <Unloaded_contoso.dll>+0x3390c entry. The address that the symbol refers to is the return address from the previous frame: 000007f6`e5d5390c. Subtract 0x3390c from that, and you get 000007f6`e5d20000, which is the base address of the unloaded module.
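    The subtraction above is easy to check for yourself. Since these 64-bit addresses are well below 2^53, plain JavaScript numbers represent them exactly (a quick check of mine, using the values from the stack trace):

    ```javascript
    // Recover the base of the unloaded module from the stack trace:
    // the symbol was <Unloaded_contoso.dll>+0x3390c and the return
    // address from the previous frame was 000007f6`e5d5390c.
    var returnAddress = 0x000007f6e5d5390c;
    var offset = 0x3390c;
    var unloadedBase = returnAddress - offset;
    // prints 7f6e5d20000, i.e. 000007f6`e5d20000
    console.log(unloadedBase.toString(16));
    ```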

    On the other hand, the lm command says that the currently-loaded copy of contoso.dll is loaded at 000007f6`e6000000. This is a different address.

    What happened here is that contoso.dll was loaded into memory at 000007f6`e5d20000, and it ran for a while. The DLL was then unloaded from memory and later loaded back in; the second time, it landed at a different address, 000007f6`e6000000. For some reason (most likely improper cleanup when the first copy was unloaded), there was still a function pointer pointing into the old unloaded copy, and when NDRSRundownContextHandle tried to call through that function pointer, it called into an unloaded DLL, and you crash.

    When faced with something that seems impossible, you need to look more closely for clues that suggest how your implicit assumptions may be incorrect. In this case, the assumption was that there was only one copy of contoso.dll.
