September, 2004

  • The Old New Thing

    Converting a byte[] to a System.String

    • 7 Comments

    For some reason, this question gets asked a lot. How do I convert a byte[] to a System.String? (Yes, this is a CLR question. Sorry.)

    You can use the System.Text.UnicodeEncoding.GetString() method, which takes a byte[] array and produces a String.

    Note that this is not the same as just blindly copying the bytes from the byte[] array into a hunk of memory and calling it a string. The GetString() method must validate the bytes and forbid invalid surrogates, for example.

    You might be tempted to create a string and just mash the bytes into it, but that violates string immutability and can lead to subtle problems.

  • The Old New Thing

    A visual history of spam (and virus) email

    • 156 Comments

    I have kept every single piece of spam and virus email since mid-1997. Occasionally, it comes in handy, for example, to add a naïve Bayesian spam filter to my custom-written email filter. And occasionally I use it to build a chart of spam and virus email.

    The following chart plots every single piece of spam and virus email that arrived at my work email address since April 1997. Blue dots are spam and red dots are email viruses. The horizontal axis is time, and the vertical axis is size of mail (on a logarithmic scale). Darker dots represent more messages. (Messages larger than 1MB have been treated as if they were 1MB.)

    Note that this chart is not scientific. Only mail which makes it past the corporate spam and virus filters shows up on the chart.

    Why does so much spam and virus mail get through the filters? Because corporate mail filters cannot take the risk of accidentally classifying valid business email as spam. Consequently, the filters remove something only if they have extremely high confidence that the message is unwanted.

    Okay, enough dawdling. Let's see the chart.

    Overall statistics and extrema:

    • First message in chart: April 22, 1997.
    • Last message in chart: September 10, 2004.
    • Smallest message: 372 bytes, received March 11, 1998.
      From: 15841.
      To: 15841.
      Subject: About your account...
      Content-Type: text/plain; charset=ISO-8859-1
      Content-Transfer-Encoding: 7bit
      
      P
      
    • Largest message: 1,406,967 bytes, received January 8, 2004. HTML mail with a lot of text and 41 large images. A slightly smaller version was received the previous day. (I guess they figured that their first version wasn't big enough, so they sent out an updated version the next day.)
    • Single worst spam day by volume: January 8, 2004. That one monster message sealed the deal.
    • Single worst spam day by number of messages: August 22, 2002. 67 pieces of spam. The vertical blue line.
    • Single worst virus day: August 24, 2003. This is the winner both by volume (1.7MB) and by number (49). The red splotch.
    • Totals: 227.6MB of spam in roughly 19,000 messages. 61.8MB of viruses in roughly 3500 messages.

    Things you can see on the chart:

    • Spam went ballistic starting in 2002. You could see it growing in 2001, but 2002 was when it really took off.
    • Vertical blue lines are "bad spam days". Vertical red lines are "bad virus days".
    • Horizontal red lines let you watch the lifetime of a particular email virus. (This works only for viruses with a fixed-size payload. Viruses with variable-size payload are smeared vertically.)
    • The big red splotch in August 2003 around the 100K mark is the Sobig virus.
    • The horizontal line in 2004 that wanders around the 2K mark is the Netsky virus.
    • For most of this time, the company policy on spam filtering was not to filter it out at all, because all the filters they tried had too high a false-positive rate. (I.e., they were rejecting too many valid messages as spam.) You can see that in late 2003, the blue dot density diminished considerably. That's when mail administrators found a filter whose false-positive rate was low enough to be acceptable.

    As a comparison, here's the same chart based on email received at one of my inactive personal email addresses.

    This particular email address has been inactive since 1995; all the mail it gets is therefore from harvesting done prior to 1995. (That's why you don't see any red dots: None of my friends have this address in their address book since it is inactive.) The graph doesn't go back as far because I didn't start saving spam from this address until late 2000.

    Overall statistics and extrema:

    • First message in chart: September 2, 2000.
    • Last message in chart: September 10, 2004.
    • Smallest message: 256 bytes, received July 24, 2004.
      Received: from dhcp065-025-005-032.neo.rr.com ([65.25.5.32]) by ...
               Sat, 24 Jul 2004 12:30:35 -0700
      X-Message-Info: 10
      
    • Largest message: 3,661,900 bytes, received April 11, 2003. Mail with four large bitmap attachments, each of which is a Windows screenshot of Word with a document open, each bitmap showing a different page of the document. Perhaps one of the most inefficient ways of distributing a four-page document.
    • Single worst spam day by volume: April 11, 2003. Again, the monster message drowns out the competition.
    • Single worst spam day by number of messages: October 3, 2003. 74 pieces of spam.
    • Totals: 237MB of spam in roughly 35,000 messages.

    I cannot explain the mysterious "quiet period" at the beginning of 2004. Perhaps my ISP instituted a filter for a while? Perhaps I didn't log on often enough to pick up my spam and it expired on the server? I don't know.

    One theory is that the lull was due to uncertainty created by the CAN-SPAM Act, which took effect on January 1, 2004. I don't buy this theory since there was no significant corresponding lull at my other email account, and follow-up reports indicate that CAN-SPAM was widely disregarded. Even in its heyday, compliance was only 3%.

    Curiously, the trend in spam size for this particular account is that it has been going down since 2002. In the previous chart, you could see a clear upward trend since 1997. My theory is that since this second dataset is more focused on current trends, it missed out on the growth trend in the late 1990's and instead is seeing the shift in spam from text to <IMG> tags.

  • The Old New Thing

    Why does Windows keep your BIOS clock on local time?

    • 44 Comments

    Even though Windows NT uses UTC internally, the BIOS clock stays on local time. Why is that?

    There are a few reasons. One is a chain of backwards compatibility.

    In the early days, people often dual-booted between Windows NT and MS-DOS/Windows 3.1. MS-DOS and Windows 3.1 operate on local time, so Windows NT followed suit so that you wouldn't have to keep changing your clock each time you changed operating systems.

    As people upgraded from Windows NT to Windows 2000 to Windows XP, this choice of time zone had to be preserved so that people could dual-boot between their previous operating system and the new operating system.

    Another reason for keeping the BIOS clock on local time is to avoid confusing people who set their time via the BIOS itself. If you hit the magic key during the power-on self-test, the BIOS will go into its configuration mode, and one of the things you can configure here is the time. Imagine how confusing it would be if you set the time to 3pm, and then when you started Windows, the clock read 11am.

    "Stupid computer. Why did it even ask me to change the time if it's going to screw it up and make me change it a second time?"

    And if you explain to them, "No, you see, that time was UTC, not local time," the response is likely to be "What kind of totally propeller-headed nonsense is that? You're telling me that when the computer asks me what time it is, I have to tell it what time it is in London? (Except during the summer in the northern hemisphere, when I have to tell it what time it is in Reykjavik!?) Why do I have to remember my time zone and manually subtract four hours? Or is it five during the summer? Or maybe I have to add. Why do I even have to think about this? Stupid Microsoft. My watch says three o'clock. I type three o'clock. End of story."

    (What's more, some BIOSes have alarm clocks built in, where you can program them to have the computer turn itself on at a particular time. Do you want to have to convert all those times to UTC each time you want to set a wake-up call?)

  • The Old New Thing

    Why does my mouse/touchpad sometimes go berzerk?

    • 67 Comments

    Each time you move a PS/2-style mouse, the mouse sends three bytes to the computer. For the sake of illustration, let's say the three bytes are x, y, and buttons.

    The operating system sees this byte stream and groups them into threes:

    x y b x y b x y b x y b

    Now suppose the cable is a bit jiggled loose and one of the "y"s gets lost. The byte stream loses an entry, but the operating system doesn't know this has happened and keeps grouping them in threes.

    x y b x b x y b x y b x

    The operating system is now out of sync with the mouse and starts misinterpreting all the data. It receives a "y b x" from the mouse and treats the y byte as the x-delta, the b byte as the y-delta, and the x byte as the button state. Result: A mouse that goes crazy.
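
    To make the failure mode concrete, here is a toy sketch (not real driver code) that follows the simplified x/y/buttons framing above; the structure and function names are made up for illustration. The parser trusts every group of three bytes as one packet, so a single lost byte shifts every later field into the wrong slot.

    #include <stdio.h>
    
    struct MOUSEPACKET { int dx; int dy; int buttons; };
    
    // Blindly group the incoming byte stream into threes, exactly as in
    // the description above.
    int ParseStream(const int *bytes, int cb, struct MOUSEPACKET *packets)
    {
      int cPackets = 0;
      for (int i = 0; i + 2 < cb; i += 3) {
        packets[cPackets].dx      = bytes[i];
        packets[cPackets].dy      = bytes[i + 1];
        packets[cPackets].buttons = bytes[i + 2];
        cPackets++;
      }
      return cPackets;
    }
    
    int main(void)
    {
      // Intended stream: four packets of (dx=1, dy=2, buttons=0), but the
      // second packet's "y" byte has been lost in transit.
      int damaged[] = { 1, 2, 0,  1, 0,  1, 2, 0,  1, 2, 0 };
      struct MOUSEPACKET packets[4];
      int n = ParseStream(damaged, sizeof(damaged)/sizeof(damaged[0]), packets);
      for (int i = 0; i < n; i++) {
        // After the dropped byte, delta bytes are read as button states and
        // button bytes as deltas: the mouse "goes berzerk".
        printf("dx=%d dy=%d buttons=%d\n",
               packets[i].dx, packets[i].dy, packets[i].buttons);
      }
      return 0;
    }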

    Oh wait, then there are mice with wheels.

    When the operating system starts up, it tries to figure out whether the mouse has a wheel and convinces it to go into wheel mode. (You can influence this negotiation from Device Manager.) If both sides agree on wheeliness, then the mouse generates four bytes for each mouse motion, which therefore must be interpreted something like this:

    x y b w x y b w x y b w x y b w

    Now things get really interesting when you introduce laptops into the mix.

    Many laptop computers have a PS/2 mouse port into which you can plug a mouse on the fly. When this happens, the built-in pointing device is turned off and the PS/2 mouse is used instead. This happens entirely within the laptop's firmware. The operating system has no idea that this switcheroo has happened.

    Suppose that when you turned on your laptop, there was a wheel mouse connected to the PS/2 port. In this case, when the operating system tries to negotiate with the mouse, it sees a wheel and puts the mouse into "wheel mode", expecting (and fortunately receiving) four-byte packets.

    Now unplug your wheel mouse so that you revert to the touchpad, and let's say your touchpad doesn't have a wheel. The touchpad therefore spits out three-byte mouse packets when you use it. Uh-oh, now things are really messed up.

    The touchpad is sending out three-byte packets, but the operating system thinks it's talking to that mouse that was plugged in originally and continues to expect four-byte packets.

    You can imagine the mass mayhem that ensues.

    Moral of the story: If you're going to hot-plug a mouse into your laptop's PS/2 port, you have a few choices.

    • Always use a nonwheel mouse, so that you can plug and unplug with impunity, since the nonwheel mouse and the touchpad both use three-byte packets.
    • If you turn on the laptop with no external mouse, then you can go ahead and plug in either a wheeled or wheel-less mouse. Plugging in a wheel-less mouse is safe because it generates three-byte packets just like the touchpad. And plugging in a wheeled mouse is safe because the wheeled mouse was not around for the initial negotiation, so it operates in compatibility mode (i.e., it pretends to be a wheel-less mouse). In this case, the mouse works, but you lose the wheel.
    • If you turn on the laptop with a wheel mouse plugged in, never unplug it because once you do, the touchpad will take over and send three-byte packets and things will go berzerk.

    Probably the easiest way out is to avoid the PS/2 mouse entirely and just use a USB mouse. This completely sidesteps the laptop's PS/2 switcheroo.

  • The Old New Thing

    How to contact Raymond

    If your subject matter is a suggestion for a future topic, please post it to the Suggestion Box (using the Suggestion Box link on the side of the page). If you send it to me directly, I will probably lose track of it.

    If your subject matter is personal (for example "Hi, Raymond, remember me? We went to the same high school..."), you can send email using the Contact page. (Update 25 March 2007: The contact page has been disabled because over 90% of the messages that arrive are spam.)

    I do not promise to respond to your message.

    If you are making a suggestion for a future version of a Microsoft product, submit it directly to Microsoft rather than sending it to me, because I won't know what to do with it either.

    I do not provide one-on-one technical support. If your question is of general interest, you can post it to the Suggestion Box. If you are looking for technical support, there are a variety of options available, such as newsgroups, bulletin boards, forums (fora?), and Microsoft's own product support web site.

  • The Old New Thing

    How to host an IContextMenu, part 1 - Initial foray

    • 20 Comments

    Most documentation describes how to plug into the shell context menu structure and be a context menu provider. If you read the documentation from the other side, then you also see how to host the context menu. (This is the first of an eleven-part series with three digressions. Yes, eleven parts—sorry for all you folks who are in it just for the history articles. I'll try to toss in an occasional amusing diversion.)

    The usage pattern for an IContextMenu is as follows:

    The details of this are explained in Creating Context Menu Handlers from the point of view of the IContextMenu implementor.

    The Shell first calls IContextMenu::QueryContextMenu. It passes in an HMENU handle that the method can use to add items to the context menu. If the user selects one of the commands, IContextMenu::GetCommandString is called to retrieve the Help string that will be displayed on the Microsoft Windows Explorer status bar. If the user clicks one of the handler's items, the Shell calls IContextMenu::InvokeCommand. The handler can then execute the appropriate command.

    Read it from the other side to see what it says you need to do as the IContextMenu host:

    The IContextMenu host first calls IContextMenu::QueryContextMenu. It passes in an HMENU handle that the method can use to add items to the context menu. If the user selects one of the commands, IContextMenu::GetCommandString is called to retrieve the Help string that will be displayed on the host's status bar. If the user clicks one of the handler's items, the IContextMenu host calls IContextMenu::InvokeCommand. The handler can then execute the appropriate command.

    Exploring the consequences of this new interpretation of the context menu documentation will be our focus for the next few weeks.

    Okay, let's get started. We begin, as always, with our scratch program. I'm going to assume you're already familiar with the shell namespace and pidls so I can focus on the context menu part of the issue.

    #include <shlobj.h>
    
    HRESULT GetUIObjectOfFile(HWND hwnd, LPCWSTR pszPath, REFIID riid, void **ppv)
    {
      *ppv = NULL;
      HRESULT hr;
      LPITEMIDLIST pidl;
      SFGAOF sfgao;
      if (SUCCEEDED(hr = SHParseDisplayName(pszPath, NULL, &pidl, 0, &sfgao))) {
        IShellFolder *psf;
        LPCITEMIDLIST pidlChild;
        if (SUCCEEDED(hr = SHBindToParent(pidl, IID_IShellFolder,
                                          (void**)&psf, &pidlChild))) {
          hr = psf->GetUIObjectOf(hwnd, 1, &pidlChild, riid, NULL, ppv);
          psf->Release();
        }
        CoTaskMemFree(pidl);
      }
      return hr;
    }
    

    This simple function takes a path and gets a shell UI object from it. We convert the path to a pidl with SHParseDisplayName, then bind to the pidl's parent with SHBindToParent, then ask the parent for the UI object of the child with IShellFolder::GetUIObjectOf. I'm assuming you've had enough experience with the namespace that this is ho-hum.

    (The helper functions SHParseDisplayName and SHBindToParent don't do anything you couldn't have done yourself. They just save you some typing. Once you start using the shell namespace for any nontrivial amount of time, you build up a library of little functions like these.)

    For our first pass, all we're going to do is invoke the "Play" verb on the file when the user right-clicks. (Why right-click? Because a future version of this program will display a context menu.)

    #define SCRATCH_QCM_FIRST 1
    #define SCRATCH_QCM_LAST  0x7FFF
    
    void OnContextMenu(HWND hwnd, HWND hwndContext, UINT xPos, UINT yPos)
    {
      IContextMenu *pcm;
      if (SUCCEEDED(GetUIObjectOfFile(hwnd, L"C:\\Windows\\clock.avi",
                       IID_IContextMenu, (void**)&pcm))) {
        HMENU hmenu = CreatePopupMenu();
        if (hmenu) {
          if (SUCCEEDED(pcm->QueryContextMenu(hmenu, 0,
                                 SCRATCH_QCM_FIRST, SCRATCH_QCM_LAST,
                                 CMF_NORMAL))) {
            CMINVOKECOMMANDINFO info = { 0 };
            info.cbSize = sizeof(info);
            info.hwnd = hwnd;
            info.lpVerb = "play";
            pcm->InvokeCommand(&info);
          }
          DestroyMenu(hmenu);
        }
        pcm->Release();
      }
    }
    
        HANDLE_MSG(hwnd, WM_CONTEXTMENU, OnContextMenu);
    

    As noted in the checklist above, first we create the IContextMenu, then initialize it by calling IContextMenu::QueryContextMenu. Notice that even though we don't intend to display the menu, we still have to create a popup menu because IContextMenu::QueryContextMenu requires one. We don't actually display the resulting menu, however; instead of asking the user to pick an item from the menu, we make the choice for the user and select "Play", filling in the CMINVOKECOMMANDINFO structure and invoking it.

    But how did we know that the correct verb was "Play"? In this case, we knew because we hard-coded the file to "clock.avi" and we knew that AVI files have a "Play" verb. But of course that doesn't work in general. Before getting to invoking the default verb, let's first take the easier step of asking the user what verb to invoke. That exercise will actually distract us for a while, but we'll come back to the issue of the default verb afterwards.

    If the code above is all you really wanted (invoking a fixed verb on a file), then you didn't need to go through all the context menu stuff. The code above is equivalent to calling the ShellExecuteEx function, passing the SEE_MASK_INVOKEIDLIST flag to indicate that you want the invoke to go through the IContextMenu.
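
    For reference, here is a minimal sketch of that equivalent ShellExecuteEx call; the verb and file name are the same hard-coded values used above, and error checking is omitted.

    #include <shellapi.h>
    
    SHELLEXECUTEINFO sei = { sizeof(sei) };
    sei.fMask  = SEE_MASK_INVOKEIDLIST;  // route the invoke through IContextMenu
    sei.hwnd   = hwnd;
    sei.lpVerb = TEXT("play");
    sei.lpFile = TEXT("C:\\Windows\\clock.avi");
    sei.nShow  = SW_SHOWNORMAL;
    ShellExecuteEx(&sei);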

    [Typo fixed 25 September.]

  • The Old New Thing

    Interlocked operations don't solve everything

    • 17 Comments

    Interlocked operations are a high-performance way of updating DWORD-sized or pointer-sized values in an atomic manner. Note, however, that this doesn't mean that you can avoid the critical section.

    For example, suppose you have a critical section that protects a variable, and in some other part of the code, you want to update the variable atomically. "Well," you say, "this is a simple increment, so I can skip the critical section and just do a direct InterlockedIncrement. Woo-hoo, I avoided the critical section bottleneck."

    Well, except that the purpose of that critical section was to ensure that nobody changed the value of the variable while the protected section of code was running. You just ran in and changed the value behind that code's back.

    Conversely, some people suggested emulating complex interlocked operations by having a critical section whose job it was to protect the variable. For example, you might have an InterlockedMultiply that goes like this:

    // Wrong!
    LONG InterlockedMultiply(volatile LONG *plMultiplicand, LONG lMultiplier)
    {
      EnterCriticalSection(&SomeCriticalSection);
      LONG lResult = *plMultiplicand *= lMultiplier;
      LeaveCriticalSection(&SomeCriticalSection);
      return lResult;
    }
    

    While this code does protect against two threads performing an InterlockedMultiply against the same variable simultaneously, it fails to protect against other code performing a simple atomic write to the variable. Consider the following:

    int x = 2;
    Thread1()
    {
      InterlockedIncrement(&x);
    }
    
    Thread2()
    {
      InterlockedMultiply(&x, 5);
    }
    

    If the InterlockedMultiply were truly interlocked, the only valid results would be x=15 (if the interlocked increment beat the interlocked multiply) or x=11 (if the interlocked multiply beat the interlocked increment). But since it isn't truly interlocked, you can get other weird values:

    Thread 1                            Thread 2
                                        x = 2 at start
                                        InterlockedMultiply(&x, 5)
                                          EnterCriticalSection
                                          load x (loads 2)
    InterlockedIncrement(&x);
    x is now 3
                                          multiply by 5 (result: 10)
                                          store x (stores 10)
                                          LeaveCriticalSection
                                        x = 10 at end

    Oh no, our interlocked multiply isn't very interlocked after all! How can we fix it?

    If the operation you want to perform is a function solely of the starting numerical value and the other function parameters (with no dependencies on any other memory locations), you can write your own interlocked-style operation with the help of InterlockedCompareExchange.

    LONG InterlockedMultiply(volatile LONG *plMultiplicand, LONG lMultiplier)
    {
      LONG lOriginal, lResult;
      do {
        lOriginal = *plMultiplicand;
        lResult = lOriginal * lMultiplier;
      } while (InterlockedCompareExchange(plMultiplicand,
                                          lResult, lOriginal) != lOriginal);
      return lResult;
    }
    

    [Typo in algorithm fixed 9:00am.]

    To perform a complicated function on the multiplicand, we perform three steps.

    First, capture the value from memory: lOriginal = *plMultiplicand;

    Second, compute the desired result from the captured value: lResult = lOriginal * lMultiplier;

    Third, store the result provided the value in memory has not changed: InterlockedCompareExchange(plMultiplicand, lResult, lOriginal)

    If the value did change, then this means that the interlocked operation was unsuccessful because somebody else changed the value while we were busy doing our computation. In that case, loop back and try again.

    If you walk through the scenario above with this new InterlockedMultiply function, you will see that after the interloping InterlockedIncrement, the loop will detect that the value of "x" has changed and restart. Since the final update of "x" is performed by an InterlockedCompareExchange operation, the result of the computation is trusted only if "x" did not change value.

    Note that this technique works only if the operation being performed is a pure function of the memory value and the function parameters. If you have to access other memory as part of the computation, then this technique will not work! That's because those other memory locations might have changed during the computation and you would have no way of knowing, since InterlockedCompareExchange checks only the memory value being updated.

    Failure to heed the above note results in problems such as the so-called "ABA Problem". I'll leave you to google on that term and read about it. Fortunately, everybody who talks about it also talks about how to solve the ABA Problem, so I'll leave you to read that, too.

    Once you've read about the ABA Problem and its solution, you should be aware that the solution has already been implemented for you, via the Interlocked SList functions.
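
    For completeness, here is a minimal sketch of those Interlocked SList functions. The functions themselves are the real Win32 API; the ITEM structure and the surrounding code are made up for illustration, and error handling is omitted. Both the list header and each entry must be aligned to MEMORY_ALLOCATION_ALIGNMENT.

    #include <windows.h>
    #include <malloc.h>
    
    struct ITEM {
      SLIST_ENTRY entry;  // keep this as the first member so the cast below works
      int value;
    };
    
    SLIST_HEADER g_head;
    
    void Demo()
    {
      InitializeSListHead(&g_head);
    
      // Entries must be aligned to MEMORY_ALLOCATION_ALIGNMENT.
      ITEM *pItem = (ITEM*)_aligned_malloc(sizeof(ITEM),
                                           MEMORY_ALLOCATION_ALIGNMENT);
      pItem->value = 42;
      InterlockedPushEntrySList(&g_head, &pItem->entry);
    
      PSLIST_ENTRY pEntry = InterlockedPopEntrySList(&g_head);
      if (pEntry) {
        ITEM *pPopped = (ITEM*)pEntry;  // valid because entry is the first member
        // ... use pPopped->value ...
        _aligned_free(pPopped);
      }
    }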

  • The Old New Thing

    How does Windows exploit hyperthreading?

    • 42 Comments

    It depends which version of Windows you're asking about.

    For Windows 95, Windows 98, and Windows Me, the answer is simple: Not at all. These are not multiprocessor operating systems.

    For Windows NT and Windows 2000, the answer is "It doesn't even know." These operating systems are not hyperthreading-aware because they were written before hyperthreading was invented. If you enable hyperthreading, then each of your CPUs looks like two separate CPUs to these operating systems. (And will get charged as two separate CPUs for licensing purposes.) Since the scheduler doesn't realize the connection between the virtual CPUs, it can end up doing a worse job than if you had never enabled hyperthreading to begin with.

    Consider a dual-hyperthreaded-processor machine. There are two physical processors A and B, each with two virtual hyperthreaded processors, call them A1, A2, B1, and B2.

    Suppose you have two CPU-intensive tasks. As far as the Windows NT and Windows 2000 schedulers are concerned, all four processors are equivalent, so the scheduler figures it doesn't matter which two it uses. And if you're unlucky, it'll pick A1 and A2, forcing one physical processor to shoulder two heavy loads (each of which will probably run at something between half-speed and three-quarter speed), leaving physical processor B idle, completely unaware that it could have done a better job by putting one on A1 and the other on B1.

    Windows XP and Windows Server 2003 are hyperthreading-aware. When faced with the above scenario, those schedulers will know that it is better to put one task on one of the A's and the other on one of the B's.

    Note that even with a hyperthreading-aware scheduler, you can concoct pathological scenarios where hyperthreading ends up a net loss. (For example, if you have four tasks, two of which rely heavily on L2 cache and two of which don't, you'd be better off putting each of the L2-intensive tasks on separate processors, since the L2 cache is shared by the two virtual processors. Putting them both on the same processor would result in a lot of L2-cache misses as the two tasks fight over L2 cache slots.)

    When you go to the expensive end of the scale (the Datacenter Servers, the Enterprise Servers), things get tricky again. I refer still-interested parties to the Windows Support for Hyper-Threading Technology white paper.

    Update 06/2007: The white paper appears to have moved.

  • The Old New Thing

    The shift key overrides NumLock

    • 22 Comments

    Perhaps not as well known today as it was in the days when the arrow keys and the numeric keypad shared space: the shift key overrides NumLock.

    If NumLock is on (as it usually is), then pressing a key on the numeric keypad while holding the shift key overrides NumLock and instead generates the arrow key (or other navigation key) printed in small print under the big digits.

    (The shift key also overrides CapsLock. If you turn on CapsLock then hold the shift key while typing a letter, that letter comes out in lowercase.)

    Perhaps you might decide that this little shift key quirk is completely insignificant, at least until you try to do something like assign Shift+Numpad0 as a hotkey and wonder why it doesn't work. Now you know.
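
    As a concrete example, here is a minimal sketch of that hotkey scenario; the wrapper function name and the hotkey id are arbitrary values chosen for illustration.

    #include <windows.h>
    
    void RegisterNumpadHotkey(HWND hwnd)
    {
      // Registration succeeds, but with NumLock on, pressing Shift+Numpad0
      // generates the navigation key printed under the digit (because the
      // shift key overrides NumLock) rather than VK_NUMPAD0, so the
      // corresponding WM_HOTKEY never arrives.
      RegisterHotKey(hwnd, 1, MOD_SHIFT, VK_NUMPAD0);
    }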

  • The Old New Thing

    How to find the Internet Explorer binary

    • 45 Comments

    For some reason, some people go to enormous lengths to locate the Internet Explorer binary so they can launch it with some options.

    The way to do this is not to do it.

    If you just pass "IEXPLORE.EXE" to the ShellExecute function [link fixed 9:41am], it will go find Internet Explorer and run it.

    ShellExecute(NULL, "open", "iexplore.exe",
                 "http://www.microsoft.com", NULL,
                 SW_SHOWNORMAL);
    

    The ShellExecute function gets its hands dirty so you don't have to.

    (Note: If you just want to launch the URL generically, you should use

    ShellExecute(NULL, "open", "http://www.microsoft.com",
                 NULL, NULL, SW_SHOWNORMAL);
    

    so that the web page opens in the user's preferred web browser. Forcing Internet Explorer should be avoided under normal circumstances; we are forcing it here because the action is presumably being taken in response to an explicit request to open the web page specifically in Internet Explorer.)

    If you want to get your hands dirty, you can of course do it yourself. It involves reading the specification from the other side, this time the specification on how to register your program's name and path ("Registering Application Path Information").

    The document describes how a program should enter its properties into the registry so that the shell can launch it. To read it backwards, then, interpret this as a list of properties you (the launcher) need to read from the registry.

    In this case, the way to run Internet Explorer (or any other program) the same way ShellExecute does is to look in HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\App Paths\IEXPLORE.EXE (substituting the name of the program if it's not Internet Explorer you're after). The default value is the full path to the program, and the "Path" value specifies a custom path that you should prepend to the environment before launching the target program.

    When you do this, don't forget to call the ExpandEnvironmentStrings function if the registry value's type is REG_EXPAND_SZ. (Lots of people forget about REG_EXPAND_SZ.)
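
    A minimal sketch of that manual lookup might look like this (assuming you really do want to do it yourself): GetAppPath is a hypothetical helper name, error handling is kept to a minimum, and the buffer sizes are just illustrative. Calling it with "IEXPLORE.EXE" yields the same path that ShellExecute would have found.

    #include <windows.h>
    
    BOOL GetAppPath(LPCTSTR pszApp, LPTSTR pszPath, DWORD cchPath)
    {
      TCHAR szKey[MAX_PATH];
      wsprintf(szKey,
               TEXT("Software\\Microsoft\\Windows\\CurrentVersion\\App Paths\\%s"),
               pszApp);
    
      HKEY hkey;
      if (RegOpenKeyEx(HKEY_LOCAL_MACHINE, szKey, 0,
                       KEY_QUERY_VALUE, &hkey) != ERROR_SUCCESS) {
        return FALSE;
      }
    
      // The default value holds the full path to the program.
      TCHAR szRaw[MAX_PATH];
      DWORD dwType, cb = sizeof(szRaw);
      LONG lRes = RegQueryValueEx(hkey, NULL, NULL, &dwType,
                                  (LPBYTE)szRaw, &cb);
      RegCloseKey(hkey);
      if (lRes != ERROR_SUCCESS) return FALSE;
    
      // Don't forget to expand REG_EXPAND_SZ values.
      if (dwType == REG_EXPAND_SZ) {
        ExpandEnvironmentStrings(szRaw, pszPath, cchPath);
      } else {
        lstrcpyn(pszPath, szRaw, cchPath);
      }
      return TRUE;
    }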

    Of course, my opinion is that it's much easier just to let ShellExecute do the work for you.
