May 2011

  • The Old New Thing

    Watching the battle between Facebook and Facebook spammers


    I am watching the continuing battle between Facebook and Facebook spammers with detached amusement. When I see a spam link posted to a friend's Facebook wall, I like to go and figure out how they got fooled. Internet Explorer's InPrivate Browsing comes in handy here, because I can switch to InPrivate mode before visiting the site, so that the site can't actually cause any harm to my Facebook account since I'm not logged in and it doesn't know how to log me in.

    The early versions were simply Web pages that hosted an embedded YouTube video, but they placed an invisible "Like" button over the playback controls, so that any attempt to play the video resulted in a Like being posted to your wall.

    Another early version of Facebook spam pages sent you to a page with an embedded YouTube video, but they also ran script that monitored your mouse position and positioned a 1×1 pixel Like button under it. That way, no matter where you clicked, you clicked on the Like button.

    A more recent variant is one that displayed a simple math problem and asked you to enter the answer. The excuse for this is that it is to "slow down robots", but really, that answer box is a disguised Facebook comment box. You can see the people who fell for this because their Facebook wall consists of a link to the page with the comment "7".

    My favorite one is a spam page that said, "In order to see the video, copy this text and paste it into your Address bar." The text was, of course, some script that injected code into the page so it could run around sending messages to all your Facebook friends. The kicker was that the script being injected was called owned.js. (The spam was so unsophisticated, it made you copy the text yourself! Not like this one which puts the attack string on your clipboard automatically.)

    I started to think, "Who could possibly fall for this?" And then I realized that the answer is "There will always be people who will fall for this." These are the people who would fall for the honor system virus.

    Update: On May 20, I saw a new variant. This one puts up a fake Youtube [sic] "security" dialog that says, "To comply with our Anti-SPAM™ regulations for a safe internet experience we are required to verify your identity" by solving a CAPTCHA. (This makes no sense.) The words in the CAPTCHA by an amazing coincidence happen to be a comment somebody might make on a hot video. Because the alleged CAPTCHA dialog is a disguised Facebook comment box. The result is that the victim posts a comment like "so awesome" to their own wall, thereby propagating the spam.

  • The Old New Thing

    How long do taskbar notification balloons appear on the screen?


    We saw some time ago that taskbar notification balloons don't penalize you for being away from the computer. But how long does the balloon stay up when the user is there?

    Originally, the balloon appeared for whatever amount of time the application specified in the uTimeout member of the NOTIFYICONDATA structure, subject to a system-imposed minimum of 10 seconds and maximum of 60 seconds.

    In Windows XP, some animation was added to the balloon, adding 2 seconds of fade-in and fade-out animation to the display time.

    Starting in Windows Vista, applications are no longer allowed to specify how long they want the balloon to appear; the uTimeout member is ignored. Instead, the display time is the amount of time specified by the SPI_GETMESSAGEDURATION system parameter, with 1 second devoted to fade-in and 5 seconds devoted to fade-out, and a minimum of 3 seconds of full visibility. In other words, if you set the message duration to less than 1+3+5=9 seconds, the taskbar behaves as if you had set it to 9 seconds.

    The default message duration is 5 seconds, so in fact most systems are in the "shortest possible time" case. If you want to extend the time for which balloon notifications appear, you can use the SystemParametersInfo function to change it:

    BOOL SetMessageDuration(DWORD seconds, UINT flags)
    {
     return SystemParametersInfo(SPI_SETMESSAGEDURATION,
                                 0, IntToPtr(seconds), flags);
    }

    (You typically don't need to mess with this setting, because you can rescue a balloon from fading out by moving the mouse over it.)

    Note that an application can also set the NIF_REALTIME flag, which means "If I can't display the balloon right now, then just skip it."

  • The Old New Thing

    Why does Explorer show a thumbnail for my image that's different from the image?


    A customer (via a customer liaison) reported that Explorer sometimes showed a thumbnail for an image file that didn't exactly match the image itself.

    I have an image that consists of a collage of other images. When I switch Explorer to Extra Large Icons mode, the thumbnail is a miniature representation of the image file. But in Large Icons and Medium Icons mode, the thumbnail image shows only one of the images in the collage. I've tried deleting the thumbnail cache, but that didn't help; Explorer still shows the wrong thumbnails for the smaller icon modes. What is wrong?

    The customer provided screenshots demonstrating the problem, but the customer did not provide the image files themselves that were exhibiting the problem. I therefore was reduced to using my psychic powers.

    My psychic powers tell me that your JPG file has the single-item image as the camera-provided thumbnail. The shell will use the camera-provided thumbnail if suitable.

    The customer liaison replied,

    The customer tells me that the problem began happening after they edited the images. Attached is one of the images that's demonstrating the problem.

    Some image types (most notably TIFF and JPEG) support the EXIF format for encoding image metadata. This metadata includes information such as the model of camera used to take the picture, the date the picture was taken, and various camera settings related to the photograph. But the one that's interesting today is the image thumbnail.

    When Explorer wants to display a thumbnail for an image, it first checks whether the image comes with a precalculated thumbnail. If so, and the thumbnail is at least as large as the thumbnail Explorer wants to show, then Explorer will use the image-provided thumbnail instead of creating its own from scratch. If the thumbnail embedded in the image is wrong, then when Explorer displays the image-provided thumbnail, the result will be incorrect. Explorer has no idea that the image is lying to it.

    Note that the decision whether to use the image-provided thumbnail is not based solely on the view. (In other words, the conclusion is not "Explorer uses the image-provided thumbnail for Large Icons and Medium Icons but ignores it for Extra Large Icons.") The decision is based on both the view and the size of the image-provided thumbnail. If the image-provided thumbnail is at least the size of the view, then Explorer will use it. For example, if your view is set to 64 × 64 thumbnails, then the image-provided thumbnail will be used if it is at least 64 × 64.

    The Wikipedia page on EXIF points out that "Photo manipulation software sometimes fails to update the embedded information after an editing operation." It appears that some major image editing software packages fail to update the EXIF thumbnail when an image is edited, which can result in inadvertent information disclosure: If the image was cropped or otherwise altered to remove information, the information may still linger in the thumbnail. This Web site has a small gallery of examples.

  • The Old New Thing

    Multithreaded UI code may be just as hard as multithreaded non-UI code, but the consequences are different


    Commenter Tim Smith claims that the problems with multithreaded UI code are not significantly greater than those of plain multithreaded code. While that may be true on a theoretical level, the situations are quite different in practice.

    Regardless of whether your multithreaded code does UI or not, you have to deal with race conditions, synchronization, cache coherency, priority inversion, all that multithreaded stuff.

    The difference is that multithreaded problems in non-UI code are often rare, relying on race conditions and other timing issues. As a result, you can often get away with a multithreaded bug, because it may show up in practice only rarely, if ever. (On the other hand, when it does show up, it's often impossible to diagnose.)

    If you mess up multithreaded UI code, the most common effect is a hang. The nice thing about this is that it's easier to diagnose because everything has stopped and you can try to figure out who is waiting for what. On the other hand, the problems also occur with much more frequency.

    So it's true that the problems are the same, but the way they manifest themselves is very different.

  • The Old New Thing

    If undecorated names are given in the DLL export table, why does link /dump /exports show me decorated names?


    If you run the link /dump /exports command on a DLL which exports only undecorated names, you may find that in addition to showing those undecorated names, it also shows the fully-decorated names.

    We're building a DLL and for some functions, we have chosen to suppress the names from the export table by using the NONAME keyword. When we dump the exports, we still see the names. And the functions which we did want to export by name are showing up with their decorated names even though we list them in the DEF file with undecorated names. Where is the decorated name coming from? Is it being stored in the DLL after all?

            1        00004F1D [NONAME] _Function1@4
            2        000078EF [NONAME] _Function2@12
            3        00009063 [NONAME] _Function3@8

    The original decorated names are not stored in the DLL. The link /dump /exports command is sneaky and looks for a matching PDB file and, if it finds one, extracts the decorated names from there.

    (How did I know this? I didn't, but I traced each file accessed by the link /dump /exports command and observed that it went looking for the PDB.)

  • The Old New Thing

    Looking at the world through kernel-colored glasses


    During a discussion of the proper way of cancelling I/O, the question was raised as to whether it was safe to free the I/O buffer, close the event handle, and free the OVERLAPPED structure immediately after the call to CancelIo. The response from the kernel developer was telling.

    That's fine. We write back to the buffer under a try/except, so if the memory is freed, we'll just ignore it. And we take a reference to the handle, so closing it does no harm.

    These may be the right answers from a kernel-mode point of view (where the focus is on ensuring that consistency in kernel mode is not compromised), but they are horrible answers from an application point of view: Kernel mode will write back to the buffer and the OVERLAPPED when the I/O completes, thereby corrupting user-mode memory if user-mode had re-used the memory for some other purpose. And if the handle in the OVERLAPPED structure is closed, then user mode has lost its only way of determining when it's safe to continue! You had to look beyond the literal answer to see what the consequences were for application correctness.

    (You can also spot the kernel-mode point of view in the clause "if the memory is freed." The developer is talking about freed from kernel mode's point of view, meaning that it has been freed back to the operating system and is no longer committed in the process address space. But memory that is logically freed from the application's point of view may not be freed back to the kernel. It's usually just freed back into the heap's free pool.)

    The correct answer is that you have to wait for the I/O to complete before you free the buffer, close the event handle, or free the OVERLAPPED structure.

    Don't fall into this trap. The kernel developer was looking at the world through kernel-colored glasses. But you need to look at the situation from the perspective of your customers. When the kernel developer wrote "That's fine", he meant "That's fine for me." Sucks to be you, though.

    It's like programming an autopilot to land an airplane, but sending it through aerobatics that kill all the passengers. If you ask the autopilot team, they would say that they accomplished their mission: Technically, the autopilot did land the airplane.

    Here's another example of kernel-colored glasses. And another.

    Epilogue: To be fair, after I pointed out the kernel-mode bias in the response, the kernel developer admitted, "You're right, sorry. I was too focused on the kernel-mode perspective and wasn't looking at the bigger picture."

  • The Old New Thing

    Why double-null-terminated strings instead of an array of pointers to strings?


    I mentioned this in passing in my description of the format of double-null-terminated strings, but I think it deserves calling out.

    Double-null-terminated strings may be difficult to create and modify, but they are very easy to serialize: You just write out the bytes as a blob. This property is very convenient when you have to copy around the list of strings: Transferring the strings is a simple matter of transferring the memory block as-is. No conversion is necessary. This makes it easy to do things like wrap the memory inside another container that supports only flat blobs of memory.

    As it turns out, a flat blob of memory is convenient in many ways. You can copy it around with memcpy. (This is important when capturing values across security boundaries.) You can save it to a file or into the registry as-is. It marshals very easily. It becomes possible to store it in an IDataObject. It can be freed with a single call. And in the cases where you can't allocate any memory at all (e.g., you're filling a buffer provided by the caller), it's one of the few options available. This is also why self-relative security descriptors are so popular in Windows: Unlike absolute security descriptors, self-relative security descriptors can be passed around as binary blobs, which makes them easy to marshal, especially if you need to pass one from kernel mode to user mode.

    A single memory block with an array of integers containing offsets would also work, but as the commenter noted, it's even more cumbersome than double-null-terminated strings.

    Mind you, if you don't need to marshal the list of strings (because it never crosses a security boundary and never needs to be serialized), then an array of string pointers works just fine. If you look around Win32, you'll find that the cases where double-null-terminated strings are used are for the most part either inherited from 16-bit Windows or are cases where marshalling is necessary.

  • The Old New Thing

    Why is hybrid sleep off by default on laptops? (and how do I turn it on?)


    Hybrid sleep is a type of sleep state that combines sleep and hibernate. When you put the computer into a hybrid sleep state, it writes out all its RAM to the hard drive (just like a hibernate), and then goes into a low power state that keeps RAM refreshed (just like a sleep). The idea is that you can resume the computer quickly from sleep, but if there is a power failure or some other catastrophe, you can still restore the computer from hibernation.

    A hybrid sleep can be converted to a hibernation by simply turning off the power. By comparison, a normal sleep requires resuming the computer to full power in order to write out the hibernation file. Back in the Windows XP days, I would sometimes see the computer in the next room spontaneously turn itself on: I was startled at first, but then I saw on the screen that the system was hibernating, and I understood what was going on.

    Hybrid sleep is on by default for desktop systems but off by default on laptops. Why this choice?

    First of all, desktops are at higher risk of the power outage scenario wherein a loss of power (either due to a genuine power outage or simply unplugging the computer by mistake) causes all work in progress to be lost. Desktop computers typically don't have a backup battery, so a loss of power means instant loss of sleep state. By comparison, laptop computers have a battery which can bridge across power outages.

    Furthermore, laptops have a safeguard against battery drain: When battery power gets dangerously low, the laptop can perform an emergency hibernate.

    Laptop manufacturers also requested that hybrid sleep be off by default. They didn't want the hard drive to be active for a long time while the system is suspending, because when users suspend a laptop, it's often in the form of "Close the lid, pick up the laptop from the desk, throw it into a bag, head out." Performing large quantities of disk I/O at a moment when the computer is physically being jostled around increases the risk that one of those I/O's will go bad. This pattern doesn't exist for desktops: When you suspend a desktop computer, you just leave it there and let it do its thing.

    Of course, you can override this default easily from the Control Panel. Under Power Options, select Change plan settings, then Change advanced power settings, and wander over into the Sleep section of the configuration tree.

    If you're a command line sort of person, you can use this insanely geeky command line to enable hybrid sleep when running on AC power in Balanced mode:

    powercfg -setacvalueindex 381b4222-f694-41f0-9685-ff5bb260df2e
                              238c9fa8-0aad-41ed-83f4-97be242c8f20
                              94ac6d29-73ce-41a6-809f-6363ba21b47e 1

    (All one line. Take a deep breath.) [Update: Or you can use powercfg -setacvalueindex SCHEME_BALANCED SUB_SLEEP HYBRIDSLEEP 1, as pointed out by Random832. I missed this because the ability to substitute aliases is not mentioned in the -setacvalueindex documentation. You have to dig into the -aliases documentation to find it.]

    Okay, what do all these insane options mean?

    -setacvalueindex sets the behavior when running on AC power. To change the behavior when running on battery, use -setdcvalueindex instead. Okay, that was easy.

    The next part is a GUID, specifically, the GUID that represents the balanced power scheme. If you want to modify the setting for a different power scheme, then substitute that scheme's GUID.

    After the scheme GUID comes the subgroup GUID. Here, we give the GUID for the Sleep subgroup.

    Next we have the GUID for the Hybrid Sleep setting.

    Finally, we have the desired new value for the setting. As you might expect, 1 enables it and 0 disables it.

    And where did these magic GUIDs come from? Run the powercfg -aliases command to see all the GUIDs. You can also run powercfg -q to view all the settings and their current values in the current power scheme.

    Bonus reading:

  • The Old New Thing

    Sorting is a state and a verb (and a floor wax and a dessert topping)


    Cliff Barbier points out that after you sort an Explorer view by name, new items are not inserted in their sorted position. This goes back to the question of whether sorting is a state or a verb.

    If you take an Explorer folder and say Sort by Name, do you mean "From now on, always show the contents of this folder sorted by name"? Or do you mean "Rearrange the items currently in this folder so they are sorted by name"? The first case treats sorting as a state, where sorting is an attribute of the folder that persists. The second case treats sorting as a verb, where the action is performed so that its effects linger but the action itself is not remembered.

    You might think that sorting is obviously a state, but STL disagrees with you:

    std::vector<Item> v;
    ... fill v with stuff ...
    std::sort(v.begin(), v.end(), Item::ByName);
    v.push_back(newItem); // append a new item

    When the last line of code appends a new item to the vector, it is not inserted in sorted order because std::sort is a verb which acts on the vector, not a state of the vector itself. The vector doesn't know "Oh, wait, I'm now a sorted vector."

    Okay, so in Explorer, is sorting a state or a verb?

    "Let's do both!"

    Sorting is a state, in the sense that the list of items is presented in sorted order when the folder is first opened. It's a verb in that the sorted order is not maintained when new items are added to the view while the folder is already open.

    Placing new items at the end instead of in their sorted position is necessary to avoid having items move around unbidden. Suppose you're looking at a folder sorted by name, you scroll down the list, find the item you want, and just as your finger is poised to click the mouse button, another process creates a file in the folder, which Explorer picks up and inserts into the view, causing the items to shift, and when your finger goes down on the mouse button, you end up clicking on the wrong item.

    You can imagine how annoying this can end up when there is a lot of file creation activity in the folder. If the items in the view were continuously sorted, then they would keep moving around and make it impossible to click on anything!

    Mind you, you do have this instability problem when files are deleted and you are in a non-placed view (like List or Details), but there's at least a cap on how much trouble deletion can cause (since eventually you delete all the items that were in the view originally).

    It looks like starting in Windows Vista, Explorer tries to insert new items into their sorted position, so at least in the modern versions of Windows, sort is a state. Good luck trying to click on something when the contents of the folder are constantly changing.

  • The Old New Thing

    A function pointer cast is a bug waiting to happen


    A customer reported an application compatibility bug in Windows.

    We have some code that manages a Win32 button control. During button creation, we subclass the window by calling SetWindowSubclass. On the previous version of Windows, the subclass procedure receives the following messages, in order:


    We do not handle any of these messages and pass them through to DefSubclassProc. On the latest version of Windows, we get only the first two messages, and comctl32 crashes while it's handling the third message before it gets a chance to call us. It looks like it's reading from invalid memory.

    The callback function goes like this:

    LRESULT ButtonSubclassProc(
        HWND hwnd,
        UINT uMsg,
        WPARAM wParam,
        LPARAM lParam,
        UINT_PTR idSubclass,
        DWORD_PTR dwRefData);

    We install the subclass function like this:


    We found that if we changed the callback function declaration to

    LRESULT CALLBACK ButtonSubclassProc(
        HWND hwnd,
        UINT uMsg,
        WPARAM wParam,
        LPARAM lParam,
        UINT_PTR idSubclass,
        DWORD_PTR dwRefData);

    and install the subclass function like this:


    then the problem goes away. It looks like the new version of Windows introduced a compatibility bug; the old code works fine on all previous versions of Windows.

    Actually, you had the problem on earlier versions of Windows, too. You were just lucky that the bug wasn't a crashing bug. But now it is.

    This is a classic case of mismatching the calling convention. The SUBCLASSPROC function is declared as requiring the CALLBACK calling convention (which on x86 maps to __stdcall), but the code declared it without any calling convention at all, and the ambient calling convention was __cdecl. When they went to compile the code, they got a compiler error that said something like this:

    error C2664: 'SetWindowSubclass' : cannot convert parameter 2 from 'LRESULT (__cdecl *)(HWND,UINT,WPARAM,LPARAM,UINT_PTR,DWORD_PTR)' to 'SUBCLASSPROC'

    "Since the compiler was unable to convert the parameter, let's give it some help and stick a cast in front. There, that shut up the compiler. Those compiler guys are so stupid. They can't even figure out how to convert one function pointer to another. I bet they need help wiping their butts when they go to the bathroom."

    And there you go, you inserted a cast to shut up the compiler and masked a bug instead of fixing it.

    The only thing you can do with a function pointer after casting it is to cast it back to its original type.¹ If you try to use it as the cast type, you will crash. Maybe not today, maybe not tomorrow, but someday.

    In this case, the calling convention mismatch resulted in the stack being mismatched when the function returns. It looks like earlier versions of Windows managed to hobble along long enough before things got resynchronized (by an EBP frame restoration, most likely) so the damage didn't spread very far. But the new version of Windows, possibly one compiled with more aggressive optimizations, ran into trouble before things resynchronized, and thus occurred the crash.

    The compiler was yelling at you for a reason.

    It so happened that the Windows application compatibility team had already encountered this problem in their test labs, and a shim had already been developed to auto-correct this mistake. (Actually, the shim also corrects another mistake they hadn't noticed yet: They forgot to call RemoveWindowSubclass when they were done.)

    ¹I refer here to pointers to static functions. Pointers to member functions are entirely different animals.
