April, 2013

  • The Old New Thing

    Dangerous setting is dangerous: This is why you shouldn't turn off write cache buffer flushing


    Okay, one more time about the Write-caching policy setting.

    This dialog box takes various forms depending on what version of Windows you are using.

    Windows XP:

      Enable write caching on the disk
    This setting enables write caching in Windows to improve disk performance, but a power outage or equipment failure might result in data loss or corruption.

    Windows Server 2003:

      Enable write caching on the disk
    Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power.

    Windows Vista:

      Enable advanced performance
    Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power.

    Windows 7 and 8:

      Turn off Windows write-cache buffer flushing on the device
    To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure.

    Notice that the warning text gets more and more scary each time it is updated. It starts out just by saying, "If you lose power, you might have data loss or corruption." Then it adds a recommendation, "Recommended only for disks with a backup power supply." And then it comes with a flat-out directive: "Do not select this check box unless the device has a separate power supply."

    The scary warning is there for a reason: If you check the box when your hardware does not satisfy the criteria, you risk data corruption.

    But it seems that even with the sternest warning available, people will still go in and check the box even though their device does not satisfy the criteria, and the dialog box says right there do not select this check box.

    And then they complain, "I checked this box, and my hard drive was corrupted! You need to investigate the issue and release a fix for it."

    Dangerous setting is dangerous.

    At this point, I think the only valid "fix" for this feature would be to remove it entirely. This is why we can't have dangerous things.

  • The Old New Thing

    How can I figure out which user modified a file?


    The Get­File­Time function will tell you when a file was last modified, but it won't tell you who did it. Neither will Find­First­File, Get­File­Attributes, Read­Directory­ChangesW, nor File­System­Watcher.

    None of these file system functions will tell you which user modified a file, because the file system doesn't keep track of which user modified a file. But there is somebody who does keep track: The security event log.

    To generate an event into the security event log when a file is modified, you first need to enable auditing on the system. In the Local Security Policy administrative tool, go to Local Policies, and then double-click Audit Policy. (These steps haven't changed since Windows 2000; the only thing is that the Administrative Tools folder moves around a bit.) Under Audit Object Access, say that you want an audit raised when access is successfully granted by checking Success (An audited security access attempt that succeeds).

    Once auditing is enabled, you can then mark the files that you want to track modifications to. On the Security tab of each file you are interested in, go to the Auditing page, and select Add to add the user you want to audit. If you want to audit all accesses, then you can choose Everyone; if you are only interested in auditing a specific user or users in specific groups, you can enter the user or group.

    After specifying whose access you want to monitor, you can select what actions should generate security events. In this case, you want to check the Successful box next to Create files / write data. This means "Generate a security event when the user requests and obtains permission to create a file (if this object is a directory) or write data (if this object is a file)."

    If you want to monitor an entire directory, you can set the audit on the directory itself and specify that the audit should apply to objects within the directory as well.

    After you've set up your audits, you can view the results in Event Viewer.

    This technique of using auditing to track who is generating modifications also works for registry keys: Under the Edit menu, select Permissions.

    Exercise: You're trying to debug a problem where a file gets deleted mysteriously, and you're not sure which program is doing it. How can you use this technique to log an event when that specific file gets deleted?

  • The Old New Thing

    Using opportunistic locks to get out of the way if somebody wants the file


    Opportunistic locks allow you to be notified when somebody else tries to access a file you have open. This is usually done if you want to use a file provided nobody else wants it.

    For example, you might be a search indexer that wants to extract information from a file, but if somebody opens the file for writing, you don't want them to get Sharing Violation. Instead, you want to stop indexing the file and let the other person get their write access.

    Or you might be a file viewer application like ildasm, and you want to let the user update the file (in ildasm's case, rebuild the assembly) even though you're viewing it. (Otherwise, they will get an error from the compiler saying "Cannot open file for output.")

    Or you might be Explorer, and you want to abandon generating the preview for a file if somebody tries to delete it.

    (Rats I fell into the trap of trying to motivate a Little Program.)

    Okay, enough motivation. Here's the program:

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    OVERLAPPED g_o;

    REQUEST_OPLOCK_INPUT_BUFFER g_inputBuffer = {
      REQUEST_OPLOCK_CURRENT_VERSION,
      sizeof(g_inputBuffer),
      OPLOCK_LEVEL_CACHE_READ | OPLOCK_LEVEL_CACHE_HANDLE,
      REQUEST_OPLOCK_INPUT_FLAG_REQUEST,
    };

    REQUEST_OPLOCK_OUTPUT_BUFFER g_outputBuffer = {
      REQUEST_OPLOCK_CURRENT_VERSION,
      sizeof(g_outputBuffer),
    };

    int __cdecl wmain(int argc, wchar_t **argv)
    {
      g_o.hEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);

      HANDLE hFile = CreateFileW(argv[1], GENERIC_READ,
        FILE_SHARE_READ, nullptr, OPEN_EXISTING,
        FILE_FLAG_OVERLAPPED, nullptr);
      if (hFile == INVALID_HANDLE_VALUE) {
        return 0;
      }

      DeviceIoControl(hFile, FSCTL_REQUEST_OPLOCK,
          &g_inputBuffer, sizeof(g_inputBuffer),
          &g_outputBuffer, sizeof(g_outputBuffer),
          nullptr, &g_o);
      if (GetLastError() != ERROR_IO_PENDING) {
        // oplock failed
        return 0;
      }

      DWORD dwBytes;
      if (!GetOverlappedResult(hFile, &g_o, &dwBytes, TRUE)) {
        // oplock failed
        return 0;
      }

      printf("Cleaning up because somebody wants the file...\n");
      Sleep(1000); // pretend this takes some time
      printf("Closing file handle\n");
      CloseHandle(hFile);
      return 0;
    }

    Run this program with the name of an existing file on the command line, say scratch x.txt. The program will wait.

    In another command window, run the command type x.txt. The program keeps waiting.

    Next, run the command echo hello > x.txt. Now things get interesting.

    When the command prompt opens x.txt for writing, the Device­Io­Control call completes. At this point we print the Cleaning up... message.

    To simulate the program taking a little while to clean up, we sleep for one second. Observe that the command prompt has not yet returned. Instead of immediately failing the request to open for writing with a sharing violation, the kernel puts the open request on hold to give our program time to clean up and close our handle.

    Finally, our simulated clean-up is complete, and we close the handle. At this point, the kernel allows the command processor to proceed and open the file for writing so it can write hello into it.

    That's the basics of opportunistic locks, but your program will almost certainly not be structured this way. You will probably not wait synchronously on the overlapped I/O, but rather have the completion queued to a completion function or an I/O completion port, or have a thread pool task listen on the event handle. When you do that, remember that you need to keep the OVERLAPPED structure as well as the REQUEST_OPLOCK_INPUT_BUFFER and REQUEST_OPLOCK_OUTPUT_BUFFER structures valid until the I/O completes.

    (You may find the Cancel­Io function handy to try to accelerate the clean-up of the file handle and any other actions that are dependent upon it.)

    You can read more about opportunistic locks on MSDN. Note that there are limitations on explicitly-managed opportunistic locks; for example, they don't work across the network.

  • The Old New Thing

    Some trivia about the //build/ 2011 conference


    Registration for //build/ 2013 opens tomorrow. I have no idea what's in store this year, but I figure I'd whet your appetite by sharing some additional useless information about //build/ 2011.

    The internal code name for the prototype tablets handed out at //build/ 2011 was Nike. I think we did a good job of keeping the code name from public view, but one person messed up and accidentally let it slip to Mary-Jo Foley when they said that the contact email for people having tax problems related to the device is nikedistⓐmicrosoft.com.

    The advance crew spent an entire week preparing those devices. One of the first steps was unloading the devices from the pallets. This was done in a disassembly line: The boxes were opened, the devices were fished out, then removed from the protective sleeve. At the end of this phase, you had one neat stack of boxes and one neat stack of devices.

    The advance crew also configured the hall so they would be ready to start once Redmond sent down the final bits of the Developer Preview build. The hall was divided into sections, and each section consisted of eight long tables. Four of the tables were arranged in a square, and the other four tables were placed outside the square, one parallel to each side, forming four lanes.


    Along the inner tables, there were docking stations, each with power, wired access to a private network, and a USB thumb drive. Along the outer tables, there were desk organizers ready to hold several devices in a vertical position, and next to each organizer was a power strip with power cables at the ready.

    In this phase of the preparation, the person working the station would take a device, pop it into a docking station, and power it on with the magic sequence to boot from USB. The USB stick copied itself to a RAM drive, then ran scripts to reformat the hard drive and copy all the setup files from the private network onto the hard drive, then it installed the build onto the machine, installed Visual Studio, installed the sample applications, flashed the firmware, and otherwise prepared the machine for unboxing. (Not necessarily in that order; I didn't write the scripts, so I don't know what they did exactly. But I figure these were the basic steps.) Once the setup files were copied from the private network, the rest of the installation could proceed autonomously. It didn't need any further access to the USB stick or the network. Everything it needed was on the RAM drive or the hard drive.

    The scripts changed the screen color based on what step of the process the device was in, so that the person working the station could glance over all the devices to see which ones needed attention. Once all the files were copied from the network, the device was unplugged from the docking station and moved to the vertical desk organizer. There, it got hooked up with a power cable and left to finish the installation. Moving the device to the second table freed up the docking station to accept another device.

    Assuming everything went well, the screen turned green to indicate that installation was complete, and the device was unplugged, powered down, and placed in the stack of devices that were ready for quality control.

    The devices that passed quality control then needed to be boxed up so they could be handed out to the conference attendees. Another assembly line formed: The devices were placed back in the protective sleeves, nestled snugly in their boxes, and the boxes closed back up.

    Now, I'm describing this all as if everything ran perfectly smoothly. Of course, problems arose, some minor and some serious, and the process got tweaked as the days progressed in order to make things more efficient or to address a problem that was discovered.

    For example, the devices were labeled preview devices, but shortly before the conference was set to begin, the manufacturer registered their objection to the term, since preview implies that the device will actually turn into a retail product. They insisted that the devices be called prototype devices. This meant that mere days before the conference opened, a rush print job of 5000 stickers had to be shipped down to the convention center in order to cover the word preview with the word prototype. A new step was added to the assembly line: place sticker over offending word.

    Another example of problem-solving on the fly: The SIM chip for the wireless data plan was preinstalled in the device. The chip came on a punch-out card, and the manufacturer decided to leave the card shell in the box. Okay, I guess, except that the card shell had the SIM card's account number printed on it. Since the reassembly process didn't match up the devices with the original boxes, you had all these devices with unmatched card shells. In theory, somebody might call the service provider and give the account number on the shell rather than the number on the SIM card. To fix this, a new step was added to the assembly line: Remove the card shells. All the previously-assembled boxes had to be unpacked so the shells could be removed. (At some point, somebody discovered that you could extract the shells without removing the foam padding if you held the box at just the right angle and shook it, so that saved a few seconds.)

    Now about the devices themselves: They were a very limited run of custom hardware, and they were not cheap. I think the manufacturing cost was in the high $2000s per unit, and that doesn't count all the sunk costs. I found it amusing when people wrote, "What do you mean a free tablet? Obviously they baked that into the cost of the conference registration, so you paid for it anyway." Conference registration was $2,095 (or $1,595 if you registered early), which came nowhere near covering the cost of the device.

    Some people whined that Microsoft should have made these devices available to the general public for purchase. First of all, these are developer prototypes, not consumer-quality devices. They are suitable for developing Windows 8 software but aren't ready for prime time. (For one thing, they run hot. More on that later.) Second of all, there aren't any to sell. We gave them all away! It's not like there's a factory sitting there waiting for orders. It was a one-shot production run. When they ran out, they ran out.¹

    Third, these devices, by virtue of being prototypes, had a high infant mortality rate. I don't know exactly, but I'm guessing that maybe a quarter of them ended up not being viable. One of the things that the advance crew had to do was burn in the devices to try to catch the dead devices. I remember the team being very worried that the hardware helpdesk at the conference would be overwhelmed by machines that slipped through the on-site testing. Luckily, that didn't happen. (Perhaps they were too successful, because everybody ended up assuming that pumping out these puppies was a piece of cake!)

    Doing a little back-of-the-envelope calculation, let's say that the machines cost around $2,750 to produce, and that a quarter of them failed burn-in. Add on top of that a 25% buffer for administrative overhead, and you're looking at a cost-per-device of over $4,500. I doubt there would be many people interested in buying one at that price.

    Especially since you could buy something very similar for around $1100 to $1400. It won't have the hardware customizations, but it'll be close.

    The hardware glitches that occurred during the keynote never appeared during rehearsals in Redmond. But when rehearsing in Anaheim, the hardware started flaking out like crazy and eventually self-destructing. (And like I said, those devices weren't cheap!) One of my colleagues got a call from Los Angeles: "When you come down here, bring as many extra Nikes as you can. We're burning through them like mad!" My colleague ended up pissing off everybody in the airport security line behind her when she got to the X-ray machine and unloaded nine devices onto the conveyor belt. "Great, I just put tens of thousands of dollars worth of top-secret hardware on an airport X-ray machine. I hope nothing happens to them."

    Why did the devices start failing during rehearsals in Anaheim, when they ran just fine in Redmond? Because in Anaheim, the devices were being run at full brightness all the time (so they showed up better on camera), and they were driving giant video displays, and they were sitting under hot stage lights for hours on end. On top of that, I'm told that the HDMI protocol is bidirectional, so it's possible that the giant video displays at the convention center were feeding data back into the devices in a way that they couldn't handle. Put all that together, and you can see why the devices would start overheating.

    What made it worse was that in order to cram all the extra doodads and sensors into the device, the intestines had to be rearranged, and the touch processor chip ended up being placed directly over the HDMI processor chip. That meant that when the HDMI chip overheated, it caused the touch processor to overheat, too. If you watched the keynote carefully, you'd have seen that shortly before the machine on stage blew up, the touch sensor flipped out and generated phantom touches all over the screen. That was the clue that the machine was about to die from overheating and that it would be in the presenter's best interest to switch to another machine quickly. (The problem, of course, is that the presenter is looking out into the audience giving the talk, not staring at the device's screen the whole time. As a result, this helpful early warning signal typically goes unnoticed by the very person who can do the most about it.)

    The day before the conference officially began, Jensen Harris did a preview presentation to the media. One of the glitches that hit during his presentation was that the system started hallucinating an invisible hand that kept swiping the Word Hunt sample game back onto the screen. Jensen quipped, "This is our new auto-Word Hunt feature. We want to make sure you always have Word Hunt when you need it. We've moved beyond touch. Now you don't even need to touch your PC to get access to Word Hunt."

    Jensen's phenomenal calm in the face of adversity also manifested itself during his keynote presentation. You in the audience never noticed it, but at one point, one of the demo applications hit a bug and hung. Jensen spotted the problem before it became obvious and smoothly transitioned to another device and continued. What's more, while he was talking, he went back to the first device and surreptitiously called up Task Manager, killed the hung application, and prepared the device for the next demo. All this without skipping a beat.

    We are all in awe of Jensen.

    When he stopped by the booth, Jensen said to me, "I don't know how you can stand it, Raymond. Now I can't walk down the hallway without a dozen people coming up to me and wanting to say something or shake my hand or get my autograph!" (One of the rare times we are both in the same room.)

    Welcome to nerd celebrity, Jensen. You just have to smile and be polite.

    Bonus chatter: What happened to the devices that failed quality control? A good number of them were rejected for cosmetic reasons (scuff marks, mostly). As a thank-you gift to the advance crew for all their hard work, everybody was given their choice of a scuffed-up device to take home. The remaining devices that were rejected for purely cosmetic reasons were taken back to Redmond and distributed to the product team to be used for internal testing purposes.

    ¹ My group had one of these scuffed-up devices that we used for internal testing. Somebody dropped it, and a huge spiderweb crack covered the left third of the screen, so you had to squint to see what was on the screen through the cracks. We couldn't order a replacement because there was nowhere to order replacements from. We just had to continue testing with a device that had a badly cracked screen.

  • The Old New Thing

    Dark corners of C/C++: The typedef keyword doesn't need to be the first word on the line


    Here are some strange but legal declarations in C/C++:

    int typedef a;
    short unsigned typedef b;

    By convention, the typedef keyword comes at the beginning of the line, but this is not actually required by the language. The above declarations are equivalent to

    typedef int a;
    typedef short unsigned b;

    The C language (but not C++) also permits you to say typedef without actually defining a type!

    typedef enum { c }; // legal in C, not C++

    In the above case, the typedef is ignored, and it's the same as just declaring the enum the plain boring way.

    enum { c };

    Other weird things you can do with typedef in C:

    typedef int;
    typedef int short;

    None of the above statements do anything, but they are technically legal in pre-C89 versions of the C language. They are just alternate manifestations of the quirk in the grammar that permits you to say typedef without actually defining a type. (In C89, this loophole was closed: Clause 6.7 Constraint 2 requires that "A declaration shall declare at least a declarator, a tag, or the members of an enumeration.")

    That last example of typedef int short; is particularly misleading, since at first glance it sounds like it's redefining the short data type. But then you realize that int short and short int are equivalent, and this is just an empty declaration of the short int data type. It doesn't actually widen your shorts. If you need to widen your shorts, go see a tailor.¹

    Note that just because it's legal doesn't mean it's recommended. You should probably stick to using typedef the way most people use it, unless you're looking to enter the IOCCC.

    ¹ The primary purpose of this article was to tell that one stupid joke. And it's not even my joke!

  • The Old New Thing

    Technically not lying, but not exactly admitting fault either


    I observed a spill suspiciously close to a three-year-old's play table. I asked, "How did the floor get wet?"

    She replied, "Water."

    It's not lying, but it's definitely not telling the whole story. She'll probably grow up to become a lawyer.

  • The Old New Thing

    If you don't know what you're going to do with the answer to a question, then there's not much point in making others work hard to answer it


    A customer asked the following question:

    We've found that on Windows XP, when we call the XYZ function with the Awesome flag, the function fails for no apparent reason. However, it works correctly on Windows 7. Do you have any ideas about this?

    So far, the customer has described what they have observed, but they haven't actually asked a question. An observation is not a question. (I'm rejecting "Do you have any ideas about this?" as a question because it is too vague to be a meaningful question.)

    Please be more specific about your question. Do you want to obtain Windows 7-style behavior on Windows XP? Do you want to obtain Windows XP-style behavior on Windows 7? Do you merely want to understand why the two behave differently?

    The customer replied,

    Why do they behave differently? Was it a new design for Windows 7? If so, how do the two implementations differ?

    I fired up a handy copy of Windows XP in a virtual machine and started stepping through the code, and then I stopped, realizing I was about to do a few hours' worth of investigation for no clear benefit. So instead I responded to their question with a question of my own.

    Why do you want to know the reason for the change in behavior? How will the answer affect what you do next? Consider the following three answers:

    1. "The behavior was redesigned in Windows 7."
    2. "The Windows XP behavior was a bug that was fixed in Windows 7."
    3. "The behavior change was a side-effect of a Windows Update hotfix."

    What will you do differently if the answer is (1) rather than (2) or (3)?

    The customer never responded. That saved me a few hours of my life.

    If you don't know what you're going to do with the answer to a question, then there's not much point in others working hard to answer it. You're just creating work for others for no reason.

  • The Old New Thing

    If you're going to use an interlocked operation to generate a unique value, you need to use it before it's gone


    Is the Interlocked­Increment function broken? One person seemed to think so.

    We're finding that Interlocked­Increment is producing duplicate values. Are there any known bugs in Interlocked­Increment?

    Because of course when something doesn't work, it's because you are the victim of a vast conspiracy. There is a fundamental flaw in the Interlocked­Increment function that only you can see. You are not a crackpot.

    Here's a simplified version of the code they were using:

    LONG g_lNextAvailableId = 0;

    DWORD GetNextId()
    {
      // Increment atomically
      InterlockedIncrement(&g_lNextAvailableId);
      // Subtract 1 from the current value to get the value
      // before the increment occurred.
      return (DWORD)g_lNextAvailableId - 1;
    }

    Recall that the Interlocked­Increment function increments a value atomically and returns the incremented value. If you are interested in the result of the increment, you need to use the return value directly and not try to read the variable you incremented, because that variable may have been modified by another thread in the interim.

    Consider what happens when two threads call Get­Next­Id simultaneously (or nearly so). Suppose the initial value of g_lNext­Available­Id is 4.

    • First thread calls Interlocked­Increment to increment from 4 to 5. The return value is 5.
    • Second thread calls Interlocked­Increment to increment from 5 to 6. The return value is 6.
    • First thread ignores the return value and instead reads the current value of g_lNext­Available­Id, which is 6. It subtracts 1, leaving 5, and returns it.
    • Second thread ignores the return value and instead reads the current value of g_lNext­Available­Id, which is still 6. It subtracts 1, leaving 5, and returns it.

    Result: Both calls to Get­Next­Id return 5. Interpretation: "Interlocked­Increment is broken."

    Actually, Interlocked­Increment is working just fine. What happened is that the code threw away the unique information that Interlocked­Increment returned and instead went back to the shared variable, even though the shared variable changed its value in the meantime.

    Since this code cares about the result of the increment, it needs to use the value returned by Interlocked­Increment.

    DWORD GetNextId()
    {
      // Increment atomically and subtract 1 from the
      // incremented value to get the value before the
      // increment occurred.
      return (DWORD)InterlockedIncrement(&g_lNextAvailableId) - 1;
    }

    Exercise: Criticize this implementation of IUnknown::Release:

    STDMETHODIMP_(ULONG) CObject::Release()
    {
      InterlockedDecrement(&m_cRef);
      if (m_cRef == 0)
      {
        delete this;
        return 0;
      }
      return m_cRef;
    }

  • The Old New Thing

    Is it legal to have a cross-process parent/child or owner/owned window relationship?


    A customer liaison asked whether it was legal to use Set­Parent to create a parent/child relationship between windows which belong to different processes. "If I remember correctly, the documentation for Set­Parent used to contain a stern warning that it is not supported, but that remark does not appear to be present any more. I have a customer who is reparenting windows between processes, and their application is experiencing intermittent instability."

    Is it technically legal to have a parent/child or owner/owned relationship between windows from different processes?

    Yes, it is technically legal.

    It is also technically legal to juggle chainsaws.

    Creating a cross-thread parent/child or owner/owned window relationship implicitly attaches the input queues of the threads which those windows belong to, and this attachment is transitive: If one of those queues is attached to a third queue, then all three queues are attached to each other. More generally, queues of all windows related by a chain of parent/child or owner/owned or shared-thread relationships are attached to each other.

    Exercise: What are the equivalence classes generated by taking the transitive closure of parent/child windows, and what would be a natural choice of class representative? What about the equivalence classes generated by the transitive closure of parent/child and owner/owned windows?

    This gets even more complicated when the parent/child or owner/owned relationship crosses processes, because cross-process coordination is even harder than cross-thread coordination. Sharing variables within a process is much easier than sharing variables across processes. On top of that, some window messages are blocked between processes.

    So yes, it is technically legal, but if you create a cross-process parent/child or owner/owned relationship, the consequences can be very difficult to manage. And they become near-impossible to manage if one or both of the windows involved is unaware that it is participating in a cross-process window tree. (I often see this question in the context of somebody who wants to grab a window belonging to another process and forcibly graft it into their own process. That other process was totally unprepared for its window being manipulated in this way, and things may stop working. Indeed, things will definitely stop working if you change that other window from a top-level window to a child window.)

    The existing text was probably removed when somebody pointed out that the action is technically legal (though not recommended for beginners), and rather than try to come up with new text that describes the situation, they simply removed the text that was incorrect. The problem with coming up with new text that describes the situation is that it only leads to more questions from people who want to do it in spite of the warnings. (It's one of those "if you don't already know what the consequences are, then you are not smart enough to do it correctly" things. You must first become the master of the rules before you can start breaking them.)

  • The Old New Thing

    Why does CoCreateInstance work even though my thread never called CoInitialize? The curse of the implicit MTA


    While developing tests, a developer observed erratic behavior with respect to Co­Create­Instance:

    In my test, I call Co­Create­Instance and it fails with CO_E_NOT­INITIALIZED. Fair enough, because my test forgot to call Co­Initialize.

    But then I went and checked the production code: In response to a client request, the production code creates a brand new thread to service the request. The brand new thread does not call Co­Initialize, yet its call to Co­Create­Instance succeeds. How is that possible? I would expect the production code to also get a CO_E_NOT­INITIALIZED error.

    I was able to debug this psychically, but only because I knew about the implicit MTA.

    The implicit MTA is not something I can find very much documentation on, except in the documentation for the APT­TYPE­QUALIFIER enumeration, where it mentions:

    [The APT­TYPE­QUALIFIER_IMPLICIT_MTA] qualifier is only valid when the pAptType parameter of the Co­Get­Apartment­Type function specifies APT­TYPE_MTA on return. A thread has an implicit MTA apartment type if it does not initialize the COM apartment itself, and if another thread has already initialized the MTA in the process. This qualifier informs the API caller that the MTA of the thread is implicitly inherited from other threads and is not initialized directly.

    Did you get that? If any thread in the process calls Co­Initialize­[Ex] with the COINIT_MULTI­THREADED flag, then that not only initializes the current thread as a member of the multi-threaded apartment, but it also says, "Any thread which has never called Co­Initialize­[Ex] is also part of the multi-threaded apartment."

    Further investigation revealed that yes, some other thread in the process called Co­Initialize­Ex(0, COINIT_MULTI­THREADED), which means that the thread which forgot to call Co­Initialize was implicitly (and probably unwittingly) placed in the MTA.

    The danger of this implicit MTA, of course, is that since you didn't know you were getting it, you also don't know if you're going to lose it. If that other thread which called Co­Initialize­Ex(0, COINIT_MULTI­THREADED) finally gets around to calling Co­Un­initialize, then it will tear down the MTA, and your thread will have the MTA rug ripped out from under it.

    Moral of the story: If you want the MTA, make sure you ask for it explicitly. And if you forget, you may end up in the implicit MTA, whether you wanted it or not. (Conversely, if you don't want the MTA, make sure to ask for the apartment you do want explicitly!)

    Exercise: Use your psychic debugging skills to diagnose the following problem. "When my code calls Get­Open­File­Name, it behaves erratically. I saw a Knowledge Base article that says that this can happen if I initialize my thread in the multi-threaded apartment, but my thread does not do that."
