April, 2010

  • The Old New Thing

    Welcome to Taiwan's premier English-only nightclub

    One of my friends is fluent in both Mandarin and English. When she lived in Taiwan, she paid a visit to a nightclub whose gimmick was that you had to speak English. The target audience was not foreigners but rather native Taiwanese who learned English as a second language. My friend didn't have any problems with this rule, but many of the guests appeared to be struggling to conform.

    My friend paid a visit to the ladies' room, and there she overheard a conversation between two other guests. (They were speaking in Mandarin. Apparently, the rules aren't enforced in the bathroom.)

    "There's this cute guy out on the dance floor, but I don't know what to say to him. My English is not very good."

    My friend told her, "That's okay. His English isn't very good either."

  • The Old New Thing

    If it's not yours, then don't mess with it without permission from the owner

    It's surprising how many principles of real life also apply to computer programming. For example, one of the rules of thumb for real life is that if something doesn't belong to you, then you shouldn't mess with it unless you have permission from the owner. If you want to ride Jimmy's bike, then you need to have Jimmy's permission. Even if Jimmy leaves his bicycle unlocked in his driveway, that doesn't mean that it's available for anyone to take or borrow.

    In computer programming, the component that creates an object (or on whose behalf the object is created) controls what is done with the object, and if you're not that component, then it's only right to get the permission of that component before you start messing with what it thought was its private property.

    Application compatibility is, in large part, dealing with programs which violate this rule of civilized society: programs which directly manipulate the contents of list views they did not create, which use reflection to access private members of classes, that sort of thing. But I won't use that as the motivating example this time, because you're all sick and tired of that.

    Instead, let's look at the low-fragmentation heap. The question is, "Under what conditions can I convert a heap to a low-fragmentation heap?"

    Well, if you called Heap­Create, then that heap is yours and you decide what the rules are. If you want that heap to be a low-fragmentation heap, then more power to you.

    If you didn't call Heap­Create then that heap doesn't belong to you; you're just a guest. But of course the owner of the heap can grant permission to you, at which point you are free to do whatever it was the owner said you could do. If Jimmy says, "You can borrow my bike if it's just sitting in the driveway," then you can borrow his bicycle if it is just sitting in the driveway. But if it's in the garage, then you can't borrow it. And even if it's sitting in the driveway, you can't sell it. You can only borrow it.

    Okay, let's look at heaps again. If you are an executable, then the process heap was created on your behalf. (This is not obvious, but that's the guidance I've received from the people who work with this sort of thing.) Therefore, if you want, you can call Get­Process­Heap and convert that heap to a low-fragmentation heap. It's the heap for your process, so if you want it to be a low-fragmentation heap, the heap folks say that's okay with them.
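
    In code, that conversion is a one-liner around Heap­Set­Information. A minimal sketch (the helper name is mine; the value 2 is the documented code for the low-fragmentation heap):

    #include <windows.h>

    // Ask for the low-fragmentation heap on the process heap.
    // Note that HeapSetInformation can fail, for example when the
    // debug heap is in use under a debugger.
    BOOL ConvertProcessHeapToLfh(void)
    {
        ULONG heapInformation = 2; // 2 = low-fragmentation heap
        return HeapSetInformation(GetProcessHeap(),
                                  HeapCompatibilityInformation,
                                  &heapInformation,
                                  sizeof(heapInformation));
    }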

    On the other hand, if you're writing a DLL, then the process heap does not belong to you, nor was it created on your behalf. It belongs to the executable that loaded your DLL, and it is that executable which decides what type of heap it wants. If you would prefer that your DLL use a low-fragmentation heap, you can include that in the guidance in your DLL's documentation, but be aware that the process heap is shared with all DLLs in the process, so the hosting application may not be able to comply with your guidance if it is also using another DLL whose guidance documentation says that it should not be used with a low-fragmentation heap. If a low-fragmentation heap is really important to your DLL, then you can create your own heap with Heap­Create and set it into low-fragmentation mode. When you create a heap with Heap­Create, it's your heap, and you get to decide what the rules are.
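
    Here's a sketch of that do-it-yourself approach, with illustrative names (the heap handle would presumably live somewhere your DLL's allocator can see it):

    #include <windows.h>

    static HANDLE g_privateHeap; // owned by this DLL

    BOOL CreatePrivateLfhHeap(void)
    {
        ULONG heapInformation = 2; // 2 = low-fragmentation heap
        // Growable, serialized heap. (The low-fragmentation heap cannot
        // be enabled on heaps created with HEAP_NO_SERIALIZE or with a
        // fixed maximum size.)
        g_privateHeap = HeapCreate(0, 0, 0);
        if (g_privateHeap == NULL) return FALSE;
        return HeapSetInformation(g_privateHeap,
                                  HeapCompatibilityInformation,
                                  &heapInformation,
                                  sizeof(heapInformation));
    }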

    If you use the C runtime library default heap, then that heap is under the control of the C runtime library, and you don't have the rights to change its parameters. However, the C runtime library is one of the examples where you're allowed to use an object that's not yours if you have permission from the owner: The _get_heap_handle function was specifically created so that you could convert the heap to a low-fragmentation heap. But now that you've unwrapped one layer of ownership, there is still the question of which of the C runtime's clients is the decision-maker with regard to how that heap is to be configured.

    Remember that a DLL is a guest in the host process. You don't go changing the carpets in someone's house just because you're visiting.

    If you linked the C runtime library statically, then you are the only client of that heap, and you are therefore free to convert it to a low-fragmentation heap. (If you bring your own towels to someone's house, then you are free to abuse them in any manner you choose.) On the other hand, if you linked the C runtime library dynamically, then you're using the shared C runtime heap, and the authority to determine the mode of that heap belongs to the hosting executable.
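
    In the statically linked case, the conversion is the same Heap­Set­Information pattern as before, just aimed at the handle that _get_heap_handle returns (helper name again mine):

    #include <malloc.h>   // _get_heap_handle
    #include <windows.h>

    BOOL ConvertCrtHeapToLfh(void)
    {
        ULONG heapInformation = 2; // 2 = low-fragmentation heap
        return HeapSetInformation((HANDLE)_get_heap_handle(),
                                  HeapCompatibilityInformation,
                                  &heapInformation,
                                  sizeof(heapInformation));
    }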

  • The Old New Thing

    A short puzzle about heap expansion

    At the 2008 PDC, somebody stopped by the Ask the Experts table with a question about the heap manager.

    I don't understand why the heap manager is allocating a new segment. I allocated a bunch of small blocks, then freed nearly all of them. And then when my program makes a large allocation, it allocates a new segment instead of reusing the memory I had just freed.

    Under the classical model of the heap, the heap manager allocates a large chunk of memory from lower-level operating system services, and then when requests for memory come in from the application, it carves blocks of memory from the big chunk and gives them to the application. (These blocks are called busy.) When those blocks of memory are freed, they are returned to the pool of available memory, and if there are two blocks of free memory adjacent to each other, they are combined (coalesced) to form a single larger block. That way, the block can be used to satisfy a larger allocation in the future.

    Under the classical model, allocating memory and then freeing it is a net no-operation. (Nitpicky details notwithstanding.) The allocation carves the memory out of the big slab of memory, and the free returns it to the slab. Therefore, the situation described above is a bit puzzling. After the memory is freed back to the heap, the little blocks should coalesce back into a block big enough to hold a larger allocation.

    I sat and wondered for a moment, trying to think of cases where coalescing might fail, like if the program happened to leave an allocated block right in the middle of the chunk. Or maybe there's some non-classical behavior going on. For example, maybe the look-aside list was keeping those blocks live.

    As I considered the options, the person expressed disbelief in a different but telling way:

    You'd think the low-fragmentation heap (LFH) would specifically avoid this problem.

    Oh wait, you're using the low-fragmentation heap! This is a decidedly non-classical heap implementation: Instead of coalescing free blocks, it keeps the free blocks distinct. The idea of the low-fragmentation heap is to reduce the likelihood of various classes of heap fragmentation problems:

    • You want to make a large allocation, and you almost found it, except that there's a small allocation in the middle of your large block that is in your way.
    • You have a lot of free memory, but it's all in the form of teeny tiny useless blocks.

    That first case is similar to what I had been considering: where you allocate a lot of memory, free most of it, but leave little islands behind.

    The second case occurs when you have a free block of size N, and somebody allocates a block of size M < N. The heap manager breaks the large block into two smaller blocks: a busy block of size M and a free block of size (N − M). These "leftover" free blocks aren't a problem if your program later requests a block of size N − M: The leftover block can be used to satisfy the allocation, and no memory goes wasted. But if your program never asks for a block of size N − M, then the block just hangs around as one of those useless blocks.

    Imagine, for concreteness, a program that allocates memory in a loop like this:

    • p1 = alloc(128)
    • p2 = alloc(128)
    • free(p1)
    • p3 = alloc(96)
    • (Keep p2 and p3 allocated.)
    • Repeat
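
    Transcribed into C for concreteness (using malloc and free, though the same pattern applies to any heap; error checking omitted):

    #include <stdlib.h>

    void AllocationLoop(int iterations)
    {
        for (int i = 0; i < iterations; i++) {
            void *p1 = malloc(128);
            void *p2 = malloc(128);
            free(p1);
            void *p3 = malloc(96);
            // p2 and p3 are intentionally kept allocated (never freed)
            // for the purposes of the experiment.
            (void)p2;
            (void)p3;
        }
    }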

    Under the classical model, when the request for 96 bytes comes in, the memory manager sees that 128-byte block (formerly known as p1) and splits it into two parts, a 96-byte block and a 32-byte block. The 96-byte block becomes block p3, and the 32-byte block sits around waiting for somebody to ask for 32 bytes (which never happens).

    Each time through this loop, the heap grows by 256 bytes. Of those 256 bytes, 224 are performing useful work in the application, and 32 bytes are sitting around being one of those useless tiny memory allocations which contributes to fragmentation.

    The low-fragmentation heap tries to avoid this problem by keeping similar-sized allocations together. A heap block only gets re-used for the same size allocation it was originally created for. (This description is simplified for the purpose of the discussion.) (I can't believe I had to write that.)

    In the above scenario, the low-fragmentation heap would respond to the request to allocate 96 bytes not by taking the recently-freed 128-byte block and splitting it up, but rather by making a brand new 96-byte allocation. This seems wasteful. After all, you now allocated 128 + 128 + 96 = 352 bytes even though the application requested only 128 + 96 = 224 bytes. (The classical heap would have re-used the first 96 bytes of the second 128-byte block, for a total allocation of 128 + 128 = 256 bytes.)

    This seemingly wasteful use of memory is really an investment in the future. (I need to remember to use that excuse more. "No, I'm not being wasteful. I'm just investing in the future.")

    The investment pays off at the next loop iteration: When the request for 128 bytes comes in, the heap manager can return the 128-byte block that was freed by the previous iteration. Now there is no waste in the heap at all!

    Suppose the above loop runs 1000 times. A classical heap would end up with a thousand 128-byte allocations, a thousand 96-byte allocations, and a thousand 32-byte free blocks on the heap. That's 31KB of memory in the heap lost to fragmentation, or about 12%. On the other hand, the low-fragmentation heap would end up with a thousand 128-byte allocations, a thousand 96-byte allocations, and one 128-byte free block. Only 128 bytes has been lost to fragmentation, or just 0.06%.

    Of course, I exaggerated this scenario in order to make the low-fragmentation heap look particularly good. The low-fragmentation heap operates well when heap allocation sizes tend to repeat, because the repeated-size allocation will re-use a freed allocation of the same size. It operates poorly when you allocate blocks of a certain size, free them, then never ask for blocks of that size again (since those blocks just sit around waiting for their chance to shine, which never comes). Fortunately, most applications don't fall into this latter category: Allocations tend to be for a set of fixed sizes (fixed-size objects), and even allocations for variable-sized objects tend to cluster around a few popular sizes.

    Generally speaking, the low-fragmentation heap works pretty well for most classes of applications, and you should consider using it. (In fact, I'm told that the C runtime libraries have converted the default C runtime heap to be a low-fragmentation heap starting in Visual Studio 2010.)

    On the other hand, it's also good to know a little of how the low-fragmentation heap operates, so that you won't be caught out by its non-classical behavior. For example, you should now be able to answer the question which was posed at Ask the Experts. As you can see, it often doesn't take much to be an expert. You can do it, too.

    Sidebar: Actually, I was able to answer the customer's question even without knowing anything about the low-fragmentation heap prior to the customer mentioning it. (Indeed, I had barely even heard of it until that point.) Just given the name low-fragmentation heap, I was able to figure out roughly how such a beast would have operated. I wasn't correct on the details, but the underlying principles were good. So you see, you don't even have to know what you're talking about to be an expert. You just have to be able to take a scenario and think, "How would somebody have designed a system to solve this problem?"

  • The Old New Thing

    What happens to the contents of a memory-mapped file when a process is terminated abnormally?

    Bart wonders what happens to the dirty contents of a memory-mapped file when an application is terminated abnormally.

    From the kernel's point of view, there isn't much difference between a normal and an abnormal termination. In fact, the last thing that Exit­Process does is Terminate­Process(Get­Current­Process(), Exit­Code), so in a very real sense the two operations are identical from the kernel's point of view. The only difference is that in a controlled termination, DLLs get their DLL_PROCESS_DETACH notifications, whereas in an abnormal termination, they don't. But given that the advice for DLLs is to do as little as possible during process termination (including suppressing normal cleanup), the difference even there is negligible.

    Therefore, the real question is: What happens to the dirty contents of a memory-mapped file when an application exits without closing the handle?

    If a process exits without closing all its handles, the kernel will close them on the process's behalf. Now, in theory, the kernel could change its behavior depending on why a handle is closed—skipping some steps if the handle is being closed as part of cleanup and performing additional ones if it came from an explicit Close­Handle call. So it's theoretically possible that the unwritten memory-mapped data may be treated differently. (Although it does violate the principle of not keeping track of information you don't need. But as we've seen, sometimes you have to violate a principle.)

    But there's also the guarantee that multiple memory-mapped views of the same local file are coherent; that is, that changes made to one view are immediately reflected in other views. Therefore, if there were another view of that memory-mapped file which you neglected to close manually, any changes you had made to that view would still be visible in other views, so the contents were not lost. It's not like the kernel is going to fire up its time machine and say, "Okay, those writes to the memory-mapped file which this terminated application made, I'm going to go back and undo them even though I had already shown them to other applications."

    In other words, in the case where the memory-mapped view is to a local file, and there happens to be another view on the file, then the changes are not discarded, since they are being kept alive by that other view.
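
    You can see the coherence guarantee in action with a sketch like this (error handling omitted; the assertions are the point):

    #include <windows.h>
    #include <assert.h>

    void TwoViewsAreCoherent(HANDLE file) // file opened for read/write
    {
        HANDLE mapping = CreateFileMapping(file, NULL, PAGE_READWRITE,
                                           0, 4096, NULL);
        char *view1 = MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, 4096);
        char *view2 = MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, 4096);

        view1[0] = 'x';           // write through one view...
        assert(view2[0] == 'x');  // ...immediately visible through the other

        UnmapViewOfFile(view1);   // closing one view does not undo the write
        assert(view2[0] == 'x');

        UnmapViewOfFile(view2);
        CloseHandle(mapping);
    }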

    Therefore, if the kernel were to discard unflushed changes to the memory-mapped view, it would have to have not one but two special cases. One for the "this handle is being closed implicitly due to an application exiting without closing all its handles" case and another for the "this handle is being closed implicitly due to an application exiting without closing all its handles when the total number of active views is less than two" case.

    I don't know what the final answer is, but if the behavior were any different from the process closing the handle explicitly, it would require two special-case behaviors in the kernel. I personally consider this unlikely. Certainly if I were writing an operating system, I wouldn't bother writing these two special cases.

    If you think like the memory manager, then you come to the same conclusion from a different direction. If you think about the lifetime of a committed page, there is a small set of operations that each page type needs to perform.

    • Page in: Produce the contents of the page.
    • Make dirty: The page has been written to for the first time.
    • Page out dirty: The page is about to be removed from memory. The application has written to the page since it was paged in.
    • Page out clean: The page is about to be removed from memory. The application has not written to the page since it was paged in.
    • Decommit dirty: The page is about to be removed from memory, and it was written to since it was paged in.
    • Decommit clean: The page is about to be removed from memory, and it was not written to since it was paged in.

    The different types of committed pages implement these operations in different ways. Because, after all, that's what makes them different.

    • Zero-initialized memory
      • Page in: Fill the page with zeroes.
      • Make dirty: Locate a free page in the swap file, assign it to this page, set type to "allocated memory".
      • Page out dirty: (never happens)
      • Page out clean: Do nothing.
      • Decommit dirty: (never happens)
      • Decommit clean: Do nothing.
    • Allocated memory
      • Page in: Read page contents from swap file.
      • Make dirty: Do nothing.
      • Page out dirty: Write page contents to swap file.
      • Page out clean: Do nothing.
      • Decommit dirty: Free the page from the swap file.
      • Decommit clean: Free the page from the swap file.
    • Memory-mapped file
      • Page in: Read page contents from file.
      • Make dirty: Do nothing.
      • Page out dirty: Write page contents to file.
      • Page out clean: Do nothing.
      • Decommit dirty: Write page contents to file.
      • Decommit clean: Do nothing.

    There are other types of pages (such as copy-on-write pages, the null page, and physical pages), but they aren't relevant here.

    Note that these operations apply to the pages and not to the address space. Memory can be committed without being visible in the address space, and a single page can be visible in multiple address spaces at once, or even multiple times within the same address space! The reason two views onto the same local file are coherent is that they are merely two manifestations of the same underlying committed page. The part of the memory manager that manages committed memory doesn't know where in the address space (if anywhere) the memory is going to be mapped, nor does it know why the requested operation is taking place (beyond the circumstances implied by the operation itself).

    When a memory-mapped file page is decommitted, the appropriate Decommit function is called, and if the page is dirty, then the contents are flushed to the underlying file. It doesn't know why the decommit happened, so it can't perform any special shortcuts depending on the circumstances that led to the decommit.

    Consider a memory-mapped file with two views. One view closes normally. The page is still committed (the second view is still using it), so no Decommit happens yet. Then the process which was using the second view terminates abnormally. The Decommit must still be treated as a normal (not abnormal) decommit, because the first process did terminate normally, and therefore is under the not unreasonable expectation that its changes will make it into the file. In order to protect against discarding changes which earlier (now-closed) views had made, an extra bit would have to be carried for each committed page that says, "This page contains data that we promised to write back to the file (because somebody wrote to it and then closed the view normally)." You would set this flag on every page in a view when you close the view normally, or if you close the view due to abnormal process termination if there are other still-running processes that are using the view (because the changes are visible to them), and you would clear this flag after each Page out operation. Then you could add another type of decommit, Decommit leaked, which is used when a page that contains no changes from properly-closed views is decommitted because the last remaining reference to it was from a process that terminated abnormally.

    You could do all this work in your memory manager, but why bother? It's additional bookkeeping just to optimize the case where somebody is doing something wrong.
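
    (And if you're the application, the way to avoid relying on cleanup in the first place is to flush your changes explicitly before unmapping, something along these lines:)

    #include <windows.h>

    void FlushAndUnmap(void *view, SIZE_T size)
    {
        // Start writing any dirty pages in the range back to the file,
        // then release the view.
        FlushViewOfFile(view, size);
        UnmapViewOfFile(view);
    }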

  • The Old New Thing

    He bought the whole seat, but we only needed the edge

    After the Windows 95 project was released to manufacturing, but before the launch event itself, the team finally had a chance to relax and unwind after many years of hard work. The project manager decided to have a morale event to get everyone together to do something fun. A typical morale event might be going to see a baseball game, renting out a movie theater to watch the latest action flick, or something as simple as a picnic or a softball game.

    But this time, the project manager decided to do something different, something wild, something crazy, something everybody would talk about for days: He bought everyone tickets to the monster truck rally. (And he bought the whole seat, even though we'd only need the edge.)

  • The Old New Thing

    Why doesn't TryEnterCriticalSection try harder?

    Bart wants to know why the Try­Enter­Critical­Section function gives up if the critical section is busy instead of trying the number of times specified by the critical section spin count.

    Actually, there was another condition on the proposed new behavior: "but does not release its timeslice to the OS if it fails to get it while spinning." This second condition is a non-starter because you can't prevent the operating system from taking your timeslice away from you. The best you can do is detect that you lost your previous timeslice when you receive the next one. And even that is expensive: You have to keep watching the CPU cycle counter, and if it jumps by too much, then you lost your timeslice. (And you might have lost it due to a hardware interrupt or paging. Good luck stopping those from happening.)

    Even if there were a cheap way of detecting that the operating system was about to take your timeslice away from you, what good would it do? "Oh, my calculations indicate that if I spin one more time, I will lose my timeslice, so I'll just fail and return." Now the application regains control with 2 instructions left in its timeslice. That's not even enough time to test the return value and take a conditional jump! Even if the Try­Enter­Critical­Section managed to return just before the timeslice expired, that's hardly any consolation, because the timeslice is going to expire before the application can react to it. Whatever purpose there was to "up to the point where you're about to release the timeslice" is lost.

    Okay, maybe the intention of that clause was "without intentionally releasing its timeslice (but if it loses its timeslice in the normal course of events, well that's the way the cookie crumbles)." That brings us back to the original question. Why doesn't Try­Enter­Critical­Section try harder? Well, because if it tried harder, then the people who didn't want it to try hard at all would complain that it tried too hard.

    The function Try­Enter­Critical­Section may have been ambiguously named, because the name doesn't describe how hard the function should try. Though in general, functions named TryXxx try only once, and that's the number of times Try­Enter­Critical­Section tries. Perhaps a clearer (but bulkier) name would have been Enter­Critical­Section­If­Not­Owned­By­Another­Thread.

    The Try­Enter­Critical­Section function represents the core of the Enter­Critical­Section function. In pseudocode, the two functions work like this:

    BOOL TryEnterCriticalSection(CRITICAL_SECTION *CriticalSection)
    {
      atomically {
        if (CriticalSection is free or is owned by the current thread) {
          claim the critical section and return TRUE;
        }
      }
      return FALSE;
    }

    void EnterCriticalSection(CRITICAL_SECTION *CriticalSection)
    {
      for (;;) {
        DWORD SpinTimes = 0;
        do {
          if (TryEnterCriticalSection(CriticalSection)) return;
        } while (++SpinTimes < GetSpinCount(CriticalSection));
        WaitForCriticalSectionOwnerToLeave(CriticalSection);
      }
    }

    The Try­Enter­Critical­Section function represents the smallest meaningful part of the Enter­Critical­Section process. If you want it to spin, you can write your own Try­Enter­Critical­Section­With­Spin­Count function:

    #include <windows.h>

    BOOL TryEnterCriticalSectionWithSpinCount(
        CRITICAL_SECTION *CriticalSection,
        DWORD SpinCount)
    {
      DWORD SpinTimes = 0;
      do {
        // Try once; on failure, spin and try again, up to SpinCount times.
        if (TryEnterCriticalSection(CriticalSection)) return TRUE;
      } while (++SpinTimes < SpinCount);
      return FALSE;
    }

    (Unfortunately, there is no Get­Critical­Section­Spin­Count function, so you'll just have to keep track of it yourself.)
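
    A usage sketch, then, keeps the spin count in a place of its own (MY_SPIN_COUNT here is just whatever value you chose at initialization, not a system constant; the function comes from the snippet above):

    #include <windows.h>

    #define MY_SPIN_COUNT 4000

    CRITICAL_SECTION g_cs;

    void Setup(void)
    {
        InitializeCriticalSectionAndSpinCount(&g_cs, MY_SPIN_COUNT);
    }

    void DoWorkIfAvailable(void)
    {
        if (TryEnterCriticalSectionWithSpinCount(&g_cs, MY_SPIN_COUNT)) {
            // ... the protected work goes here ...
            LeaveCriticalSection(&g_cs);
        }
    }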

  • The Old New Thing

    Our legal department suggests you skip our salad dressing and just eat an avocado

    I saw a bottle of salad dressing with very strange fine print. The picture on the bottle is of half an avocado. But the fine print on the bottle reads "Does not contain avocados."

    Okay, so the picture on the bottle isn't a picture of the product. This is strange but not entirely unheard of. After all, a box of Girl Scout cookies has pictures of Girl Scouts, not cookies.

    The thing that struck me was the second half of the fine print. It reads "Serving suggestion."

    Huh?

    Apparently, the suggested way of enjoying their salad dressing is to eat half an avocado with no salad dressing on it.

    Pre-emptive snarky comment: "PCs should have come with a suggestion to use the computer without Windows Vista on it."

  • The Old New Thing

    Why can't I get my regular expression pattern to match words that begin with %?

    A customer asked for help writing a regular expression that, in the customer's words, matched the string %1 when it appeared as a standalone word.

    Match    No match
    %1       %1b
    :%1:     x%1

    One of the things that people often forget to do when asking a question is to describe the things that they tried and what the results were. This is important information to include, because it saves the people who try to answer the question from wasting their time repeating the things that you already tried.

    Pattern   String   Result     Expected
    \b%1\b    %1       No match   Match
    \b%1\b    :%1:     No match   Match
    \b%1\b    x%1      Match      No match
    ^..$      %1       Match      Match

    That last entry was just to make sure that the test app was working, a valuable step when chasing a problem: First, make sure the problem is where you think it is. If the ^..$ hadn't worked, then the problem would not have been with the regular expression but with some other part of the program.

    "Is the \b operator broken?"

    No, the \b operator is working just fine. The problem is that the \b operator doesn't do what you think it does.

    For those not familiar with this notation, well, first you were probably confused by the \b in the original question and skipped the rest of this article. Anyway, \w matches A through Z (either uppercase or lowercase), a digit 0 through 9, or an underscore. (It's actually more complicated than that, but the above description is good enough for the current discussion.) By contrast, \W matches every other character. And in regular expression speak, a "word" is a maximal contiguous string of \w characters. Finally, the \b operator matches the location between a \w and a \W, treating the beginning and end of the string as an invisible \W. I will stop mentioning the pretend \W at the ends of the string; just mentally insert them where applicable.

    Okay, let's go back to the original regular expression of \b%1\b. Notice that the percent sign is not one of the things which is matched by \w. Therefore, in order for the \b that comes before it to match, the character before the percent sign must be a \W. That way, the \b comes between a \w and a \W. The pattern \b%1\b means "A percent sign which comes after a \w, followed by a 1 which comes before a \W."

    Looking at it another way, the string %1 breaks down like this:

    \W   beginning of string (virtual)
    \W   %
    \w   1
    \W   end of string (virtual)

    There is a \b between the % and the 1 and another one between the 1 and the end of the string, but there is no \b before the percent sign, because that location has \W on both sides.

    The question started off on the wrong foot: You are having trouble writing a regular expression that matches a word that begins with % because there are no words which begin with %. The percent sign is not a \w and therefore cannot be part of a word.

    What the customer is looking for is something more like (?<!\w)%1\b, a regular expression which means a percent sign not preceded by a \w, followed by a 1 which comes before a \W.

    The customer realized the mistake once it was pointed out. "I keep forgetting that I can't get % included in \w just because I want it to."

    Michael Kaplan covered this same topic some time ago.

  • The Old New Thing

    Email tip: When asking for help with a problem, also mention what you've already tried

    When you ask a question, you should also mention what steps you've already taken when attempting to solve it on your own.

    First of all, it saves the people who decide to help you with your problem from exploring lines of investigation which you've already tried (and which you know don't work). "I tried setting the timeout to 60 seconds before issuing the call, but it still failed with the error ERROR_NETWORK_UNREACHABLE."

    Second, it cuts down on noise on the discussion list.

    Try setting the timeout to a higher value.

    "I already tried that; it didn't work."

    Third, it demonstrates that you cared enough about the problem to try to solve it yourself. It's surprising how many questions come in from people who didn't even make the slightest effort to solve their problem on their own. "When I issue the command show active, it shows the active tags. How do I filter it to show only active tags that I created?"

    This is explained in the online help: show active -?. If the online help is unclear, please describe what you're having trouble with and we'll work to improve the documentation.

    "Thanks! The explanation in the help is just fine."

    (Yes, that was an actual response.)

  • The Old New Thing

    Email tip: When you say that something didn't work, you have to say how it didn't work

    I illustrate this point with an imaginary conversation, inspired by actual ones I've seen (and, occasionally, been a frustrated party to).

    From: X

    I want to do ABC, but I don't have a DEF. Anybody know of a workaround?

    Somebody has an idea:

    From: Y

    Try mounting this ISO file into a virtual machine and trying the ABC from there.

    Unfortunately, it didn't work:

    From: X

    I tried that, but it didn't work. Any other ideas?

    When somebody suggests a troubleshooting step or a workaround, and you try it and it doesn't work, you need to say how it didn't work. The person who made the suggestion had some expectation that it would work, and just saying that it didn't work will probably just generate an unhelpful response like "Well, try again." Which doesn't help anybody.

    In this example (which I just made up), a better response from X would be something like this:

    • "I tried that, but it didn't work. Virtual PC refused to load the ISO image, putting up the error message 'The CD image could not be captured. You may not have the proper access privileges to the CD image files.'"
    • "I tried that, but it didn't work. Virtual PC loaded the ISO image, but when I tried to view the contents of the CD, I got 'Not ready reading drive D.'"
    • "I tried that, but it didn't work. Virtual PC loaded the ISO image, but when I double-clicked the ABC file, I got the same error that I got when I tried to do ABC directly."

    Each of these is a different failure mode that suggests a different course of action.

    And then the response probably won't be, "Well, try again."
