• The Old New Thing

    Windows Server 2003 can take you back in time

    • 34 Comments

    If you are running Windows Server 2003, you owe it to yourself to enable the Volume Shadow Copy service. This service periodically (according to a schedule you set) captures a snapshot of the files you specify so they can be recovered later. The copies are lazy: If a file doesn't change between snapshots, a new copy isn't made. Up to 64 versions of a file can be recorded in the snapshot database. Bear this in mind when setting your snapshot schedule. If you take a snapshot twice a day, you're good for a month, but if you take a snapshot every minute, you get only an hour's worth of snapshots. You are trading off snapshot frequency against how far back the history reaches.

    Although I can count on one hand the number of times the Volume Shadow Copy service has saved my bacon, each time I needed it, it saved me at least a day's work. Typically, it was because I wasn't paying attention and deleted the wrong file. Once it was because I made some changes to a file, ended up making a bigger mess of things, and would have been better off just returning to the version I had the previous day.

    I just click on "View previous versions of this folder" in the Tasks Pane, pick the snapshot from yesterday, and drag yesterday's version of the file to my desktop. Then I can take that file and compare it to the version I have now and reconcile the changes. In the case of a deleted file, I just click the "Restore" button and back to life it comes. (Be careful about using "Restore" for a file that still exists, however, because that will overwrite the current version with the snapshot version.)

    One tricky bit about viewing snapshots is that it works only on network drives. If you want to restore a file from a local hard drive, you'll need to either connect to the drive from another computer or (what I do) create a loopback connection and restore it via the loopback.

    Note that the Volume Shadow Copy service is not a replacement for backups. The shadow copies are kept on the drive itself, so if you lose the drive, you lose the shadow copies too.

    Given the ability of the Volume Shadow Copy service to go back in time and recover previous versions of a file, you're probably not surprised that the code name for the feature was "Timewarp".

    John, a colleague in security, points out that shadow copies provide a curious backdoor to the quota system. Although you have access to shadow copies of your file, they do not count against your quota. Counting them against your quota would be unfair since it is the system that created these files, not you. (Of course, this isn't a very useful way to circumvent quota, because the system will also delete shadow copies whenever it feels the urge.)

  • The Old New Thing

    Why are INI files deprecated in favor of the registry?

    • 139 Comments

    Welcome, Slashdot readers. Remember, this Web site is for entertainment purposes only.

    Why are INI files deprecated in favor of the registry? There were many problems with INI files.

    • INI files don't support Unicode. Even though there are Unicode versions of the private profile functions, they end up just writing ANSI text to the INI file. (There is a wacked-out way you can create a Unicode INI file, but you have to step outside the API in order to do it.) This wasn't an issue in 16-bit Windows since 16-bit Windows didn't support Unicode either!
    • INI file security is not granular enough. Since it's just a file, any permissions you set are at the file level, not the key level. You can't say, "Anybody can modify this section, but that section can be modified only by administrators." This wasn't an issue in 16-bit Windows since 16-bit Windows didn't do security.
    • Multiple writers to an INI file can result in data loss. Consider two threads that are trying to update an INI file. If they are running simultaneously, you can get this:
      Thread 1                Thread 2
      Read INI file
                              Read INI file
      Write INI file + X
                              Write INI file + Y
      Notice that thread 2's update to the INI file accidentally deleted the change made by thread 1. This wasn't a problem in 16-bit Windows since 16-bit Windows was co-operatively multi-tasked. As long as you didn't yield the CPU between the read and the write, you were safe because nobody else could run until you yielded.
    • INI files can suffer a denial of service. A program can open an INI file in exclusive mode and lock out everybody else. This is bad if the INI file was being used to hold security information, since it prevents anybody from seeing what those security settings are. This was also a problem in 16-bit Windows, but since there was no security in 16-bit Windows, a program that wanted to launch a denial of service attack on an INI file could just delete it!
    • INI files contain only strings. If you wanted to store binary data, you had to encode it somehow as a string.
    • Parsing an INI file is comparatively slow. Each time you read or write a value in an INI file, the file has to be loaded into memory and parsed. If you write three strings to an INI file, that INI file gets loaded and parsed three times and written out to disk three times. In 16-bit Windows, three consecutive INI file operations would result in only one parse and one write, because the operating system was co-operatively multi-tasked. When you accessed an INI file, it was parsed into memory and cached. The cache was flushed when you finally yielded the CPU to another process.
    • Many programs open INI files and read them directly. This means that the INI file format is locked and cannot be extended. Even if you wanted to add security to INI files, you can't. What's more, many programs that parsed INI files were buggy, so in practice you couldn't store a string longer than about 70 characters in an INI file or you'd cause some other program to crash.
    • INI files are limited to 32KB in size.
    • The default location for INI files was the Windows directory! This definitely was bad for Windows NT since only administrators have write permission there.
    • INI files contain only two levels of structure. An INI file consists of sections, and each section consists of strings. You can't put sections inside other sections.
    • [Added 9am] Central administration of INI files is difficult. Since they can be anywhere in the system, a network administrator can't write a script that asks, "Is everybody using the latest version of Firefox?" They also can't deploy scripts that say "Set everybody's Firefox settings to XYZ and deny write access so they can't change them."

    The registry tried to address these concerns. You might argue whether these were valid concerns to begin with, but the Windows NT folks sure thought they were.
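
    To make the comparison concrete, here is a minimal sketch (mine, not from the original article) of saving a single setting both ways; the Contoso paths and key names are invented for illustration, and error handling is omitted:

    #include <windows.h>

    // INI version: the profile API rewrites the file under the covers,
    // stores only strings, and offers no per-key security.
    void SaveWidthIni(int width)
    {
        wchar_t value[16];
        wsprintfW(value, L"%d", width);
        WritePrivateProfileStringW(L"Settings", L"Width", value,
                                   L"C:\\Contoso\\contoso.ini");
    }

    // Registry version: typed values, per-key security, and no
    // whole-file read-modify-write.
    void SaveWidthRegistry(DWORD width)
    {
        HKEY key;
        if (RegCreateKeyExW(HKEY_CURRENT_USER, L"Software\\Contoso\\Settings",
                            0, nullptr, 0, KEY_SET_VALUE, nullptr,
                            &key, nullptr) == ERROR_SUCCESS) {
            RegSetValueExW(key, L"Width", 0, REG_DWORD,
                           reinterpret_cast<const BYTE*>(&width), sizeof(width));
            RegCloseKey(key);
        }
    }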

    Commenter TC notes that the pendulum has swung back to text configuration files, but this time, they're XML. This reopens many of the problems that INI files had, but you have the major advantage that nobody writes to XML configuration files; they only read from them. XML configuration files are not used to store user settings; they just contain information about the program itself. Let's look at those issues again.

    • XML files support Unicode.
    • XML file security is not granular enough. But since the XML configuration file is read-only, the primary objection is sidestepped. (But if you want only administrators to have permission to read specific parts of the XML, then you're in trouble.)
    • Since XML configuration files are read-only, you don't have to worry about multiple writers.
    • XML configuration files can suffer a denial of service. You can still open them exclusively and lock out other processes.
    • XML files contain only strings. If you want to store binary data, you have to encode it somehow.
    • Parsing an XML file is comparatively slow. But since they're read-only, you can safely cache the parsed result, so you only need to parse once.
    • Programs parse XML files manually, but the XML format is already locked, so you couldn't extend it anyway even if you wanted to. Hopefully, those programs use a standard-conforming XML parser instead of rolling their own, but I wouldn't be surprised if people wrote their own custom XML parser that chokes on, say, processing instructions or strings longer than 70 characters.
    • XML files do not have a size limit.
    • XML files do not have a default location.
    • XML files have complex structure. Elements can contain other elements.

    XML manages to sidestep many of the problems that INI files have, but only if you promise only to read from them (and only if everybody agrees to use a standard-conforming parser), and if you don't require security granularity beyond the file level. Once you write to them, then a lot of the INI file problems return.

  • The Old New Thing

    Why was Pinball removed from Windows Vista?

    • 115 Comments

    Windows XP was the last client version of Windows to include the Pinball game that had been part of Windows since Windows 95. There is apparently speculation that this was done for legal reasons.

    No, that's not why.

    One of the things I did in Windows XP was port several million lines of code from 32-bit to 64-bit Windows so that we could ship Windows XP 64-bit Edition. But one of the programs that ran into trouble was Pinball. The 64-bit version of Pinball had a pretty nasty bug where the ball would simply pass through other objects like a ghost. In particular, when you started the game, the ball would be delivered to the launcher, and then it would slowly fall towards the bottom of the screen, through the plunger, and out the bottom of the table.

    Games tended to be really short.

    Two of us tried to debug the program to figure out what was going on, but given that this was code written several years earlier by an outside company, and that nobody at Microsoft ever understood how the code worked (much less still understood it), and that most of the code was completely uncommented, we simply couldn't figure out why the collision detector was not working. Heck, we couldn't even find the collision detector!

    We had several million lines of code still to port, so we couldn't afford to spend days studying the code trying to figure out what obscure floating point rounding error was causing collision detection to fail. We just made the executive decision right there to drop Pinball from the product.

    If it makes you feel better, I am saddened by this as much as you are. I really enjoyed playing that game. It was the location of the one Windows XP feature I am most proud of.

    Update: Hey everybody asking that the source code be released: The source code was licensed from another company. If you want the source code, you have to go ask them.

  • The Old New Thing

    Why aren't console windows themed on Windows XP?

    • 124 Comments

    Commenter Andrej Budja asks why cmd.exe is not themed in Windows XP. (This question was repeated by Serge Wautier, proving that nobody checks whether their suggestion has already been submitted before adding their own. It was also asked by a commenter who goes by the name "S", and then repeated again just a few hours later, which proves again that nobody reads the comments either.) Knowledge Base article 306509 explains that this behavior exists because the command prompt window (like all console windows) is run under the Client/Server Runtime Subsystem (CSRSS), and CSRSS cannot be themed.

    But why can't CSRSS be themed?

    CSRSS runs as a system service, so any code that runs as part of CSRSS creates potential for mass havoc. The slightest mis-step could crash CSRSS, and with it the entire system. The CSRSS team decided that they didn't want to take the risk of allowing the theme code to run in their process, so they disabled theming for console windows. (There's also an architectural reason why CSRSS cannot use the theming services: CSRSS runs as a subsystem, and the user interface theming services assume that they're running as part of a Win32 program.)

    In Windows Vista, the window frame is drawn by the desktop window manager, which means that your console windows on Vista get the glassy frame just like other windows. But if you take a closer look, you will see that CSRSS itself doesn't use themed windows: Notice that the scroll bars retain the classic look.

    The window manager giveth and the window manager taketh away, for at the same time console windows gained the glassy frame, they also lost drag and drop. You used to be able to drag a file out of Explorer and drop it onto a command prompt, but if you try that in Windows Vista, nothing happens. This is a consequence of tighter security around the delivery of messages from a process running at a lower integrity level to one running at a higher integrity level (see UIPI). Since CSRSS is a system process, it runs at a very high integrity level and won't let any random program (like Explorer) send it messages, such as the ones used to mediate OLE drag and drop. You'll see the same thing if you log on as a restricted administrator and then kick off an elevated copy of Notepad. You won't be able to drag a file out of Explorer and drop it onto Notepad, for the same reason.
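
    As an aside, an elevated application that misses classic drag and drop can open its message filter to selected messages. Below is a minimal sketch of my own using the documented ChangeWindowMessageFilter function (Windows Vista and later). Note that this readmits only the listed window messages; it does not restore OLE drag and drop, which is the mechanism Explorer actually uses, and 0x0049 is the undocumented WM_COPYGLOBALDATA value commonly paired with WM_DROPFILES:

    #define _WIN32_WINNT 0x0600
    #include <windows.h>

    void AllowDropFilesFromLowerIntegrity()
    {
        // Let these messages through UIPI for the whole process.
        ChangeWindowMessageFilter(WM_DROPFILES, MSGFLT_ADD);
        ChangeWindowMessageFilter(WM_COPYDATA, MSGFLT_ADD);
        ChangeWindowMessageFilter(0x0049, MSGFLT_ADD); // WM_COPYGLOBALDATA
    }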

  • The Old New Thing

    A 90-byte "whereis" program

    • 41 Comments

    Sometimes people try too hard.

    You can download a C# program to look for a file on your PATH, or you can use a 90-character batch file:

    @for %%e in (%PATHEXT%) do @for %%i in (%1%%e) do @if NOT "%%~$PATH:i"=="" echo %%~$PATH:i
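
    In case the syntax is puzzling: the outer loop walks the extensions listed in %PATHEXT% (.COM, .EXE, .BAT, and so on), the inner loop builds the candidate name %1%%e, and the %%~$PATH:i modifier expands to the fully-qualified path of the first match found on the PATH, or to the empty string if there is none.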
    
  • The Old New Thing

    What was the role of MS-DOS in Windows 95?

    • 134 Comments

    Welcome, Slashdot readers. Remember, this Web site is for entertainment purposes only.

    Sean wants to know what the role of MS-DOS was in Windows 95. I may regret answering this question since it's clear Slashdot bait. (Even if Sean didn't intend it that way, that's what it's going to turn into.)

    Here goes. Remember, what I write here may not be 100% true, but it is "true enough." (In other words, it gets the point across without getting bogged down in nitpicky details.)

    MS-DOS served two purposes in Windows 95.

    • It served as the boot loader.
    • It acted as the 16-bit legacy device driver layer.

    When Windows 95 started up, a customized version of MS-DOS was loaded, and it's that customized version that processed your CONFIG.SYS file, launched COMMAND.COM, which ran your AUTOEXEC.BAT and which eventually ran WIN.COM, which began the process of booting up the VMM, or the 32-bit virtual machine manager.

    The customized version of MS-DOS was fully functional as far as the phrase "fully functional" can be applied to MS-DOS in the first place. It had to be, since it was all that was running when you ran Windows 95 in "single MS-DOS application mode."

    The WIN.COM program started booting what most people think of as "Windows" proper. It used the copy of MS-DOS to load the virtual machine manager, read the SYSTEM.INI file, and load the virtual device drivers, and then it turned off any running copy of EMM386 and switched into protected mode. Protected mode is what most people think of as "the real Windows."

    Once in protected mode, the virtual device drivers did their magic. Among other things, those drivers would "suck the brains out of MS-DOS," transfer all that state to the 32-bit file system manager, and then shut off MS-DOS. All future file system operations would get routed to the 32-bit file system manager. If a program issued an int 21h, the 32-bit file system manager would be responsible for handling it.

    And that's where the second role of MS-DOS comes into play. For you see, MS-DOS programs and device drivers loved to mess with the operating system itself. They would replace the int 21h service vector, they would patch the operating system, they would patch the low-level disk I/O services int 25h and int 26h. They would also do crazy things to the BIOS interrupts such as int 13h, the low-level disk I/O interrupt.

    When a program issued an int 21h call to access MS-DOS, the call would go first to the 32-bit file system manager, who would do some preliminary munging and then, if it detected that somebody had hooked the int 21h vector, it would jump back into the 16-bit code to let the hook run. Replacing the int 21h service vector is logically analogous to subclassing a window. You get the old vector and set your new vector. When your replacement handler is called, you do some stuff, and then call the original vector to do "whatever would normally happen." After the original vector returned, you might do some more work before returning to the original caller.
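
    For readers more comfortable with Win32 than with interrupt vectors, here is a minimal sketch of the window-subclassing pattern the analogy refers to (hypothetical code; hwnd is some window you own):

    #include <windows.h>

    static WNDPROC g_prevWndProc; // the "original vector"

    LRESULT CALLBACK HookWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        // Do some stuff before...
        LRESULT result = CallWindowProc(g_prevWndProc, hwnd, msg, wParam, lParam);
        // ...maybe do some more work after, then return to the original caller.
        return result;
    }

    void Subclass(HWND hwnd)
    {
        // Get the old handler and install the new one, exactly like
        // getting and setting the int 21h service vector.
        g_prevWndProc = reinterpret_cast<WNDPROC>(
            SetWindowLongPtr(hwnd, GWLP_WNDPROC,
                             reinterpret_cast<LONG_PTR>(HookWndProc)));
    }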

    One of the 16-bit drivers loaded by CONFIG.SYS was called IFSMGR.SYS. The job of this 16-bit driver was to hook MS-DOS first before the other drivers and programs got a chance! This driver was in cahoots with the 32-bit file system manager, for its job was to jump from 16-bit code back into 32-bit code to let the 32-bit file system manager continue its work.

    In other words, MS-DOS was just an extremely elaborate decoy. Any 16-bit drivers and programs would patch or hook what they thought was the real MS-DOS, but which was in reality just a decoy. If the 32-bit file system manager detected that somebody bought the decoy, it told the decoy to quack.

    Let's start with a system that didn't contain any "evil" drivers or programs that patched or hooked MS-DOS.

    Program calls int 21h

    32-bit file system manager
      checks that nobody has patched or hooked MS-DOS
      performs the requested operation
      updates the state variables inside MS-DOS
      returns to caller

    Program gets result

    This was paradise. The 32-bit file system manager was able to do all the work without having to deal with pesky drivers that did bizarro things. Note the extra step of updating the state variables inside MS-DOS. Even though we extracted the state variables from MS-DOS during the boot process, we kept those state variables in sync because drivers and programs frequently "knew" how those state variables worked and bypassed the operating system to access them directly. Therefore, the file system manager had to maintain the charade that MS-DOS was running the show (even though it wasn't) so that those drivers and programs saw what they wanted.

    Note also that those state variables were per-VM. (I.e., each MS-DOS "box" you opened got its own copy of those state variables.) After all, each MS-DOS box had its own idea of what the current directory was, what was in the file tables, that sort of thing. This was all an act, however, because the real list of open files was kept by the 32-bit file system manager. It had to be, because disk caches had to be kept coherent, and file sharing needed to be enforced globally. If one MS-DOS box opened a file for exclusive access, then an attempt by a program running in another MS-DOS box to open the file should fail with a sharing violation.

    Okay, that was the easy case. The hard case is if you had a driver that hooked int 21h. I don't know what the driver does; let's say that it's a network driver that intercepts I/O to network drives and handles them in some special way. Let's suppose also that there's some TSR running in the MS-DOS box which has hooked int 21h so it can print a 1 to the screen when an int 21h call begins and a 2 when it completes. Let's follow a call to a local device (not a network device, so the network driver doesn't do anything):

    Program calls int 21h

    32-bit file system manager
      notices that somebody has patched or hooked MS-DOS
      jumps to the hook (which is the 16-bit TSR)

    16-bit TSR (front end)
      prints a 1 to the screen
      calls previous handler (which is the 16-bit network driver)

    16-bit network driver (front end)
      decides that this isn't a network I/O request
      calls previous handler (which is the 16-bit IFSMGR hook)

    16-bit IFSMGR hook
      tells 32-bit file system manager
        that it's time to make the donuts

    32-bit file system manager
      regains control
      performs the requested operation
      updates the state variables inside MS-DOS
      returns to caller

    16-bit network driver (back end)
      returns to caller

    16-bit TSR (back end)
      prints a 2 to the screen
      returns to caller

    Program gets result

    Notice that all the work is still being done by the 32-bit file system manager. It's just that the call gets routed through all the 16-bit stuff to maintain the charade that 16-bit MS-DOS is still running the show. The only 16-bit code that actually ran is the stuff that the TSR and network driver installed, plus a tiny bit of glue in the 16-bit IFSMGR hook. Notice that no 16-bit MS-DOS code ran. The 32-bit file system manager took over for MS-DOS.

    A similar sort of "take over but let the crazy stuff happen if somebody is doing crazy stuff" dance took place when the I/O subsystem took over control of your hard drive from 16-bit device drivers. If it recognized the drivers, it would "suck their brains out" and take over all the operations, in the same way that the 32-bit file system manager took over operations from 16-bit MS-DOS. On the other hand, if the driver wasn't one that the I/O subsystem recognized, it let the driver remain in charge of the drive. If this happened, you were said to be going through the "real-mode mapper," since "real mode" was the name for the CPU mode when protected mode was not running; in other words, the mapper was letting the 16-bit drivers do the work.

    Now, if you were unlucky enough to be using the real-mode mapper, you probably noticed that system performance to that drive was pretty awful. That's because you were using the old clunky single-threaded 16-bit drivers instead of the faster, multithread-enabled 32-bit drivers. (When a 16-bit driver was running, no other I/O could happen because 16-bit drivers were not designed for multi-threading.)

    This awfulness of the real-mode mapper actually came in handy in a backwards way, because it was an early indication that your computer had become infected with an MS-DOS virus. After all, MS-DOS viruses did what TSRs and drivers did: They hooked interrupt vectors and took over control of your hard drive. From the I/O subsystem's point of view, they looked just like a 16-bit hard disk device driver! When people complained, "Windows suddenly started running really slow," we asked them to look at the system performance page in the control panel and see if it said "Some drives are using MS-DOS compatibility." If so, then it meant that the real-mode mapper was in charge, and if you hadn't changed hardware, it probably meant a virus.

    Now, there are parts of MS-DOS that are unrelated to file I/O. For example, there are functions for allocating memory, parsing a string containing potential wildcards into FCB format, that sort of thing. Those functions were still handled by MS-DOS since they were just "helper library" type functions and there was no benefit to reimplementing them in 32-bit code aside from just being able to say that you did it. The old 16-bit code worked just fine, and if you let it do the work, you preserved compatibility with programs that patched MS-DOS in order to alter the behavior of those functions.

  • The Old New Thing

    What is the command line length limit?

    • 16 Comments

    It depends on whom you ask.

    The maximum command line length for the CreateProcess function is 32767 characters. This limitation comes from the UNICODE_STRING structure.
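
    To see where that number comes from, here is the UNICODE_STRING structure as declared in winternl.h; it measures its length in bytes with a 16-bit field, so a string tops out at 65535 bytes, which is 32767 WCHARs:

    typedef struct _UNICODE_STRING {
        USHORT Length;        // length of the string, in bytes
        USHORT MaximumLength; // size of Buffer, in bytes
        PWSTR  Buffer;
    } UNICODE_STRING;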

    CreateProcess is the core function for creating processes, so if you are talking directly to Win32, then that's the only limit you have to worry about. But if you are reaching CreateProcess by some other means, then the path you travel through may have other limits.

    If you are using the CMD.EXE command processor, then you are also subject to the 8192 character command line length limit imposed by CMD.EXE.

    If you are using the ShellExecute/Ex function, then you become subject to the INTERNET_MAX_URL_LENGTH (around 2048) command line length limit imposed by the ShellExecute/Ex functions. (If you are running on Windows 95, then the limit is only MAX_PATH.)

    While I'm here, I may as well mention another limit: The maximum size of your environment is 32767 characters. The size of the environment includes all the variable names plus all the values.

    Okay, but what if you want to pass more than 32767 characters of information to a process? You'll have to find something other than the command line. We'll discuss some options tomorrow.

  • The Old New Thing

    When should your destructor be virtual?

    • 20 Comments

    When should your C++ object's destructor be virtual?

    First of all, what does it mean to have a virtual destructor?

    Well, what does it mean to have a virtual method?

    If a method is virtual, then calling the method on an object always invokes the method as implemented by the most heavily derived class. If the method is not virtual, then calling it invokes the implementation corresponding to the compile-time type of the object pointer.

    For example, consider this:

    class Sample {
    public:
     void f();
     virtual void vf();
    };
    
    class Derived : public Sample {
    public:
     void f();
     void vf();
    };
    
    void function()
    {
     Derived d;
     Sample* p = &d;
     p->f();
     p->vf();
    }
    

    The call to p->f() will result in a call to Sample::f because p is a pointer to a Sample. The actual object is of type Derived, but the pointer is merely a pointer to a Sample. The pointer type is used because f is not virtual.

    On the other hand, the call to p->vf() will result in a call to Derived::vf, the most heavily derived type, because vf is virtual.

    Okay, you knew that.

    Virtual destructors work exactly the same way. It's just that you rarely invoke the destructor explicitly. Rather, it's invoked when an automatic object goes out of scope or when you delete the object.

    void function()
    {
     Sample* p = new Derived;
     delete p;
    }
    

    Since Sample does not have a virtual destructor, the delete p invokes the destructor of the class of the pointer (Sample::~Sample()), rather than the destructor of the most derived type (Derived::~Derived()). And as you can see, this is the wrong thing to do in the above scenario.

    Armed with this information, you can now answer the question.

    A class must have a virtual destructor if it meets both of the following criteria:

    • You do a delete p.
    • It is possible that p actually points to a derived class.

    Some people say that you need a virtual destructor if and only if you have a virtual method. This is wrong in both directions.

    Example of a case where a class has no virtual methods but still needs a virtual destructor:

    class Sample { };
    class Derived : public Sample
    {
     CComPtr<IStream> m_p;
    public:
     Derived() { CreateStreamOnHGlobal(NULL, TRUE, &m_p); }
    };
    
    Sample *p = new Derived;
    delete p;
    

    The delete p will invoke Sample::~Sample instead of Derived::~Derived, resulting in a leak of the stream m_p.
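
    The fix, per the criteria above, is to declare the base class destructor virtual; a minimal sketch:

    class Sample {
    public:
     virtual ~Sample() { } // "delete p" now runs Derived::~Derived first
    };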

    And here's an example of a case where a class has virtual methods but does not require a virtual destructor.

    class Sample { public: virtual void vf(); };
    class Derived : public Sample { public: virtual void vf(); };
    
    Derived *p = new Derived;
    delete p;
    

    Since the object deletion occurs from the pointer type that matches the type of the actual object, the correct destructor will be invoked. This pattern happens often in COM objects, which expose several virtual methods corresponding to their interfaces, but where the object itself is destroyed by its own implementation and not from a base class pointer. (Notice that no COM interfaces contain virtual destructors.)

    The problem with knowing when to make your destructor virtual or not is that you have to know how people will be using your class. If C++ had a "sealed" keyword, then the rule would be simpler: If you do a "delete p" where p is a pointer to an unsealed class, then that class needs to have a virtual destructor. (The imaginary "sealed" keyword makes it explicit when a class can act as the base class for another class.)

  • The Old New Thing

    No matter where you put an advanced setting, somebody will tell you that you are an idiot

    • 127 Comments

    There are advanced settings in Windows, settings which normal users not only shouldn't be messing with, but which they arguably shouldn't even know about, because that would give them just enough knowledge to be dangerous. And no matter where you put that advanced setting, somebody will tell you that you are an idiot.

    Here they are on an approximate scale. If you dig through the comments on this blog, you can probably find every single position represented somewhere.

    1. It's okay if the setting is hidden behind a registry key. I know how to set it myself.
    2. I don't want to mess with the registry. Put the setting in a configuration file that I pass to the installer.
    3. I don't want to write a configuration file. The program should have an Advanced button that calls up a dialog which lets the user change the advanced setting.
    4. Every setting must be exposed in the user interface.
    5. Every setting must be exposed in the user interface by default. Don't make me call up the extended context menu.
    6. The first time the user does X, show a dialog asking whether they want to change the advanced setting.

    If you implement level N, people will demand that you implement level N+1. It doesn't stop until you reach the last step, which is aggressively user-hostile. (And then there will also be people who complain that you went too far.)

    From a technical standpoint, each of the above steps is about ten to a hundred times harder than the previous one. If you put the setting in a configuration file, you have to write code to load a parser and extract the value. If you want an Advanced button, now you have to write a dialog box (which is already a lot of work), consult with the usability and user assistance teams to come up with the correct wording for the setting, write help text, provide guidance to the translators, and, since the setting is now exposed in the user interface, write automated tests and add the setting to the test matrices. It's a huge amount of work to add a dialog box, work that could be spent on something that benefits a much larger set of customers in a more direct manner.

    That's why most advanced settings hang out at level 1 or, in the case of configuring program installation, level 2. If you're so much of a geek that you want to change these advanced settings, it probably won't kill you to fire up a text editor and write a little configuration file.

    Sidebar

    Joel's count of "fifteen ways to shut down Windows" is a bit disingenuous, since he's counting six hardware affordances: "Four FN+key combinations... an on-off button... you can close the lid." Okay, fine, Joel, we'll play it your way. Your proposal to narrow it down to one "Bye" button still leaves seven ways to shut down Windows.

    And then people will ask how to log off.

  • The Old New Thing

    If you are trying to understand an error, you may want to look up the error code to see what it means instead of just shrugging

    • 64 Comments

    A customer had a debug trace log and needed some help interpreting it. The trace log was generated by an operating system component, but the details aren't important to the story.

    I've attached the log file. I think the following may be part of the problem.

    [07/17/2005:18:31:19] Creating process D:\Foo\bar\blaz.exe
    [07/17/2005:18:31:19] CreateProcess failed with error 2
    

    Any ideas?

    Thanks,
    Bob Smith
    Senior Test Engineer
    Tailspin Toys

    What struck me is that Bob is proud of the fact that he's a Senior Test Engineer, perhaps thinking that an awesome title will make us take him more seriously.

    But apparently a Senior Test Engineer doesn't know what error 2 is. There are some error codes that you end up committing to memory because you run into them over and over. Error 32 is ERROR_SHARING_VIOLATION, error 3 is ERROR_PATH_NOT_FOUND, and in this case, error 2 is ERROR_FILE_NOT_FOUND.

    And even if Bob didn't have error 2 memorized, he should have known to look it up.

    Error 2 is ERROR_FILE_NOT_FOUND. Does the file D:\Foo\bar\blaz.exe exist?

    No, it doesn't.

    -Bob

    Bob seems to have shut off his brain and decided to treat troubleshooting not as a collaborative effort but rather as a game of Twenty Questions in which the person with the problem volunteers as little information as possible in order to make things more challenging. I had to give Bob a nudge.

    Can you think of a reason why the system would be looking at D:\Foo\bar\blaz.exe? Where did you expect it to be looking for blaz.exe?

    This managed to wake Bob out of his stupor, and the investigation continued. (And no, I don't remember what the final resolution was. I didn't realize I would have to remember the fine details of this support incident three years later.)
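
    For what it's worth, you don't even have to memorize the codes. Typing "net helpmsg 2" at a command prompt translates the code, and here is a minimal sketch of doing the same thing programmatically with FormatMessage:

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        DWORD error = 2; // the code from the trace log
        wchar_t buffer[256];
        if (FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM |
                           FORMAT_MESSAGE_IGNORE_INSERTS,
                           nullptr, error, 0, buffer, 256, nullptr)) {
            // Prints "Error 2: The system cannot find the file specified."
            wprintf(L"Error %lu: %ls", error, buffer);
        }
        return 0;
    }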
