• The Old New Thing

    Why are INI files deprecated in favor of the registry?

    • 139 Comments

    Welcome, Slashdot readers. Remember, this Web site is for entertainment purposes only.

    Why are INI files deprecated in favor of the registry? There were many problems with INI files.

    • INI files don't support Unicode. Even though there are Unicode versions of the private profile functions, they end up just writing ANSI text to the INI file. (There is a whacked-out way you can create a Unicode INI file, but you have to step outside the API in order to do it.) This wasn't an issue in 16-bit Windows since 16-bit Windows didn't support Unicode either!
    • INI file security is not granular enough. Since it's just a file, any permissions you set are at the file level, not the key level. You can't say, "Anybody can modify this section, but that section can be modified only by administrators." This wasn't an issue in 16-bit Windows since 16-bit Windows didn't do security.
    • Multiple writers to an INI file can result in data loss. Consider two threads that are trying to update an INI file. If they are running simultaneously, you can get this:
      Thread 1                  Thread 2
      Read INI file
                                Read INI file
      Write INI file + X
                                Write INI file + Y
      Notice that thread 2's update to the INI file accidentally deleted the change made by thread 1. This wasn't a problem in 16-bit Windows since 16-bit Windows was co-operatively multi-tasked. As long as you didn't yield the CPU between the read and the write, you were safe because nobody else could run until you yielded. (A code sketch of this lost update appears after this list.)
    • INI files can suffer a denial of service. A program can open an INI file in exclusive mode and lock out everybody else. This is bad if the INI file was being used to hold security information, since it prevents anybody from seeing what those security settings are. This was also a problem in 16-bit Windows, but since there was no security in 16-bit Windows, a program that wanted to launch a denial of service attack on an INI file could just delete it!
    • INI files contain only strings. If you wanted to store binary data, you had to encode it somehow as a string.
    • Parsing an INI file is comparatively slow. Each time you read or write a value in an INI file, the file has to be loaded into memory and parsed. If you write three strings to an INI file, that INI file gets loaded and parsed three times and written out to disk three times. In 16-bit Windows, three consecutive INI file operations would result in only one parse and one write, because the operating system was co-operatively multi-tasked. When you accessed an INI file, it was parsed into memory and cached. The cache was flushed when you finally yielded the CPU to another process.
    • Many programs open INI files and read them directly. This means that the INI file format is locked and cannot be extended. Even if you wanted to add security to INI files, you can't. What's more, many programs that parsed INI files were buggy, so in practice you couldn't store a string longer than about 70 characters in an INI file or you'd cause some other program to crash.
    • INI files are limited to 32KB in size.
    • The default location for INI files was the Windows directory! This definitely was bad for Windows NT since only administrators have write permission there.
    • INI files contain only two levels of structure. An INI file consists of sections, and each section consists of strings. You can't put sections inside other sections.
    • [Added 9am] Central administration of INI files is difficult. Since they can be anywhere in the system, a network administrator can't write a script that asks, "Is everybody using the latest version of Firefox?" They also can't deploy scripts that say "Set everybody's Firefox settings to XYZ and deny write access so they can't change them."
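
    As a concrete illustration of the multiple-writers problem above, here is a minimal sketch in which two threads each try to increment a counter stored in an INI file using the private profile functions. The file name, section, and key are made up for the example; the point is that the API gives you no way to make the read-modify-write atomic, so one of the increments can be silently lost.

    #include <windows.h>

    #define INI_FILE TEXT("C:\\example\\app.ini")   // hypothetical INI file for the example

    DWORD CALLBACK IncrementCounter(void *unused)
    {
        (void)unused;   // unused thread parameter

        // Read the current value...
        UINT count = GetPrivateProfileInt(TEXT("Settings"), TEXT("Count"), 0, INI_FILE);

        // ...the other thread can be scheduled right here and read the same value...
        TCHAR buffer[16];
        wsprintf(buffer, TEXT("%u"), count + 1);

        // ...and whichever thread writes last silently wipes out the other's update.
        WritePrivateProfileString(TEXT("Settings"), TEXT("Count"), buffer, INI_FILE);
        return 0;
    }

    int main(void)
    {
        HANDLE threads[2];
        threads[0] = CreateThread(NULL, 0, IncrementCounter, NULL, 0, NULL);
        threads[1] = CreateThread(NULL, 0, IncrementCounter, NULL, 0, NULL);
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);
        // The counter may end up incremented once instead of twice.
        return 0;
    }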

    The registry tried to address these concerns. You might argue whether these were valid concerns to begin with, but the Windows NT folks sure thought they were.
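
    To make the contrast concrete, here is a minimal sketch of what the registry API offers on the points above: Unicode throughout, typed values (including raw binary data), and keys nested more than two levels deep. Per-key security can be supplied through the security-attributes parameter, which is left NULL here; the key and value names are invented for the example.

    #include <windows.h>

    void SaveSettings(void)
    {
        HKEY hKey;

        // More than two levels of structure: keys can nest arbitrarily deep.
        if (RegCreateKeyExW(HKEY_CURRENT_USER,
                            L"Software\\Contoso\\SampleApp\\Window\\Placement",
                            0, NULL, 0, KEY_SET_VALUE, NULL, &hKey, NULL) == ERROR_SUCCESS)
        {
            DWORD width = 800;                          // typed data, not just strings
            BYTE layout[] = { 0x01, 0x02, 0x03, 0x04 }; // binary data, no string encoding needed

            RegSetValueExW(hKey, L"Width", 0, REG_DWORD,
                           (const BYTE *)&width, sizeof(width));
            RegSetValueExW(hKey, L"Layout", 0, REG_BINARY,
                           layout, sizeof(layout));
            RegCloseKey(hKey);
        }
    }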

    Commenter TC notes that the pendulum has swung back to text configuration files, but this time, they're XML. This reopens many of the problems that INI files had, but you have the major advantage that nobody writes to XML configuration files; they only read from them. XML configuration files are not used to store user settings; they just contain information about the program itself. Let's look at those issues again.

    • XML files support Unicode.
    • XML file security is not granular enough. But since the XML configuration file is read-only, the primary objection is sidestepped. (But if you want only administrators to have permission to read specific parts of the XML, then you're in trouble.)
    • Since XML configuration files are read-only, you don't have to worry about multiple writers.
    • XML configuration files can suffer a denial of service. You can still open them exclusively and lock out other processes.
    • XML files contain only strings. If you want to store binary data, you have to encode it somehow.
    • Parsing an XML file is comparatively slow. But since they're read-only, you can safely cache the parsed result, so you only need to parse once.
    • Programs parse XML files manually, but the XML format is already locked, so you couldn't extend it anyway even if you wanted to. Hopefully, those programs use a standard-conforming XML parser instead of rolling their own, but I wouldn't be surprised if people wrote their own custom XML parser that chokes on, say, processing instructions or strings longer than 70 characters.
    • XML files do not have a size limit.
    • XML files do not have a default location.
    • XML files have complex structure. Elements can contain other elements.

    XML manages to sidestep many of the problems that INI files have, but only if you promise only to read from them (and only if everybody agrees to use a standard-conforming parser), and if you don't require security granularity beyond the file level. Once you write to them, then a lot of the INI file problems return.

  • The Old New Thing

    Why does Windows not recognize my USB device as the same device if I plug it into a different port?

    • 66 Comments

    You may have noticed that if you take a USB device and plug it into your computer, Windows recognizes it and configures it. Then if you unplug it and replug it into a different USB port, Windows gets a bout of amnesia and thinks that it's a completely different device instead of using the settings that applied when you plugged it in last time. Why is that?

    The USB device people explained that this happens when the device lacks a USB serial number.

    Serial numbers are optional on USB devices. If the device has one, then Windows recognizes the device no matter which USB port you plug it into. But if it doesn't have a serial number, then Windows treats each appearance on a different USB port as if it were a new device.

    (I remember that one major manufacturer of USB devices didn't quite understand how serial numbers worked. They gave all of their devices serial numbers, that's great, but they all got the same serial number. Exciting things happened if you plugged two of their devices into a computer at the same time.)

    But why does Windows treat it as a different device if it lacks a serial number and shows up on a different port? Why can't it just say, "Oh, there you are, over there on another port"?

    Because that creates random behavior once you plug in two such devices. Depending on the order in which the devices get enumerated by Plug and Play, the two sets of settings would get assigned seemingly randomly at each boot. Today the settings match up one way, but tomorrow when the devices are enumerated in the other order, the settings are swapped. (You get similarly baffling behavior if you plug in the devices in different order.)

    In other words: Things suck because (1) things were already in bad shape—this would not have been a problem if the device had a proper serial number—and (2) once you're in this bad state, the alternative sucks more. The USB stack is just trying to make the best of a bad situation without making it any worse.

  • The Old New Thing

    Why aren't console windows themed on Windows XP?

    • 124 Comments

    Commenter Andrej Budja asks why cmd.exe is not themed in Windows XP. (This question was repeated by Serge Wautier, proving that nobody checks whether their suggestion has already been submitted before adding their own. It was also asked by a commenter who goes by the name "S", and then repeated again just a few hours later, which proves again that nobody reads the comments either.) Knowledge Base article 306509 explains that this behavior exists because the command prompt window (like all console windows) is run under the Client/Server Runtime Subsystem (CSRSS), and CSRSS cannot be themed.

    But why can't CSRSS be themed?

    CSRSS runs as a system service, so any code that runs as part of CSRSS creates potential for mass havoc. The slightest mis-step could crash CSRSS, and with it the entire system. The CSRSS team decided that they didn't want to take the risk of allowing the theme code to run in their process, so they disabled theming for console windows. (There's also an architectural reason why CSRSS cannot use the theming services: CSRSS runs as a subsystem, and the user interface theming services assume that they're running as part of a Win32 program.)

    In Windows Vista, the window frame is drawn by the desktop window manager, which means that your console windows on Vista get the glassy frame just like other windows. But if you take a closer look, you will see that CSRSS itself doesn't use themed windows: Notice that the scroll bars retain the classic look.

    The window manager giveth and the window manager taketh away, for at the same time console windows gained the glassy frame, they also lost drag and drop. You used to be able to drag a file out of Explorer and drop it onto a command prompt, but if you try that in Windows Vista, nothing happens. This is a consequence of tighter security around the delivery of messages from a process running at a lower integrity level to one running at a higher integrity level (see UIPI). Since CSRSS is a system process, it runs at a very high security level and won't let any random program (like Explorer) send it messages, such as the ones used to mediate OLE drag and drop. You'll see the same thing if you log on as a restricted administrator and then kick off an elevated copy of Notepad. You won't be able to drag a file out of Explorer and drop it onto Notepad, for the same reason.

  • The Old New Thing

    What was the role of MS-DOS in Windows 95?

    • 134 Comments

    Welcome, Slashdot readers. Remember, this Web site is for entertainment purposes only.

    Sean wants to know what the role of MS-DOS was in Windows 95. I may regret answering this question since it's clear Slashdot bait. (Even if Sean didn't intend it that way, that's what it's going to turn into.)

    Here goes. Remember, what I write here may not be 100% true, but it is "true enough." (In other words, it gets the point across without getting bogged down in nitpicky details.)

    MS-DOS served two purposes in Windows 95.

    • It served as the boot loader.
    • It acted as the 16-bit legacy device driver layer.

    When Windows 95 started up, a customized version of MS-DOS was loaded, and it's that customized version that processed your CONFIG.SYS file, launched COMMAND.COM, which ran your AUTOEXEC.BAT and which eventually ran WIN.COM, which began the process of booting up the VMM, or the 32-bit virtual machine manager.

    The customized version of MS-DOS was fully functional as far as the phrase "fully functional" can be applied to MS-DOS in the first place. It had to be, since it was all that was running when you ran Windows 95 in "single MS-DOS application mode."

    The WIN.COM program started booting what most people think of as "Windows" proper. It used the copy of MS-DOS to load the virtual machine manager, read the SYSTEM.INI file, load the virtual device drivers, and then it turned off any running copy of EMM386 and switched into protected mode. It's protected mode that most people think of as "the real Windows."

    Once in protected mode, the virtual device drivers did their magic. Among the things those drivers did was "suck the brains out of MS-DOS," transfer all that state to the 32-bit file system manager, and then shut off MS-DOS. All future file system operations would get routed to the 32-bit file system manager. If a program issued an int 21h, the 32-bit file system manager would be responsible for handling it.

    And that's where the second role of MS-DOS comes into play. For you see, MS-DOS programs and device drivers loved to mess with the operating system itself. They would replace the int 21h service vector, they would patch the operating system, they would patch the low-level disk I/O services int 25h and int 26h. They would also do crazy things to the BIOS interrupts such as int 13h, the low-level disk I/O interrupt.

    When a program issued an int 21h call to access MS-DOS, the call would go first to the 32-bit file system manager, who would do some preliminary munging and then, if it detected that somebody had hooked the int 21h vector, it would jump back into the 16-bit code to let the hook run. Replacing the int 21h service vector is logically analogous to subclassing a window. You get the old vector and set your new vector. When your replacement handler is called, you do some stuff, and then call the original vector to do "whatever would normally happen." After the original vector returned, you might do some more work before returning to the original caller.
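
    For readers who know Win32 better than real-mode interrupt vectors, here is the window-subclassing pattern the analogy refers to, as a minimal sketch using classic SetWindowLongPtr-style subclassing. The function names are made up, and the hook here does nothing interesting before forwarding to the original window procedure.

    #include <windows.h>

    static WNDPROC g_prevWndProc;   // the "old vector"

    LRESULT CALLBACK HookWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        // Do some stuff before...
        LRESULT result = CallWindowProc(g_prevWndProc, hwnd, msg, wParam, lParam);
        // ...and possibly some more stuff after letting the original handler run.
        return result;
    }

    void Subclass(HWND hwnd)
    {
        // Get the old "vector" and install the new one.
        g_prevWndProc = (WNDPROC)SetWindowLongPtr(hwnd, GWLP_WNDPROC,
                                                  (LONG_PTR)HookWndProc);
    }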

    One of the 16-bit drivers loaded by CONFIG.SYS was called IFSMGR.SYS. The job of this 16-bit driver was to hook MS-DOS first before the other drivers and programs got a chance! This driver was in cahoots with the 32-bit file system manager, for its job was to jump from 16-bit code back into 32-bit code to let the 32-bit file system manager continue its work.

    In other words, MS-DOS was just an extremely elaborate decoy. Any 16-bit drivers and programs would patch or hook what they thought was the real MS-DOS, but which was in reality just a decoy. If the 32-bit file system manager detected that somebody bought the decoy, it told the decoy to quack.

    Let's start with a system that didn't contain any "evil" drivers or programs that patched or hooked MS-DOS.

    Program calls int 21h
      32-bit file system manager
        checks that nobody has patched or hooked MS-DOS
        performs the requested operation
        updates the state variables inside MS-DOS
        returns to caller
    Program gets result

    This was paradise. The 32-bit file system manager was able to do all the work without having to deal with pesky drivers that did bizarro things. Note the extra step of updating the state variables inside MS-DOS. Even though we extracted the state variables from MS-DOS during the boot process, we keep those state variables in sync because drivers and programs frequently "knew" how those state variables worked and bypassed the operating system and accessed them directly. Therefore, the file system manager had to maintain the charade that MS-DOS was running the show (even though it wasn't) so that those drivers and programs saw what they wanted.

    Note also that those state variables were per-VM. (I.e., each MS-DOS "box" you opened got its own copy of those state variables.) After all, each MS-DOS box had its own idea of what the current directory was, what was in the file tables, that sort of thing. This was all an act, however, because the real list of open files was kept by the 32-bit file system manager. It had to be, because disk caches had to be kept coherent, and file sharing needed to be enforced globally. If one MS-DOS box opened a file for exclusive access, then an attempt by a program running in another MS-DOS box to open the file should fail with a sharing violation.

    Okay, that was the easy case. The hard case is if you had a driver that hooked int 21h. I don't know what the driver does; let's say that it's a network driver that intercepts I/O to network drives and handles them in some special way. Let's suppose also that there's some TSR running in the MS-DOS box which has hooked int 21h so it can print a 1 to the screen when the int 21h is active and a 2 when the int 21h completes. Let's follow a call to a local device (not a network device, so the network driver doesn't do anything):

    Program calls int 21h
      32-bit file system manager
        notices that somebody has patched or hooked MS-DOS
        jumps to the hook (which is the 16-bit TSR)
          16-bit TSR (front end)
            prints a 1 to the screen
            calls previous handler (which is the 16-bit network driver)
              16-bit network driver (front end)
                decides that this isn't a network I/O request
                calls previous handler (which is the 16-bit IFSMGR hook)
                  16-bit IFSMGR hook
                    tells 32-bit file system manager
                      that it's time to make the donuts
                    32-bit file system manager
                      regains control
                      performs the requested operation
                      updates the state variables inside MS-DOS
                      returns to caller
              16-bit network driver (back end)
                returns to caller
          16-bit TSR (back end)
            prints a 2 to the screen
            returns to caller
    Program gets result

    Notice that all the work is still being done by the 32-bit file system manager. It's just that the call gets routed through all the 16-bit stuff to maintain the charade that 16-bit MS-DOS is still running the show. The only 16-bit code that actually ran is the stuff that the TSR and network driver installed, plus a tiny bit of glue in the 16-bit IFSMGR hook. Notice that no 16-bit MS-DOS code ran. The 32-bit file system manager took over for MS-DOS.

    A similar sort of "take over but let the crazy stuff happen if somebody is doing crazy stuff" dance took place when the I/O subsystem took over control of your hard drive from 16-bit device drivers. If it recognized the drivers, it would "suck their brains out" and take over all the operations, in the same way that the 32-bit file system manager took over operations from 16-bit MS-DOS. On the other hand, if the driver wasn't one that the I/O subsystem recognized, it let the driver be the one in charge of the drive. If this happened, it was said that you were going through the "real-mode mapper" since "real mode" was the name for the CPU mode when protected mode was not running; in other words, the mapper was letting the 16-bit drivers do the work.

    Now, if you were unlucky enough to be using the real-mode mapper, you probably noticed that system performance to that drive was pretty awful. That's because you were using the old clunky single-threaded 16-bit drivers instead of the faster, multithread-enabled 32-bit drivers. (When a 16-bit driver was running, no other I/O could happen because 16-bit drivers were not designed for multi-threading.)

    This awfulness of the real-mode mapper actually came in handy in a backwards way, because it was an early indication that your computer got infected with an MS-DOS virus. After all, MS-DOS viruses did what TSRs and drivers did: They hooked interrupt vectors and took over control of your hard drive. From the I/O subsystem's point of view, they looked just like a 16-bit hard disk device driver! When people complained, "Windows suddenly started running really slow," we asked them to look at the system performance page in the control panel and see if it says that "Some drives are using MS-DOS compatibility." If so, then it meant that the real-mode mapper was in charge, and if you didn't change hardware, it probably meant a virus.

    Now, there are parts of MS-DOS that are unrelated to file I/O. For example, there are functions for allocating memory, parsing a string containing potential wildcards into FCB format, that sort of thing. Those functions were still handled by MS-DOS since they were just "helper library" type functions and there was no benefit to reimplementing them in 32-bit code aside from just being able to say that you did it. The old 16-bit code worked just fine, and if you let it do the work, you preserved compatibility with programs that patched MS-DOS in order to alter the behavior of those functions.

  • The Old New Thing

    A 90-byte "whereis" program

    • 41 Comments

    Sometimes people try too hard.

    You can download a C# program to look for a file on your PATH, or you can use a 90-character batch file (the %%~$PATH:i modifier searches the directories listed in the PATH environment variable for the file named by %%i and expands to the first match, or to an empty string if there is none):

    @for %%e in (%PATHEXT%) do @for %%i in (%1%%e) do @if NOT "%%~$PATH:i"=="" echo %%~$PATH:i
    
  • The Old New Thing

    What is the command line length limit?

    • 16 Comments

    It depends on whom you ask.

    The maximum command line length for the CreateProcess function is 32767 characters. This limitation comes from the UNICODE_STRING structure.
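
    For reference, here is that structure as declared in winternl.h. Its length fields are 16-bit byte counts, which is where the 32767-character ceiling comes from.

    #include <windows.h>

    // Excerpted from winternl.h, shown here for reference.
    typedef struct _UNICODE_STRING {
        USHORT Length;        // length of the string, in bytes (a 16-bit count, so at most 65535)
        USHORT MaximumLength; // size of the buffer, in bytes
        PWSTR  Buffer;
    } UNICODE_STRING;

    // 32767 WCHARs, including the terminating null, occupy 65534 bytes,
    // which just fits in a 16-bit byte count; one more character would not.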

    CreateProcess is the core function for creating processes, so if you are talking directly to Win32, then that's the only limit you have to worry about. But if you are reaching CreateProcess by some other means, then the path you travel through may have other limits.

    If you are using the CMD.EXE command processor, then you are also subject to the 8192 character command line length limit imposed by CMD.EXE.

    If you are using the ShellExecute/Ex function, then you become subject to the INTERNET_MAX_URL_LENGTH (around 2048) command line length limit imposed by the ShellExecute/Ex functions. (If you are running on Windows 95, then the limit is only MAX_PATH.)

    While I'm here, I may as well mention another limit: The maximum size of your environment is 32767 characters. The size of the environment includes all the variable names plus all the values.

    Okay, but what if you want to pass more than 32767 characters of information to a process? You'll have to find something other than the command line. We'll discuss some options tomorrow.

  • The Old New Thing

    If you are trying to understand an error, you may want to look up the error code to see what it means instead of just shrugging

    • 64 Comments

    A customer had a debug trace log and needed some help interpreting it. The trace log was generated by an operating system component, but the details aren't important to the story.

    I've attached the log file. I think the following may be part of the problem.

    [07/17/2005:18:31:19] Creating process D:\Foo\bar\blaz.exe
    [07/17/2005:18:31:19] CreateProcess failed with error 2
    

    Any ideas?

    Thanks,
    Bob Smith
    Senior Test Engineer
    Tailspin Toys

    What struck me is that Bob is proud of the fact that he's a Senior Test Engineer, perhaps because it makes him think that we will take him more seriously because he has some awesome title.

    But apparently a Senior Test Engineer doesn't know what error 2 is. There are some error codes that you end up committing to memory because you run into them over and over. Error 32 is ERROR_SHARING_VIOLATION, error 3 is ERROR_PATH_NOT_FOUND, and in this case, error 2 is ERROR_FILE_NOT_FOUND.

    And even if Bob didn't have error 2 memorized, he should have known to look it up.
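
    Looking it up doesn't require having the values memorized, either; the system will translate the code for you. Here is a minimal sketch using FormatMessage, with the error code from the trace log above.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD error = 2;    // the code from the trace log
        char message[256];

        // Ask the system message table for the text of this error code.
        if (FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                           NULL, error, 0, message, sizeof(message), NULL))
        {
            printf("Error %lu: %s", error, message); // "The system cannot find the file specified."
        }
        return 0;
    }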

    Error 2 is ERROR_FILE_NOT_FOUND. Does the file D:\Foo\bar\blaz.exe exist?

    No, it doesn't.

    -Bob

    Bob seems to have shut off his brain and decided to treat troubleshooting not as a collaborative effort but rather as a game of Twenty Questions in which the person with the problem volunteers as little information as possible in order to make things more challenging. I had to give Bob a nudge.

    Can you think of a reason why the system would be looking at D:\Foo\bar\blaz.exe? Where did you expect it to be looking for blaz.exe?

    This managed to wake Bob out of his stupor, and the investigation continued. (And no, I don't remember what the final resolution was. I didn't realize I would have to remember the fine details of this support incident three years later.)

  • The Old New Thing

    No matter where you put an advanced setting, somebody will tell you that you are an idiot

    • 127 Comments

    There are advanced settings in Windows, settings which normal users not only shouldn't be messing with, but which they arguably shouldn't even know about, because that would give them just enough knowledge to be dangerous. And no matter where you put that advanced setting, somebody will tell you that you are an idiot.

    Here they are on an approximate scale. If you dig through the comments on this blog, you can probably find every single position represented somewhere.

    1. It's okay if the setting is hidden behind a registry key. I know how to set it myself.
    2. I don't want to mess with the registry. Put the setting in a configuration file that I pass to the installer.
    3. I don't want to write a configuration file. The program should have an Advanced button that calls up a dialog which lets the user change the advanced setting.
    4. Every setting must be exposed in the user interface.
    5. Every setting must be exposed in the user interface by default. Don't make me call up the extended context menu.
    6. The first time the user does X, show users a dialog asking if they want to change the advanced setting.

    If you implement level N, people will demand that you implement level N+1. It doesn't stop until you reach the last step, which is aggressively user-hostile. (And then there will also be people who complain that you went too far.)

    From a technical standpoint, each of the above steps is about ten to a hundred times harder than the previous one. If you put it in a configuration file, you have to write code to load a parser and extract the value. If you want an Advanced button, now you have to write a dialog box (which is already a lot of work), consult with the usability and user assistance teams to come up with the correct wording for the setting, write help text, provide guidance to the translators, and now since it is exposed in the user interface, you need to write automated tests and add the setting to the test matrices. It's a huge amount of work to add a dialog box, work that could be spent on something that benefits a much larger set of customers in a more direct manner.

    That's why most advanced settings hang out at level 1 or, in the case of configuring program installation, level 2. If you're so much of a geek that you want to change these advanced settings, it probably won't kill you to fire up a text editor and write a little configuration file.

    Sidebar

    Joel's count of "fifteen ways to shut down Windows" is a bit disingenuous, since he's counting six hardware affordances: "Four FN+key combinations... an on-off button... you can close the lid." Okay, fine, Joel, we'll play it your way. Your proposal to narrow it down to one "Bye" button still leaves seven ways to shut down Windows.

    And then people will ask how to log off.

  • The Old New Thing

    Controlling which devices will wake the computer out of sleep

    • 62 Comments

    I haven't experienced this problem, but I know of people who have. They'll put their laptop into suspend or standby mode, and after a few seconds, the laptop will spontaneously wake itself up. Someone gave me this tip that might (might) help you figure out what is wrong.

    Open a command prompt and run the command

    powercfg -devicequery wake_from_any

    This lists all the hardware devices that are capable of waking up the computer from standby. But the operating system typically ignores most of them. To see the ones that are not being ignored, run the command

    powercfg -devicequery wake_armed

    This second list is typically much shorter. On my computer, it listed just the keyboard, the mouse, and the modem. (The modem? I never use that thing!)

    You can disable each of these devices one by one until you find the one that is waking up the computer.

    powercfg -devicedisablewake "device name"

    (How is this different from unchecking Allow this device to wake the computer from the device properties in Device Manager? Beats me.)

    Once you find the one that is causing problems, you can re-enable the others.

    powercfg -deviceenablewake "device name"

    I would start by disabling wake-up for the keyboard and mouse. Maybe the optical mouse is detecting tiny vibrations in your room. Or the device might simply be "chatty", generating activity even though you aren't touching it.

    This may not solve your problem, but at least it's something you can try. I've never actually tried it myself, so who knows whether it works.

    Exercise: Count how many disclaimers there are in this article, and predict how many people will ignore them.

  • The Old New Thing

    We've traced the call and it's coming from inside the house: A function call that always fails

    • 64 Comments

    A customer reported that they had a problem with a particular function added in Windows 7. The tricky bit was that the function was used only on very high-end hardware, not the sort of thing your average developer has lying around.

    GROUP_AFFINITY GroupAffinity;
    ... code that initializes the GroupAffinity structure ...
    if (!SetThreadGroupAffinity(hThread, &GroupAffinity, NULL));
    {
     printf("SetThreadGroupAffinity failed: %d\n", GetLastError());
     return FALSE;
    }
    

    The customer reported that the function always failed with error 122 (ERROR_INSUFFICIENT_BUFFER) even though the buffer seems perfectly valid.

    Since most of us don't have machines with more than 64 processors, we couldn't run the code on our own machines to see what happens. People asked some clarifying questions, like whether this code is compiled 32-bit or 64-bit (thinking that maybe there is an issue with the emulation layer), until somebody noticed that there was a stray semicolon at the end of the if statement. (With the semicolon there, the body of the if statement is empty, and the braces that follow form a block that runs unconditionally, so the error-reporting code executed regardless of whether the call succeeded.)

    The customer was naturally embarrassed, but was gracious enough to admit that, yup, removing the semicolon fixed the problem.

    This reminds me of an incident many years ago. I was having a horrible time debugging a simple loop. It looked like the compiler was on drugs and was simply ignoring my loop conditions and always dropping out of the loop. At wit's end, I asked a colleague to come to my office and serve as a second set of eyes. I talked him through the code as I single-stepped:

    "Okay, so we set up the loop here..."

    NODE pn = GetActiveNode();
    

    "And we enter the loop, continuing while the node still needs processing."

    if (pn->NeedsProcessing())
    {
    

    "Okay, we entered the loop. Now we realign the skew rods on the node."

     pn->RealignSkewRods();
    

    "If the treadle is splayed, we need to calibrate the node against it."

     if (IsSplayed()) pn->Recalibrate(this);
    

    "And then we loop back to see if there is more work to be done on this node."

    }
    

    "But look, even though the node needs processing «view node members», we don't loop back. We just drop out of the loop. What's going on?"

    Um, that's an if statement up there, not a while statement.

    A moment of silence while I process this piece of information.

    "All right then, sorry to bother you, hey, how about that sporting event last night, huh?"
