• The Old New Thing

    Why doesn't the Print command appear when I select 20 files and right-click?

    This is explained in the MSDN documentation:

    When the number of items selected does not match the verb selection model or is greater than the default limits outlined in the following table, the verb fails to appear.

    Type of verb implementation    Document     Player
    Legacy                         15 items     100 items
    COM                            15 items     No limit

    The problem here is that users will select a large number of files, then accidentally Print all of them. This fires up 100 copies of Notepad or Photoshop or whatever, and all of them start racing to the printer. Most of the time, the user is frantically trying to close 100 windows to stop the documents from printing, which is a problem because 100 new processes put a heavy load on the system, so it's slow to respond to all the frantic clicks. And even if a click manages to make it to the printing application, the application is running so slowly due to disk I/O contention that it takes a long time to respond anyway.

    In a panic, the user pulls the plug on the computer.

    The limit of 15 documents for legacy verbs tries to limit the scope of the damage. You will get at most 15 new processes starting at once, which is still a lot, but is significantly more manageable than 100 processes.

    Player verbs and COM-based verbs have higher limits because they are typically all handled by a single program, so there's only one program that you need to close. (Although there is one popular player that still runs a separate process for each media file, so if you select 1000 music files, right-click, and select "Add to playlist", it runs 1000 copies of the program, which basically turns your computer into a space heater. An arbitrary limit of 100 was chosen to keep the damage under control.)

    If you want to raise the 15-document limit, you can adjust the Multiple­Invoke­Prompt­Minimum setting. Note that this setting is not contractual, so don't get too attached to it.
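
    If I remember the registry details correctly, the setting is a DWORD value under Explorer's per-user key; treat the exact path and value name below as an assumption to verify against the documentation rather than a contract. Something along these lines raises the prompt threshold to 30:

    reg add HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer ^
        /v MultipleInvokePromptMinimum /t REG_DWORD /d 30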

  • The Old New Thing

    Hazy memories of the Windows 95 ship party


    One of the moments from the Windows 95 ship party (20 years ago today) was when one of the team members drove his motorcycle through the halls, leaving burns in the carpet.

    The funny part of that story (beyond the fact that it happened) is that nobody can agree on who it was! I seem to recall that it was Todd, but another of my colleagues remembers that it was Dave, and yet another remembers that it was Ed. We all remember the carpet burns, but we all blame it on different people.

    As one of my colleagues noted, "I'm glad all of this happened before YouTube."

    Brad Silverberg, the vice president of the Personal Systems Division (as it was then known), recalled that "I had a lot of apologizing to do to Facilities [about all the shenanigans that took place that day], but it was worth it."

  • The Old New Thing

    Generating different types of timestamps from quite a long way away


    Today's Little Program does the reverse of what we had last time. It takes a point in time and then generates timestamps in various formats.

    using System;

    class Program
    {
     static void TryFormat(string format, Func<long> func)
     {
      try {
       long l = func();
       // Pick the output width based on whether the value fits in 32 bits.
       if ((ulong)l > 0x00000000FFFFFFFF) {
        Console.WriteLine("{0} 0x{1:X16}", format, l);
       } else {
        Console.WriteLine("{0} 0x{1:X08}", format, l);
       }
      } catch (ArgumentException) {
       Console.WriteLine("{0} - invalid", format);
      }
     }
    Like last time, the Try­Format method executes the passed-in function inside a try/catch block. If the function executes successfully, then we print the result. There is a tiny bit of cleverness where we choose the output format depending on the number of bits in the result.

     static long DosDateTimeFromDateTime(DateTime value)
     {
      // Pack the fields into the 32-bit MS-DOS date/time format.
      // (The seconds field has two-second resolution.)
      int result = ((value.Year - 1980) << 25) |
                   (value.Month << 21) |
                   (value.Day << 16) |
                   (value.Hour << 11) |
                   (value.Minute << 5) |
                   (value.Second >> 1);
      return (uint)result;
     }

    The Dos­Date­Time­From­Date­Time method converts the Date­Time into a 32-bit date/time stamp in MS-DOS format. This is not quite correct, because MS-DOS format date/time stamps are in local time, but we are not converting the incoming Date­Time to local time first (say, by passing value.ToLocalTime() instead). It's up to you to decide whether that's what you want.

     public static void Main(string[] args)
     {
      int[] parts = new int[7];
      for (int i = 0; i < 7; i++) {
       parts[i] = args.Length > i ? int.Parse(args[i]) : 0;
      }
      DateTime value = new DateTime(parts[0], parts[1], parts[2],
                                    parts[3], parts[4], parts[5],
                                    parts[6], DateTimeKind.Utc);
      Console.WriteLine("Timestamp {0} UTC", value);
      TryFormat("Unix time",
        () => value.ToFileTimeUtc() / 10000000 - 11644473600);
      TryFormat("UTC FILETIME",
        () => value.ToFileTimeUtc());
      TryFormat("Binary DateTime",
        () => value.ToBinary());
      TryFormat("MS-DOS Date/Time",
        () => DosDateTimeFromDateTime(value));
      TryFormat("OLE Date/Time",
        () => BitConverter.DoubleToInt64Bits(value.ToOADate()));
     }
    }

    The parameters on the command line are the year, month, day, hour, minute, second, and millisecond; any omitted parameters are taken as zero. We create a UTC Date­Time from it, and then try to convert that Date­Time into the other formats.
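
    For example, assuming the program is compiled to an executable called timestamp (the name is made up, and the date formatting depends on your locale), asking about the start of the Unix epoch produces something like this:

    timestamp 1970 1 1

    Timestamp 1/1/1970 12:00:00 AM UTC
    Unix time 0x00000000
    UTC FILETIME 0x019DB1DED53E8000
    Binary DateTime 0x489F7FF5F7B58000
    MS-DOS Date/Time 0xEC210000
    OLE Date/Time 0x40D8F84000000000

    The MS-DOS value is garbage, because 1970 predates the MS-DOS epoch of 1980, but the conversion doesn't validate its input, so the program happily prints it anyway.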

    [Raymond is currently away; this message was pre-recorded.]

  • The Old New Thing

    On the various ways of creating large files in NTFS


    For whatever reason, you may want to create a large file.

    The most basic way of doing this is to use Set­File­Pointer to move the file pointer to the desired (not-yet-existing) large position in the file, then use Set­End­Of­File to extend the file to that size. The file now has disk space assigned to it, but NTFS doesn't actually fill the bytes with zero yet. It does that lazily, on demand. If you intend to write to the file sequentially, then that lazy extension will not typically be noticeable, because it can be combined with the normal writing process (and possibly even optimized out). On the other hand, if you jump ahead and write to a point far past the previous high water mark, you may find that your single-byte write takes forever, because NTFS must first zero-fill everything between the old high water mark and the point you just wrote to.
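
    Here's a minimal sketch of that basic technique. The file name and the 1GB size are made up, and error handling is abbreviated:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        // Open (and truncate) the file we want to extend.
        HANDLE h = CreateFileW(L"bigfile.bin", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        // Move the file pointer to the desired size...
        LARGE_INTEGER size;
        size.QuadPart = 1024LL * 1024 * 1024; // 1GB
        if (SetFilePointerEx(h, size, NULL, FILE_BEGIN) &&
            SetEndOfFile(h)) { // ...and declare that to be the end of the file.
            printf("Extended to %lld bytes\n", (long long)size.QuadPart);
        }

        CloseHandle(h);
        return 0;
    }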

    Another option is to make the file sparse. I refer you to the remarks I made some time ago on the pros and cons of this technique. One thing to note is that when a file is sparse, the virtual-zero parts do not have physical disk space assigned to them. Consequently, it's possible for a Write­File into a previously virtual-zero section of the file to fail with an ERROR_DISK_QUOTA_EXCEEDED error.
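
    A sketch of the sparse variation, assuming h is the writable handle from the previous sketch (winioctl.h provides the FSCTL codes):

    #include <winioctl.h>

    DWORD bytesReturned;

    // Mark the file as sparse...
    DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0,
                    NULL, 0, &bytesReturned, NULL);

    // ...then declare a range of virtual zeroes; the range
    // consumes no physical disk space until it is written to.
    FILE_ZERO_DATA_INFORMATION zeroes;
    zeroes.FileOffset.QuadPart = 0;
    zeroes.BeyondFinalZero.QuadPart = 1024LL * 1024 * 1024;
    DeviceIoControl(h, FSCTL_SET_ZERO_DATA, &zeroes, sizeof(zeroes),
                    NULL, 0, &bytesReturned, NULL);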

    Yet another option is to use the Set­File­Valid­Data function. This tells NTFS to go grab some physical disk space, assign it to the file, and set the "I already zero-initialized all the bytes up to this point" marker to the file size. This means that the bytes in the file will contain uninitialized garbage, which poses a security risk, because somebody can stumble across data that used to belong to another user. That's why Set­File­Valid­Data requires administrator privileges.
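
    Calling it takes a little ceremony, because you first have to enable SeManageVolumePrivilege in your token. A sketch (SetValidDataOnFile is a hypothetical helper name, error handling is abbreviated, and the file must already have been extended with Set­End­Of­File):

    BOOL SetValidDataOnFile(HANDLE hFile, LONGLONG cb)
    {
        // Enable SeManageVolumePrivilege; this succeeds only for
        // administrators, which is the point.
        HANDLE hToken;
        TOKEN_PRIVILEGES tp = { 1 };
        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES, &hToken)) return FALSE;
        LookupPrivilegeValueW(NULL, L"SeManageVolumePrivilege",
                              &tp.Privileges[0].Luid);
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL);
        CloseHandle(hToken);

        // Declare the first cb bytes "already zeroed" without zeroing them.
        return SetFileValidData(hFile, cb);
    }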

    From the command line, you can use the fsutil file setvaliddata command to accomplish the same thing.

    Bonus chatter: The documentation for Set­End­Of­File says, "If the file is extended, the contents of the file between the old end of the file and the new end of the file are not defined." But I just said that it will be filled with zero on demand. Who is right?

    The formal definition of the Set­End­Of­File function is that the extended content is undefined. However, NTFS will ensure that you never see anybody else's leftover data, for security reasons. (Assuming you're not intentionally bypassing the security by using Set­File­Valid­Data.)

    Other file systems, however, may choose to behave differently.

    For example, in Windows 95, the extended content is not zeroed out. You will get random uninitialized junk that happens to be whatever was lying around on the disk at the time.

    If you know that the file system you are using is being hosted on a system running some version of Windows NT (and that the authors of the file system passed their Common Criteria security review), then you can assume that the extra bytes are zero. But if there's a chance that the file is on a computer running Windows for Workgroups or Windows 95, then you need to worry about those extra bytes. (And if the file system is hosted on a computer running a non-Windows operating system, then you'll have to check the documentation for that operating system to see whether it guarantees zeroes when files are extended.)

    [Raymond is currently away; this message was pre-recorded.]

  • The Old New Thing

    Why is my x64 process getting heap address above 4GB on Windows 8?


    A customer noticed that when they ran their program on Windows 8, memory allocations were being returned above the 4GB boundary. They included a simple test program:

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char** argv)
    {
        void *testbuffer = malloc(256);
        printf("Allocated address = %p\n", testbuffer);
        return 0;
    }
    When run on Windows 7, the program prints addresses like 0000000000179B00, but when run on Windows 8, it prints addresses like 00000086E60EA410.

    The customer added that they care about this difference because pointers above 4GB will be corrupted when the value is truncated to a 32-bit value. As part of their experimentation, they found that they could force pointers above 4GB to occur even on Windows 7 by allocating very large chunks of memory, but on Windows 8, it's happening right off the bat.

    The memory management team explained that this is expected for applications linked with the /HIGH­ENTROPY­VA flag, which the Visual Studio linker enables by default for 64-bit programs.

    High-entropy virtual address space is more commonly known as Address Space Layout Randomization (ASLR). ASLR is a feature that makes addresses in your program less predictable, which significantly improves its resilience to many categories of security attacks. Windows 8 expands the scope of ASLR beyond just the code pages in your process so that it also randomizes where the heap goes.

    The customer accepted that answer, and that was the end of the conversation, but there was something in this exchange that bothered me: The bit about truncating to a 32-bit value.

    Why are they truncating 64-bit pointers to 32-bit values? That's the bug right there. And they even admit that they can trigger the bug by forcing the program to allocate a lot of memory. They need to stop truncating pointers! Once they do that, all the problems will go away, and it won't matter where the memory gets allocated.

    If there is some fundamental reason that they have to truncate pointers to 32-bit values, then they should build without /LARGEADDRESSAWARE so that the process will be given an address space of only 2GB, and then they can truncate their pointers all they want.
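
    In Visual C++ terms, that's a link-time option; something along these lines should do it (treat the exact spelling as something to double-check against the linker documentation):

    cl program.cpp /link /LARGEADDRESSAWARE:NO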

    (Of course, if you're going to do that, then you probably should just compile the program as a 32-bit program, since you're not really gaining much from being a 64-bit program any more.)

  • The Old New Thing

    What would be the point of creating a product that can't do its job?


    Yuhong Bao for some reason lamented that there is no 32-bit version of Windows Server 2008 R2.

    Well, duh.

    Why would anybody want a 32-bit version of a server product? You would run into address space limitations right off the bat: You couldn't use it as an Exchange server, your Terminal Server couldn't support more than 100 or so users, your file server disk cache couldn't exceed 2GB (and would probably be much smaller), and your SQL Server would be forced into AWE mode, and even then, AWE memory is used only for database page caches, not for anything else.

    Basically, a 32-bit server would be pretty much useless for anything it would be asked to do in its mission as a server.

    (Device driver compatibility is a much less significant issue for servers, because servers rarely run on exotic hardware. Indeed, servers typically run on the most boring hardware imaginable and explicitly run the lamest video driver available. You don't want to take the risk that a fancy video card's fancy video driver is going to have a bug that crashes your server, and besides, nobody is sitting at the server console anyway—all the administration is done remotely.)

  • The Old New Thing

    Intentionally making the suggestion look nothing like any scripting language, yet understandable enough to get the point across


    In an internal peer-to-peer discussion list for an internal tool I'll call Program Q, somebody asked,

    How can I query the number of free frobs in every table in my table repository?

    I suggested that they could use the command

    q query-property "*::frobs-free"

    taking advantage of the fact that in Program Q, you can specify a wildcard for the table name to query across all tables.

    Thanks, this looks promising, but my repository has a huge number of tables, so the q query-property command refuses to expand the * wildcard that much. I can get around this by issuing 26 queries, one for each letter of the alphabet:

    q query-property "a*::frobs-free"
    q query-property "b*::frobs-free"
    q query-property "c*::frobs-free"
    q query-property "z*::frobs-free"

    Is there a better way to do this?

    I replied with some pseudocode.

    (
      from table in `q list-tables`
      select table + "::frobs-free"
    ) | q query-property @-

    (The @ means that it should take the list of properties from a file, and we give - as the file name, meaning standard input. Not that it's important because I completely made this up.)

    A colleague of mine noted that I provided just enough syntax to explain the algorithm clearly, but in a form that cannot be executed in any scripting language, so the user understands that it is just an algorithm that needs to be massaged into something that will actually run.

    It's a neat trick when it works. But when it fails, it fails spectacularly. Fortunately, in this case, it worked.

    Bonus chatter: For all I know, that's valid PowerShell.

  • The Old New Thing

    Trying out all the different ways of recognizing different types of timestamps from quite a long way away


    Today's Little Program takes a 64-bit integer and tries to interpret it in all the various timestamp formats. This comes in handy when you have extracted a timestamp from a crash dump and want to see it in a friendly format.

    using System;

    class Program
    {
     static void TryFormat(string format, Func<DateTime> func)
     {
      try {
       DateTime d = func();
       Console.WriteLine("{0} {1}", format, d);
      } catch (ArgumentException) {
       Console.WriteLine("{0} - invalid", format);
      }
     }

    The Try­Format method executes the passed-in function inside a try/catch block. If the function executes successfully, then we print the result. If it raises an argument exception, then we declare the value as invalid.

     static DateTime DateTimeFromDosDateTime(long value)
     {
      if ((ulong)value > 0x00000000FFFFFFFF) {
       throw new ArgumentOutOfRangeException();
      }
      // Unpack the bitfields of the 32-bit MS-DOS date/time format.
      int intValue = (int)value;
      int year = (intValue >> 25) & 127;
      int month = (intValue >> 21) & 15;
      int day = (intValue >> 16) & 31;
      int hour = (intValue >> 11) & 31;
      int minute = (intValue >> 5) & 63;
      int second = (intValue << 1) & 63;
      return new DateTime(1980 + year, month, day, hour, minute, second);
     }

    The Date­Time­From­Dos­Date­Time function treats the 64-bit value as a 32-bit date/time stamp in MS-DOS format. Assuming the value fits in a 32-bit integer, we extract the bitfields corresponding to the year, month, day, hour, minute, and second, and construct a Date­Time from it.

     public static void Main(string[] args)
     {
      if (args.Length < 1) return;
      long value = ParseLongSomehow(args[0]);
      Console.WriteLine("Timestamp {0} (0x{0:X}) could mean", value);
      TryFormat("Unix time",
        () => DateTime.FromFileTimeUtc(10000000 * value + 116444736000000000));
      TryFormat("UTC FILETIME",
        () => DateTime.FromFileTimeUtc(value));
      TryFormat("Local FILETIME",
        () => DateTime.FromFileTime(value));
      TryFormat("UTC DateTime",
        () => new DateTime(value, DateTimeKind.Utc));
      TryFormat("Local DateTime",
        () => new DateTime(value, DateTimeKind.Local));
      TryFormat("Binary DateTime",
        () => DateTime.FromBinary(value));
      TryFormat("MS-DOS Date/Time",
        () => DateTimeFromDosDateTime(value));
      TryFormat("OLE Automation Date/Time",
        () => DateTime.FromOADate(BitConverter.Int64BitsToDouble(value)));
     }

     // Stand-in parser (the original leaves this detail open):
     // accept decimal, or hexadecimal with a 0x prefix.
     static long ParseLongSomehow(string s)
     {
      return s.StartsWith("0x")
       ? long.Parse(s.Substring(2), System.Globalization.NumberStyles.HexNumber)
       : long.Parse(s);
     }
    }

    Once we have parsed out the command line, we pump the value through all the different conversion functions. Most of them are natively supported by the Date­Time structure, but we had to create a few of them manually.

  • The Old New Thing

    Why does the BackupWrite function take a pointer to a modifiable buffer when it shouldn't be modifying the buffer?


    The Backup­Write function takes a non-const pointer to the buffer to be written to the file being restored. Will it actually modify the buffer? Assuming it doesn't, why wasn't it declared const? It would be much more convenient if it took a const pointer to the buffer, so that people with const buffers didn't have to const_cast every time they called the function. Would changing the parameter from non-const to const create any compatibility problems?

    Okay, let's take the questions in order.

    Will it actually modify the buffer? No.

    Why wasn't it declared const? My colleague Aaron Margosis explained that the function dates back to Windows NT 3.1, when const-correctness was rarely considered. A lot of functions from that era (particularly in the kernel) suffer from the same problem. For example, the computer name passed to the Reg­Connect­Registry function is a non-const pointer even though the function never reads from it.

    Last question: Can the parameter be changed from non-const to const without breaking compatibility?

    It would not cause problems from a binary compatibility standpoint, because a const pointer and a non-const pointer take the same physical form in Win32. However, it breaks source code compatibility. Consider the following code fragment:

    BOOL WINAPI TestModeBackupWrite(
      HANDLE hFile,
      LPBYTE lpBuffer,
      DWORD nNumberOfBytesToWrite,
      LPDWORD lpNumberOfBytesWritten,
      BOOL bAbort,
      BOOL bProcessSecurity,
      LPVOID *lpContext)
    {
     ... simulate a BackupWrite ...
     return TRUE;
    }

    typedef BOOL (WINAPI *BACKUPWRITEPROC)(HANDLE, LPBYTE, DWORD,
                    LPDWORD, BOOL, BOOL, LPVOID *);

    BACKUPWRITEPROC TestableBackupWrite;

    void SetTestMode(bool testing)
    {
     if (testing) {
      TestableBackupWrite = TestModeBackupWrite;
     } else {
      TestableBackupWrite = BackupWrite;
     }
    }
    The idea here is that the program can be run in test mode, say to do a simulated restore. (You see this sort of thing a lot with DVD-burning software.) The program uses Testable­Backup­Write whenever it wants to write to a file being restored from backup. In test mode, Testable­Backup­Write points to the Test­Mode­Backup­Write function; in normal mode, it points to the Backup­Write function.

    If the second parameter were changed from LPBYTE to const BYTE *, then the above code would hit a compiler error.

    Mind you, maybe it's worth breaking some source code in order to get better const-correctness, but for now, the cost/benefit tradeoff biases toward leaving things alone.

  • The Old New Thing

    Is a SID with zero subauthorities a valid SID? It depends whom you ask


    Here's an interesting table.

    Function                      Is Sub­Authority­Count=0 valid?
    IsValidSid                    Yes
    Convert­Sid­To­String­Sid    Yes
    Convert­String­Sid­To­Sid    No

    That last entry creates the unfortunate situation where a SID with no subauthorities can be converted to a string, but cannot be converted back.

    If it's any consolation, SIDs with no subauthorities aren't encountered in normal usage, so if you ever accidentally reject one of these, it's not going to inconvenience anyone.

    Oh, and the answer to the question at the top: Yes, a SID with zero subauthorities is technically valid. It's a degenerate case that's not very interesting, but it is technically valid.
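
    Here's a minimal sketch that demonstrates the asymmetry. (I picked the NT identifier authority arbitrarily; any authority works, and error handling is abbreviated.)

    #include <windows.h>
    #include <sddl.h>
    #include <stdio.h>

    int main(void)
    {
        // Build a SID with zero subauthorities.
        SID_IDENTIFIER_AUTHORITY authority = SECURITY_NT_AUTHORITY;
        BYTE buffer[SECURITY_MAX_SID_SIZE];
        PSID sid = (PSID)buffer;
        InitializeSid(sid, &authority, 0);

        printf("IsValidSid: %d\n", IsValidSid(sid));

        LPWSTR stringSid;
        if (ConvertSidToStringSidW(sid, &stringSid)) {
            wprintf(L"String form: %s\n", stringSid); // prints "S-1-5"
            PSID roundTrip;
            // Per the table above, the round trip is the step that fails.
            printf("ConvertStringSidToSid: %d\n",
                   ConvertStringSidToSidW(stringSid, &roundTrip));
            LocalFree(stringSid);
        }
        return 0;
    }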
