October, 2010

  • The Old New Thing

    The evolution of the ICO file format, part 2: Now in color!

    • 26 Comments

    Last time, we looked at the format of classic monochrome icons. But what if you want to include color images, too? (Note that it is legal—and for a time it was common—for a single ICO file to offer both monochrome and color icons. After all, a single ICO file can offer both 16-color and high-color images; why not also 2-color images?)

    The representation of color images in an ICO file is almost the same as the representation of monochrome images: All that changes is that the image bitmap is now in color. (The mask remains monochrome.)

    In other words, the image format consists of a BITMAPINFOHEADER where the biWidth is the width of the image and the biHeight is double the height of the image, followed by the bitmap color table, followed by the image pixels, followed by the mask pixels.

    Note that the result of this is a bizarre non-standard bitmap. The height is doubled because we have both an image and a mask, but the color format changes halfway through!

    Other restrictions: Supported color formats are 4bpp, 8bpp, 16bpp, and 0RGB 32bpp. Note that 24bpp is not supported; you'll have to convert it to a 0RGB 32bpp bitmap. Supported values for biCompression for color images are BI_RGB and (if your bitmap is 16bpp or 32bpp) BI_BITFIELDS.
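
    To make that layout concrete, here is a minimal sketch (illustrative code, not anything from the shell) that fills in the header for a single 16×16, 0RGB 32bpp image as it would appear inside the ICO file:

    #include <windows.h>

    // Illustrative only: fill in the BITMAPINFOHEADER for one 16x16,
    // 0RGB 32bpp icon image.  The height is doubled because the monochrome
    // AND mask is stored immediately after the color image pixels.
    void InitColorIconImageHeader(BITMAPINFOHEADER *bih)
    {
        ZeroMemory(bih, sizeof(*bih));
        bih->biSize = sizeof(BITMAPINFOHEADER);
        bih->biWidth = 16;
        bih->biHeight = 16 * 2;       // image height plus mask height
        bih->biPlanes = 1;
        bih->biBitCount = 32;         // 0RGB; a 24bpp source must be converted
        bih->biCompression = BI_RGB;
        // No color table at 32bpp.  The header is followed by 16*16 DWORDs of
        // image pixels, then the 1bpp mask, each mask row padded to a DWORD.
    }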

    The mechanics of drawing the icon are the same as for a monochrome image: First, the mask is ANDed with the screen, then the image is XORed. In other words,

    pixel = (screen AND mask) XOR image

    On the other hand, XORing color pixels is not really a meaningful operation. It's not like people say "Naturally, fuchsia XOR aqua equals yellow. Any idiot knows that." Or "Naturally, blue XOR eggshell equals apricot on 8bpp displays (because eggshell is palette index 56, blue is palette index 1, and palette index 57 is apricot) but is equal to #F0EA29 on 32bpp displays." The only meaningful color to XOR against is black, in which case you have "black XOR Q = Q for all colors Q".

    mask  image  result                                operation
    0     Q      (screen AND 0) XOR Q = Q              copy from icon
    1     0      (screen AND 1) XOR 0 = screen         nop
    1     Q      (screen AND 1) XOR Q = screen XOR Q   dubious

    For pixels you want to be transparent, set your mask to white and your image to black. For pixels you want to come from your icon, set your mask to black and your image to the desired color.
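
    As a small illustration (hypothetical names: rgba is a 16×16 source bitmap in 0xAARRGGBB format, mask and image are the two output planes, and SetMaskBit is a helper that sets one bit of the 1bpp AND mask), building the two planes from per-pixel transparency might look like this:

    // Sketch: derive the AND mask and the color (XOR) image from rgba.
    for (int y = 0; y < 16; y++) {
        for (int x = 0; x < 16; x++) {
            DWORD px = rgba[y * 16 + x];
            BOOL transparent = ((px >> 24) == 0);   // alpha of zero
            SetMaskBit(mask, x, y, transparent);    // 1 = show the screen through
            image[y * 16 + x] = transparent
                ? 0                                 // black under the mask
                : (px & 0x00FFFFFF);                // 0RGB color
        }
    }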

    We now have enough information to answer a common question people have about icons. After that break, we'll return to the evolution of the ICO file format.

    For further reading: Icons in Win32.

  • The Old New Thing

    The evolution of the ICO file format, part 1: Monochrome beginnings

    • 36 Comments

    This week is devoted to the evolution of the ICO file format. Note that the icon resource format is different from the ICO file format; I'll save that topic for another day.

    The ICO file begins with a fixed header:

    typedef struct ICONDIR {
        WORD          idReserved;
        WORD          idType;
        WORD          idCount;
        ICONDIRENTRY  idEntries[];
    } ICONHEADER;
    

    idReserved must be zero, and idType must be 1. The idCount describes how many images are included in this ICO file. An ICO file is really a collection of images; the theory is that each image is an alternate representation of the same underlying concept, but at different sizes and color depths. There is nothing to prevent you, in principle, from creating an ICO file where the 16×16 image looks nothing like the 32×32 image, but your users will probably be confused.

    After the idCount comes an array of IconDirectoryEntry structures, one for each image.

    struct IconDirectoryEntry {
        BYTE  bWidth;
        BYTE  bHeight;
        BYTE  bColorCount;
        BYTE  bReserved;
        WORD  wPlanes;
        WORD  wBitCount;
        DWORD dwBytesInRes;
        DWORD dwImageOffset;
    };
    

    The bWidth and bHeight are the dimensions of the image. Originally, the supported range was 1 through 255, but starting in Windows 95 (and Windows NT 4), the value 0 is accepted as representing a width or height of 256.

    The wBitCount and wPlanes describe the color depth of the image; for monochrome icons, these values are both 1. The bReserved must be zero. The dwImageOffset and dwBytesInRes describe the location (relative to the start of the ICO file) and size in bytes of the actual image data.

    And then there's bColorCount. Poor bColorCount. It's supposed to be equal to the number of colors in the image; in other words,

    bColorCount = 1 << (wBitCount * wPlanes)

    If wBitCount * wPlanes is greater than or equal to 8, then bColorCount is zero.
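
    Expressed as code, the rule is a one-liner (a sketch using the IconDirectoryEntry fields described above):

    // Sketch: compute bColorCount from the color depth, the way the format expects.
    WORD depth = entry.wBitCount * entry.wPlanes;
    entry.bColorCount = (depth >= 8) ? 0 : (BYTE)(1 << depth);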

    In practice, a lot of people get lazy about filling in the bColorCount and set it to zero, even for 4-color or 16-color icons. Starting in Windows XP, Windows autodetects this common error, but its autocorrection is slightly buggy in the case of planar bitmaps. Fortunately, almost nobody uses planar bitmaps any more, but still, it would be in your best interest not to rely on the autocorrection performed by Windows and just set your bColorCount correctly in the first place. An incorrect bColorCount means that when Windows tries to find the best image for your icon, it may choose a suboptimal one because it based its decision on incorrect color depth information.
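
    For concreteness, here is a minimal directory-walking sketch (no real error handling, and the function name is mine), assuming the entire .ICO file has already been read into memory and that the IconDirectoryEntry declaration above is available:

    #include <windows.h>
    #include <string.h>

    // Sketch: walk the icon directory of an ICO file sitting in memory.
    // The entries start 6 bytes in (after three WORDs), so copy them out with
    // memcpy rather than casting, which sidesteps structure-packing questions.
    BOOL WalkIconDirectory(const BYTE *data, size_t cb)
    {
        WORD idReserved, idType, idCount;
        if (cb < 6) return FALSE;
        memcpy(&idReserved, data + 0, sizeof(WORD));
        memcpy(&idType,     data + 2, sizeof(WORD));
        memcpy(&idCount,    data + 4, sizeof(WORD));
        if (idReserved != 0 || idType != 1) return FALSE;    // not an ICO file
        if (cb < 6 + idCount * sizeof(struct IconDirectoryEntry)) return FALSE;

        for (WORD i = 0; i < idCount; i++) {
            struct IconDirectoryEntry entry;                 // 16 bytes on disk
            memcpy(&entry, data + 6 + i * sizeof(entry), sizeof(entry));
            // entry.dwBytesInRes bytes of image data begin at
            // data + entry.dwImageOffset: a BITMAPINFOHEADER, an optional
            // color table, the image pixels, and then the mask pixels.
        }
        return TRUE;
    }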

    Although it probably isn't true, I will pretend that monochrome icons existed before color icons, because it makes the storytelling easier.

    A monochrome icon is described by two bitmaps, called AND (or mask) and XOR (or image, or when we get to color icons, color). Drawing an icon takes place in two steps: First, the mask is ANDed with the screen, then the image is XORed. In other words,

    pixel = (screen AND mask) XOR image

    By choosing appropriate values for mask and image, you can cover all the possible monochrome BLT operations.

    mask  image  result                             operation
    0     0      (screen AND 0) XOR 0 = 0           blackness
    0     1      (screen AND 0) XOR 1 = 1           whiteness
    1     0      (screen AND 1) XOR 0 = screen      nop
    1     1      (screen AND 1) XOR 1 = NOT screen  invert

    Conceptually, the mask specifies which pixels from the image should be copied to the destination: A black pixel in the mask means that the corresponding pixel in the image is copied.

    The mask and image bitmaps are physically stored as one single double-height DIB. The image bitmap comes first, followed by the mask. (But since DIBs are stored bottom-up, if you actually look at the bitmap, the mask is in the top half of the bitmap and the image is in the bottom half).

    In terms of file format, each icon image is stored in the form of a BITMAPINFO (which itself takes the form of a BITMAPINFOHEADER followed by a color table), followed by the image pixels, followed by the mask pixels. The biCompression must be BI_RGB. Since this is a double-height bitmap, the biWidth is the width of the image, but the biHeight is double the image height. For example, a 16×16 icon would specify a width of 16 but a height of 16 × 2 = 32.
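
    As a sanity check on the layout, the size arithmetic for a single 16×16 monochrome image works out like this (a sketch; DIB rows are padded to DWORD boundaries):

    int stride     = ((16 * 1 + 31) / 32) * 4;   // 1bpp, 16 pixels: 4 bytes per row
    int imageBytes = stride * 16;                // XOR bitmap          = 64
    int maskBytes  = stride * 16;                // AND mask            = 64
    int total      = sizeof(BITMAPINFOHEADER)    // header              = 40
                   + 2 * sizeof(RGBQUAD)         // 2-entry color table =  8
                   + imageBytes + maskBytes;     // = 176, the dwBytesInRes for this image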

    That's pretty much it for classic monochrome icons. Next time we'll look at color icons.

    Still, given what you know now, the following story will make sense.

    A customer contacted the shell team to report that despite all their best efforts, they could not get Windows to use the image they wanted from their .ICO file. Windows for some reason always chose a low-color icon instead of using the high-color icon. For example, even though the .ICO file had a 32bpp image available, Windows always chose to use the 16-color (4bpp) image, even when running on a 32bpp display.

    A closer inspection of the offending .ICO file revealed that the bColorCount in the IconDirectoryEntry for all the images was set to 1, regardless of the actual color depth of the image. The table of contents for the .ICO file said "Yeah, all I've got are monochrome images. I've got three 48×48 monochrome images, three 32×32 monochrome images, and three 16×16 monochrome images." Given this information, Windows figured, "Well, given those choices, I guess that means I'll use the monochrome one." It chose one of the images (at pseudo-random), and went to the bitmap data and found, "Oh, hey, how about that, it's actually a 16-color image. Okay, well, I guess I can load that."

    In summary, the .ICO file was improperly authored. Patching each IconDirectoryEntry in a hex editor made the icon work as intended. The customer thanked us for our investigation and said that they would take the issue up with their graphic design team.

  • The Old New Thing

    What does the FOF_NOCOPYSECURITYATTRIBS flag really do (or not do)?

    • 16 Comments

    In the old days, the shell copy engine didn't pay attention to ACLs. It just let the file system do whatever the default file system behavior was. The result was something like this:

    • If you copied a file, it opened the destination, wrote to it, and that was it. Result: The copied file has the security attributes of the destination (specifically, picking up the inheritable attributes from the container).
    • If you moved a file within the same drive, it moved the file with MoveFile, and that was it. Result: The file retained its security attributes.
    • If you moved a file between drives, then it was treated as a copy/delete. Result: The moved file has the security attributes of the destination (specifically, picking up the inheritable attributes from the container).

    Perfectly logical, right? If a new file is created, then the security attributes are inherited from the container. If an existing file is moved, then its security attributes move with it. And since moving a file across drives was handled as a copy/delete, moving a file across drives behaved like a copy.

    Users, however, found this behavior confusing. For example, they would take a file from a private folder like their My Documents folder, and move it to a public location like Common Documents, and... the file would still be private.

    The FOF_NO­COPY­SECURITY­ATTRIBS flag was introduced in Windows 2000 to address this confusion. If you pass this flag, then when you move a file, even within a drive, the security attributes of the moved file will match the destination directory. (The way the shell implements this flag, by the way, is to move the file like normal, and then reset the security attributes to match the destination. So even though it sounds like a flag that says "don't do X" would be less work than doing X, it's actually more work, because we actually do X+Y and then undo the X part. But it's still far cheaper than copying the file and deleting the original.)

    Note that omitting the FOF_NO­COPY­SECURITY­ATTRIBS flag does not mean "Always copy security attributes." If you don't pass the flag, then the security attributes follow the default file system behavior, which sometimes transfers the security attributes and sometimes doesn't. In retrospect, the flag might have been better-named something like FOF_SET­SECURITY­ATTRIBS­TO­MATCH­DESTINATION.
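
    In code, using the flag looks something like this (a sketch: the paths are placeholders, and remember that pFrom and pTo are double-null-terminated lists, hence the extra \0 in the literals):

    #include <windows.h>
    #include <shellapi.h>

    // Sketch: move a file and ask the shell to give it the security
    // attributes of the destination folder instead of carrying its own along.
    void MoveWithDestinationSecurity(void)
    {
        SHFILEOPSTRUCT op = { 0 };
        op.wFunc  = FO_MOVE;
        op.pFrom  = TEXT("C:\\Users\\Someone\\Documents\\report.txt\0"); // example source
        op.pTo    = TEXT("C:\\Users\\Public\\Documents\0");              // example destination
        op.fFlags = FOF_NOCOPYSECURITYATTRIBS | FOF_NOCONFIRMATION;
        SHFileOperation(&op);   // returns 0 on success
    }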

    Now the question is how to summarize all this information in the MSDN documentation for the FOF_NO­COPY­SECURITY­ATTRIBS flag? After receiving this explanation of how the flag works, one customer suggested that the text be changed to read "Do not copy the security attributes of the moved file. The destination file receives the security attributes of its new folder. Note that this flag has no effect on copied files, which will always receive the security attributes of the new folder."

    But this proposed version actually can be misinterpreted. Everything starting with "Note that" is intended to be guidance. It isn't actually part of the specification; rather, it's sort of "thinking out loud", taking the actual specification and calling out some of its consequences. But how many people reading the above proposed text would fail to realize that the first two sentences are normative but the third sentence is interpretive? In particular, the interpretation says that the copied file will "always" receive the security attributes of the new folder. Is that really true? Maybe in the future there will be a new flag like COPY_FILE_INCLUDE_SECURITY_ATTRIBUTES, and now the "always" isn't so "always" any more.

    Anyway, now that you know what the FOF_NO­COPY­SECURITY­ATTRIBS flag does (and doesn't do), maybe you can answer this customer's question:

    Download a file via Internet Explorer and put it on the desktop. The file will be marked as having come from the Internet Zone.

    Now copy the file with the FOF_NO­COPY­SECURITY­ATTRIBS flag to some other location.

    The resulting file is still marked as Internet Zone. I expected that FOF_NO­COPY­SECURITY­ATTRIBS would remove the Internet Zone security information. Is this a bug in SHFileOperation?

    (This article provides enough information for you to explain why the Internet Zone marker is not removed. The answer to the other half of the customer's question—actually removing the marker—lies in this COM method.)

  • The Old New Thing

    The memcmp function reports the result of the comparison at the point of the first difference, but it can still read past that point

    • 27 Comments

    This story originally involved a more complex data structure, but that would have required too much explaining (with relatively little benefit since the data structure was not related to the moral of the story), so I'm going to retell it with double null-terminated strings as the data structure instead.

    Consider the following code to compare two double-null-terminated strings for equality:

    size_t SizeOfDoubleNullTerminatedString(const char *s)
    {
      const char *start = s;
      for (; *s; s += strlen(s) + 1) { }
      return s - start + 1;
    }
    
    BOOL AreDoubleNullTerminatedStringsEqual(
        const char *s, const char *t)
    {
     size_t slen = SizeOfDoubleNullTerminatedString(s);
     size_t tlen = SizeOfDoubleNullTerminatedString(t);
     return slen == tlen && memcmp(s, t, slen) == 0;
    }
    

    "Aha, this code is inefficient. Since the memcmp function stops comparing as soon as it finds a difference, I can skip the call to SizeOfDoubleNullTerminatedString(t) and simply write

    BOOL AreDoubleNullTerminatedStringsEqual(
        const char *s, const char *t)
    {
     return memcmp(s, t, SizeOfDoubleNullTerminatedString(s)) == 0;
    }
    

    because we can never read past the end of t: If the strings are equal, then tlen will be equal to slen anyway, so the buffer size is correct. And if the strings are different, the difference will be found at or before the end of t, since it is not possible for a double-null-terminated string to be a prefix of another double-null-terminated string. In both cases, we never read past the end of t."

    This analysis is based on a flawed assumption, namely, that memcmp compares byte-by-byte and does not look at bytes beyond the first point of difference. The memcmp function makes no such guarantee. It is permitted to read all the bytes from both buffers before reporting the result of the comparison.

    In fact, most implementations of memcmp do read past the point of first difference. Your typical library will try to compare the two buffers in register-sized chunks rather than byte-by-byte. (This is particularly convenient on x86 thanks to the block comparison instruction rep cmpsd which compares two memory blocks in DWORD-sized chunks, and x64 doubles your fun with rep cmpsq.) Once it finds two chunks which differ, it then studies the bytes within the chunks to determine what the return value should be.

    (Indeed, people who have free time on their hands, or who simply enjoy a challenge, will try to outdo the runtime library with fancy-pants memcmp algorithms which compare the buffers in larger-than-normal chunks by doing things like comparing via SIMD registers.)

    To illustrate, consider an implementation of memcmp which uses 4-byte chunks. Typically, memory comparison functions do some preliminary work to get the buffers aligned, but let's ignore that part since it isn't interesting. The inner loop goes like this:

    while (length >= 4)
    {
     int32 schunk = *(int32*)s;
     int32 tchunk = *(int32*)t;
     if (schunk != tchunk) {
       // difference found - calculate and return result
     }
     length -= 4;
     s += 4;
     t += 4;
    }
    
    Let's compare the strings s = "a\0b\0\0" and t = "a\0\0". The size of the double-null-terminated string s is 5, so the memory comparison goes like this: First we read four bytes from s into schunk, resulting in (on a little-endian machine) 0x00620061. Next, we read four bytes from t into tchunk, resulting in 0x??000061. Oops, we read one byte past the end of the buffer.

    If t happened to sit right at the end of a page, and the next page was uncommitted memory, then you take an access violation while trying to read tchunk. Your optimization turned into a crash.

    Remember, when you say that a buffer is a particular size, the basic ground rules of programming say that it really has to be that size.

  • The Old New Thing

    How do I get the color depth of the screen?

    • 19 Comments

    How do I get the color depth of the screen? This question already makes an assumption that isn't always true, but we'll answer the question first, then discuss why the answer is wrong.

    If you have a device context for the screen, you can query the color depth with a simple arithmetic calculation:

    colorDepth = GetDeviceCaps(hdc, BITSPIXEL) *
                 GetDeviceCaps(hdc, PLANES);
    

    Now that you have the answer, I'll explain why it's wrong, but you can probably guess the reason already.

    Two words: Multiple monitors.

    If you have multiple monitors connected to your system, each one can be running at a different color depth. For example, your primary monitor might be running at 32 bits per pixel, while the secondary is stuck at 16 bits per pixel. When there was only one monitor, there was such a thing as the color depth of the screen, but when there's more than one, you first have to answer the question, "Which screen?"

    To get the color depth of each monitor, you can take your device context and ask the window manager to chop the device context into pieces, each corresponding to a different monitor.

    EnumDisplayMonitors(hdc, NULL, MonitorEnumProc, 0);
    
    // this function is called once for each "piece"
    BOOL CALLBACK MonitorEnumProc(HMONITOR hmon, HDC hdc,
                                  LPRECT prc, LPARAM lParam)
    {
       // compute the color depth of monitor "hmon"
       int colorDepth = GetDeviceCaps(hdc, BITSPIXEL) *
                        GetDeviceCaps(hdc, PLANES);
       return TRUE;
    }
    

    If you decide to forego splitting the DC into pieces and just ask for "the" color depth, you'll get the color depth information for the primary monitor.

    As a bonus (and possible optimization), there is a system metric GetSystemMetrics(SM_SAMEDISPLAYFORMAT) which has a nonzero value if all the monitors in the system have the same color format.
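
    Putting the pieces together, a sketch of using that metric as a shortcut (error handling omitted):

    // Skip the per-monitor enumeration when every display shares one format.
    HDC hdcScreen = GetDC(NULL);
    if (GetSystemMetrics(SM_SAMEDISPLAYFORMAT)) {
        int colorDepth = GetDeviceCaps(hdcScreen, BITSPIXEL) *
                         GetDeviceCaps(hdcScreen, PLANES);
        // one answer covers every monitor
    } else {
        EnumDisplayMonitors(hdcScreen, NULL, MonitorEnumProc, 0); // per-monitor answers
    }
    ReleaseDC(NULL, hdcScreen);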

  • The Old New Thing

    Why are the keyboard scan codes for digits off by one?

    • 17 Comments

    In Off by one what, exactly?, my colleague Michael Kaplan wrote

    And this decision long ago that caused the scan codes to not line up for these digits when they could have...

    The word that struck me there was "decision".

    Because it wasn't a "decision" to make the scan codes almost-but-not-quite line up with digits. It was just a coincidence.

    If you look at the scan code table from Michael's article, you can see stretches of consecutive scan codes, broken up by weird places where the consecutive pattern is violated. The weirdness makes more sense when you look at the original IBM PC XT keyboard:

    (Each entry shows the scan code followed by the key it was assigned to, laid out in keyboard rows.)

    01 Esc    02 1    03 2    04 3    05 4    06 5    07 6    08 7    09 8    0A 9    0B 0    0C -    0D =    0E Backspace
    0F Tab    10 Q    11 W    12 E    13 R    14 T    15 Y    16 U    17 I    18 O    19 P    1A [    1B ]    1C Enter
    1D Ctrl   1E A    1F S    20 D    21 F    22 G    23 H    24 J    25 K    26 L    27 ;    28 '    29 `
    2A Shift  2B \    2C Z    2D X    2E C    2F V    30 B    31 N    32 M    33 ,    34 .    35 /    36 Shift    37 *
    38 Alt    39 Space    3A Caps

    With this presentation, it becomes clearer how scan codes were assigned: They simply started at 01 and continued through the keyboard in English reading order. (Scan code 00 is an error code indicating keyboard buffer overflow.) The scan codes for the digits are off by one merely because there was one key (Esc) to the left of the digits. If there had been two keys to the left of the digits, they would have been off by two.

    Of course, if the original keyboard designers had started counting from the lower left corner, like all right-thinking mathematically-inclined people, then this sort-of-coincidence would never have happened. The scan codes for the digits would have been 2E through 37, and nobody would have thought anything of it.

    It's a testament to the human brain's desire to find patterns and determine a reason for them that what is really just a coincidence gets interpreted as some sort of conspiracy.

  • The Old New Thing

    Why does each drive have its own current directory?

    • 39 Comments

    Commenter Dean Earley asks, "Why is there a 'current directory' AND a current drive? Why not merge them?"

    Pithy answer: Originally, each drive had its own current directory, but now they don't, but it looks like they do.

    Okay, let's unwrap that sentence. You actually know enough to answer the question yourself; you just have to put the pieces together.

    Set the wayback machine to DOS 1.0. Each volume was represented by a drive letter. There were no subdirectories. This behavior was carried forward from CP/M.

    Programs from the DOS 1.0 era didn't understand subdirectories; they referred to files by just drive letter and file name, for example, B:PROGRAM.LST. Let's fire up the assembler (compilers were for rich people) and assemble a program whose source code is on the A drive, but sending the output to the B drive.

    A>asm foo       the ".asm" extension on "foo" is implied
    Assembler version blah blah blah
    Source File: FOO.ASM
    Listing file [FOO.LST]: NUL throw away the listing file
    Object file [FOO.OBJ]: B: send the object file to drive B

    Since we gave only a drive letter in response to the Object file prompt, the assembler defaults to a file name of FOO.OBJ, resulting in the object file being generated as B:FOO.OBJ.

    Okay, now let's introduce subdirectories into DOS 2.0. Suppose you want to assemble A:\SRC\FOO.ASM and put the result into B:\OBJ\FOO.OBJ. Here's how you do it:

    A> B:
    B> CD \OBJ
    B> A:
    A> CD \SRC
    A> asm foo
    Assembler version blah blah blah
    Source File: FOO.ASM
    Listing file [FOO.LST]: NUL
    Object file [FOO.OBJ]: B:
    

    The assembler reads from A:FOO.ASM and writes to B:FOO.OBJ, but since the current directory is tracked on a per-drive basis, the results are A:\SRC\FOO.ASM and B:\OBJ\FOO.OBJ as desired. If the current directory were not tracked on a per-drive basis, then there would be no way to tell the assembler to put its output into a subdirectory. As a result, DOS 1.0 programs were effectively limited to operating on files in the root directory, which means that nobody would put files in subdirectories (because their programs couldn't access them).

    From a DOS 1.0 standpoint, changing the current directory on a drive performs the logical equivalent of changing media. "Oh look, a completely different set of files!"

    Short attention span.

    Remembering the current directory for each drive has been preserved ever since, at least for batch files, although there isn't actually such a concept as a per-drive current directory in Win32. In Win32, all you have is a current directory. The appearance that each drive has its own current directory is a fake-out by cmd.exe, which uses strange environment variables to create the illusion to batch files that each drive has its own current directory.

    Dean continues, "Why not merge them? I have to set both the dir and drive if I want a specific working dir."

    The answer to the second question is, "They already are merged. It's cmd.exe that tries to pretend that they aren't." And if you want to set the directory and the drive from the command prompt or a batch file, just use the /D option to the CHDIR command:

    D:\> CD /D C:\Program Files\Windows NT
    C:\Program Files\Windows NT> _
    

    (Notice that the CHDIR command lets you omit quotation marks around paths which contain spaces: Since the command takes only one path argument, the lack of quotation marks does not introduce ambiguity.)

  • The Old New Thing

    Why does TaskDialog return immediately without showing a dialog? - Answer

    • 0 Comments

    Last time, I left an exercise to determine why the Task­Dialog function was not actually displaying anything. The problem had nothing to do with an invalid window handle parameter and had everything to do with the original window being destroyed.

    My psychic powers told me that the window's WM_DESTROY handler called Post­Quit­Message. As we learned some time ago, quit messages cause modal loops to exit. Since the code was calling Task­Dialog after the window was destroyed, there was a WM_QUIT message still sitting in the queue, and that quit message caused the modal loop in Task­Dialog to exit before it got a chance to display anything.

    Switching to Message­Box wouldn't have changed anything, since Message­Box responds to quit messages the same way as Task­Dialog.

    (Worf was the first person to post the correct answer.)

  • The Old New Thing

    Why does my asynchronous I/O request return TRUE instead of failing with ERROR_IO_PENDING?

    • 12 Comments

    A customer reported that their program was not respecting the FILE_FLAG_OVERLAPPED flag consistently:

    My program opens a file handle in FILE_FLAG_OVERLAPPED mode, binds it to an I/O completion callback function with Bind­Io­Completion­Callback, and then issues a Write­File against it. I would expect that the Write­File returns FALSE and Get­Last­Error() returns ERROR_IO_PENDING, indicating that the I/O operation is being performed asynchronously, and that the completion function will be called when the operation completes. However, I find that some percentage of the time, the call to Write­File returns TRUE, indicating that the operation was performed synchronously. What am I doing wrong? I don't want my thread to block on I/O; that's why I'm issuing asynchronous I/O.

    When you specify FILE_FLAG_OVERLAPPED, you're promising that your program knows how to handle I/O which completes asynchronously, but it does not require the I/O stack to behave asynchronously. A driver can choose to perform your I/O synchronously anyway. For example, if the write operation can be performed by writing to cache without blocking, the driver will just copy the data to the cache and indicate synchronous completion. Don't worry, be happy: Your I/O completed even faster than you expected!

    Even though the I/O completed synchronously, all the asynchronous completion notification machinery is still active. It's just that they all accomplished their job before the Write­File call returned. This means that the event handle will still be signaled, the completion routine will still run (once you wait alertably), and if the handle is bound to an I/O completion port, the I/O completion port will receive a completion notification.
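
    Here is a sketch of what that means for the calling code (hFile, buffer, and cbToWrite are assumed to exist, with hFile opened for overlapped I/O and bound with BindIoCompletionCallback):

    // Issue an overlapped write and cope with both outcomes.  The OVERLAPPED
    // must stay valid until the completion callback runs, and it runs even
    // when WriteFile returns TRUE (synchronous success).
    OVERLAPPED *pov = (OVERLAPPED *)calloc(1, sizeof(OVERLAPPED));
    if (WriteFile(hFile, buffer, cbToWrite, NULL, pov)) {
        // Completed synchronously; the callback still fires, so don't free pov here.
    } else if (GetLastError() == ERROR_IO_PENDING) {
        // Completing asynchronously; the callback fires when the I/O finishes.
    } else {
        // Genuine failure; no callback will run, so clean up now.
        free(pov);
    }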

    You can use the Set­File­Completion­Notification­Modes function to change some aspects of this behavior, giving some control of the behavior of the I/O subsystem when a potentially-asynchronous I/O request completes synchronously.
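
    For example, a one-line sketch of one such adjustment, which asks the system not to queue a completion packet when a request completes synchronously and successfully:

    // Vista and later; meaningful when the handle is bound to a completion port.
    SetFileCompletionNotificationModes(hFile, FILE_SKIP_COMPLETION_PORT_ON_SUCCESS);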

  • The Old New Thing

    The overlooked computer room at school that became my "office" for a while

    • 9 Comments

    VIMH's comment on an unmarked restroom that is largely unknown reminds me of a story from my college days.

    Since my final project involved computational geometry, I was granted a key to the rooms in our department which had the computers with fancy graphical displays. (The term "fancy graphical display" is a relative one, mind you. By today's standards they would be pretty lame.) Use of the computers in these rooms was normally reserved for faculty and graduate students. During my wanderings through the department building, I discovered that there was a small storage room in an unused corner of the basement that contained not only the boxes piled high, like you might expect, but also one graphics display terminal.

    I was pleased at my discovery and even more pleased to discover over time that nobody ever came to visit. I had stumbled across the forgotten computer room.

    After a few weeks, I moved in a small tape cassette recorder (that being the fanciest audio technology I could afford at the time) so I could listen to music while I worked. Rachmaninoff's Third Piano Concerto became the mental soundtrack to my final project. Initially, I stowed the tape recorder in the corner when I left the room, but I gradually became lazy and just left it on the table next to the computer.

    This is normally the part of the story where our hero's casual mistake leads to his downfall: A custodian discovers the tape recorder, reports it to the administrator, and our hero is kicked out of the department for misuse of school facilities.

    But that's not what happened. As far as I remember, there was only one time another person paid a visit to the overlooked computer room while I was working in there. He jiggled the door handle, found it locked, and waved apologetically. (He was probably not even somebody authorized to be in the room, because if he were, he would have had a key.)
