I didn't debug it personally, but I know the people who did.
During Windows XP development, a bug report arrived concerning
a computer game that crashed only after you got to one of the higher levels.
After many saved and restored games, the problem was finally identified.
The program did its video work in an offscreen buffer and transferred
it to the screen when it was done. When it drew text with a shadow,
it first drew the text in black, offset down one and right one pixel,
then drew it again in the foreground color.
So far so good.
Except that it never checked whether moving down and right one pixel
would go beyond the end of the screen buffer.
That's why it took until one of the higher levels before the bug
manifested itself. Not until then did you accomplish a mission
whose name contained a lowercase letter with a descender!
Shifting the descender down one pixel caused the bottom row of
pixels in the character to extend past the video buffer and
start corrupting memory.
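To make the mechanics concrete, here is a minimal sketch of the pattern;
the names, buffer dimensions, and glyph format are invented for
illustration, and this is not the game's actual code.

    #include <stdlib.h>

    #define SCREEN_WIDTH  640
    #define SCREEN_HEIGHT 480

    /* Draw one glyph into an offscreen buffer of SCREEN_WIDTH *
       SCREEN_HEIGHT bytes, one byte per pixel, with no bottom-edge check. */
    static void draw_glyph(unsigned char *buffer, int x, int y,
                           const unsigned char *glyph, int w, int h,
                           unsigned char color)
    {
        for (int row = 0; row < h; row++) {
            for (int col = 0; col < w; col++) {
                if (glyph[row * w + col]) {
                    /* BUG: nothing verifies that y + row < SCREEN_HEIGHT. */
                    buffer[(y + row) * SCREEN_WIDTH + (x + col)] = color;
                }
            }
        }
    }

    static void draw_shadowed_text(unsigned char *buffer, int x, int y,
                                   const unsigned char *glyph, int w, int h,
                                   unsigned char fg)
    {
        draw_glyph(buffer, x + 1, y + 1, glyph, w, h, 0x00); /* shadow pass */
        draw_glyph(buffer, x,     y,     glyph, w, h, fg);   /* text pass   */
    }

    int main(void)
    {
        unsigned char *screen = malloc(SCREEN_WIDTH * SCREEN_HEIGHT);
        /* One lit pixel on the glyph's bottom row plays the descender. */
        static const unsigned char glyph[8 * 8] = { [7 * 8] = 1 };
        if (!screen) return 1;
        /* Glyph flush against the bottom of the screen: the shadow pass
           writes one row past the end of the allocation. */
        draw_shadowed_text(screen, 0, SCREEN_HEIGHT - 8, glyph, 8, 8, 0xFF);
        free(screen);
        return 0;
    }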
Once the problem was identified, fixing it was comparatively easy.
The application compatibility team
has a bag of tricks, and one of them addresses exactly this situation.
This particular compatibility fix adds padding to every heap
allocation so that when a program overruns a heap buffer, all
that gets corrupted is the padding.
Enable that fix for the bad program
(specifying the amount of padding necessary,
in this case, one row's worth of pixels), and run through the
game again. No crash this time.
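The effect of the trick is easy to sketch. What follows is a
hypothetical illustration of the idea, not the actual compatibility fix:

    #include <stdlib.h>

    #define HEAP_PAD 640  /* one row of 640 one-byte pixels, per the story */

    void *padded_malloc(size_t size)
    {
        /* A small overrun now lands in the pad instead of in whatever
           the heap placed next, so nothing that matters is corrupted. */
        return malloc(size + HEAP_PAD);
    }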
What made this interesting to me was that you had to play the
game for hours before the bug finally surfaced.
Scotland doesn't have the corner on monsters in lakes. You'll also find them in Norway, in Sweden, and in Canada, among many, many others. Anywhere there are lakes, there's bound to be a legend about a monster in one of them.
It appears, however, that Sweden's Storsjöodjur is about to lose its protected species status, owing to an inquiry inspired by a man's request to harvest the creature's eggs so he can hatch them.
As a result, it will soon be open season on Storsjöodjuret. Happy hunting.
(I find the Swedish word odjur somewhat poetic. It translates as "monster" but literally means "un-animal".)
A commenter asked why the original window order is not always preserved
when you undo a Show Desktop.
The answer is "Because the alternative is worse."
Guaranteeing that the window order is restored can result in Explorer hanging.
When you undo a Show Desktop,
Explorer goes through and asks each window that it had minimized
to restore itself. If each window is quick to respond, then the
windows are restored and the order is preserved.
However, if there is a window that is slow to respond (or
even hung), then it
loses its chance and Explorer moves on to the next window in the list.
That way, a hung window doesn't cause Explorer to hang, too.
But it does mean that the windows restore out of order.
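This is not Explorer's actual code, but the pattern is standard Win32:
ask each window to restore itself with a timeout, and abort if the
window is hung. In this hedged sketch, the function name and the
three-second timeout are invented:

    #include <windows.h>

    /* Ask a previously-minimized window to restore itself, but give up
       if it is hung so one bad window can't hang the whole operation. */
    void RestoreWithTimeout(HWND hwnd)
    {
        DWORD_PTR result;
        if (!SendMessageTimeout(hwnd, WM_SYSCOMMAND, SC_RESTORE, 0,
                                SMTO_ABORTIFHUNG, 3000, &result)) {
            /* Slow or hung: skip this window and move on to the next.
               It loses its place in the restore order. */
        }
    }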
On x86 machines, Windows chooses a page size of 4K because that was the
only page size supported by that architecture at the time the operating
system was designed. (4MB pages were added to the CPU later,
in the Pentium as I recall, but clearly that is too large for everyday use.)
For the ia64, Windows chose a page size of 8K. Why 8K?
It's a balance between two competing objectives.
A larger page size allows more efficient I/O, since an 8K page
brings in twice as much data in one go as a 4K page. However, a
larger page size also increases the likelihood that the extra I/O
you perform is wasted because of poor locality.
Experiments were run on the ia64 with various page sizes
(even with 64K pages, which were seriously considered at one point),
and 8K provided the best balance.
Note that changing the page size creates all sorts of problems
for compatibility. There are large numbers of programs out there that
blindly assume that the page size is 4K.
Boy are they in for a surprise.
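A program that needs the page size can simply ask for it at run time
instead of hard-coding 4K. A minimal sketch using the documented
GetSystemInfo function:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ask the system for the page size instead of assuming 4K;
           the answer differs by architecture (8K on ia64, for example). */
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        printf("Page size: %lu bytes\n", (unsigned long)si.dwPageSize);
        return 0;
    }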
For some reason, this question gets asked a lot. How do I convert a byte to a System.String? (Yes, this is a CLR question. Sorry.)
You can use System.Text.UnicodeEncoding.GetString(), which takes a byte array and produces a string; for example, System.Text.Encoding.Unicode.GetString(bytes) decodes an array of UTF-16 bytes.
Note that this is not the same as just blindly copying the bytes from the byte array into a hunk of memory and calling it a string. The GetString() method must validate the bytes and forbid invalid surrogates, for example.
You might be tempted to create a string and just mash the bytes into it, but that violates string immutability and can lead to subtle problems.
The Annals of Improbable Research highlighted a few days ago the pioneering work of researcher Eugenie C. Scott on The Morphology of Steve.
The value of these results to the growing field of Steve Theory cannot be overstated.
A fact perhaps not as well-known today as it was in the days when the arrow keys and the numeric keypad shared space: the shift key overrides NumLock.
If NumLock is on (as it usually is), then pressing a key on the numeric keypad while holding the shift key overrides NumLock and instead generates the arrow key (or other navigation key) printed in small print under the big digits.
(The shift key also overrides CapsLock. If you turn on CapsLock then hold the shift key while typing a letter, that letter comes out in lowercase.)
You might decide that this little shift key quirk is completely insignificant, at least until you try to assign Shift+Numpad0 as a hotkey and wonder why it doesn't work. Now you know.
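If you want to see this for yourself, here is a small test program,
invented for illustration (RegisterHotKey and VK_NUMPAD0 are the real
API names):

    #include <windows.h>

    int main(void)
    {
        /* Registration succeeds, but with NumLock on, Shift+Numpad0
           reaches the system as Insert (the shift key overrides
           NumLock), so WM_HOTKEY never arrives for this combination. */
        if (RegisterHotKey(NULL, 1, MOD_SHIFT, VK_NUMPAD0)) {
            MSG msg;
            while (GetMessage(&msg, NULL, 0, 0) > 0) {
                if (msg.message == WM_HOTKEY) {
                    MessageBeep(MB_OK); /* you're unlikely ever to hear this */
                }
            }
            UnregisterHotKey(NULL, 1);
        }
        return 0;
    }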
Apparently there are a lot of strange dictionaries out there.
Otherwise-well-respected German dictionary publisher Langenscheidt announced that it is producing a German-Woman/Woman-German dictionary. (Psst, Toronto Star, it's "Also sprachen die Fräulein"... Third person plural, past tense of strong verb, ending is "en". You're welcome.)
We also have The Hippie Dictionary, which translates such words and phrases as "stay loose", "hey man", and "like".
Einstein discovered that simultaneity is relative.
This is also true of computing.
People will ask, "Is it okay to do X on one thread and Y on
another thread simultaneously?" Here are some examples:
Is it okay to close a handle on one thread while another thread is using it?
Is it okay to unregister the same wait event from two threads at once?
You can answer these questions knowing nothing about the internal
behavior of those operations. All you need to know are some physics
and the answers to much simpler questions about what is
valid sequential code.
Let's do a thought experiment with simultaneity.
Since simultaneity is relative, any code that does X and Y
simultaneously can be observed to have performed X before Y
or Y before X, depending on your frame of reference.
That's how the universe works.
So if it were okay to do them simultaneously, then it must
also be okay to do them one after the other, since they
do occur one after the other if you walk
past the computer in the correct direction.
Now take the sequential versions of our examples.
Is it okay to use a handle after closing it?
Is it okay to unregister a wait event twice?
The answer to both questions is "No," and therefore
it isn't okay to do them simultaneously either.
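Here is the simultaneous version in code, deliberately wrong in exactly
the way described; the event and the thread structure are invented for
illustration. Because the two calls can be observed in either order,
this program is as broken as closing the handle first and waiting on it
second:

    #include <windows.h>

    static HANDLE g_event;

    static DWORD WINAPI UserThread(LPVOID unused)
    {
        WaitForSingleObject(g_event, 1000); /* may run after the close */
        return 0;
    }

    static DWORD WINAPI CloserThread(LPVOID unused)
    {
        CloseHandle(g_event); /* may run while the wait is in progress */
        return 0;
    }

    int main(void)
    {
        HANDLE threads[2];
        g_event = CreateEvent(NULL, TRUE, FALSE, NULL);
        threads[0] = CreateThread(NULL, 0, UserThread, NULL, 0, NULL);
        threads[1] = CreateThread(NULL, 0, CloserThread, NULL, 0, NULL);
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);
        CloseHandle(threads[0]);
        CloseHandle(threads[1]);
        return 0;
    }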
If you don't like using physics to solve this problem, you can
also do it from a purely technical perspective.
Invoking a function is not an atomic operation. You prepare
the parameters, you call the entry point, the function does some
work, it returns. Even if you somehow manage to get both threads
to reach the function entry point simultaneously (even though as
we know from physics there is no such thing as true simultaneity),
there's always the possibility that one thread will get pre-empted
immediately after the "call" instruction has transferred control
to the first instruction of the target function,
while the other thread continues to completion.
After the second thread runs to completion, the pre-empted thread
gets scheduled and begins execution of the function body.
In this situation, you effectively
called the two functions one after the
other, despite all your efforts to call them simultaneously.
Since you can't prevent this scenario from occurring,
you have to code with the possibility that it might actually happen.
Hopefully this second explanation will satisfy the people who don't believe
in the power of physics.
Personally, I prefer using physics.
Even though Windows NT uses UTC internally,
the BIOS clock stays on local time.
Why is that?
There are a few reasons.
One is a chain of backwards compatibility.
In the early days, people often dual-booted between
Windows NT and MS-DOS/Windows 3.1.
MS-DOS and Windows 3.1 operate on local time,
so Windows NT followed suit so that you wouldn't
have to keep changing your clock each time you changed
operating systems.
As people upgraded from Windows NT to
Windows 2000 to Windows XP, this choice
of time zone had to be preserved so that people
could dual-boot between their previous operating
system and the new operating system.
Another reason for keeping the BIOS clock on local time
is to avoid confusing people who set their time via the BIOS.
If you hit the magic key during the power-on self-test,
the BIOS will go into its configuration mode, and one of
the things you can configure here is the time.
Imagine how confusing it would be if you set the time to 3pm,
and then when you started Windows, the clock read 11am.
"Stupid computer. Why did it even ask me to change the time
if it's going to screw it up and make me change it a second time?"
And if you explain to them, "No, you see, that time was UTC,
not local time," the response is likely to be
"What kind of totally propeller-headed nonsense is that?
You're telling me that when the computer asks me what time it is,
I have to tell it what time it is in London?
(Except during the summer in the northern hemisphere,
when I have to tell it what time it is in Reykjavik!)
Why do I have to remember my time zone and manually subtract
four hours? Or is it five during the summer? Or maybe I have to
add. Why do I even have to think about this?
Stupid Microsoft. My watch says three o'clock. I type three o'clock.
End of story."
(What's more, some BIOSes have alarm clocks built in,
where you can program them to have the computer turn itself on at a particular
time. Do you want to have to convert all those times to UTC
each time you want to set a wake-up call?)
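For programmers, none of this matters much in practice, because Win32
exposes both clocks regardless of what the BIOS stores. A minimal
sketch:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* GetSystemTime reports UTC (what Windows NT uses internally);
           GetLocalTime applies the current time zone to it. */
        SYSTEMTIME utc, local;
        GetSystemTime(&utc);
        GetLocalTime(&local);
        printf("UTC:   %02u:%02u\n",
               (unsigned)utc.wHour, (unsigned)utc.wMinute);
        printf("Local: %02u:%02u\n",
               (unsigned)local.wHour, (unsigned)local.wMinute);
        return 0;
    }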