The x86 architecture
does things that almost no other modern architecture does,
but due to its overwhelming popularity, people think that
the x86 way is the normal way and that everybody else is weird.
Let's get one thing straight:
The x86 architecture is the weirdo.
The x86 has a small number (8) of general-purpose registers; the other modern
processors have far more.
(PPC, MIPS, and Alpha each have 32; ia64 has 128.)
The x86 uses the stack to pass function parameters;
the others use registers.
The x86 forgives access to unaligned data, silently fixing up the misalignment.
The others raise a misalignment exception, which can optionally
be emulated by the supervisor at an amazingly huge performance penalty.
The x86 has variable-sized instructions.
The others use fixed-sized instructions.
(PPC, MIPS, and Alpha each have fixed-sized 32-bit instructions;
ia64 has fixed-sized 41-bit instructions. Yes, 41-bit instructions.)
The x86 has a strict memory model, where external memory accesses
match the order in which they are issued by the code.
The others have weak memory models, requiring explicit memory
barriers to ensure that issues to the bus are made (and completed)
in a specific order.
The x86 supports atomic load-modify-store operations.
None of the others do.
The x86 passes function return addresses on the stack.
The others use a link register.
Bear this in mind when you write what you think is portable code.
Like many things, the culture you grow up with is the one that
feels "normal" to you, even if, in the grand scheme of things,
it is one of the more bizarre ones out there.
It depends which version of Windows you're asking about.
For Windows 95, Windows 98, and Windows Me,
the answer is simple: Not at all.
These are not multiprocessor operating systems.
For Windows NT and Windows 2000, the answer is
"It doesn't even know."
These operating systems are not hyperthreading-aware
because they were written before hyperthreading was invented.
If you enable hyperthreading, then each of your CPUs looks
like two separate CPUs to these operating systems.
(And they will get charged as two separate CPUs for licensing purposes.)
Since the scheduler doesn't realize the connection between
the virtual CPUs, it can end up doing a worse job than
if you had never enabled hyperthreading to begin with.
Consider a dual-hyperthreaded-processor machine.
There are two physical processors A and B, each with
two virtual hyperthreaded processors, call them A1, A2,
B1, and B2.
Suppose you have two CPU-intensive tasks.
As far as the Windows NT
and Windows 2000 schedulers are concerned, all four
processors are equivalent, so they figure it doesn't matter which two
they use. And if you're unlucky, the scheduler will pick
A1 and A2, forcing one physical processor to shoulder two
heavy loads (each of which will probably run at something
between half speed and three-quarter speed)
while physical processor B sits idle,
completely unaware that it could have done a better job
by putting one task on A1 and the other on B1.
Windows XP and Windows Server 2003 are hyperthreading-aware.
When faced with the above scenario, those schedulers will know
that it is better to put one task on one of the A's and the other
on one of the B's.
Note that even with a hyperthreading-aware operating system,
you can concoct pathological scenarios where hyperthreading ends
up a net loss. (For example, if you have four tasks, two of which
rely heavily on L2 cache and two of which don't, you'd be better
off putting each of the L2-intensive tasks on separate processors,
since the L2 cache is shared by the two virtual processors.
Putting them both on the same processor would result in a lot of L2-cache
misses as the two tasks fight over L2 cache slots.)
When you go to the expensive end of the scale (the Datacenter Servers,
the Enterprise Servers), things get tricky again.
I refer still-interested parties to the
Windows Support for Hyper-Threading Technology white paper.
Update 06/2007: The white paper
appears to have moved.
I didn't debug it personally, but I know the people who did.
During Windows XP development, a bug report arrived for
a computer game that crashed only after you got to one of the higher levels.
After many saved and restored games, the problem was finally identified.
The program does its video work in an offscreen buffer and transfers
it to the screen when it's done. When it draws text with a shadow,
it first draws the text in black, offset down one and right one pixel,
then draws it again in the foreground color.
So far so good.
Except that it didn't check whether moving down and right one pixel
was going to go beyond the end of the screen buffer.
That's why it took until one of the higher levels before the bug
manifested itself. Not until then did you accomplish a mission
whose name contained a lowercase letter with a descender!
Shifting the descender down one pixel caused the bottom row of
pixels in the character to extend past the video buffer and
start corrupting memory.
Once the problem was identified, fixing it was comparatively easy.
The application compatibility team
has a bag of tricks, and one of them is a compatibility fix that
adds padding to every heap
allocation so that when a program overruns a heap buffer, all
that gets corrupted is the padding.
Enable that fix for the bad program
(specifying the amount of padding necessary,
in this case, one row's worth of pixels), and run through the
game again. No crash this time.
What made this interesting to me was that you had to play the
game for hours before the bug finally surfaced.
Scotland doesn't have the corner on monsters in lakes. You'll also find them in Norway, in Sweden (read about a recent expedition), and in Canada, among many, many others. Anywhere there are lakes, there's bound to be a legend about a monster in one of them.
It appears, however, that Sweden's Storsjöodjur is about to lose its protected species status, owing to an inquiry inspired by a man's request to harvest the creature's eggs so he can hatch them.
As a result, it will soon be open season on Storsjöodjuret. Happy hunting.
(I find the Swedish word odjur somewhat poetic. It translates as "monster" but literally means "un-animal".)
A commenter asked why the original window order is not always preserved
when you undo a Show Desktop.
The answer is "Because the alternative is worse."
Guaranteeing that the window order is restored can result in
Explorer hanging along with a hung window.
When you undo a Show Desktop,
Explorer goes through and asks each window that it had minimized
to restore itself. If each window is quick to respond, then the
windows are restored and the order is preserved.
However, if there is a window that is slow to respond (or
even hung), then it
loses its chance and Explorer moves on to the next window in the list.
That way, a hung window doesn't cause Explorer to hang, too.
But it does mean that the windows restore out of order.
On x86 machines, Windows chooses a page size of 4K because that was the
only page size supported by that architecture at the time the operating
system was designed. (4MB pages were added to the CPU later,
in the Pentium as I recall, but clearly that is too large for everyday use.)
For the ia64, Windows chose a page size of 8K. Why 8K?
It's a balance between two competing objectives.
Large page sizes allow more efficient I/O since you are reading
twice as much data at one go. However large page sizes also
increase the likelihood that the extra I/O you perform is wasted
because of poor locality.
Experiments were run on the ia64 with various page sizes
(even with 64K pages, which were seriously considered at one point),
and 8K provided the best balance.
Note that changing the page size creates all sorts of problems
for compatibility. There are large numbers of programs out there that
blindly assume that the page size is 4K.
Boy are they in for a surprise.
For some reason, this question gets asked a lot. How do I convert a byte to a System.String? (Yes, this is a CLR question. Sorry.)
You can use System.Text.UnicodeEncoding.GetString(), which takes a byte array and produces a String.
Note that this is not the same as just blindly copying the bytes from the byte array into a hunk of memory and calling it a string. The GetString() method must validate the bytes and forbid invalid surrogates, for example.
You might be tempted to create a string and just mash the bytes into it, but that violates string immutability and can lead to subtle problems.
The Annals of Improbable Research highlighted a few days ago the pioneering work of researcher Eugenie C. Scott on The Morphology of Steve.
The value of these results to the growing field of Steve Theory cannot be overstated.
A fact perhaps not as well known today as it was in the days when the arrow keys and the numeric keypad shared space: the shift key overrides NumLock.
If NumLock is on (as it usually is), then pressing a key on the numeric keypad while holding the shift key overrides NumLock and instead generates the arrow key (or other navigation key) printed in small print under the big digits.
(The shift key also overrides CapsLock. If you turn on CapsLock then hold the shift key while typing a letter, that letter comes out in lowercase.)
You might decide that this little shift key quirk is completely insignificant, at least until you try to do something like assign Shift+Numpad0 as a hotkey and wonder why it doesn't work. Now you know.
Apparently there are a lot of strange dictionaries out there.
Otherwise-well-respected German dictionary publisher Langenscheidt announced that it is producing a German-Woman/Woman-German dictionary. (Psst, Toronto Star, it's "Also sprachen die Fräulein"... Third person plural, past tense of strong verb, ending is "en". You're welcome.)
We also have The Hippie Dictionary, which translates such words and phrases as "stay loose", "hey man", and "like".