Holy cow, I wrote a book!
A friend of mine used to work on the development of the
USB specification and subsequent
implementation. One of the things that happened at these
meetings was that hardware companies would show off the great
USB hardware they were working on. It also gave them a chance
to try out their hardware with various USB host manufacturers
and operating systems to make sure everything worked properly.
One of the earlier demonstrations was a company that was making
USB floppy drives. The company representative talked about how
well the drives were doing and mentioned that they make two versions,
one for PCs and one for Macs.
"That's strange," the committee members thought to themselves.
"Why are there separate PC and Mac versions? The specification
is very careful to make sure that the same floppy drive works
on both systems. You shouldn't need to make two versions."
So one of the members asked the obvious question.
"Why do you have two versions? What's the difference?
If there's a flaw in our specification, let us know and we can fix it."
The company representative answered,
"Oh, the two floppy drives are completely the same electronically.
The only difference is that the Mac version comes in translucent
blue plastic and costs more."
This company was of course not the only one to try to capitalize
on the iMac-inspired translucent plastic craze. My favorite
is the iMac-styled
George Foreman Grill.
(I'm told the graphite ones cook faster.)
In an earlier comment, Larry Osterman described why Windows 3.0 was
such a runaway success. He got a little of the timeline wrong,
so I'll correct it here.
Windows 2.0 did support protected mode.
And it was Windows/386, which came out before Windows 3.0,
that first used the new virtual-x86 mode of the 80386 processor
to support pre-emptively multitasked DOS boxes.
The old Windows 2.0 program was renamed "Windows/286" to keep
the names in sync.
The three modes of Windows then became "real mode" (Windows 1.0 style),
"standard mode" (Windows/286 style) and "enhanced mode" (Windows/386 style).
Amazingly, even though the way the operating system used the processor was
radically different in each of the three modes, a program written for
"real mode" successfully ran without change in the other two modes.
You could write a single program that ran on all three operating systems.
And then Windows 3.0 came out and the world changed.
Sales were through the roof.
I remember that some major software reseller (Egghead?)
was so pleased with the success of Windows 3.0 that
it bought every Microsoft employee a Dove ice cream bar.
(Even the employees like me who were working on OS/2.)
I was sitting in my office and some people came in
with a big box of ice cream bars and they handed me one.
"This is from Egghead. Thank you for making Windows 3.0 a success," they said.
It was a strange feeling, getting a thank-you for something
you not only didn't work on, but something which totally destroyed
the project you were working on!
[Raymond is currently on vacation; this message was pre-recorded.]
Sometimes you'll see somebody brag about how many words are
in their spell-checking dictionary.
It turns out that having too many words in a spell checker's dictionary
is worse than having too few.
Suppose you had a spell checker whose
dictionary contained every word in the
Oxford English Dictionary.
Then you hand it this sentence:
Therf werre eyght bokes.
That sentence would pass with flying colors, because all of the
words in the above sentence are valid English words, though
most people would be hard-pressed to provide definitions.
The English language has so many words that if you included them all,
then common typographical errors would often match (by coincidence)
a valid English word and therefore not be detected by the spell checker.
Which would go against the whole point of a spell checker: to catch spelling errors.
So be glad that your spell checker doesn't have the largest dictionary
possible. If it did, it would end up doing a worse job.
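The tradeoff is easy to demonstrate. Here is a minimal sketch (the helper name and word lists are invented for illustration): a spell checker reduced to dictionary membership, where the typo "fro" (meant to be "for") is caught by a modest dictionary but sails through a dictionary large enough to contain the legitimate, if rare, word "fro".

```cpp
#include <cassert>
#include <set>
#include <string>

// At its core, a spell checker is just dictionary membership.
static bool spelled_ok(const std::set<std::string>& dict,
                       const std::string& word) {
    return dict.count(word) != 0;
}
```

The bigger the dictionary, the more typos happen to collide with obscure valid words and go undetected.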
After I wrote this article, I found
a nice discussion of the subject of spell check dictionary size
on the Wintertree Software web site.
Along the lines of
Windows as Rorschach test,
here's an example of someone attributing malicious behavior to random chance.
Among the logon pictures that come with Windows XP is
a martial arts kick.
I remember one bug we got that went something like this:
"Windows XP is racist.
It put a picture of a kung fu fighter
next to my name - just because my name is Chinese.
This is an insult!"
The initial user picture is chosen at random from
among the pictures in the "%ALLUSERSPROFILE%\Application
Data\Microsoft\User Account Pictures\Default Pictures" directory.
It just so happened that the random number generator
picked the martial arts kick out of the 21 available pictures.
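The selection logic amounts to nothing more than a uniform random pick. A minimal sketch (the function name and file names are hypothetical, not the actual Windows code):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>
#include <vector>

// Sketch: the initial account picture is a uniform random pick from
// the files in the Default Pictures directory. The user's name plays
// no part in the choice.
static std::string pick_default_picture(
        const std::vector<std::string>& pictures, unsigned seed) {
    std::srand(seed);
    return pictures[std::rand() % pictures.size()];
}
```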
I'm also frustrated by people who find quirks in spellcheckers
and attribute malicious intent to them.
You know what I'm talking about.
"Go to Word and type in <some name that's not in the
dictionary> and tell it to spellcheck. Word will flag the
word and recommend <some other word that is somehow opposite
to the first word in meaning> instead. This is an insult!
Microsoft intentionally taught the spellchecker to suggest
<that word> when you type <this word>.
This is clear proof of <some bad thing>."
More on spell checking tomorrow.
Luna was the code name for the Windows XP "look".
The designers did a lot of research (and got off to a lot of false
starts, as you might expect) before they came
to the design they ultimately settled upon.
During the Luna studies,
the designers found that people's reaction to Luna was often,
"Wow, this would be a great UI for X," where X was
"my dad" or "my employees" or "my daughter".
People didn't look at it as the UI for themselves;
rather, they thought it was a great UI for somebody else.
It was sometimes quite amusing to read the feedback.
One person would write,
"I can see how this UI would work great
in a business environment, but it wouldn't
work on a home computer." and the very next person would
write "I can see how this UI would work great
on a home computer, but it wouldn't
work in a business environment."
(And interestingly, even though armchair usability experts
claim that the "dumbed-down UI" is a hindrance, our studies
showed that people were actually more
productive with the so-called "dumb" UI.
Armchair usability experts also claim that the Luna look
is "too silly for serious business purposes", but in reality it
tested very well on the "looks professional" scale.)
Aero is the code name for the Longhorn "look".
With Aero, the designers have turned an important corner.
Now, when they show Aero to people, the reaction is,
"Wow, this would be a great UI for me to use."
People want Luna for others, but they want Aero for themselves.
You may have noticed that Windows doesn't use Ctrl+Alt
as a keyboard shortcut anywhere.
(Or at least it shouldn't.)
If a chorded modifier is needed, it's usually Ctrl+Shift.
That's because Ctrl+Alt has special meaning on many
keyboards. The combination Ctrl+Alt is also known as AltGr,
and it acts as an alternate shift key.
For example, consider the
German keyboard layout.
Notice that there are three keyboard shift states
(Normal, Shift, and AltGr),
whereas on U.S. keyboards there are only two
(Normal and Shift).
For example, to type the @ character on a German keyboard,
you would type AltGr+Q = Ctrl+Alt+Q.
(Some languages, like Swedish, have a fourth state: Shift+AltGr.
And then of course, there's the Japanese keyboard...)
Most international keyboards remap the right-hand Alt key
to act as AltGr, so instead of the finger-contorting Ctrl+Alt+Q,
you can usually type RAlt+Q.
(Here are diagrams of several other keyboard layouts,
courtesy of my bubble-blowing friend, Nadine Kano.)
Sometimes a program accidentally uses Ctrl+Alt as a shortcut
modifier and gets bug reports like, "Every time I
type the letter 'đ', the program thinks I want to start some command."
When you're dealing with application compatibility,
you discover all sorts of things that worked only by accident.
Today, I'll talk about some of the "creative" ways people
mess up the IUnknown interface.
Now, you'd think, "This interface is so critical to COM,
how could anybody possibly mess it up?"
Sometimes you get so excited about responding to all these
great interfaces that you forget to respond to IUnknown itself.
We have found objects where asking for IUnknown fails:
IShellFolder *psf = some object;
IUnknown *punk;
psf->QueryInterface(IID_IUnknown, (void**)&punk); // fails!
There are some methods which return an object with a specific
interface. And if you query that object for its own interface,
its sole reason for existing, it says "Huh?"
IShellFolder *psf = some object;
IEnumIDList *peidl, *peidl2;
psf->EnumObjects(..., &peidl); // returns an IEnumIDList
peidl->QueryInterface(IID_IEnumIDList, (void**)&peidl2); // fails!
There are some objects which return E_NOINTERFACE to the QueryInterface
call, even though you're asking the object for itself!
"Sorry, I don't exist," it seems they're trying to say.
When you implement a derived interface, you implicitly implement
the base interfaces, so don't forget to respond to them, too.
IShellView *psv = some object;
IOleWindow *pow;
psv->QueryInterface(IID_IOleWindow, (void**)&pow); // fails, even though IShellView derives from IOleWindow!
In principle, the following two code fragments are equivalent:
CoCreateInstance(CLSID_xyz, ..., IID_IShellFolder, (void**)&psf);
CoCreateInstance(CLSID_xyz, ..., IID_IUnknown, (void**)&punk);
In reality, some implementations mess up and fail the second call
to CoCreateInstance. The only way to create the object successfully
is to create it with the IShellFolder interface.
One of the rules for saying "no" is that you have to set the
output pointer to NULL before returning. Some people forget to do this.
If the QueryInterface succeeds, then the output pointer must be non-NULL on return.
If it fails, then the output pointer must be NULL on return.
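Putting the rules together: a correct QueryInterface answers for IUnknown, answers for its own interface, and NULLs the output pointer when it says "no". Here is a minimal sketch using stand-in types (string IIDs and integer return codes instead of the real COM headers):

```cpp
#include <cassert>
#include <cstring>

// Stand-ins for real GUIDs and HRESULTs -- illustration only.
typedef const char* IID;
static IID IID_IUnknown = "IUnknown";
static IID IID_IWidget  = "IWidget";

struct Widget {
    long refs = 1;
    long QueryInterface(IID iid, void** ppv) {
        if (std::strcmp(iid, IID_IUnknown) == 0 ||
            std::strcmp(iid, IID_IWidget) == 0) {
            *ppv = this;   // say "yes": hand back a non-NULL pointer...
            ++refs;        // ...and take a reference on it
            return 0;      // stand-in for S_OK
        }
        *ppv = nullptr;    // say "no": output pointer MUST be NULL
        return 1;          // stand-in for E_NOINTERFACE
    }
};
```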
The shell has to be compatible with all these buggy objects because
if it weren't, customers would get upset and the press would have
a field day. Some of the offenders
are big-name programs. If they broke, people would report,
"Don't upgrade to Windows XYZ, it's not compatible with <big-name program>."
Conspiracy-minded folks would shout,
"Microsoft intentionally broke <big-name program>!
Proof of unfair business tactics!"
David Cumps discovered that certain text files come up strange in Notepad.
The reason is that Notepad has to edit files in a variety of encodings,
and when its back is against the wall, sometimes it's forced to guess.
Here's the file "Hello" in various encodings:
48 65 6C 6C 6F
This is the traditional ANSI encoding.
48 00 65 00 6C 00 6C 00 6F 00
This is the Unicode (little-endian) encoding
with no BOM.
FF FE 48 00 65 00 6C 00 6C 00 6F 00
This is the Unicode (little-endian) encoding
with BOM. The BOM (FF FE) serves two purposes: First,
it tags the file as a Unicode document, and second, the order
in which the two bytes appear indicates that the file is little-endian.
00 48 00 65 00 6C 00 6C 00 6F
This is the Unicode (big-endian) encoding
with no BOM. Notepad does not support this encoding.
FE FF 00 48 00 65 00 6C 00 6C 00 6F
This is the Unicode (big-endian) encoding
with BOM. Notice that this BOM is in the opposite order
from the little-endian BOM.
EF BB BF 48 65 6C 6C 6F
This is UTF-8 encoding. The first three bytes are
the UTF-8 encoding of the BOM.
2B 2F 76 38 2D 48 65 6C 6C 6F
This is UTF-7 encoding. The first five bytes are
the UTF-7 encoding of the BOM.
Notepad doesn't support this encoding.
Notice that the UTF-7 BOM encoding is just the ASCII string "+/v8-",
which is difficult to distinguish from just a regular file that happens
to begin with those five characters (as odd as they may be).
The encodings that do not have special prefixes and which are still
supported by Notepad are the traditional ANSI encoding (i.e., "plain ASCII")
and the Unicode (little-endian) encoding with no BOM.
When faced with a file that lacks a special prefix, Notepad is forced
to guess which of those two encodings the file actually uses.
The function that does this work is
IsTextUnicode, which studies a chunk of bytes and
does some statistical analysis to come up with a guess.
And as the documentation notes,
"Absolute certainty is not guaranteed."
Short strings are most likely to be misdetected.
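The prefix checks described above can be sketched as follows (a simplified illustration; the statistical no-BOM guessing that IsTextUnicode performs is not reproduced here):

```cpp
#include <cassert>
#include <vector>

enum Encoding { Utf8, Utf16LE, Utf16BE, NoBom };

// Check the first bytes of a file for the BOM signatures listed above.
static Encoding sniff(const std::vector<unsigned char>& b) {
    if (b.size() >= 3 && b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF)
        return Utf8;      // UTF-8 encoding of the BOM
    if (b.size() >= 2 && b[0] == 0xFF && b[1] == 0xFE)
        return Utf16LE;   // little-endian BOM
    if (b.size() >= 2 && b[0] == 0xFE && b[1] == 0xFF)
        return Utf16BE;   // big-endian BOM
    return NoBom;         // statistical guessing must now decide between
                          // ANSI and BOM-less little-endian Unicode
}
```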
This is another leftover from 16-bit Windows.
Back in the Win16 days, string resources were also grouped
into bundles of 16, but the strings were in ANSI, not Unicode,
and the prefix was only an 8-bit value.
And 255 is the largest length you can encode in an 8-bit value.
If your 32-bit DLL contains strings longer than 255 characters,
then 16-bit programs would be unable to read those strings.
It appears to be gone now. Good riddance.
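The 8-bit length prefix can be sketched as follows (an illustration of the format, not actual resource-compiler code):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of a Win16-style counted string: one 8-bit length byte,
// then the characters. 255 is the largest length the prefix can hold,
// so longer strings simply cannot be represented.
static std::vector<unsigned char> encode_counted(const std::string& s) {
    std::vector<unsigned char> out;
    if (s.size() > 255) return out;  // too long: cannot be encoded
    out.push_back(static_cast<unsigned char>(s.size()));
    out.insert(out.end(), s.begin(), s.end());
    return out;
}
```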
If you go to the various internet protocol documents, such as
RFC 0821 (SMTP),
RFC 1939 (POP),
RFC 2060 (IMAP),
or RFC 2616 (HTTP),
you'll see that they all specify CR+LF as the line termination sequence.
So the real question is not "Why do CP/M, MS-DOS, and Win32 use CR+LF
as the line terminator?" but rather "Why did other people
choose to differ from these standards documents and use some other
line terminator?"
Unix adopted plain LF as the line termination sequence.
If you look at
the stty options,
you'll see that the onlcr option specifies whether
a LF should be changed into CR+LF.
If you get this setting wrong, you get stairstep text, where each new line begins where the previous one left off.
The unix ancestry of the C language carried this convention
into the C language standard, which requires only "\n" (which
encodes LF) to
terminate lines, putting the burden on the runtime libraries
to convert raw file data into logical lines.
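That runtime translation can be sketched as follows (a simplified illustration of text-mode output, not the actual C runtime code):

```cpp
#include <cassert>
#include <string>

// Sketch: on text-mode output, each logical "\n" in the program's data
// becomes the on-disk CR+LF pair; the program never sees the CR.
static std::string lf_to_crlf(const std::string& logical) {
    std::string raw;
    for (char c : logical) {
        if (c == '\n') raw += '\r';  // prepend CR to every LF
        raw += c;
    }
    return raw;
}
```

The inverse translation (strip the CR from each CR+LF on input) is what lets a C program treat lines as ending in a single "\n" regardless of the on-disk convention.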
The C language also introduced the term "newline" to express
the concept of "generic line terminator". I'm told that the ASCII
committee changed the name of character 0x0A to "newline" around 1996,
so the confusion level has been raised even higher.
Here's another discussion of the subject,
from a unix perspective.