Holy cow, I wrote a book!
When the DllMain function
receives a reason code of DLL_PROCESS_DETACH,
the lpReserved parameter
is used to indicate whether the process is exiting.
And if the process is exiting, then you should just return without
doing anything.
Don't worry about freeing memory; it will all go away when the
process address space is destroyed.
Don't worry about closing handles; handles are closed automatically
when the process handle table is destroyed.
Don't try to call into other DLLs, because those other DLLs may
already have received their DLL_PROCESS_DETACH notification,
in which case they may behave erratically in the same way that a C++
object behaves erratically if you try to use it after its destructor has run.
The building is being demolished.
Don't bother sweeping the floor and emptying the trash cans and
erasing the whiteboards.
And don't line up at the exit to the building so everybody can
move their in/out magnet to out.
All you're doing is making the demolition team wait for you to
finish these pointless housecleaning tasks.
Okay, if you have internal file buffers, you can write them out
to the file handle.
That's like remembering to take the last pieces of mail from the mailroom
out to the mailbox.
But don't bother closing the handle or freeing the buffer,
in the same way you shouldn't bother updating the "mail last picked up on"
sign or resetting the flags on all the mailboxes.
And ideally, you would have flushed those buffers as part of your
normal wind-down before calling ExitProcess,
in the same way mailing those last few letters should have been taken
care of before you called in the demolition team.
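Put in code, the advice above looks roughly like this. This is a minimal sketch, not a complete DllMain: the helper name is mine, and the Win32-specific part is guarded so the fragment compiles anywhere.

```c
#include <stddef.h>

/* Nonzero means: skip all cleanup, the process is going away anyway.
   At DLL_PROCESS_DETACH, a non-NULL lpReserved indicates that the
   whole process is exiting (rather than a FreeLibrary unload). */
int process_is_exiting(void *lpReserved)
{
    return lpReserved != NULL;
}

#ifdef _WIN32
#include <windows.h>

BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID lpReserved)
{
    if (reason == DLL_PROCESS_DETACH) {
        if (process_is_exiting(lpReserved)) {
            /* Flush internal file buffers if you have them, but don't
               free memory, close handles, or call into other DLLs:
               the address space and handle table are about to be
               destroyed anyway. */
            return TRUE;
        }
        /* lpReserved == NULL: FreeLibrary in a live process,
           so clean up for real here. */
    }
    return TRUE;
}
#endif
```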
I regularly use a program that doesn't follow this rule.
The program allocates a lot of memory during the course of its life,
and when I exit the program, it just sits there for several minutes,
sometimes spinning at 100% CPU, sometimes churning the hard drive.
When I break in with the debugger to see what's going on,
I discover that the program isn't doing anything productive.
It's just methodically freeing every last byte of memory it had
allocated during its lifetime.
If my computer isn't under a lot of memory pressure, then most
of the memory the program allocated during its lifetime hasn't
yet been paged out, so
freeing every last drop of memory is a CPU-bound operation.
On the other hand, if I had kicked off a build or done something else
memory-intensive, then most of the memory the program had allocated
during its lifetime has been paged out,
which means that the program has to page all that memory back in from the
hard drive, just so it can call free on it.
Sounds kind of spiteful, actually.
"Come here so I can tell you to go away."
All this anal-retentive memory management is pointless.
The process is exiting.
All that memory will be freed when the address space is destroyed.
Stop wasting time and just exit already.
I have an ongoing conflict with my in-laws.
Their concept of the correct amount of food to have in the
refrigerator is "more than will comfortably fit."
Whenever they come to visit (which is quite often),
they make sure to bring enough food so that my refrigerator
bursts at the seams,
with vegetables and eggs and other foodstuffs crammed into every
available nook and cranny.
If I'm lucky,
the amount of food manages to get down to
"only slightly overfull" before their next visit.
And the problem isn't restricted to the refrigerator.
I once cleared out some space in the garage,
only to find that
they decided to use that space to store more food.
(Who knows, maybe one day I will return from an errand to find
that my parking space has been filled with still more food
while I was gone.)
Occasionally, a customer will ask for a way to design their
program so it continues consuming RAM until there is only x% free.
The idea is that their program should use RAM aggressively,
while still leaving enough RAM available (x%) for other use.
Unless you are designing a system where you are the only program
running on the computer, this is a bad idea.
Consider what happens if two programs try to be "good programs"
and leave x% of RAM available for other purposes.
Let's call the programs
Program 10 (which wants to keep 10% of the RAM free) and
Program 20 (which wants to keep 20% of the RAM free).
For simplicity, let's suppose that they are the only two programs
on the system.
Initially, the computer is not under memory pressure, so both programs
can allocate all the memory they want without any hassle.
But as time passes, the amount of free memory slowly decreases.
And then we hit a critical point: The amount of free memory drops
below 20%.
At this point, Program 20 backs off in order to restore
the amount of free memory back to 20%.
Now, each time Program 10 and Program 20 think about
allocating more memory,
Program 20 will say "Nope, I can't do that because it would
send the amount of free memory below 20%."
On the other hand, Program 10 will happily allocate some more
memory since it sees that there's a whole 10% it can allocate
before it needs to stop.
And as soon as Program 10 allocates that memory,
Program 20 will free some memory to bring the amount of
free memory back up to 20%.
I think you see where this is going.
Each time Program 10 allocates a little more memory,
Program 20 frees the same amount of memory in order to get
the total free memory back up to 20%.
Eventually, we reach a situation like this:
Program 20 is now curled up in the corner of the computer
in a fetal position.
Program 10 meanwhile continues allocating memory,
and Program 20, having shrunk as much as it can,
is forced to just sit there and whimper.
Finally, Program 10 stops allocating memory since it has
reached its own personal limit of not allocating the last 10%
of the computer's RAM.
But it's too little too late.
Program 20 has already been forced into the corner,
thrashing its brains out trying to survive on only 5% of
the computer's memory.
It's sort of like when people from two different cultures
with different concepts of personal space
have a face-to-face conversation.
The person from the not-so-close culture will try to back away
in order to preserve the necessary distance, while the person from the
closer-is-better culture will move forward in order to close the gap.
Eventually, the person from the not-so-close culture will
end up with his back against the wall anxiously looking
for an escape route.
While I was at a group dinner at a Chinese restaurant,
a whole fish was brought to our table.
One of the other people at the table told a story
of another time a whole fish was brought to the table.
He attended the wedding rehearsal dinner of a family member.
The bride is Chinese, but the groom is not.
(Or maybe it was the other way around.
Doesn't matter to the story.)
The dinner was banquet-style at a Chinese restaurant,
and one of the many courses was a whole fish.
Two of the non-Chinese attendees marveled at the presence
of an entire fish right there in front of them,
head, tail, fins, and all.
I guess they had up until then
only been served fish that had already been filleted,
or at least had the head cut off.
One of them nudged my acquaintance and said,
"We'll give you $500 if you eat the eyeball."
These guys inadvertently created their own sucker bet.
For you see,
eating the eyeball is common in many parts of Asia.
Indeed, whenever their family has fish,
my nieces fight over who gets the honor of
eating the eyeballs!
I don't know whether my acquaintance cheerfully accepted the bet
or whether he explained that their bet was a poor choice to
offer a Chinese person.
What food-related sucker bets exist in your culture?
(I'm not talking about foods like chicken feet or tongue,
which are clearly prepared and served to be eaten.
I'm talking about things that an uninitiated person might
consider to be a garnish or an inedible by-product,
like shrimp heads.)
I remind you that the question is not asking for foods
which are served as dishes on their own.
As I noted when I told the story of
the computer programmer who dabbled in making change,
my colleague had a lot of money-related quirks.
For some reason my colleague felt the $2 bill deserved more attention.
Every so often, he would go to the bank and buy $100 in $2 bills,
then reintroduce the bills into circulation and enjoy people's
reactions to them.
(Most cashiers looked at it and recognized that it was legal tender,
but couldn't find a good place to put it in the till.
It usually got tossed under the drawer with all the checks.)
It was a regular occurrence that
the bank didn't have that many $2 bills on hand, but they
managed to find them and let him know when he could come pick them up.
One time, the bank called him back.
"Hi, we asked all our branches in the entire county, but all together
we can't find enough $2 bills.
If you want, we can place an order with the Federal Reserve.
The catch is, though, that the minimum order is $2000."
"Sure, go ahead and place the order."
Some time later, he went in to pick up his huge stack of $2 bills.
My colleague now found himself in a situation where something fun
turned into an ordeal,
like a smoker who is forced to smoke an entire pack of cigarettes at
one sitting.
Or in this case, more like 1000 cigarettes.
At the end of group meals at a restaurant,
after everybody had calculated their share
and put their money in the bill holder
(this being the days when people
actually paid cash for things),
he would raid the bill holder for change,
taking out all the notes greater than $2 and replacing them with the
appropriate number of $2 bills.
As a result, when the servers came to collect the bill holders,
they found them stuffed with $1 and $2 bills (mostly $2).
Too bad he didn't
make a pad out of them.
As is common in many industries,
Microsoft customer service records employ abbreviations for
many commonly-used words.
In the travel industry, for example, pax is used as
an abbreviation for passenger.
The term appears to have spread to the hotel industry,
even though people who stay at a hotel aren't technically passengers.
(Well, unless you think that with the outrageous
prices charged by the hotels, the people are being
taken for a ride.)
For a time, the standard abbreviation for customer
in Microsoft's customer service records was cu.
This changed, however, when it was pointed out to the people
in charge of such things that cu is a swear word in Portuguese.
The standard abbreviation was therefore changed to cx.
If you're reading through old customer records and you know
Portuguese and you see the word cu, please understand
that we are not calling the customer a rude name.
The person who introduced me to this abbreviation added,
"I just spell out the word. It's not that much more work,
and it's a lot easier to read."
Some years ago, I was asked to review a technical book,
and one of the items of feedback I returned was that
the comments in the code fragments were full of
impenetrable abbreviations like
"Sgnl evt before lv cs."
I suggested that the words be spelled out or,
if you really want to use abbreviations,
at least have somewhere in the text where the abbreviations
are explained.
If I had wanted to demonstrate the social skills of a thermonuclear
device,
my feedback might have read
"unls wrtg pzl bk, avd unxplnd n unnec abbvs."
There are two flags you can pass to the
CreateFile function to provide hints regarding
your program's file access pattern.
What happens if you pass either of them, or neither?
Note that the following description is not contractual.
It's just an explanation of the current heuristics (where "current"
means "Windows 7").
These heuristics have changed at each version of Windows,
so consider this information as a tip to help you choose an appropriate
access pattern flag in your program,
not a guarantee that the cache manager will behave in a specific way
if you do a specific thing.
If you pass the FILE_FLAG_SEQUENTIAL_SCAN
flag, then the cache manager alters its behavior in two ways:
First, the amount of prefetch is doubled compared to what it would have been
if you hadn't passed the flag.
Second, the cache manager marks as available for re-use
those cache pages which
lie entirely behind the current file pointer (assuming there are no other
applications using the file).
After all, by saying that you are accessing the file sequentially,
you're promising that the file pointer will always move forward.
At the opposite extreme is FILE_FLAG_RANDOM_ACCESS.
In the random access case,
the cache manager performs no prefetching, and it does not
aggressively evict pages that lie behind the file pointer.
Those pages (as well as the pages that lie ahead of the file pointer
which you already read from or wrote to) will age out of the cache according
to the usual most-recently-used policy,
which means that heavy random reads against a file will not pollute the
cache (the new pages will replace the old ones).
In between is the case where you pass neither flag.
If you pass neither flag, then the cache manager tries to detect
your program's file access pattern.
This is where things get weird.
If you issue a read that begins where the previous read left off,
then the cache manager performs some prefetching, but not as much
as if you had passed FILE_FLAG_SEQUENTIAL_SCAN.
If sequential access is detected, then pages behind the file pointer
are also evicted from the cache.
If you issue around six reads in a row, each of which begins where the
previous one left off, then the cache manager switches to
FILE_FLAG_SEQUENTIAL_SCAN behavior
for your file,
but once you issue a read that no longer begins where the previous
read left off, the cache manager revokes your
FILE_FLAG_SEQUENTIAL_SCAN status.
If your reads are not sequential, but they still follow a pattern where
the file offset changes by the same amount between each operation
(for example, you seek to position 100,000 and read some data,
then seek to position 150,000 and read some data,
then seek to position 200,000 and read some data),
then the cache manager will use that pattern to predict the next read.
In the above example, the cache manager will predict that your next
read will begin at position 250,000.
(This prediction works for decreasing offsets, too!)
As with auto-detected sequential scans,
the prediction stops as soon as you break the pattern.
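That stride detection amounts to simple arithmetic. Here is a toy model of the prediction step (my illustration; the cache manager's actual bookkeeping is internal and more involved):

```c
/* Predict the next read offset by assuming the file offset keeps
   changing by the same amount it changed last time. */
long long predict_next(long long prev_offset, long long curr_offset)
{
    return curr_offset + (curr_offset - prev_offset);
}
```

With reads at 100,000 then 150,000 then 200,000, predict_next(150000, 200000) yields 250,000; with decreasing offsets, the same formula predicts the next lower position.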
Since people like charts, here's a summary of the above
in tabular form:

    Flag                             Prefetch               Evict behind pointer?
    FILE_FLAG_SEQUENTIAL_SCAN        doubled                yes
    (neither; sequential detected)   normal                 yes
    (neither; stride detected)       predicted next read    ?
    FILE_FLAG_RANDOM_ACCESS          none                   no
There are some question marks in the above table where I'm not
sure exactly what the answer is.
These cache hints apply only if you use
ReadFile (or moral equivalents).
Memory-mapped file access does not go through the cache manager,
and consequently these cache hints have no effect.
A reader was curious why
it takes longer for Task Manager to appear when you start it
from the Ctrl+Alt+Del dialog
compared to launching it from the taskbar.
Well, you can see the reason right there on the screen:
You're launching it the long way around.
If you launch Task Manager from the taskbar,
Explorer just launches taskmgr.exe
via the usual CreateProcess mechanism,
and Task Manager launches under the same credentials
on the same desktop.
On the other hand, when you use the secure attention sequence,
the winlogon program receives the notification,
switches to the secure desktop,
and displays the Ctrl+Alt+Del dialog.
When you select Task Manager from that dialog,
it then has to launch taskmgr.exe,
but it can't use the normal CreateProcess
because it's on the wrong desktop and it's running under
the wrong security context.
(Because winlogon runs as SYSTEM,
as Task Manager will tell you.)
Clearly, in order to get Task Manager running on your desktop
with your credentials,
winlogon needs to change its security context,
change desktops, and then launch taskmgr.exe.
The desktop switch is probably the slowest part, since it
involves the video driver,
and video drivers are not known for their blazingly fast switching.
It's like asking why an international package takes longer to deliver
than a domestic one.
Because it's starting from further away, and it also has to go through
customs.
You may have noticed a minor inconsistency between pinning a program
to the Start menu and pinning a destination to a program's
Jump List.
Although pinned items appear at the top of the respective lists,
and both the Start menu and Jump List let you right-click an
item and select Pin/Unpin,
the Jump List also lets you pin and unpin an item by clicking on the
pushpin icon.
Why doesn't the Start menu have a pushpin in addition to the
context menu?
For a time, items on the Start menu did have a pushpin,
just like items on Jump Lists.
The design had a few problems, however.
Start menu items can also have a triangle
indicating the presence of a flyout menu,
and the presence of two indicators next to an item made the interface
look awkward and too busy.
And what do you do if an item has only one indicator?
Do you right-justify all the indicators?
Or do you place the indicators in columns and reserve blank
space for the missing ones?
Both look ugly for different reasons.
The right-justify-everything version looks ugly because the pushpin
appears to keep moving around.
The blank-space-if-no-flyout version looks ugly because you have
a pushpin hovering in the middle of nowhere.
(Imagine trying to click on one of these things: You just have
to "know" that the magic click spot for pinning an item
is 20 pixels to the left of the far right edge.)
But the real death blow to showing a pushpin for pinning items
to the Start menu was the usability testing.
Users had trouble figuring out where to click to pin an item
or to open the Jump List, and frequently got the opposite of what
they intended.
Since opening the Jump List is by far the more common operation,
it won the battle of the prominent UI affordance,
and the option for pinning and unpinning was left to a context
menu.
Which, as it happens, is where the pin/unpin option started
in the first place.
There was a report some time ago that
researchers have developed a way to duplicate keys given only a photograph.
When I read this story,
I was reminded of an incident that occurred to a colleague of mine.
He accidentally locked his keys in his car and called a locksmith.
Frustratingly, the keys were sitting right there on the driver's seat.
The locksmith arrived and assessed the situation.
"Well, since you already paid for me to come all the way out here,
how would you like a spare key?"
"Huh? What do you mean?"
The locksmith looked at the key on the driver's seat,
studied it intently for a few seconds,
then returned to his truck.
A short while later, he returned with a freshly-cut key,
which he inserted into the door lock.
The key worked.
Researchers have determined that
the key to losing weight is to consume fewer calories.
Okay, it's actually more interesting than the summary suggests.
The researchers compared a variety of different popular diets
and found that it didn't matter what diet you were on;
the weight loss (and regain) was the same.
The controlling factor was how many calories you consumed.