# March, 2012

• #### The most exciting part of my morning is catching my bus, specifically, making the transfer

Note: Transit nerd content. You have been warned.

I still rely on One Bus Away to tell me when my bus is coming. Recent changes in bus service mean that there is no longer a direct bus from my neighborhood to my work. My basic options are as follows:

• Walk 3 minutes to Stop A (my neighborhood stop), catch Bus 1 (comes every 30 minutes), ride for 11 minutes, get off at Stop X, then walk 15 minutes to work.
• Walk 7 minutes to Stop B, where I can
  • catch Bus 2 (comes every 10 minutes), and either
    • ride 7 minutes, get off at Stop X, then walk 15 minutes to work, or
    • ride 13 minutes, get off at Stop Y, then walk 8 minutes to work; or
  • catch Bus 3 (comes every 5 minutes), ride 9 minutes, get off at Stop Z, then walk 12 minutes to work.
[Diagram of the routes above, plus the bicycle option: prep 2 minutes, ride 25 minutes, park 2 minutes.]

If you sit down and work out the math, the total travel time for all the options is about the same, around 29 minutes, which is also about how long it takes me to ride my bicycle. So it basically doesn't matter which route I take, especially since traffic lights randomize the travel time by a few minutes each way. But the paradox of choice means that I still try to optimize something that is basically irrelevant. (I'll spare you the calculations that went into choosing which bike route to use!)

Anyway, my morning commute-decision algorithm is:

• Do I want to ride my bicycle? If so, then ride. (This is the most common branch.)
• Else, is Bus 1 coming soon? If so, then go to Stop A.
• Else, walk to Stop B and take whichever bus comes next (usually Bus 3).
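If you prefer your commute decisions in code, here's a playful sketch of the algorithm above (the five-minute threshold for "coming soon" is my own invention for illustration):

```c
#include <stdbool.h>

typedef enum { RIDE_BIKE, WALK_TO_STOP_A, WALK_TO_STOP_B } Commute;

/* Sketch of the morning commute-decision algorithm.
   The "coming soon" cutoff of 5 minutes is an assumption. */
Commute morning_commute(bool feel_like_biking, int minutes_until_bus1)
{
    if (feel_like_biking) return RIDE_BIKE;             /* most common branch */
    if (minutes_until_bus1 <= 5) return WALK_TO_STOP_A; /* Bus 1 coming soon */
    return WALK_TO_STOP_B;                              /* whichever bus comes next */
}
```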

The excitement is the Stop X extension.

I recently discovered that there's another route, Bus 4, which runs parallel to buses 1 and 2 for a stretch (for stops V, W, and X), and which then veers away and drops me off in front of my building. If I'm on Bus 1 or Bus 2, I can check on the status of Bus 4, and if it's only a few minutes behind the bus that I'm on, then some new options become available.

The high-risk option is to transfer at Stop V. This is a high-risk move because if I don't time it right, I end up having to wait for the next Bus 2 to resume my commute.

The next-safest option is to transfer at Stop W, which is only a twenty-minute walk from my office. (Update: Bus 2 does not stop at W.)

The safest option is to transfer at Stop X, since the only downside is that I do the normal amount of walking anyway. But this has a higher risk of missing the connection because I have to cross the street to get from one bus stop to the other, and it's a busy street, so I may have to wait a long time before I get the Walk signal.

When one of these higher-risk moves comes into play, I will use Realtime Transit, which plots the locations of the buses on a map, so I can decide whether I feel lucky today, punk.

Last Friday was my first opportunity to try out the Stop X extension, and it was a nail-biter, because the bus locations in the Realtime Transit application were inconsistent. Sometimes, Bus 4 would show up a bit too close for comfort (it might end up passing my bus because my bus stops more often), and then sometimes it would show up miles and miles away.

Stop V was too risky. If the nearby bus was just a mirage, then I got off a perfectly good bus and stranded myself. As we neared Stop W, I looked out the back window of the bus and didn't see a Bus 4 in the distance, so I decided to go for the safe approach and get off at Stop X.

As I waited for the traffic light to change, I saw Bus 4 go zooming past.

One of these days, I will actually succeed at making the Stop X extension.

• #### Why does holding the Ctrl key when selecting New Task from Task Manager open a command prompt?

Commenter Adam S wonders why holding the Ctrl key when selecting New Task from Task Manager will open a command prompt.

It's a rogue feature.

Windows XP introduced visual styles, and one of the tricky parts of debugging visual styles is that if the visual style engine goes berserk, you can't see anything! One of the problems that the visual styles folks encountered when developing their feature was that sometimes they would get into a state where the Run dialog would stop working. And without a Run dialog, you couldn't install or launch a debugger to begin investigating what went wrong.

The solution: Add the rogue feature where holding the Ctrl key when selecting New Task from Task Manager opened a command prompt directly, without involving the Run dialog. From that command prompt, you can then install the debugger and start debugging. (This technique also took advantage of the fact that console windows were not themed in Windows XP. If the visual style system got all messed up, at least your console windows worked!)

Over time, the bugs in the visual style system got worked out, and this rogue escape hatch was no longer needed, but for whatever reason, it never got removed.

• #### Memory allocation functions can give you more memory than you ask for, and you are welcome to use the freebies too, but watch out for the free lunch

Memory allocation functions like Heap­Alloc, Global­Alloc, Local­Alloc, and Co­Task­Mem­Alloc all have the property that they can return more memory than you requested. For example, if you ask for 13 bytes, you may very well get a pointer to 16 bytes. The corresponding Xxx­Size functions return the actual size of the memory block, and you are welcome to use all the memory in the block up to the actual size (even the bytes beyond the ones you requested). But watch out for the free lunch.

Consider the following code:

BYTE *GetSomeZeroBytes(SIZE_T size)
{
BYTE *bytes = (BYTE*)HeapAlloc(GetProcessHeap(), 0, size);
if (bytes) ZeroMemory(bytes, size);
return bytes;
}

So far so good. We allocate some memory, and then fill it with zeroes. That gives us our zero-initialized memory.

Or does it?

BYTE *bytes = GetSomeZeroBytes(13);
SIZE_T actualSize = HeapSize(GetProcessHeap(), 0, bytes);
for (SIZE_T i = 0; i < actualSize; i++) {
assert(bytes[i] == 0); // assertion fires!?
}

When you ask the heap manager for 13 bytes, it's probably going to round that up to 16, and when you call Heap­Size, it may very well say, "Hey, I gave you three extra bytes. Don't need to thank me."

The problem comes when you try to reallocate the memory:

BYTE *ReallocAndZero(BYTE *bytes, SIZE_T newSize)
{
return (BYTE*)HeapReAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY,
bytes, newSize);
}

Here, you said, "Dear heap manager, please make this memory block bigger, and zero out the new bytes. Kthxbai." And, assuming the heap manager was successful, you will indeed have a larger memory block, and the new bytes will have been zeroed out.

But the memory manager won't zero out the three bonus bytes it gave you when you called Heap­Alloc, because those bytes aren't new. In fact, the heap manager assumes that you knew about those three extra bytes and were actively using them, and it would be rude to zero out those bytes behind your back.

Except, of course, for those bytes you didn't know about, since you never checked the actual size.

You might think the problem is that you mixed zero-allocation modes. You allocated the memory as "Go ahead and give me garbage, I'll zero it out myself", and then you reallocated it as "Can you zero it out for me?" The problem is that you and the heap manager disagree on how big it is. While you assume that the size of it is "the exact number of bytes I asked for", the heap manager assumes that the size of it is "the exact number of bytes I gave you." Those bytes in the middle fall through the cracks.

Therefore, you might try to fix it by changing your function like this:

BYTE *ReallocAndZero(BYTE *bytes, SIZE_T newSize)
{
SIZE_T oldSize = HeapSize(GetProcessHeap(), 0, bytes);
BYTE *newBytes = (BYTE*)HeapReAlloc(GetProcessHeap(), 0,
bytes, newSize);
if (newBytes && newSize > oldSize) {
ZeroMemory(newBytes + oldSize, newSize - oldSize);
}
return newBytes;
}

But this doesn't work, because of the reason we gave above: Your call to Heap­Size will return the actual block size, not the requested size. You will therefore forget to zero out those three bytes you didn't know about.

The real problem is in the Get­Some­Zero­Bytes function. It decided to manually zero out the bytes it received, but it zeroed out only the bytes that were requested, not the actual bytes received.

One solution is to make sure to zero out everything, so that if it is reallocated, the extra bytes gained in the reallocation will also be zero.

BYTE *GetSomeZeroBytes(SIZE_T size)
{
BYTE *bytes = (BYTE*)HeapAlloc(GetProcessHeap(), 0, size);
if (bytes) ZeroMemory(bytes,
HeapSize(GetProcessHeap(), 0, bytes));
return bytes;
}

Another solution is to take advantage of the memory manager's HEAP_ZERO_MEMORY flag, which tells the memory manager to zero out the entire block of memory when it is allocated:

BYTE *GetSomeZeroBytes(SIZE_T size)
{
return (BYTE*)HeapAlloc(GetProcessHeap(),
HEAP_ZERO_MEMORY, size);
}

… and to use the same flag when reallocating:

BYTE *ReallocAndZero(BYTE *bytes, SIZE_T newSize)
{
return (BYTE*)HeapReAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY,
bytes, newSize);
}

Most of the heap functions let you specify that you want the heap manager to zero out the memory for you, and that includes the bonus bytes. For example, you can use GMEM_ZERO­INIT with the Global­Alloc family of functions, and LMEM_ZERO­INIT with the Local­Alloc family of functions. The annoying one is Co­Task­Mem­Alloc, since it does not provide a flag for zero-allocation. You have to zero out the memory yourself, and you have to do it right. (The inspiration for today's article was a bug caused by not zeroing out the memory correctly.)

There are other implications of these bonus bytes. For example, if you use Create­Stream­On­HGlobal to create a stream on an existing HGLOBAL, the function uses Global­Size to determine the size of the stream it should create. And that value includes the bonus bytes, even though you may not have realized that they were there. Result: You create a stream of 13 bytes, but somebody who tries to read from it will get 16 bytes. You need to make sure that the code which reads from the stream won't get upset by those extra bytes. (For example, if you passed it to a function that concatenates streams, you just inserted three bytes of garbage between the streams.) You also need to be careful that those extra bytes don't leak any sensitive information if you, say, put the memory block on the clipboard for everyone to see.

Bonus chatter: It appears that at some point, the kernel folks decided that these "bonus bytes" were more hassle than they were worth, and now they spend extra effort remembering not only the actual size of the memory block but also the requested size. When you ask, "How big is this memory block?" they lie and return the requested size rather than the actual size. In other words, the free bonus bytes are no longer exposed to applications by the kernel heap functions. Note, however, that this behavior is not contractual; future versions of Windows may start handing out free bonus bytes again. Note also that not all heap managers have done the extra work to remember the requested size, and they will continue to hand out bonus bytes. Therefore, you must continue to code defensively and assume that bonus bytes may exist (even if they usually don't). (And note that heap debugging tools may intentionally generate "bonus bytes" to help flush out bugs.)

Double extra bonus chatter: Note that this gotcha is not specific to Windows.

// resize a block of memory originally allocated by calloc
// and zero out the new bytes
void *crealloc(void *bytes, size_t new_size)
{
size_t old_size = malloc_size(bytes); // OS X name; glibc calls it malloc_usable_size
void *new_bytes = realloc(bytes, new_size);
if (new_bytes && new_size > old_size) {
memset((char*)new_bytes + old_size, 0, new_size - old_size);
}
return new_bytes;
}

Virtually all heap libraries have bonus bytes.
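For example, on a glibc system you can watch the bonus bytes appear with `malloc_usable_size` (this is a portable sketch of the fix discussed above, not Windows code; the function names are mine):

```c
#include <malloc.h>   /* glibc: malloc_usable_size */
#include <stdlib.h>
#include <string.h>

/* Allocate `size` bytes and zero the ENTIRE usable block,
   not just the bytes requested -- the moral of the story above. */
void *get_some_zero_bytes(size_t size)
{
    void *bytes = malloc(size);
    if (bytes) memset(bytes, 0, malloc_usable_size(bytes));
    return bytes;
}

/* Returns the usable size of a 13-byte allocation,
   which may well be larger than 13. */
size_t bonus_demo(void)
{
    void *p = malloc(13);
    size_t usable = p ? malloc_usable_size(p) : 0;
    free(p);
    return usable;
}
```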

• #### Why does the VerQueryValue function give me the wrong file version number?

A customer was writing a test to verify that their patching system was working properly, but they found that even after the patch was installed, a call to VerQueryValue reported that the file was still the original version. Why was the VerQueryValue function reporting the wrong version?

Recall that the version resource is, well, a resource. And one of the things that happens with resources is that they can get redirected based on the language the user is running. When you ask for the resources of a language-neutral DLL, the loader redirects your request to the appropriate language-specific DLL. That way, if you're running on an English system, the resources come from the DLL with English resources, whereas if you're running on a German system, the resources come from the DLL with German resources.

The customer's patch updated only the language-neutral DLL (since it was a code fix that involved no resource changes). When the GetFileVersionInfo function loaded the DLL and asked for its resources, the loader redirected the request to the English satellite DLL.

To disable this redirection, you can use the GetFileVersionInfoEx function and pass the FILE_VER_GET_NEUTRAL flag. Michael Kaplan covered this a few years ago. If you use the plain GetFileVersionInfo function, the version information that comes back is a blend of the language-neutral and the localized information: The binary version information comes from the language-neutral DLL, whereas the string version information comes from the localized DLL. The strings come from the localized DLL because you want the information like FileDescription to be something meaningful to the user.

It does mean, though, that if you are extracting version information for testing and verification purposes, you need to be mindful of where you are getting them from so that you get the values you expect.

• #### How do I get mouse messages faster than WM_MOUSEMOVE?

We saw some time ago that the rate at which you receive WM_MOUSE­MOVE messages is entirely up to how fast your program calls Get­Message. But what if your program is calling Get­Message as fast as it can, and it's still not fast enough?

You can use the Get­Mouse­Move­Points­Ex function to ask the window manager, "Hey, can you tell me about the mouse messages I missed?" I can think of two cases where you might want to do this:

• You are a program like Paint, where the user is drawing with the mouse and you want to capture every nuance of the mouse motion.
• You are a program that supports something like mouse gestures, so you want the full mouse curve information so you can do your gesture recognition on it.

Here's a program that I wrote for a relative of mine who is a radiologist. One part of his job consists of sitting in a dark room studying medical images. He has to use his years of medical training to identify the tumor (if there is one), and then determine what percentage of the organ is afflicted. To use this program, run it and position the circle so that it matches the location and size of the organ under study. Once you have the circle positioned properly, use the mouse to draw an outline of the tumor. When you let go of the mouse, the title bar will tell you the size of the tumor relative to the entire organ.

(Oh great, now I'm telling people to practice medicine without a license.)

First, we'll do a version of the program that just calls Get­Message as fast as it can. Start with the new scratch program and make the following changes:

class RootWindow : public Window
{
public:
virtual LPCTSTR ClassName() { return TEXT("Scratch"); }
static RootWindow *Create();
protected:
LRESULT HandleMessage(UINT uMsg, WPARAM wParam, LPARAM lParam);
void PaintContent(PAINTSTRUCT *pps);
BOOL WinRegisterClass(WNDCLASS *pwc);

private:
RootWindow();
~RootWindow();
void OnCreate();
void UpdateTitle();
void OnSizeChanged(int cx, int cy);
void OnMouseMove(LPARAM lParam);
void OnButtonDown(LPARAM lParam);
void OnButtonUp(LPARAM lParam);

// arbitrary limit (this is just a demo!)
static const int cptMax = 1000;
private:
POINT  m_ptCenter;
int    m_radius;
BOOL   m_fDrawing;
HPEN   m_hpenInside;
HPEN   m_hpenDot;
POINT  m_ptLast;
int    m_cpt;
POINT  m_rgpt[cptMax];
};

RootWindow::RootWindow()
: m_fDrawing(FALSE)
, m_hpenInside(CreatePen(PS_INSIDEFRAME, 3,
GetSysColor(COLOR_WINDOWTEXT)))
, m_hpenDot(CreatePen(PS_DOT, 1, GetSysColor(COLOR_WINDOWTEXT)))
{
}

RootWindow::~RootWindow()
{
if (m_hpenInside) DeleteObject(m_hpenInside);
if (m_hpenDot) DeleteObject(m_hpenDot);
}

BOOL RootWindow::WinRegisterClass(WNDCLASS *pwc)
{
pwc->style |= CS_VREDRAW | CS_HREDRAW;
return __super::WinRegisterClass(pwc);
}

void RootWindow::OnCreate()
{
SetLayeredWindowAttributes(m_hwnd, 0, 0xA0, LWA_ALPHA);
}

void RootWindow::UpdateTitle()
{
TCHAR szBuf[256];

// Compute the area of the circle using a surprisingly good
// rational approximation to pi (355/113)
int circleArea = MulDiv(m_radius * m_radius, 355, 113);

// Compute the area of the region, if we have one
if (m_cpt > 0 && !m_fDrawing) {
int polyArea = 0;
for (int i = 1; i < m_cpt; i++) {
polyArea += m_rgpt[i-1].x * m_rgpt[i  ].y -
m_rgpt[i  ].x * m_rgpt[i-1].y;
}
if (polyArea < 0) polyArea = -polyArea; // ignore orientation
polyArea /= 2;
wnsprintf(szBuf, 256,
TEXT("circle area is %d, poly area is %d = %d%%"),
circleArea, polyArea,
MulDiv(polyArea, 100, circleArea));
} else {
wnsprintf(szBuf, 256, TEXT("circle area is %d"), circleArea);
}
SetWindowText(m_hwnd, szBuf);
}

void RootWindow::OnSizeChanged(int cx, int cy)
{
m_ptCenter.x = cx / 2;
m_ptCenter.y = cy / 2;
m_radius = min(m_ptCenter.x, m_ptCenter.y) - 6;
UpdateTitle();
}

void RootWindow::PaintContent(PAINTSTRUCT *pps)
{
HBRUSH hbrPrev = SelectBrush(pps->hdc,
GetStockBrush(HOLLOW_BRUSH));
HPEN hpenPrev = SelectPen(pps->hdc, m_hpenInside);
Ellipse(pps->hdc, m_ptCenter.x - m_radius, m_ptCenter.y - m_radius,
m_ptCenter.x + m_radius, m_ptCenter.y + m_radius);
SelectPen(pps->hdc, m_hpenDot);
Polyline(pps->hdc, m_rgpt, m_cpt);
SelectPen(pps->hdc, hpenPrev);
SelectBrush(pps->hdc, hbrPrev);
}

void RootWindow::AddPoint(POINT pt)
{
// Ignore duplicates
if (pt.x == m_ptLast.x && pt.y == m_ptLast.y) return;

// Overwrite the last point if we can't add a new one
if (m_cpt >= cptMax) m_cpt = cptMax - 1;

// Invalidate the rectangle connecting this point
// to the last point
RECT rc = { pt.x, pt.y, pt.x+1, pt.y+1 };
if (m_cpt > 0) {
RECT rcLast = { m_ptLast.x,   m_ptLast.y,
m_ptLast.x+1, m_ptLast.y+1 };
UnionRect(&rc, &rc, &rcLast);
}
InvalidateRect(m_hwnd, &rc, FALSE);

m_rgpt[m_cpt++] = pt;
m_ptLast = pt;
}

void RootWindow::OnMouseMove(LPARAM lParam)
{
if (m_fDrawing) {
POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
AddPoint(pt);
}
}

void RootWindow::OnButtonDown(LPARAM lParam)
{
// Erase any previous polygon
InvalidateRect(m_hwnd, NULL, TRUE);

m_cpt = 0;
POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
m_fDrawing = TRUE;
AddPoint(pt);
}

void RootWindow::OnButtonUp(LPARAM lParam)
{
if (!m_fDrawing) return;

OnMouseMove(lParam);

// Close the loop, eating the last point if necessary
if (m_cpt > 0) AddPoint(m_rgpt[0]);
m_fDrawing = FALSE;
UpdateTitle();
}

LRESULT RootWindow::HandleMessage(
UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg) {
case WM_CREATE:
OnCreate();
break;

case WM_NCDESTROY:
// Death of the root window ends the thread
PostQuitMessage(0);
break;

case WM_SIZE:
if (wParam == SIZE_MAXIMIZED || wParam == SIZE_RESTORED) {
OnSizeChanged(GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam));
}
break;

case WM_MOUSEMOVE:
OnMouseMove(lParam);
break;

case WM_LBUTTONDOWN:
OnButtonDown(lParam);
break;

case WM_LBUTTONUP:
OnButtonUp(lParam);
break;
}

return __super::HandleMessage(uMsg, wParam, lParam);
}

RootWindow *RootWindow::Create()
{
RootWindow *self = new(std::nothrow) RootWindow();
if (self && self->WinCreateWindow(WS_EX_LAYERED,
TEXT("Scratch"), WS_OVERLAPPEDWINDOW,
CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT,
NULL, NULL)) {
return self;
}
delete self;
return NULL;
}

This program records every mouse movement while the button is down and replays them in the form of a dotted polygon. When the mouse button goes up, it calculates the area both in terms of pixels and in terms of a percentage of the circle.
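The area calculation in Update­Title is the classic shoelace formula. Here's the same computation in isolated, portable form (assuming, as the program does, that the polygon is closed by repeating the first point at the end):

```c
#include <stddef.h>

typedef struct { int x, y; } Point;

/* Shoelace formula: area of a polygon given as a closed point list
   (first point repeated at the end). Orientation is ignored,
   matching the polyArea loop in UpdateTitle. */
int polygon_area(const Point *pt, size_t n)
{
    int twice_area = 0;
    for (size_t i = 1; i < n; i++) {
        twice_area += pt[i-1].x * pt[i].y - pt[i].x * pt[i-1].y;
    }
    if (twice_area < 0) twice_area = -twice_area; /* ignore orientation */
    return twice_area / 2;
}
```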

This program works well. My relative's hand moves slowly enough (after all, it has to trace a tumor) that the Get­Message loop is plenty fast enough to keep up. But just for the sake of illustration, suppose it isn't. To make the effect easier to see, let's add some artificial delays:

void RootWindow::OnMouseMove(LPARAM lParam)
{
if (m_fDrawing) {
POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
AddPoint(pt);
UpdateWindow(m_hwnd);
Sleep(100); // artificial delay to simulate unresponsive app
}
}

Now, if you try to draw with the mouse, you see all sorts of jagged edges because our program can't keep up. (The Update­Window is just to make the most recent line visible while we are sleeping.)

Enter Get­Mouse­Move­Points­Ex. This gives you all the mouse activity that led up to a specific point in time, allowing you to fill in the data that you missed because you weren't pumping messages fast enough. Let's teach our program how to take advantage of this:

class RootWindow : public Window
{
...
...
POINT m_ptLast;
DWORD m_tmLast;
int   m_cpt;
};

void RootWindow::AddPoint(POINT pt)
{
...
m_rgpt[m_cpt++] = pt;
m_ptLast = pt;
m_tmLast = GetMessageTime();
}

void RootWindow::OnMouseMove(LPARAM lParam)
{
if (m_fDrawing) {
POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
DWORD tm = GetMessageTime();

// See discussion for why this code is wrong
ClientToScreen(m_hwnd, &pt);
MOUSEMOVEPOINT mmpt = { pt.x, pt.y, tm };
MOUSEMOVEPOINT rgmmpt[64];
int cmmpt = GetMouseMovePointsEx(sizeof(mmpt), &mmpt,
rgmmpt, 64, GMMP_USE_DISPLAY_POINTS);

POINT ptLastScreen = m_ptLast;
ClientToScreen(m_hwnd, &ptLastScreen);
int i;
for (i = 0; i < cmmpt; i++) {
if (rgmmpt[i].time < m_tmLast) break;
if (rgmmpt[i].time == m_tmLast &&
rgmmpt[i].x == ptLastScreen.x &&
rgmmpt[i].y == ptLastScreen.y) break;
}
while (--i >= 0) {
POINT ptClient = { rgmmpt[i].x, rgmmpt[i].y };
ScreenToClient(m_hwnd, &ptClient);
AddPoint(ptClient);
}

UpdateWindow(m_hwnd);
Sleep(100); // artificial delay to simulate unresponsive app
}
}

Before updating the current mouse position, we check to see if there were other mouse motions that occurred while we weren't paying attention. We tell Get­Mouse­Move­Points­Ex, "Hey, here is a mouse message that I have right now. Please tell me about the stuff that I missed." It fills in an array with recent mouse history, most recent events first. We go through that array looking for the previous point, and give up either when we find it, or when the timestamps on the events we received take us too far backward in time. Once we find all the points that we missed, we play them into the Add­Point function.

Notes to people who like to copy code without understanding it: The code fragment above works only for single-monitor systems. To work correctly on multiple-monitor systems, you need to include the crazy coordinate-shifting code provided in the documentation for Get­Mouse­Move­Points­Ex. (I omitted that code because it would just be distracting.) Also, the management of m_tmLast is now rather confusing, but I did it this way to minimize the amount of change to the original program. It would probably be better to have added a DWORD tm parameter to Add­Point instead of trying to infer it from the current message time.

The Get­Mouse­Move­Points­Ex technique is also handy if you need to refer back to the historical record. For example, if the user dragged the mouse out of your window and you want to calculate the velocity with which the mouse exited, you can use Get­Mouse­Move­Points­Ex to get the most recent mouse activity and calculate the velocity. This saves you from having to record all the mouse activity yourself on the off chance that the mouse might leave the window.

• #### Microspeak: Friction

In physics, friction is a force that resists motion. In Microspeak, friction is an obstacle which prevents somebody from doing something you want them to do. (The preferred verb phrase for getting over an obstacle is overcoming friction.)

There is friction in the system for X that is reduced when developing with Y.
Using X reduces friction of someone being able to do Y without having to Z.
Many companies have found that outsourcing activities can introduce unexpected complexity, add cost and friction into the value chain, and require more senior management attention and deeper management skills than anticipated.
The goals of the Wiki include providing broader and more in-depth solutions content … from a wider variety of authors with less publishing friction than less traditional mechanisms.
While multi-tenancy and richer browser capabilities are valuable, I believe we have to start architecting multi-tenant solutions while incorporating the rich differentiation of new client platforms in disconnected and connected capabilities with the ability of ad-hoc collaborative communities forming around these services without centralized service friction.

(That last one deserves some sort of award for impenetrability.)

JD Meier kindly defines the term as it applies to communication:

It's obvious in retrospect, but I found a distinction between low-friction communication and high-friction communication. By low-friction, I mean *person A* doesn't have to work that hard for *person B* to get a point.

As the term friction gained popularity, second-order jargon emerged, such as friction-free (another citation).

(Remember that Microspeak covers not only terminology specific to Microsoft, but also business jargon that you need to know in order to "fit in.")

• #### If you have multiple versions of Windows installed, why do they all try to adjust the clock?

Commenter Martin notes that if you have multiple copies of Windows installed on your machine, then each one will try to adjust the clock when you enter or exit daylight saving time. "I cannot believe that this feature is a bug. Please could you comment this?"

This falls into a category of issue that I like to call "So what did you expect?" (This was the catch phrase of the old Call-A.P.P.L.E. magazine.)

If you have multiple operating systems installed on your machine, each one thinks that it has control of your computer. It's not like there's some standard cross-operating system mechanism for negotiating control of hardware resources. If you install CP/M and MINIX on your machine, each one is unaware of the presence of the other. CP/M doesn't know how to mount MINIX file systems and update a configuration file to say "Hey, I updated the time, you don't need to."

And not that you would expect it to, either.

It's like signing up for two housekeeping services and telling both of them to water the plants every Monday. And lo and behold, every Monday, the plants get double-watered. There's no standard protocol for multiple housekeeping services to coordinate their activities; each housekeeping service assumes it's responsible for cleaning your house.

Bonus reading: Why does Windows keep your BIOS clock on local time?

• #### To some people, time zones are just a fancy way of sounding important

As I noted some time ago, there is a standard series of announcements that are sent out when a server is undergoing planned (or unplanned) maintenance. And since these are official announcements, the authors want to sound official.

One way of sounding official is to give the times during which the outage will take place in a very formal manner. "The servers will be unavailable on Saturday, March 17, 2012 from 1:00 AM to 9:00 AM Pacific Standard Time."

Did you notice something funny about that announcement?

On March 17, 2012, most of the United States will not be on Standard Time. They will be on Daylight Time. (The switchover takes place this weekend.)¹

I sent mail to the "If you have questions, please contact X" address to confirm that they are indeed taking the server down from 1am to 9am Pacific Standard Time (i.e., from 2am to 10am Pacific Daylight Time), pointing out that on March 17th, most of the United States won't be using Standard Time. (I was planning on coming to work, but if the servers won't be back up until 10am, I can sleep in.)

The response I got back was "The machines will be unavailable from 1am to 9am local time."

So in fact when they wrote Pacific Standard Time, they didn't mean Pacific Standard Time. They really meant Pacific Time, but we'll stick the word Standard in there because it makes us sound all official-like. In other words, "We're using words not for what they mean but for how they sound." I'm surprised they didn't use military time, just to sound that much more awesome.

Bonus chatter: Not to be outdone, another announcement said that a particular server would be available from time X to time Y PDT, even though the United States was on standard time. So now I'm not sure what the logic is. Maybe they just pick a time zone randomly.

Tip to people who write these announcements: Just say "Pacific Time" or "Redmond local time".

Nitpicker's corner

• #### Alt text for images is important in email, too

Apparently the IT department gave up on getting everybody to read email in plain text, and other service departments at Microsoft have moved beyond simply using HTML for markup and started adding banner images to the top of each email message. Because the best way to promote your brand to other parts of the company is to stick a banner logo at the top of every message.

Here's the HTML for one such banner image, with line breaks inserted for sanity.

<img width=707 height=63 id="Picture_x0020_2"
src="cid:image001.png@01CB0944.B4771400"
alt="Description: Description: Description: Description:
Description: Description: Description: Microsoft Real Estate
and Facilities">

The great thing about the absurd alt text is that that's what appears in the autopreview window and in the email notification pop-up.

 BUILDING NOTICE: Buildings 8... Description: Description: Description: Description: Description: Description:...

But wait, it gets worse. The second image in the message (a giant circled-i icon indicating that this is an informational message) has as its alt text "Description: Description: Description: Description: Description: Description: cid:image003.jpg@01CAFC55.BC923A80". Yeah, like that explains the image clearly.

Maybe they were just taking a lead from the boss.

No lesson today, just venting.