• #### Enumerating bit strings with a specific number of bits set (binomial coefficients strike again)

Today's Little Program prints all bit strings of length n subject to the requirement that the string have exactly k 1-bits. For example, all bit strings of length 4 with 2 bits set are 0011, 0101, 0110, 1001, 1010, and 1100. Let's write BitString(n, k) to mean all the bit strings of length n with exactly k bits set.

Let's look at the last bit of a typical member of BitString(n, k). If it is a zero, then removing it leaves a string one bit shorter but with the same number of bits set. Conversely every BitString(n − 1, k) string can be extended to a BitString(n, k) string by adding a zero to the end.

If the last bit is a one, then removing it leaves a bit string of length n − 1 with exactly k − 1 bits set, and conversely every bit string of length n − 1 with exactly k − 1 bits set can be extended to a bit string of length n with exactly k bits set by adding a one to the end.

Therefore, our algorithm goes like this:

• Handle base cases.
• Otherwise,
• Recursively call BitString(n − 1, k) and add a 0 to the end.
• Recursively call BitString(n − 1, k − 1) and add a 1 to the end.

This algorithm may look awfully familiar. The overall recursive structure is the same as enumerating subsets with binomial coefficients; we just decorate the results differently.

That's because this problem is the same as the problem of enumerating subsets. You can think of the set bits as selecting which elements get put into the subset.

It's not surprising, therefore, that the resulting code is identical except for how we report the results. (Instead of generating arrays, we generate strings.)

```
function repeatString(s, n) {
  return new Array(n + 1).join(s);
}

function BitString(n, k, f) {
  if (k == 0) { f(repeatString("0", n)); return; }
  if (n == 0) { return; }
  BitString(n - 1, k, function(s) { f(s + "0"); });
  BitString(n - 1, k - 1, function(s) { f(s + "1"); });
}
```

If `k` is zero, then that means we are looking for strings of length `n` that contain no bits set at all. There is exactly one of those, namely the string consisting of `n` zeroes.

If `k` is nonzero but `n` is zero, then that means we want strings of length zero with some bits set. That's not possible, so we return without generating anything.

Finally, we handle the recursive case by generating the shorter strings and tacking on the appropriate digits.
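As a quick sanity check (a hypothetical test harness, not part of the original program), collecting the output for n = 4 and k = 2 reproduces the six strings listed at the top. The functions are repeated here so the fragment stands alone:

```javascript
function repeatString(s, n) {
  return new Array(n + 1).join(s);
}

function BitString(n, k, f) {
  if (k == 0) { f(repeatString("0", n)); return; }
  if (n == 0) { return; }
  BitString(n - 1, k, function(s) { f(s + "0"); });
  BitString(n - 1, k - 1, function(s) { f(s + "1"); });
}

// Collect the results for n = 4, k = 2.
var results = [];
BitString(4, 2, function(s) { results.push(s); });
console.log(results.sort().join(" ")); // → "0011 0101 0110 1001 1010 1100"
```

Note that the number of strings produced is the binomial coefficient C(4, 2) = 6, as promised by the title.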

Since this is the same algorithm as subset generation, we should be able to write one in terms of the other:

```
function BitString(n, k, f) {
  Subsets(n, k, function(s) {
    var str = "";
    for (var i = 1; i <= n; i++) {
      str += s.indexOf(i) >= 0 ? "1" : "0";
    }
    f(str);
  });
}
```
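The `Subsets` function here is the one from the earlier article on enumerating subsets; it isn't reproduced in this post. A minimal sketch of it, assuming it reports each k-element subset of { 1, …, n } as an array of element numbers, might look like this:

```javascript
// Sketch of a Subsets enumerator compatible with the wrapper above:
// report each k-element subset of { 1, ..., n } as an array of numbers.
function Subsets(n, k, f) {
  if (k == 0) { f([]); return; } // exactly one empty subset
  if (n == 0) { return; }        // can't pick k > 0 elements from nothing
  // Subsets that do not contain n.
  Subsets(n - 1, k, f);
  // Subsets that do contain n.
  Subsets(n - 1, k - 1, function(s) { f(s.concat(n)); });
}
```

It is the same two-branch recursion as `BitString` itself, which is the whole point: feeding this sketch to the wrapper above produces the same six strings for n = 4, k = 2 as the direct version.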
• #### Forcing a file handle closed when it has been opened remotely

Today's Little Program closes a file handle that was opened remotely. It builds on previous discussion on how to use the `Net­Xxx` functions.

```
int __cdecl wmain(int argc, wchar_t **argv)
{
  FILE_INFO_3 *pinfo3;
  NET_API_STATUS status;
  DWORD_PTR resumeHandle = 0;
  do {
    DWORD actual, estimatedTotal;
    status = NetFileEnum(NULL, NULL, NULL, 3,
                         (LPBYTE*)&pinfo3,
                         MAX_PREFERRED_LENGTH,
                         &actual,
                         &estimatedTotal,
                         &resumeHandle);
    if (status == NERR_Success ||
        status == ERROR_MORE_DATA) {
      for (DWORD i = 0; i < actual; i++) {
        if (lstrcmpiW(argv[1], pinfo3[i].fi3_pathname) == 0) {
          wprintf(L"Closing %ls result %d\n", pinfo3[i].fi3_pathname,
                  NetFileClose(NULL, pinfo3[i].fi3_id));
          status = ERROR_NO_MORE_FILES;
          break;
        }
      }
      NetApiBufferFree(pinfo3);
    }
  } while (status == ERROR_MORE_DATA);
  return 0;
}
```

Forcing a network file handle closed does not actually close the handle. This makes it very different from the various "force handle closed" utilities out there. Rather, forcing a network file handle closed is accomplished by simulating a network failure, so that when the remote machine tries to use the handle again, it's told, "Wha? I'm sorry, we must have a bad connection, because I'm not sure what you're talking about." Since programs which access network resources must deal with the possibility of network connectivity loss, this deception does not violate the interface contract.

(Doing this to handles to local resources is a much riskier undertaking, because applications expect access to local files to remain valid for the lifetime of the handle. There is no equivalent of transient network connectivity failure for local files on non-removable drives. There is also no API for simulating it.)

• #### The normal string manipulation functions stop on a null terminator, so be careful when manipulating double-null-terminated strings

One of the many gotchas of working with double-null-terminated strings is accidentally using functions on them which were designed to operate on single-null-terminated strings. Now, you do need to use those single-null-terminated string functions, but you also need to know when they won't do what you want.

One of the responses to my psychic detection that somebody passed a single-null-terminated string to `SHFileOperation` is, "Oh, no, I double-null-terminated it. Look:

```
sprintf(szDeletePath, "%s\0", file_to_delete);
```

See, I put an extra `\0` at the end."

Well, yeah, you put an extra `\0` at the end, but all that does is terminate the format string. The `sprintf` function accepts its format string as a traditional null-terminated string. When it sees the `\0` you stuck into the string, it thinks it found the end of the string. It can't read your mind and say, "Oh, this null is not a terminator. It's an embedded null."

A simple mistake, but something that needs to be called out. To be fair, most people recognize this mistake as soon as it's pointed out to them. You just have to remember to point it out to them.

• #### Why don't we create a special class of programs which can break the normal rules?

In response to a discussion of why the window handle limit is 10,000, commenter Juan wondered why we don't create a special class of programs which can exceed the 10,000 handle limit and otherwise bypass the normal operation of the system.

This is another case of the tragedy of special treatment: Eventually, nothing is special any more.

If there were a way for an application to say, "Hey, I don't want to be constrained by the normal rules that apply to your everyday boring applications," then every application would simply say it, and the net effect would be that the constraint no longer applies to anybody.

Task Manager conforms to the normal rules for GUI programs because if it marked itself as special, soon everybody would mark themselves as special in order to get the same special treatment. (Besides, the special treatment doesn't help Task Manager at all, since Task Manager doesn't create 10,000 handles. The specific issue in the comment is not something Task Manager even knows that it needs to opt out of. All it did was call `CreateWindow`; Task Manager shouldn't need to know about the internal implementation of `CreateWindow`.)

Bonus chatter: There is already one mechanism for applications to say that a particular class of restrictions should not apply to them.

• #### Death at a Funeral, Cashback, and Tell No One

Another installment in Raymond's short reviews of SIFF movies he's seen recently.

Death at a Funeral: The family funeral gets off to a bad start when the funeral home delivers the wrong body, and it's the only mishap that actually gets fixed without further incident. Everything else goes horribly wrong, and then, just when you think it can't get any worse, it gets worse. Hilarious if you're willing to laugh at uncomfortable situations, a disaster spiraling out of control, men behaving badly, gross-out humor, accidental consumption of hallucinogens, an old man with a filthy mouth, and lots of other stuff that will earn you an R rating. I give it a 4 out of 5.

Cashback, a full-length version of the 2006 Academy-Award-nominated short of the same name: Everybody who has worked an hourly job knows that staring at the clock just makes time feel like it's standing still. Ben manages to turn this feeling into reality while working the night shift at a grocery store, giving him opportunities to escape from his crazy boss and his even crazier coworkers, and to pine after the cute cashier girl. A very sweet story lurks behind the insanity. I give it a 4 out of 5.

Tell No One: A man becomes a suspect for the murder of his wife eight years earlier. The opening of the movie is uncomfortably tense because you know the wife is going to die and it's just a matter of waiting for it to happen. The screws slowly tighten on Alexandre Beck until he finds himself running for his life. And, believe it or not, that's when the tension eases up. (Thankfully, in my case. I don't handle suspense well.) There are some funny moments during his period in hiding, trying to figure out what's really going on, and the tension actually drops as more and more pieces of the puzzle start to emerge. I was kind of disappointed, however, by the ending, (spoiler: highlight with mouse to read) which is a pretty standard "hero held at gunpoint while the villain explains everything that happened" scene. There's a little twist, but not enough to vindicate it. Even so, I give it a 3 out of 5.

Legend:

5: Would pay money to see again by myself.
4: Would see again if it were free or if seeing it with others.
3: Would recommend to others.
2: Okay, but wouldn't recommend to someone not already interested.
1: Would advise against.
0: Waste of my time.

Note: The rating scheme has been revised since this article was originally posted.

• #### Trying to come up with the most annoying meal ever

The other night, I had a small fish for dinner. The small fish combines two annoying features: (1) Lots of tiny bones, and (2) not a lot of meat. The challenge then occurred to me: Come up with the most annoying meal ever.

Specifically, the criterion for most annoying meal would be a meal in which the diner expends the most amount of effort to obtain the least amount of food, while still adhering to the general shape of a traditional dinner. Here's what I came up with:

Appetizer: Dried watermelon seeds. To eat dried watermelon seeds, you insert the seed vertically between your back teeth and bite down to separate the two halves. Then you extract the tiny amount of seed meat.

Vegetable: Artichoke leaves. To eat artichoke leaves, you insert the leaf flat between your two front teeth and pull it with your hand, using your teeth to scrape off the meat.

Main dish: A small fish. To eat a small fish, lay the fish on its side and remove the meat from the half that sits on top. Then you carefully remove the spine and the bones attached to it, leaving the meat from the other half on the plate. This operation is not perfect, and you still have to be on the lookout for tiny bones that remain in the fish meat.

Alternative main dish: A small crab. Picking the meat from the body of the crab is not too difficult (though you have to avoid the gills), but extracting it from the legs takes a bit more work. You eat it in a manner similar to the artichoke leaf, inserting it between your teeth and pulling it through, using your teeth to compress the leg and extrude the meat out the end. Even if you manage to do it somewhat efficiently, it's still a messy affair.

Dessert: Pomegranate. To eat a pomegranate, you tear it open and pick out the jelly-coated seeds. You pop the seeds into your mouth, suck off the jelly, and spit out the pits. (Or eat the pits, as some of my friends do.)

(Still looking for a beverage.)

Pre-emptive snarky comment: If you want the dinner to be truly annoying, make the guests use Windows Vista!

Life imitates art corner: I received a piece of email from a colleague who actually did this!

I have cooked a dinner very similar to this. The theme was "off-piste" dining: tasty food a restaurant would never serve because it would infuriate too many customers. My menu:

Starter: Artichoke leaves (as you describe them)

Main course: roast wood pigeon. (Very little meat on a pigeon, they're tough, and have lots of bones, but very tasty!)

Dessert: chocolate cake (I relented)

The major unintended side-effect was that very little wine was consumed: everyone was far too busy interacting with their food!

• #### Even if you have code to handle a message, you're allowed to call DefWindowProc, because you were doing that anyway after all

Just because you write `case WM_SOMETHING:` doesn't mean that you have to handle all possible parameters for the `WM_SOMETHING` message. You're still allowed to call the `DefWindowProc` function. After all, that's what you did when you didn't have a `case WM_SOMETHING:` statement in the first place.

```
switch (uMsg) {
case WM_CHAR:
  OnChar(...);
  return 0;

default:
  return DefWindowProc(...);
}
```

The above code fragment doesn't handle the `WM_SOMETHING` message at all. Suppose the `WM_SOMETHING` message uses the `wParam` parameter to specify what type of something occurred, and you only want to override the default processing in the case where `wParam` has the value of 4. What do you do with the other values?

```
switch (uMsg) {
case WM_CHAR:
  OnChar(...);
  return 0;

case WM_SOMETHING:
  if (wParam == 4) { DoSomething4(...); }
  else ... ????? ...
  return 0;

default:
  return DefWindowProc(...);
}
```

If the value is 4, then you do your special "something 4" processing, but what about all the other values? How do you handle them?

Well, think about it: How did you handle them before? The original code, before you added a `WM_SOMETHING` handler, was equivalent to this:

```
switch (uMsg) {
case WM_CHAR:
  OnChar(...);
  return 0;

case WM_SOMETHING:
  return DefWindowProc(...);

default:
  return DefWindowProc(...);
}
```

In the original code, since there was no explicit handler for the `WM_SOMETHING` message, control is transferred to the `default` case handler, which just calls the `DefWindowProc` function. If you really want to, you can expand the case out a bit more:

```
switch (uMsg) {
case WM_CHAR:
  OnChar(...);
  return 0;

case WM_SOMETHING:
  if (wParam == 4) return DefWindowProc(...);
  else return DefWindowProc(...);

default:
  return DefWindowProc(...);
}
```

Because if the `wParam` is 4, the original code just called `DefWindowProc`. And if the `wParam` was something other than 4, the original code still just called `DefWindowProc`.

Of course, I expanded the block in precisely this way so that it matches up with the case we started writing when we decided to handle the `WM_SOMETHING` message. Written out this way, it becomes obvious what to write for the question marks.

```
switch (uMsg) {
case WM_CHAR:
  OnChar(...);
  return 0;

case WM_SOMETHING:
  if (wParam == 4) { DoSomething4(...); }
  else return DefWindowProc(...);
  return 0;

default:
  return DefWindowProc(...);
}
```

Just because you have a `case WM_SOMETHING` statement doesn't mean you have to handle all the cases; you can still call `DefWindowProc` for the cases you don't want to handle.

Armed with this information, you can help commenter Norman Diamond handle the `VK_F10` key in his `WM_SYSKEYDOWN` message handler without having to "start handling a bunch of keys that really are system keys, that I didn't want to bother with."

• #### When you share an input queue, you have to wait your turn

Now that we've had a quick introduction to asynchronous input, let's look at some of the details. Remember, this is a peek under the hood at how the sausage is made. The algorithm described here is not part of the API contract, and it can change at any time, as long as it serves the overall goal of serializing input.

Let's start by looking at how things worked in the 16-bit world. Even though 16-bit Windows didn't use the term thread (since each application was single-threaded), I will still use the term since that makes the transition to the 32-bit world more straightforward.

As a point of terminology, I say that a message belongs to a thread if the message targets a window which belongs to that thread. Equivalently, the thread owns the message.

Now, the goal is to dispatch input messages in chronological order: Only the thread which owns the next input message can retrieve it. All other input must wait its turn to come to the front of the input queue.

In 16-bit Windows, all input gets added to a system-wide input queue, and the basic algorithm used by `Peek­Message` and `Get­Message` for retrieving messages from the input queue goes like this:

• Look at the first message in the input queue:
• If the message belongs to some other thread, then stop. Return no message to the caller.
• Otherwise, return the message we found.
• If there are no messages in the input queue, then there is no input. Return no message.

All the rest is tweaking the boundary cases.
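As a concrete model of the basic algorithm, here is a sketch in JavaScript (a hypothetical illustration, not window manager code), representing the input queue as an array of messages, each tagged with its owning thread:

```javascript
// Hypothetical model of the basic 16-bit retrieval algorithm.
// A message is an object with an owning thread, e.g. { thread: "A" }.
function retrieveInput(queue, thread) {
  if (queue.length == 0) return null;          // no input at all
  if (queue[0].thread != thread) return null;  // first message isn't ours: wait
  return queue.shift();                        // it's ours: retrieve it
}

var queue = [{ thread: "A" }, { thread: "B" }];
retrieveInput(queue, "B"); // null: the front message belongs to thread A
retrieveInput(queue, "A"); // retrieves message 1
retrieveInput(queue, "B"); // now retrieves message 2
```

Notice that only the thread owning the front message can make progress; everybody else is told "no input for you yet."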

For example, suppose there are two input messages in the input queue, message 1 for thread A, and message 2 for thread B. Thread A calls `Get­Message`, and the above algorithm returns message 1 to thread A, at which point the new "first message" is message 2, and if thread B calls `Get­Message`, it will get message 2.

The catch is that according to the above algorithm, thread B can be told about message 2 before thread A has finished processing message 1. You've introduced a race condition that breaks the rule that input is processed sequentially: Thread B can race ahead of thread A and start processing message 2 before thread A can even get started processing message 1.

To fix this, we add a new state to the input queue that says "Yea, I just gave somebody an input message, and I'm waiting for that thread to finish processing it before I will hand out another input message."

• (New!) If the input queue is waiting for another thread to finish processing an input message, then stop and return no message.
• (New!) If the input queue is waiting for the current thread to finish processing an input message, then mark the input queue as no longer waiting. (We finished processing it and have come back for more!)
• Look at the first message in the input queue:
• If the message belongs to some other thread, then stop. Return no message to the caller.
• Otherwise, (New!) mark the input queue as waiting for the current thread to finish processing an input message, and return the message we found.
• If there are no messages in the input queue, then there is no input. Return no message.

Okay, we fixed a race condition. But now there's a new problem: Suppose thread A retrieves an input message (and therefore puts the input queue into the "waiting for thread A" state), and then thread A sends a message to thread B, and thread B wants to display a dialog box or a menu. According to the rules as we have them so far, we would have a deadlock: That `Send­Message` call will not return to the first thread until the modal UI is complete, but the modal UI cannot complete because the input queue is waiting for thread A to finish processing the input message.

The fix for this special case is that if a thread asks for an input message, and it is handling an inbound `Send­Message`, then the input queue declares that any in-progress input message has finished processing. One way of interpreting this rule is to say, "If a thread sends a message to another thread, then that implicitly completes input processing."

• (New!) If the input queue is waiting for another thread to finish processing an input message, and the current thread is processing an inbound sent message, then mark the input queue as no longer waiting.
• If the input queue is waiting for another thread to finish processing an input message, then stop and return no message.
• If the input queue is waiting for the current thread to finish processing an input message, then mark the input queue as no longer waiting.
• Look at the first message in the input queue:
• If the message belongs to some other thread, then stop. Return no message to the caller.
• Otherwise, mark the input queue as waiting for the current thread to finish processing an input message, and return the message we found.
• If there are no messages in the input queue, then there is no input. Return no message.

Recall that you are allowed to pass a message range filter and a window handle filter to the `Peek­Message` and `Get­Message` functions. The above algorithm was developed on the assumption that there was no message retrieval filter. First, let's add the message range filter to the algorithm:

• If the input queue is waiting for another thread to finish processing an input message, and the current thread is processing an inbound sent message, then mark the input queue as no longer waiting.
• If the input queue is waiting for another thread to finish processing an input message, then stop and return no message.
• If the input queue is waiting for the current thread to finish processing an input message, then mark the input queue as no longer waiting.
• Look at the first message in the input queue which satisfies the message range filter (New!):
• If the message belongs to some other thread, then stop. Return no message to the caller.
• Otherwise, mark the input queue as waiting for the current thread to finish processing an input message, and return the message we found.
• If there are no messages in the input queue which satisfy the message range filter (New!), then there is no input. Return no message.

That wasn't so hard. If you pass a message range filter, then we care only about messages that pass the filter in determining which one is "at the head of the input queue". Without this additional rule, you wouldn't be able to "peek into the future" to see if, for example, there is a mouse message in the input queue sitting behind the keyboard message that is at the front of the input queue.

Adding the window handle filter is a little trickier, because we still want to let input be processed in order (among messages which satisfy the message range filter).

• If the input queue is waiting for another thread to finish processing an input message, and the current thread is processing an inbound sent message, then mark the input queue as no longer waiting.
• If the input queue is waiting for another thread to finish processing an input message, then stop and return no message.
• If the input queue is waiting for the current thread to finish processing an input message, then mark the input queue as no longer waiting.
• Look at the first message in the input queue which satisfies the message range filter and (New!) either belongs to some other thread or belongs to the current thread and matches the window handle filter.
• If the message belongs to some other thread, then stop. Return no message to the caller.
• Otherwise, mark the input queue as waiting for the current thread to finish processing an input message, and return the message we found.
• If no such message exists, then there is no input. Return no message.

In other words, the window handle is used to control which message is ultimately retrieved, but it does not let you deny another thread access to input which matches the message range filter.
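Putting all the pieces together, the final version of the algorithm can be modeled with a little JavaScript. This is a hypothetical simulation for illustration only; the names `InputQueue` and `getMessage`, the `inSendMessage` flag, and the message fields are inventions, not the window manager's actual implementation.

```javascript
function InputQueue() {
  this.messages = [];     // pending input messages, oldest first
  this.waitingFor = null; // thread whose input processing we're waiting on, if any
}

// typeFilter models the message range filter; hwndFilter the window handle filter.
// inSendMessage: is the calling thread processing an inbound sent message?
InputQueue.prototype.getMessage = function(thread, inSendMessage, typeFilter, hwndFilter) {
  // An inbound sent message implicitly completes input processing.
  if (this.waitingFor != null && this.waitingFor != thread && inSendMessage) {
    this.waitingFor = null;
  }
  // Still waiting for another thread: no message for you.
  if (this.waitingFor != null && this.waitingFor != thread) return null;
  // The current thread came back for more: its input message is complete.
  if (this.waitingFor == thread) this.waitingFor = null;
  // Find the first message satisfying the range filter that either belongs
  // to some other thread or belongs to us and matches the handle filter.
  for (var i = 0; i < this.messages.length; i++) {
    var m = this.messages[i];
    if (typeFilter != null && m.type != typeFilter) continue;
    if (m.thread != thread) return null; // someone else's turn: stop
    if (hwndFilter != null && m.hwnd != hwndFilter) continue;
    this.messages.splice(i, 1);
    this.waitingFor = thread; // wait for this thread to finish processing
    return m;
  }
  return null; // no input
};

// The race condition from earlier is now prevented:
var q = new InputQueue();
q.messages.push({ thread: "A", type: "key", hwnd: 1 });
q.messages.push({ thread: "B", type: "key", hwnd: 2 });
q.getMessage("A", false, null, null); // A retrieves message 1; queue waits on A
q.getMessage("B", false, null, null); // null: B must wait for A to finish
q.getMessage("A", false, null, null); // A comes back: wait cleared; message 2 isn't A's
q.getMessage("B", false, null, null); // B finally retrieves message 2
```

The trace at the bottom walks through the two-message scenario from the start of the article: thread B gets nothing until thread A comes back for more input, which is exactly the serialization guarantee the wait state exists to provide.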

Whew, that's how 16-bit Windows dispatched input.

How do we port this to 32-bit Windows and asynchronous input?

First, we give each thread group its own input queue, rather than having a single system-wide input queue.

Second, whenever the above algorithm says the input queue, change it to say the calling thread's input queue.

And that's it!

When an input queue serves only a single thread, the wait states have no observable effect: the only thread they could block is the very thread the queue is waiting for, and that thread clears the wait state as soon as it comes back for more. The algorithm therefore effectively reduces to:

• Return the first message in the input queue that satisfies both the message range filter and the window handle filter, if any.

It's only if you start attaching threads to each other that you have multiple threads associated with a single input queue, and then all these extra rules start to have an effect.

Next time, we'll explore some of the consequences of synchronous input by writing some poorly-behaving code and observing how the system responds. Along the way, we'll discover an interesting paradox introduced by the above algorithm.

Exercise: The alternative interpretation I gave above does not match the description of the rule, because the rule allows any thread processing an inbound `Send­Message` to clear the input queue's wait state. Why does the actual rule permit any thread to clear the wait state, instead of first checking that the inbound `Send­Message` is coming from the thread that the input queue is waiting for?