# Matthew van Eerde's web log

• #### xkcd finds East and West confusing - what about North and South?

Stealing from XKCD again:

I have a similar problem with North and South.  On the globe, there's a clearly marked "North Pole" and a clearly marked "South Pole."

Fine.

Magnets also have North and South poles.  These are typically labeled N and S respectively.  Fine.

But if you consider the Earth as a large magnet (which it is), then you have to stick the N where the penguins live (Antarctica-ish) and the S where the polar bears live (Canada-ish...)

That bugs me.

• #### Bad Perl: Russian Peasant multiplication algorithm

I found this programming contest interesting: here's what I've got.

perl -e "($a,$b)=@ARGV;map{$c+=$_*$b}grep{$a&$_}map{1<<$_}(0..log($a)/log 2);print$c" 7 19

I'm calling this a one-liner because the part between the quotes is less than 80 characters (75, to be exact.)  The full command line goes over :-(

Requires the inputs to be positive whole numbers.

Perfect example of Bad Perl.  Exercise: rewrite in Good Perl.

EDIT: got the whole thing down to 80 characters (with single-digit multiplicands.)

perl -e "($a,$b)=@ARGV;map{$c+=$b<<$_ if$a>>$_&1}(0..log($a)/log 2);print$c" 8 7
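For contrast, here is the same Russian Peasant idea spelled out in C++ (a sketch with my own names, not contest code): halve one factor, double the other, and accumulate the doubled factor whenever the halved one is odd.

```cpp
#include <cassert>
#include <cstdint>

// Russian Peasant multiplication: repeatedly halve a and double b,
// adding b to the product whenever the halved factor is odd.
uint64_t RussianPeasant(uint64_t a, uint64_t b)
{
    uint64_t product = 0;
    while (a > 0) {
        if (a & 1) {
            product += b; // this power of two contributes
        }
        a >>= 1; // halve
        b <<= 1; // double
    }
    return product;
}
```

This is the same "grep the set bits of $a, shift $b accordingly" trick, minus the golf.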

• #### Bad Perl: Josephus problem

Another programming contest asks to solve the Josephus problem.

Bad Perl solution (83 characters... so close...)

>perl -e"@_=(1..$ARGV[0]);++$c%$ARGV[1]?$i++:splice@_,$i%=@_,1while$#_;print@_" 40 3
28

EDIT: got it down to 80.

>perl -e"@_=(1..shift);++$c%$ARGV[0]?$i++:splice@_,$i%=@_,1while$#_;print@_" 40 3
28

EDIT2: 78 dropping the parentheses.

>perl -e"@_=1..shift;++$c%$ARGV[0]?$i++:splice@_,$i%=@_,1while$#_;print@_" 40 3
28

EDIT3: 66, shamelessly cannibalizing others' ideas from the contest (though I refuse to use "say")

>perl -e"$k=pop;@_=1..pop;@_=grep{++$i%$k}@_ while$#_;print@_" 40 3
28
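Here's the same elimination loop the one-liners implement, spelled out as a C++ sketch (JosephusSurvivor is my name, not the contest's): n people stand in a circle, every k-th person is removed, and the survivor's 1-based number comes back.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// n people in a circle; count around it, removing every k-th person,
// until one remains.  Returns the survivor's 1-based number.
int JosephusSurvivor(int n, int k)
{
    std::vector<int> people;
    for (int person = 1; person <= n; person++) {
        people.push_back(person);
    }
    size_t i = 0;   // current position in the circle
    int count = 0;  // running count, like the Perl $c
    while (people.size() > 1) {
        if (++count % k == 0) {
            people.erase(people.begin() + i); // remove the k-th person
            i %= people.size();               // wrap if we removed the last slot
        } else {
            i = (i + 1) % people.size();      // step to the next person
        }
    }
    return people[0];
}
```

With the post's inputs (40 people, every 3rd removed) this returns 28, matching the one-liners' output.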

• #### Spock's chess games from Star Trek Ishmael

In his post "A chess problem begging for a solution", Michael Kaplan quotes Barbara Hambly's Star Trek novel Ishmael.  In the quoted scene, Spock (AKA Ishmael) plays a couple of chess games against a stranger - rather unusual chess games.  The problem alluded to is to determine the moves of the games given certain information.

Let's take the second game first.  There are two effective possibilities that meet the "Reverse Fool's mate" and "three moves" criteria:

Reverse Fool's mate, even material: $200 in 2½ moves

Ishmael (White) vs. Stranger (Black): 1. e3 f6 2. a3 g5 3. Qh5#

... and...

Reverse Fool's mate plus a pawn: $220 in 2½ moves

Ishmael (White) vs. Stranger (Black): 1. e4 f5 2. exf5 g5 3. Qh5#

The task for the first game, then, is to win either $400 or $380 in seven moves.  Assuming the game ends in mate (this is reasonable) gives us a difference of either $200 or $180.  The first move cannot be a capture, so we really only have six moves to capture $200 worth of material - going after the queen is obvious, but the rooks are quite well tucked away, and it is hard to go after them and simultaneously set up the ending mate.

I believe that the solution that Barbara Hambly had in mind is the following variation of the "other" mate that every chess student learns (Scholar's Mate).  This particular setup is gated by White's need to move his Bishop out of the way, and this nicely satisfies the common convention that players alternate colors in successive games.  Naturally Spock, being a gentleman, would let the stranger take White first.

Mate, a queen, and four pawns: $380 in 7 moves

Stranger (White) vs. Ishmael (Black): 1. c4 e6 2. c5 Bxc5 3. d4 Bxd4 4. e3 Bxe3 5. Qf3 Qf6 6. Bc4 Qxf3 7. Kf1 Qxf2#

However, from a "chess problem" point of view, there's a cook.  It is, in fact, possible to get mate plus a queen plus four pawns in a mere five and a half moves, rather than the seven full moves above:

Mate, a queen, and four pawns: $380 in 5½ moves

Ishmael (White) vs. Stranger (Black): 1. e3 h5 2. Qxh5 d5 3. Qxd5 Bf5 4. Qxb7 Bh7 5. Qxc7 Qc8 6. Qxc8#

Even worse, there is a way to get $400 in a mere four and a half moves.

Mate, a queen, a bishop, and a knight: $400 in 4½ moves

Ishmael (White) vs. Stranger (Black): 1. d4 Nh6 2. Bxh6 b6 3. Qd3 Ba6 4. Qxa6 Qc8 5. Qxc8#

The problem, as it stands, is therefore underdetermined... Spock and the stranger had a full two and a half moves to play around with in the first game, either to exchange material or perhaps to allow the stranger to pick up a free pawn (and return it in the second game.)

• #### Daylight Saving Time and Benjamin Franklin

Daylight Saving Time is a thorn in my side.

Politician: What time is it?
Scientist: What time would you like it to be?

It is my firm belief that

1. Daylight Saving Time doesn't do any real good.
2. Daylight Saving Time does real harm.
3. Politicians love Daylight Saving Time because it's easy and at least it looks good.

One of the points that my congresswoman brought up is that Daylight Saving Time was originally proposed by a founding father of the US, and that therefore it must be a good idea.  (Exercise: what logical fallacy is this?)

Today I decided to actually look up the original proposal and found it quite enlightening.

Benjamin Franklin's 1784 letter to the Journal de Paris (English translation)

I believe all who have common sense, as soon as they have learnt... that it is daylight when the sun rises, will contrive to rise with him; and, to compel the rest, I would propose the following regulations...

let a tax be laid... on every window that is provided with shutters...

[let] no family be permitted to be supplied with more than one pound of candles per week...

Let guards also be posted to stop all the coaches, &c. that would pass the streets after sunset...

Every morning, as soon as the sun rises, let all the bells in every church be set ringing; and if that is not sufficient, let cannon be fired in every street, to wake the sluggards effectually, and make them open their eyes to see their true interest.

Yes, Benjamin Franklin was the first to propose a law encouraging people to get up early.

But he was joking.

• #### The evolution of HRESULT_FROM_WIN32

The code is more what you'd call "guidelines" than actual rules. -- Pirates of the Caribbean

Macros are wonderful things.  I love macros.  They're exciting and dangerous.  Every time I write a macro I salivate at the power I am wielding.  It's like using a blowtorch.

It is perhaps no coincidence that many coding standards put big warning signs around macroland.  "Consider carefully before using macros", some say. "When considering using a macro, see if you can use an inline function instead," some say.  "Macros are frequent sources of bugs," some say.  "C++ has evolved to the point where, with inline functions and templates, macros are no longer necessary".

Heresy.

All absolutely true, and yet heresy nonetheless.  It is true that there are many bugs out there due to misuse of macros.  And yet only a few simple tips are necessary to prevent that misuse.  (The problem is that the tips don't always get followed.)

She generally gave herself very good advice, (though she very seldom followed it) -- Alice in Wonderland

1. By convention, name macros in ALL_CAPS.  This makes it easy to recognize them.
Without this rule it is easy to think that the following macro is a function...
#define SafeDelete(p) { delete (p); (p) = NULL; } void(0)
... and in fact it probably should be...
template<typename T> void SafeDelete(T *&p) { delete p; p = NULL; }
2. Parenthesize the definition of the macro.
This prevents precedence bugs.
#define SQUARE(x) (x) * (x) // should be ((x) * (x))
int four = 16 / SQUARE(2); // BUG: four == 16
3. Parenthesize uses of the arguments of the macro unless this contravenes the purpose of the macro.
This prevents precedence bugs in the other direction.
#define SQUARE(x) (x * x) // should be ((x) * (x))
int four = SQUARE(1 + 1); // BUG: four == 3
4. Fill out optional clauses.
#define ASSERT(x) if (!(x)) { exit(__LINE__); }
// should be if (!(x)) { exit(__LINE__); } else (void)0
if (IsTuesday())
ASSERT(m_pGarbage);
else { // BUG: will only vacuum and check mail on tuesday (and only if m_pGarbage is set)
Vacuum();
CheckMail();
}
5. UNDER NO CIRCUMSTANCES use an expression that has side effects as an argument when calling a macro.
#define SafeDelete(p) { delete (p); (p) = NULL; } void(0) // reasonable, except for ALL_CAPS rule
// p points to the first of a null-terminated array of pointers that need to be deleted
while (p) SafeDelete(p++); // BUG: only odd-numbered pointers deleted, and AV if odd number of elements

Exceptions I happen to agree with: VERIFY(...) cries out to be an exception to this rule since it commonly textifies the condition, and a popup that says "CloseHandle(m_file) failed" is infinitely more meaningful than "bOK failed".  As a corollary, VERIFY(...) macros should only evaluate their arguments once.
Exceptions I've given up trying to convince other people are bad: SUCCEEDED(...), FAILED(...), and HRESULT_FROM_WIN32(GetLastError()) are common exceptions too.
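A sketch of what such a VERIFY might look like (hypothetical - real implementations vary; the do/while(0) wrapper keeps it statement-safe): it evaluates its condition exactly once, and uses the stringizing operator (#) to report the condition's source text.

```cpp
#include <cassert>
#include <string>

// Hypothetical VERIFY sketch.  A real one would log or pop up a dialog;
// this one just records the failure message.
static std::string g_lastFailure;

#define VERIFY(expr)                          \
    do {                                      \
        if (!(expr)) {                        \
            g_lastFailure = #expr " failed";  \
        }                                     \
    } while (0)
```

VERIFY(CloseHandle(m_file)) would then report "CloseHandle(m_file) failed" rather than "bOK failed".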

Let's talk about the "no expressions with side effects" rule a little more.  It is OK to do things like

if (SUCCEEDED(CoInitialize(...))) { ... }

if (FAILED(CoCreateInstance(...))) { ... }

return HRESULT_FROM_WIN32(GetLastError());

... but for different reasons.  SUCCEEDED(...) and FAILED(...) only evaluate their arguments once, as can be verified from their definitions in winerror.h.  I'm using the Vista RTM version of winerror.h:

//
// Generic test for success on any status value (non-negative numbers
// indicate success).
//

#define SUCCEEDED(hr) (((HRESULT)(hr)) >= 0)

//
// and the inverse
//

#define FAILED(hr) (((HRESULT)(hr)) < 0)

Note the heavy (and correct) use of parentheses.  I still personally don't think it's a good idea to rely on this, since macros have a habit of changing... but I won't ding anyone on a code review for it.

HRESULT_FROM_WIN32(GetLastError()) is OK for a totally different reason.  HRESULT_FROM_WIN32(...), classically, does evaluate its argument multiple times... this time I'm using the Windows 2000 version of winerror.h:

#define HRESULT_FROM_WIN32(x)   (x ? ((HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000)) : 0 )

... so this looks like a bug... until you realize that GetLastError() is idempotent.  If you call it a bunch of times in a row on the same thread, it will (per its contract) always return the same value.

Note that the macro is not completely parenthesized!  The first x (the one before the ?) is missing its protective parentheses.  This was fixed in a Windows 2000 service pack.  For example, here's the Win2K SP3 version:

#define HRESULT_FROM_WIN32(x) ((HRESULT)(x) <= 0 ? ((HRESULT)(x)) : ((HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000)))

Much better.

I'm still personally leery of using side effects in HRESULT_FROM_WIN32 though.  Why?  Because it sets a bad precedent.  It's easier to have a simple rule (no side effects in a macro call!) than to remember for an infinitude of macros and function calls which macros evaluate the arguments multiple times and which function calls really have side effects.  You also get legitimate bugs like this.
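To make the double-evaluation hazard concrete, here is a hypothetical macro in the classic HRESULT_FROM_WIN32 shape, fed an "error source" that - unlike GetLastError() - changes on every call (all names here are mine):

```cpp
#include <cassert>

// Classic shape: the argument x appears, and is evaluated, more than once.
#define MY_HR_FROM_ERR(x) ((x) ? (((x) & 0x0000FFFF) | 0x80070000UL) : 0)

static int g_calls = 0;

// Unlike GetLastError(), this error source has a side effect.
unsigned long NextError()
{
    ++g_calls;
    return (unsigned long)g_calls;
}
```

MY_HR_FROM_ERR(NextError()) calls NextError() twice - once for the test and once for the encoding - so the two uses can see different values.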

Macros, macros, macros... you deviate just a little bit from the straight-and-narrow and what happens?   You get blown out of the water.  Wonderful things, macros.

Nasty little bugs arising from harmless-looking violations of the safety tips are the cause of the dirty secret I am about to reveal...

As of Vista RTM...

HRESULT_FROM_WIN32 is NO LONGER a macro!

This is one of those annoying little facts I try so hard to forget because they complicate life unnecessarily.  From my point of view, there is no bad consequence of treating HRESULT_FROM_WIN32 as the macro it looks like (and was... and who knows, may be again.)  The inline function it has become is uniformly safer.

This has apparently been a long time in coming.  Here are some excerpts of winerror.h from various releases of Windows:

Windows 2000 RTM:

#define HRESULT_FROM_WIN32(x)   (x ? ((HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000)) : 0 )

Windows XP RTM:

// __HRESULT_FROM_WIN32 will always be a macro.
// The goal will be to enable INLINE_HRESULT_FROM_WIN32 all the time,
// but there's too much code to change to do that at this time.

#define __HRESULT_FROM_WIN32(x) ((HRESULT)(x) <= 0 ? ((HRESULT)(x)) : ((HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000)))

#ifdef INLINE_HRESULT_FROM_WIN32
#ifndef _HRESULT_DEFINED
#define _HRESULT_DEFINED
typedef long HRESULT;
#endif
#ifndef __midl
__inline HRESULT HRESULT_FROM_WIN32(long x) { return x < 0 ? (HRESULT)x : (HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000);}
#else
#define HRESULT_FROM_WIN32(x) __HRESULT_FROM_WIN32(x)
#endif
#else
#define HRESULT_FROM_WIN32(x) __HRESULT_FROM_WIN32(x)
#endif

Whew.  The net result is that HRESULT_FROM_WIN32 is a macro.  Oh, unless you decide to be avant-garde and #define INLINE_HRESULT_FROM_WIN32 before you #include <winerror.h>.  Which probably very few people did, except for build engineers running experiments.  So this is an "opt-in" inline function.

Windows Vista RTM:

//
// HRESULT_FROM_WIN32(x) used to be a macro, however we now run it as an inline function
// to prevent double evaluation of 'x'. If you still need the macro, you can use __HRESULT_FROM_WIN32(x)
//
#define __HRESULT_FROM_WIN32(x) ((HRESULT)(x) <= 0 ? ((HRESULT)(x)) : ((HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000)))

#if !defined(_HRESULT_DEFINED) && !defined(__midl)
#define _HRESULT_DEFINED
typedef __success(return >= 0) long HRESULT;
#endif

#ifndef __midl
FORCEINLINE HRESULT HRESULT_FROM_WIN32(unsigned long x) { return (HRESULT)(x) <= 0 ? (HRESULT)(x) : (HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000);}
#else
#define HRESULT_FROM_WIN32(x) __HRESULT_FROM_WIN32(x)
#endif

The experiments appear to have been completed.   Apparently no imaginable consequence of changing to an inline function was worse than the known consequence of bugs.

People who want the macro can still use __HRESULT_FROM_WIN32.

Why would you want to?  Hmmm... maybe you're building a switch statement.  Something like:

#define CASE_HRESULT(x) case x: return #x
#define CASE_ERR(x) case HRESULT_FROM_WIN32(x): return #x // need __HRESULT_FROM_WIN32 here?

LPCSTR FriendlyNameOf(HRESULT hr) {
    switch (hr) {
        CASE_HRESULT(S_OK);
        CASE_HRESULT(S_FALSE);
        ... many more...
        CASE_ERR(ERROR_SUCCESS);
        CASE_ERR(ERROR_INVALID_FUNCTION);
        ...
    }
    return "Unrecognized";
}

I haven't tried it out, but I suspect the compiler will reject the attempt to squeeze the result of an inline function into the value of a case, but will accept a complicated expression involving a bunch of constants (i.e., the macro will work.)
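For what it's worth, the suspicion is right: a call to a (non-constexpr) inline function is not a constant expression, so it can't be a case label, while a macro built from constants can. A compilable sketch with hypothetical names (ERR_TO_HR stands in for __HRESULT_FROM_WIN32):

```cpp
#include <cassert>
#include <cstring>

// Hypothetical stand-in for __HRESULT_FROM_WIN32: built only from constants
// and its argument, so with a constant argument the whole expansion is a
// constant expression - legal as a case label.
#define ERR_TO_HR(x) ((x) == 0 ? 0UL : (((unsigned long)(x) & 0x0000FFFF) | 0x80070000UL))

const char *FriendlyNameOf(unsigned long hr)
{
    switch (hr) {
    case ERR_TO_HR(0): return "ERROR_SUCCESS";          // 0
    case ERR_TO_HR(1): return "ERROR_INVALID_FUNCTION"; // 0x80070001
    case ERR_TO_HR(2): return "ERROR_FILE_NOT_FOUND";   // 0x80070002
    }
    return "Unrecognized";
}
```

Swap ERR_TO_HR for a plain inline function and the case labels stop compiling.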

• #### Generating a sine wave

One common task on the Windows Sound test team is taking a known-good sine wave, passing it through some chunk of code, and looking at what comes out.  Is it a sine wave?  Is it the same frequency that it should be?  Etc., etc.

So the quality of the sine wave generator is important.

There's a commonly available sine wave generator: double sin(double).   It's fairly simple to use this to generate a buffer, if you keep a few things in mind.  But there are pitfalls too.

In full generality, a sine wave can be completely described by four measurements:

1. Amplitude: half the difference between the maximum and minimum samples.
There's a mathematical reason to use half the difference instead of the whole difference.  We'll go into this later.
Yes, there were some bugs due to this "use half the difference" convention.
For our purposes, we'll consider "full scale" to be from -1.0 to 1.0; the amplitude of a full-scale sine wave is 1.0.  The amplitude of a -3 dB FS sine wave is 10^(-3/20), or about 0.707945...
2. Frequency or Period.  Frequency is the number of complete cycles in a fixed unit of time (usually one second.)  Period is the time length of a single complete cycle.  Frequency * Period = 1, unless one of them is zero (in which case the other is infinite.)
3. dc.  dc is the average value over a complete cycle.
4. Phase.  This tells where in the cycle the first sample is.  If you measure the angle in radians... which is usual... then the phase is a number between 0 and 2π.

These can all be linked together by a single formula which tells you the mathematical value of a sine wave with these features at any given time.  Let:

a = amplitude (0.0 to 1.0)
f = frequency (in cycles/sec)
c = dc (0.0 to 1.0; to avoid clipping, 0.0 to (1.0 - a))
ϕ = starting phase (0.0 to 2π)
t = the time of interest in seconds

Then

w(t) = a sin(2πft + ϕ) + c

Note in passing that w(0) = a sin((2πf)0 + ϕ) + c = a sin(ϕ) + c is independent of f.

So much for mathematics.  When you want to realize a sine wave in a PCM wave format, you have to discretize the time dimension and the sample dimension.  To discretize the time dimension you need a sample rate (usually 44100 samples per second) and to discretize the sample dimension you need to specify a sample format (usually 16-bit int).

Let's talk about discretizing the time dimension first.  Let s = the sample rate, and i be the sample of interest.  Now:

w(i) = a sin(2πfi / s + ϕ) + c

As a design simplification, we collect constants, and eliminate ϕ and c...

w(i) = a sin((2πf / s) i)

And we have an implementable function

// given a buffer, fills it with sine wave data
HRESULT FillBufferWithSine(
    LPCWAVEFORMATEX pWfx, // includes sample rate
    double lfAmplitude,
    double lfFrequency,
    PBYTE pBuffer,
    UINT32 cbBytes
);

This would start at i = 0, loop up through i = cbBytes / pWfx->nBlockAlign - 1, and set sample values for each i.
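Here's a minimal sketch of that loop, assuming 16-bit mono samples with full scale at ±32767 and skipping the WAVEFORMATEX plumbing (the signature below is simplified, not the real test-team code):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Fill a buffer of 16-bit mono samples with a sine wave.
// amplitude is 0.0 to 1.0 of full scale; no phase, no dc.
void FillBufferWithSine(
    double sampleRate,   // samples per second, e.g. 44100.0
    double amplitude,    // 0.0 to 1.0
    double frequency,    // cycles per second
    int16_t *samples,
    size_t count)
{
    const double radiansPerSample =
        2.0 * 3.141592653589793 * frequency / sampleRate;
    for (size_t i = 0; i < count; i++) {
        // w(i) = a sin((2 pi f / s) i), scaled to the 16-bit range
        samples[i] = (int16_t)(amplitude * sin(radiansPerSample * i) * 32767.0);
    }
}
```

A quarter-sample-rate tone (f = s/4) hits exactly 0, +full scale, 0, -full scale - handy for eyeballing the output.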

It is often necessary to get continuous sine wave data in chunks... say, 1 KB at a time.  It is important that the phase of the sine wave match at chunk boundaries.  This suggests another parameter:

// given a buffer, fills it with sine wave data
HRESULT FillBufferWithSine(
    LPCWAVEFORMATEX pWfx, // includes sample rate
    double lfAmplitude,
    double lfFrequency,
    UINT32 ifFrameOffset,
    PBYTE pBuffer,
    UINT32 cbBytes
);

... where ifFrameOffset is the count of frames to skip.  Now a client can fill a 1 KB buffer with 256 frames of 16-bit int stereo sine wave, pass it off to someone else, fill another 1 KB buffer starting at frame offset 256 with 16-bit int stereo sine wave, etc., and the effect is an almost infinite continuous sine wave.  (Oh, and as a side effect, we get a kinda-sorta implementation of ϕ.)

Almost.

Why almost?

Well, because I used UINT32 as the frame offset.  Suppose you, as a client, use a WAVEFORMATEX with a nice high sample rate like 384 kHz.  A UINT32 wraps at 2^32.   At 384,000 samples per second, you'll burn through 2^32 samples in (2^32 / 384000) seconds... that's, um, 11184.8106666... seconds... three hours, six minutes, twenty-five seconds.  This is uncomfortably close to reasonable.

At that point, one of two things happens.  If you were clever in your chosen frequency, you will wrap back to a point in the sine wave where the phase is the same as what you want anyway... and there will be no observable artifact in the generated sine wave.  Or, if you were unlucky, you will wrap back to a point in the sine wave where the phase is different - and there will be a discontinuity.  This will probably result in an audible "pop" artifact in the generated tone.

Our test tone generator had this problem.  We knew about it - we didn't care.

But sometimes little things nag at me and I have to fix them or I can't sleep at night.  It's a fixation... a terrible disease... but there it is.

So I fixed it - almost.  Now we use a UINT64 for the sample offset.  That won't wrap until 2^64 samples.  Even at 384 kHz, that won't wrap until (2^64 / 384000) seconds - that's, um,  48038396025285.29 seconds... 1,521,800-odd years.  I'll let a tester from that era file a bug then. :-)

Almost.

Why almost?

Well, it turns out that fixing that bug uncovered another bug.  We know about it - we don't care.  (But sometimes little things nag at me...)

Exercise... what is the bug?

• #### Generating a sine wave with phase

I alluded to a bug in my last post but didn't explicitly say what it was.  Hint: it's all about phase.

So let's talk about phase for a minute.  Sine waves are periodic... although they are frequently plotted over very large intervals, their natural domain is a single period.

[image: one period of a sine wave]

Exercise - The original version, when printed, makes a nice armband.

Advanced Exercise - generate a one-and-a-half period sine wave, print it on both sides of a piece of paper, and make a Möbius strip.

Well, that's a step, but it's not really getting at the nature of the sine wave.  Yes, we notice that the beginning value (0) is the same as the ending value... and the beginning slope (+1) is the same as the ending slope... and there is, perhaps, a certain smoothness beyond even slope (in fact, all derivatives agree at the crossover point...)

If you see a fact, try to see it as intuitively as possible.

Do the first exercise above - print out the original image, cut it out along the lines, and make a cylinder by connecting the short ends together.  Rotate the cylinder through various orientations, paying particular attention to the location of the sine wave at all times.

What you get is hard to render in a two-dimensional blog post, but here's a stab at it (Matlab script below:)

[image: three-dimensional rendering - the sine wave wrapped around a cylinder]

Notice that this representation of the sine wave is an ellipse.

Exercise - Prove that the red line in the above graph is planar (Hint: you can type the equation of the plane in three keystrokes.)

Advanced Exercise - Prove that it's an ellipse.

Advanced Exercise - Make a Keplerian observation. (Hint: note the angles at which the red line crosses the dotted black lines.)

So what does this have to do with phase?

Everything.

There's another hint in the documentation of double sin(double), which I also linked in my last post:

sin returns the sine of x [phase]. If x [phase] is greater than or equal to 2^63, or less than or equal to -2^63, a loss of significance in the result occurs. (my emphasis)

A final hint: again from my last post, "as a side effect, we get a kinda-sorta implementation of ϕ [phase]."

When you find yourself with a kinda-sorta implementation of something, that is occasionally a hint from above that your life will be easier if you do some self-examination... perhaps a full-blown implementation of that thing will be (a) easier and (b) more what you actually want.

No more hints.  If you want to figure it out for yourself, stop reading now.

OK, here's the scoop.  The thing you feed to sin(...) is a radian measurement.  This is naturally between 0 and 2π (or, perhaps, -π to π if you prefer.)  You can feed it things that are out of this range... say, sin(100.0)... but this is a little out of the natural and there are consequences.

double-type numbers, and floating point numbers in general, can represent very small numbers to a very high degree of precision... and they can represent very large numbers with the same relative degree of precision.  See IEEE754 for gory details.

An illustration is in order.  Richard Feynman, celebrity physicist, tells a story of his youth.  He claimed to be able to solve, in one minute, to within 10% accuracy, any problem that can be stated in ten seconds.

He did pretty well until someone came up with "tan(10^100)".

If you feed bigger and bigger numbers to sin(...), eventually it starts giving you sloppy answers.  It has to.  It's not getting enough information to give you a good answer.

I was able to test the tone generation function and measure the signal-to-noise ratio of the tones it was producing after varying lengths of sample time.  It started off very well, but after very long times (days, weeks, years) the SNR would get slowly compromised until, after millions of years, the signal disappeared.

That is not the bug.

But it is a serious clue to the bug.

If the worst-case scenario for an implementation is that the signal will depreciate over a million years, ship it.

Take your cylinder that you made above.  Place it in front of you so that the taped-together part is directly in front of your nose, and the part to the right has the red line going up.  This is a sine wave.

Rotate it clockwise 90 degrees, so the part in front of your nose has the red line brushing the top.  Now it is a cosine wave.

Rotate it counter-clockwise 60 degrees, so the part in front of your nose has the red line crossing the dotted line.  Now it is a sine wave with a known phase (30 degrees.)

In fact, rotate it any angle (say, ϕ) and it's a sine wave with known phase ϕ.

We can't do that with our tone generator.

That is the bug.  We're implementing a tweaked version of phase which has a forgivable side effect, but which is not a full implementation of phase.

Why do we care about implementing phase?

Well, for example, one problem I sometimes have when testing audio fidelity is I connect the stereo cables to the Audio Precision equipment the wrong way (left-to-right and right-to-left.)  If I could generate the test tones out of phase by 90 degrees, with the left channel in front of the right channel, I could take a phase measurement and log a "you have the cables backwards" error.  I send different frequencies over the cables, so the ifFrameOffset parameter is of little use here.

So what's the improved implementation?

// given a buffer, fills it with sine wave data
HRESULT FillBufferWithSine(
    LPCWAVEFORMATEX pWfx, // includes sample rate
    double lfAmplitude,
    double lfFrequency,
    UINT64 ifFrameOffset,
    PBYTE pBuffer,
    UINT32 cbBytes
);

No... instead, do

// given a buffer, fills it with sine wave data
HRESULT FillBufferWithSine(
    LPCWAVEFORMATEX pWfx, // includes sample rate
    double lfAmplitude,
    double lfFrequency,

    // in/out
    //     On input, initial phase (0 to 2pi).
    //     On output, phase of next sample.
    double *plfPhase,

    PBYTE pBuffer, // out - byte count is cbBytes
    UINT32 cbBytes
);

Now, if you want to generate a lot of contiguous sine tone buffers, you do something like this:

double lfPhase = 0.0; // start at 0... or somewhere else
while (...) {
    // ...
    // FillBufferWithSine takes care of updating lfPhase as necessary
    hr = FillBufferWithSine(pWfx, lfAmp, lfFreq, &lfPhase, pBuffer, cbBytes);
    // ...
}
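Here is a minimal sketch of the phase-carrying version, again assuming 16-bit mono and simplified names (not the real test-team code).  The key point: wrap the phase back into [0, 2π) after every sample, so the argument to sin() stays small no matter how long the tone runs:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Fill 16-bit mono samples with a sine wave, carrying phase across calls.
void FillBufferWithSine(
    double sampleRate,   // samples per second
    double amplitude,    // 0.0 to 1.0
    double frequency,    // cycles per second
    double *plfPhase,    // in: phase of first sample; out: phase of next sample
    int16_t *samples,
    size_t count)
{
    const double twoPi = 2.0 * 3.141592653589793;
    const double radiansPerSample = twoPi * frequency / sampleRate;
    double phase = *plfPhase;
    for (size_t i = 0; i < count; i++) {
        samples[i] = (int16_t)(amplitude * sin(phase) * 32767.0);
        phase += radiansPerSample;
        if (phase >= twoPi) {
            phase -= twoPi; // keep the argument to sin() in [0, 2*pi)
        }
    }
    *plfPhase = phase; // hand the accumulated phase back to the caller
}
```

Successive calls pick up exactly where the last one left off, and the phase variable never grows without bound.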

As a side effect of this new phase feature, the quality degradation non-issue goes away.  So the generated tones will sound good forever... which is kind of neat... but not the motivation for this fix.

Why did this happen?

Because there are many ways to measure time.  You can count samples... you can count milliseconds... you can count hundred-nanosecond intervals... you can count CPU flops... you can do so many different things.  Some are more natural than others in different contexts.

The reason this happened is because a time measure that was very natural in one context (counting frames: playing an audio stream) was somewhat artificially shoved into another context (generating a sine wave) where there was another, more natural context (changing phase.)  Alas, context switching is a frequent source of bugs.

Matlab script to generate the three-dimensional rendering of a sine wave:

t = 0:pi/50:2*pi;
h = -1:0.5:1;
z = ones(size(t))' * h;

sinx = cos(t);
siny = sin(t);
sinz = sin(t);

plot3( ...
sinx, siny, z, 'k:', ...
sinx, siny, sinz, 'r' ...
);

• #### Calculating timeouts with a clock that wraps

There are several ways to get a number that represents "the time right now."  Some of them wrap; some of them don't.

Some that wrap:

• GetTickCount()

Some that don't:

• time()  (well, this will wrap in 2038... unless you use the 64-bit version... which is the default...)
• DateTime.Now

A common programming meme is to start an operation and then wait for it to complete... but give up after a certain preset "timeout."  There are multi-threaded ways of doing this that are perhaps better, but for better or worse, there is a fair amount of code out there that looks like this:

StartSomething();

while ( ! Done() ) {
    if ( BeenWaitingTooLong(Timeout) ) { break; }

    WaitABit();
}

if ( Done() ) {
    Great();
} else {
    Darn();
}

For this post I want to focus on the BeenWaitingTooLong(Timeout) check in the loop above.  If the timing function being used wraps, it is all too easy to get this to work most of the time... when it's just as easy to get it to work all of the time.  Alas, the consequences of software that works most of the time tend to be more severe than those of software that never works.

I wouldn't last long in the banking business being accurate most of the time! -- The Music Man

I'll use GetTickCount() for these examples, but I want to emphasize that the same malady affects all of the wrappy time counters.

Here are some different ways to do it:

StartSomething();

DWORD msTimeout = 10 * 1000; // give up after ten seconds
DWORD msAtStart = GetTickCount();

// assume Done() always returns within, say, a day
while ( ! Done() ) {
    if (
        // most of the time, these are the same.
        // which one will always work?
        // GetTickCount() - msAtStart > msTimeout
        // GetTickCount() - msTimeout > msAtStart
        // GetTickCount() > msAtStart + msTimeout
    ) { break; }

    // assume that WaitABit() always returns within, say, a day
    WaitABit();
}

if ( Done() ) {
    Great();
} else {
    Darn();
}

The one that always works is GetTickCount() - msAtStart > msTimeout.

The other two will work most of the time, but will occasionally fail.

There's an easy rule I use to always remember the right one.

When dealing with a clock that wraps, only compare intervals.

Allow me to redecorate my variable names using Joel Spolsky's "Making Wrong Code Look Wrong" technique.

StartSomething();

DWORD intervalTimeout = 10 * 1000; // give up after ten seconds
DWORD instantAtStart = GetTickCount();

// assume Done() always returns within, say, a day
while ( ! Done() ) {
    if (
        // which one will always work?
        // GetTickCount() - instantAtStart > intervalTimeout
        // GetTickCount() - intervalTimeout > instantAtStart
        // GetTickCount() > instantAtStart + intervalTimeout
    ) { break; }

    // assume that WaitABit() always returns within, say, a day
    WaitABit();
}

if ( Done() ) {
    Great();
} else {
    Darn();
}

Some thought experiments immediately fall out of this.

• instantA + intervalX = instantB (instantB is later than instantA)
• instantA - intervalX = instantB (instantB is earlier than instantA)
• instantB - instantA = intervalX  (instantB is later than instantA)
• intervalX - intervalY = intervalZ (intervalZ < intervalX; intervalY < intervalX)
• intervalX + intervalY = intervalZ (intervalZ > intervalX; intervalZ > intervalY)
Some "it is wrong because it looks wrong" rules fall out of this too...
• instantA + instantB is always a bug due to wrapping
• Yes, even in (instantA + instantB) / 2
• Correct version is instantA + (instantB - instantA) / 2
• instantA < instantB is always a bug
Once you get used to applying the Hungarian in your head, you can catch all sorts of bugs... and avoid introducing them yourself :-)
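The rule can be sketched in portable C++, with uint32_t standing in for the DWORD tick counter (the helper names are hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// "Only compare intervals": with a 32-bit tick counter,
// instantNow - instantStart is a well-defined interval even across a wrap,
// because unsigned subtraction is modulo 2^32.
bool BeenWaitingTooLong(uint32_t instantNow, uint32_t instantStart, uint32_t intervalTimeout)
{
    return instantNow - instantStart > intervalTimeout;
}

// The safe midpoint of two instants: same trick, instant + interval.
uint32_t Midpoint(uint32_t instantA, uint32_t instantB)
{
    return instantA + (instantB - instantA) / 2;
}
```

Both stay correct even when the counter wraps between the start instant and now; the naive instantA + instantB or instantNow > instantDeadline forms do not.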

A word of warning... this isn't foolproof.  The assumptions about Done() and WaitABit() returning reasonably promptly are very important.  If your intervals start getting large on the wrapping scale, things start getting very difficult... I personally recommend avoiding that situation by switching to another way of measuring time that can handle the kind of intervals you need.

• #### If you see a fact, try to see it as intuitively as possible

Insight is a tricky thing.  To a certain degree people are born with it (or without it) - but if you are gifted with a measure of it, you can develop it (as though it were a muscle) by working at it.

The late mathematician George Pólya had a number of helpful problem-solving mottos, one of which is the title of this post.  There's a nice echo in the chess world, where the maxim "If you see a good move, wait - look for a better one!" is popular (as is the simpler version, "Sit on your hands!")

The granddaddy of them all is William of Ockham's "entia non sunt multiplicanda praeter necessitatem"... AKA, Occam's [sic] Razor.

KISS, as they say.

What am I going on about?

In the Computer Science world, it happens very often that your first idea... though it works... is inferior to your second idea.  A great tragedy that is played over and over is:

1. Developer is faced with a problem.
2. Developer has brilliant idea.
3. Developer starts coding.  And coding.  For hours.  Straight.
4. After a few hours developer pauses.  Hmm.  Developer has better idea.
5. Developer goes back and hacks up what they wrote to implement the better idea.
6. Goto step 3.
7. Catch out-of-time exception.  Finish the latest iteration as quickly as possible.  Ship it.

I'm exaggerating a bit by calling this a great tragedy.  It's also a perfectly good development strategy to begin coding with the knowledge that you will probably come back and redo a large chunk of what you're doing as you become more familiar with the problem domain.  The key words here are "with the knowledge"... if you know that what you're coding is experimental, you can avoid a lot of scheduling Sturm und Drang.

Bottom line: it happens frequently that a good idea has a better one as a sequel.  Keep your eyes open... are you solving a specific case when you could, with less work, solve the general problem?  Look at your fixed code fresh - are you introducing another problem with the fix?  Is there a more appropriate way to fix the problem?  Is it better to fix the symptom instead of the disease?  Stop and think.  Even if you don't find a better way, it's good exercise for your noodle.

• #### Generating a sine wave with phase - answers

In a previous post I posed several mathematical problems... I'd like to go back and give some answers to them.

To reiterate, we take a sine wave period and wrap it around a cylinder, giving us this:

The problems are to:

1. Prove that the curve is planar.
2. Prove that the curve is an ellipse.
3. Make a Keplerian observation. Johannes Kepler was a mathematician/astronomer who made several observations relating to planetary orbits.

Let's define the curve. Since we're going to be making analogies to planetary orbit, let's define the curve parametrically. The parametric point is the analog of a planet.

Peter piper picked a peck of pickled peppers. -- Mother Goose

Penny Pingleton, you know you are punished. From now on you're wearing a giant P on your blouse EVERY DAY to school so that the whole world knows that Penny Pingleton is permanently, positively, punished.
-- Prudence Pingleton, Hairspray

OK, now that we have that out of the way... I want the parametric point to sweep its way around the unit cylinder with constant angular velocity, as viewed from above. So if we project the path of the point onto the plane z = 0, then r(t) = 1 for all t and θ(t) is linear in t. In fact let's have θ(t) be t, so at time t = 2π our parametric point has made a complete cycle.

Now we'll add the z component in. The curve is a wrapping of sin(t), so z = sin(t); we aren't affecting the vertical component of the paper by wrapping it horizontally around a cylinder.

We're actually, in a sense, done. We have the formalized equations:

• r(t) = 1
• θ(t) = t
• z(t) = sin(t)

But let's convert to Cartesian coordinates, since we'll be dealing with planes. The relevant formulae are:

• x^2 + y^2 = r^2
• tan(θ) = y/x

A little intuition, checked by back-substitution, gives us these for x and y:

• x(t) = cos(t)
• y(t) = sin(t)

Putting these together with our formula for z, we have:

• x(t) = cos(t)
• y(t) = sin(t)
• z(t) = sin(t)

Note that, no matter what t is, y(t) = z(t)... that is, the parametric point travels entirely in the plane given by the equation y = z. This proves that the parametric path is planar. QED #1.

To tackle #2 we have to prove that this planar curve is, in fact, an ellipse. One way to do that would be to come up with x' and y' coordinates in the plane of the curve, transform the parametric equations into the new coordinate system, and verify that a(x')^2 + b(y')^2 = c^2 for some a, b, and c... but when I came up with the problem I had a much simpler proof in mind. There's also a third way of doing this, where you note that the curve is the result of an affine transformation of the unit circle... and an affine transformation of a circle is demonstrably an ellipse... but I hadn't thought of that.

Another astronomer/mathematician, Apollonius of Perga, made a study of "conic sections"... that is, the intersection of a plane with a double-headed cone. He came up with an exhaustive list of the kinds of curves that resulted.

Basic conic sections:

• An ellipse, if the angle that the plane makes is shallower than the edge of the cone;
• A hyperbola, if the angle that the plane makes is steeper than the edge of the cone;
• And a parabola, if the plane is parallel to the edge of the cone.

For complicated reasons (to wit, that gravity diminishes as the square of the distance between two objects,) all heavenly bodies trace either an ellipse or a hyperbola as they go around (or past) another heavenly body.
Parabolae are theoretically possible but would smack heavily of "design"; as yet none have been observed.

For completeness, there are other "designed" ways to intersect a double-headed cone with a plane in a coincidental fashion:

Degenerate conic sections:

• A circle - early astronomers were convinced that the heavenly bodies moved only in circles. It took a long time for us to believe our measurements (which showed uncircular orbits) and acknowledge that the orbits are in fact elliptical. Instead, great efforts were made to show that the deviation could be made up of other, smaller, circular sub-orbits. Eventually we admitted that the orbits were elliptical (ellipses are more complicated than circles, but much simpler than the proposed circles-within-circles) and not until Newton did we understand why planets move in ellipses.
• A straight line - this is a degenerate hyperbola. This is one solution to the "one body" problem.
• Two intersecting straight lines - a hyperbola, as seen from infinitely far off.
• A single point - the other solution to the "one body" problem.

Since our curve is the intersection of a plane and a cylinder, we're done! QED #2. Well, not quite.

Apollonius' results hold for cones. We have a cylinder. Those aren't quite the same thing. Right?

They aren't, quite... but a cylinder can be viewed as a degenerate form of cone. To wit, a cylinder is a cone whose apex is infinitely far off. I'll get back to a "proof" of this in a minute, but if you allow this, then we really are done.

Instead of the seven intersections of a double-headed cone and a plane, there are only four ways a cylinder and a plane can intersect. Two of these are new.

Cylindrical conics:

• An ellipse - though it remains to be proven that this really is an ellipse. A circle is a special form of this... I won't show it again.
• A straight line - the single body problem again.
• Two straight lines - parallel this time. This is something new.
• No intersection at all. The kōan of conics. This is new, but whether it is something is a question I will not attempt to address.

Now for the proof that cylindrical intersection #1 really is an ellipse. Place two double-headed cones coaxially with the cylinder. Arrange them so that:

• The apexes of both cones are in the "up" direction, from anywhere on the curve in question.
• The "top" cone intersects the highest point on the curve in question.
• The "bottom" cone intersects the lowest point on the curve in question.

Diagram:

Notice that the entire curve lies in between the two cones. Consider the intersection of the plane that determines the curve with each of the cones. We know both of these are ellipses; we know that the intersection of the plane with the "top" cone lies wholly outside our unknown curve; and we know that the intersection of the plane with the "bottom" cone lies wholly inside our unknown curve.

Diagram:

The final stage of the proof lies in pushing the apexes of both cones to infinity, which "squeezes" our unknown curve in between two known ellipses. Since the horizontal distance between the cones goes to zero as 2^(3/2) / tan(θ), our unknown curve cannot help but be elliptical-ish... to any degree of precision... and it is, thus, an ellipse. QED #2.

For the final problem, we must make a "Keplerian observation." The observation in question is that our parametric point, like the planets, sweeps out equal areas in equal times. What makes this interesting is that the parametric point does not move in the same way that a planet does... so it's a little odd that such a similar result should hold... but it does.

Let's talk a little about Kepler's second law. A planet moves around the sun in an elliptical orbit. Fine. The sun lies at one of the foci of the ellipse. An imaginary line between the planet and the sun will sweep out area at a constant rate. That is to say, when the planet is far from the sun, that line will be long... and it will move correspondingly slowly. When the planet is near to the sun, that line will be short... and it will move correspondingly quickly.

Our parametric point, meanwhile, sweeps around the origin at constant angular velocity. So this is trivially "equal areas in equal times". Right?

Not quite. It's true that our parametric point appears to have constant angular velocity, if you project its path into the x-y plane... that is, if you view it from directly above, from a point on the z-axis.

But that's a silly way to look at things. To get an idea of the point's actual motion, it's far more natural to view it from an axis orthogonal to the plane of motion. So let's put an observer (Aranea) directly above the point, and another observer (Nellie) in the more natural location.

From Aranea's point of view, the motion is a simple constant-speed trip around a circle. But Nellie sees the parametric point's true velocity. The parametric point's velocity has two components... a constant "clockwise" component in the x-y plane, and a variable orthogonal component going "up" or "down". This is maximized when the curve crosses the x-y plane, and minimized when the curve is topping or bottoming out. So Nellie sees this:

It seems hard to believe, given that the parametric point is speeding up and slowing down according to different rules... and the notion of "center" is different... but the fact is, the parametric point also sweeps out equal areas in equal times!

To see this most easily, look at Aranea's point of view again. Have Aranea draw a grid on her picture of the system. She can count the area swept out in any given time by counting the number of squares.

Now consider what those squares look like to Nellie. They're rectangles. But they're all of equal area. Aranea and Nellie can label the squares according to some naming scheme, and will agree that the same set of squares/rectangles were swept out... but this means, since the rectangles are of equal area, that even in Nellie's point of view, equal areas are swept out in equal times! QED #3.
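Both claims - planarity and equal areas in equal times - are easy to spot-check numerically. In the sketch below (my own, not from the original post), the rate at which the radius vector from the origin (the ellipse's center, not a focus - which is exactly the "different notion of center" noted above) sweeps out area is |r × r'| / 2, and it comes out constant:

```python
import math

# The parametric point is r(t) = (cos t, sin t, sin t);
# its velocity is r'(t) = (-sin t, cos t, cos t).
def r(t):
    return (math.cos(t), math.sin(t), math.sin(t))

def r_prime(t):
    return (-math.sin(t), math.cos(t), math.cos(t))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

for t in (0.0, 0.5, 1.0, 2.0, 3.1, 5.0):
    p = r(t)
    assert abs(p[1] - p[2]) < 1e-12          # planar: y(t) = z(t) for all t
    # Area sweep rate about the origin is |r x r'| / 2; r x r' works out
    # to the constant vector (0, -1, 1), so the rate is the constant
    # sqrt(2)/2 - equal areas in equal times.
    rate = math.sqrt(sum(c*c for c in cross(p, r_prime(t)))) / 2
    assert abs(rate - math.sqrt(2) / 2) < 1e-12
```

That the cross product is a constant vector also re-proves planarity: (0, -1, 1) is the normal of the plane y = z.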

• #### How long can the logoff sound be?

Windows has a customizable soundtrack - when different things happen, sounds play.  You can live with the default sounds, turn all the sounds off, or substitute your own sounds.  All configurable via the Sound control panel:

You can assign short or long, pleasant or annoying sounds to any event in the list.  For example, you could rip a CD to .wav format and assign it to the "Maximize" event.

There are two events that are a little finicky about how long the sound is, though: the "Exit Windows" and "Windows Logoff" events.  If you assign a sound that is too long to either of these events, it won't play.  It plays when you click the "Test" button, but when you actually log off or shut down, the sound doesn't play.

How long is too long?

Before the answer, a quick review of .wav files.

You can have any number of frames per second.  CDs use a sample rate of 44100 frames per second; DVDs use 48000 frames per second.  The sample rate is usually one of (11025, 22050, 44100, 88200, 176400; 48000, 96000, 192000; 8000, 16000, 32000).

You can have any number of samples in a frame: mono (one channel), stereo (two channels), surround (four channels), 5.1 (six channels), 7.1 (eight channels), or a different number.

Each sample is a number.  This can be a floating-point number (32 bits), an unsigned integer (8 bits), or a signed integer (16 bits, 20 bits, 24 bits, 32 bits.)  The 20-bit and 24-bit integers can be stored either in 24-bit containers or in 32-bit containers.

The fatness of the samples, and how many there are per second, determines how many bytes a second of .wav data will contain.  The formula is:

bytes/second = (samples/frame, AKA channels) * (frames/second, AKA sample rate) * (bits/sample) / (bits/byte, always 8)

For performance reasons, Windows Vista limits the size of shutdown/logoff sounds to 4 MB.

So how long is 4MB?  Here's the answer in tabular form for most common formats.  Lengths are in seconds.

unsigned 8-bit int:

| sample rate | mono | stereo | surround | 5.1 | 7.1 |
| --- | --- | --- | --- | --- | --- |
| 8000 | 524.29 | 262.14 | 131.07 | 87.38 | 65.54 |
| 11025 | 380.44 | 190.22 | 95.11 | 63.41 | 47.55 |
| 16000 | 262.14 | 131.07 | 65.54 | 43.69 | 32.77 |
| 22050 | 190.22 | 95.11 | 47.55 | 31.70 | 23.78 |
| 32000 | 131.07 | 65.54 | 32.77 | 21.85 | 16.38 |
| 44100 | 95.11 | 47.55 | 23.78 | 15.85 | 11.89 |
| 48000 | 87.38 | 43.69 | 21.85 | 14.56 | 10.92 |
| 88200 | 47.55 | 23.78 | 11.89 | 7.93 | 5.94 |
| 96000 | 43.69 | 21.85 | 10.92 | 7.28 | 5.46 |
| 176400 | 23.78 | 11.89 | 5.94 | 3.96 | 2.97 |
| 192000 | 21.85 | 10.92 | 5.46 | 3.64 | 2.73 |

signed 16-bit int:

| sample rate | mono | stereo | surround | 5.1 | 7.1 |
| --- | --- | --- | --- | --- | --- |
| 8000 | 262.14 | 131.07 | 65.54 | 43.69 | 32.77 |
| 11025 | 190.22 | 95.11 | 47.55 | 31.70 | 23.78 |
| 16000 | 131.07 | 65.54 | 32.77 | 21.85 | 16.38 |
| 22050 | 95.11 | 47.55 | 23.78 | 15.85 | 11.89 |
| 32000 | 65.54 | 32.77 | 16.38 | 10.92 | 8.19 |
| 44100 | 47.55 | 23.78 | 11.89 | 7.93 | 5.94 |
| 48000 | 43.69 | 21.85 | 10.92 | 7.28 | 5.46 |
| 88200 | 23.78 | 11.89 | 5.94 | 3.96 | 2.97 |
| 96000 | 21.85 | 10.92 | 5.46 | 3.64 | 2.73 |
| 176400 | 11.89 | 5.94 | 2.97 | 1.98 | 1.49 |
| 192000 | 10.92 | 5.46 | 2.73 | 1.82 | 1.37 |

24-bit container: signed 20-bit int or signed 24-bit int:

| sample rate | mono | stereo | surround | 5.1 | 7.1 |
| --- | --- | --- | --- | --- | --- |
| 8000 | 174.76 | 87.38 | 43.69 | 29.13 | 21.85 |
| 11025 | 126.81 | 63.41 | 31.70 | 21.14 | 15.85 |
| 16000 | 87.38 | 43.69 | 21.85 | 14.56 | 10.92 |
| 22050 | 63.41 | 31.70 | 15.85 | 10.57 | 7.93 |
| 32000 | 43.69 | 21.85 | 10.92 | 7.28 | 5.46 |
| 44100 | 31.70 | 15.85 | 7.93 | 5.28 | 3.96 |
| 48000 | 29.13 | 14.56 | 7.28 | 4.85 | 3.64 |
| 88200 | 15.85 | 7.93 | 3.96 | 2.64 | 1.98 |
| 96000 | 14.56 | 7.28 | 3.64 | 2.43 | 1.82 |
| 176400 | 7.93 | 3.96 | 1.98 | 1.32 | 0.99 |
| 192000 | 7.28 | 3.64 | 1.82 | 1.21 | 0.91 |

32-bit container: signed 20-bit int, signed 24-bit int, or float32:

| sample rate | mono | stereo | surround | 5.1 | 7.1 |
| --- | --- | --- | --- | --- | --- |
| 8000 | 131.07 | 65.54 | 32.77 | 21.85 | 16.38 |
| 11025 | 95.11 | 47.55 | 23.78 | 15.85 | 11.89 |
| 16000 | 65.54 | 32.77 | 16.38 | 10.92 | 8.19 |
| 22050 | 47.55 | 23.78 | 11.89 | 7.93 | 5.94 |
| 32000 | 32.77 | 16.38 | 8.19 | 5.46 | 4.10 |
| 44100 | 23.78 | 11.89 | 5.94 | 3.96 | 2.97 |
| 48000 | 21.85 | 10.92 | 5.46 | 3.64 | 2.73 |
| 88200 | 11.89 | 5.94 | 2.97 | 1.98 | 1.49 |
| 96000 | 10.92 | 5.46 | 2.73 | 1.82 | 1.37 |
| 176400 | 5.94 | 2.97 | 1.49 | 0.99 | 0.74 |
| 192000 | 5.46 | 2.73 | 1.37 | 0.91 | 0.68 |

The most common audio format by far is 44.1 kHz / stereo / 16-bit; if your .wav file is of this format, you get just under 24 seconds before you hit the 4 MB limit.
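The table entries follow directly from the bytes/second formula; here's a quick sketch to reproduce them (assuming "4 MB" means 4 × 2^20 bytes, which is what matches the table's values):

```python
# Reproduce the table above: seconds of audio that fit in the 4 MB limit.
LIMIT_BYTES = 4 * 1024 * 1024  # assumption: "4 MB" = 4 * 2^20 bytes

def max_seconds(sample_rate, channels, bits_per_sample):
    # bytes/second = channels * sample rate * bits per sample / 8
    bytes_per_second = channels * sample_rate * bits_per_sample // 8
    return LIMIT_BYTES / bytes_per_second

# 44.1 kHz / stereo / 16-bit, the most common format:
assert round(max_seconds(44100, 2, 16), 2) == 23.78
# 8 kHz / mono / unsigned 8-bit, the most forgiving format in the table:
assert round(max_seconds(8000, 1, 8), 2) == 524.29
# 192 kHz / 7.1 / 32-bit container, the worst case in the table:
assert round(max_seconds(192000, 8, 32), 2) == 0.68
```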

If you want to go over this limit, or if you want a multichannel logoff sound, you can get away with downsampling.  If you shoehorn in a longer sound, though, you may run into this:

If you just wait, the sound will play to the end and logoff will continue.

If you click "Cancel", explorer.exe will exit anyway and you'll be logged in with no shell.

A trick to get the shell back: Ctrl-Shift-Esc to bring up Task Manager, and File | New Task (Run...) | explorer.exe to reinvoke the shell.  This also allows you to determine which of the notification area icons are well-behaved. :-)

• #### Solution: x! + y! + z! = x * y * z

Last time I discussed the fact that there is a single, unique, solution to the formula x ! + y ! + z ! = x * y * z, where x, y, and z are non-negative integers.  I asked for a computer program to find the solution given that x, y, and z are all <= 9 but I wondered aloud whether the solution was unique without that constraint.

The part where a computer can't help is the uniqueness.  Here is a mathematical proof that there are no solutions if any of x, y, or z are >= 6:

Assume without loss of generality that x >= y >= z.  The following two inequalities hold trivially, since we're dealing with non-negative numbers:

x ! + y ! + z ! > x !
x * y * z <= x^3

Here's where 6 comes in:

Lemma: for x >= 6, x ! > x^3 (proof momentarily.)

Given the lemma, we're done... for x >= 6,

x ! + y ! + z ! > x ! > x^3 >= x * y * z

So a search for solutions for x ! + y ! + z ! = x * y * z can be restricted to 5 >= x >= y >= z.  This is computationally feasible.
By inspection, there is a unique solution (see later in this post.)

Proof of lemma: first, look at values of x ! and x^3 for x = 0 through 7:

0! = 1 > 0 = 0^3
1! = 1 = 1 = 1^3
2! = 2 < 8 = 2^3
3! = 6 < 27 = 3^3
4! = 24 < 64 = 4^3
5! = 120 < 125 = 5^3
6! = 720 > 216 = 6^3
7! = 5040 > 343 = 7^3

It's not immediately clear how to progress.  The key idea, as often in geometric-ish progressions: look at successive ratios.  The initial 0 messes things up a bit, so we'll start with x = 1.

On the left, numbers advance by a factor of:

(x + 1)! / x ! = x + 1

Note this ratio gets bigger and bigger.

On the right, numbers advance by a factor of

(x + 1)^3 / x^3 = ((x + 1) / x)^3 = (1 + (1 / x))^3
Note this rate of increase actually gets smaller.

The crossover point - where x ! stops losing ground and begins to catch up - is x = 3:

1 + 1 = 2 < 8 = (1 + (1/1))^3
2 + 1 = 3 < 3.375 = (1 + (1/2))^3
3 + 1 = 4 > 2.370370... = (1 + (1/3))^3

By x = 6, the lost ground has been made up, and x ! never looks back. QED.
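Both the lemma and the ratio crossover are easy to check numerically; a quick sketch:

```python
import math

# The lemma: x! overtakes x^3 at x = 6 and never looks back.
for x in range(6, 50):
    assert math.factorial(x) > x**3

# The crossover in the successive ratios: the factorial side grows by
# (x + 1) per step, the cubic side by (1 + 1/x)^3; the comparison flips at x = 3.
assert (1 + 1) < (1 + 1/1)**3        # 2 < 8
assert (2 + 1) < (1 + 1/2)**3        # 3 < 3.375
assert (3 + 1) > (1 + 1/3)**3        # 4 > 2.370370...
```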

By the proof above, it only remains to look at the finite number of cases where x, y, and z are all <= 5.  Here they all are: note that in addition to the unique solution for the equality

x ! + y ! + z ! = x * y * z

there are only four solutions to the inequality

x ! + y ! + z ! < x * y * z

All of the other cases have

x ! + y ! + z ! > x * y * z.

0! + 0! + 0! = 3 > 0 = 0 * 0 * 0
1! + 0! + 0! = 3 > 0 = 1 * 0 * 0
1! + 1! + 0! = 3 > 0 = 1 * 1 * 0
1! + 1! + 1! = 3 > 1 = 1 * 1 * 1

2! + 0! + 0! = 4 > 0 = 2 * 0 * 0
2! + 1! + 0! = 4 > 0 = 2 * 1 * 0
2! + 1! + 1! = 4 > 2 = 2 * 1 * 1
2! + 2! + 0! = 5 > 0 = 2 * 2 * 0
2! + 2! + 1! = 5 > 4 = 2 * 2 * 1
2! + 2! + 2! = 6 < 8 = 2 * 2 * 2

3! + 0! + 0! = 8 > 0 = 3 * 0 * 0
3! + 1! + 0! = 8 > 0 = 3 * 1 * 0
3! + 1! + 1! = 8 > 3 = 3 * 1 * 1
3! + 2! + 0! = 9 > 0 = 3 * 2 * 0
3! + 2! + 1! = 9 > 6 = 3 * 2 * 1
3! + 2! + 2! = 10 < 12 = 3 * 2 * 2
3! + 3! + 0! = 13 > 0 = 3 * 3 * 0
3! + 3! + 1! = 13 > 9 = 3 * 3 * 1
3! + 3! + 2! = 14 < 18 = 3 * 3 * 2
3! + 3! + 3! = 18 < 27 = 3 * 3 * 3

4! + 0! + 0! = 26 > 0 = 4 * 0 * 0
4! + 1! + 0! = 26 > 0 = 4 * 1 * 0
4! + 1! + 1! = 26 > 4 = 4 * 1 * 1
4! + 2! + 0! = 27 > 0 = 4 * 2 * 0
4! + 2! + 1! = 27 > 8 = 4 * 2 * 1
4! + 2! + 2! = 28 > 16 = 4 * 2 * 2
4! + 3! + 0! = 31 > 0 = 4 * 3 * 0
4! + 3! + 1! = 31 > 12 = 4 * 3 * 1
4! + 3! + 2! = 32 > 24 = 4 * 3 * 2
4! + 3! + 3! = 36 = 36 = 4 * 3 * 3
4! + 4! + 0! = 49 > 0 = 4 * 4 * 0
4! + 4! + 1! = 49 > 16 = 4 * 4 * 1
4! + 4! + 2! = 50 > 32 = 4 * 4 * 2
4! + 4! + 3! = 54 > 48 = 4 * 4 * 3
4! + 4! + 4! = 72 > 64 = 4 * 4 * 4

5! + 0! + 0! = 122 > 0 = 5 * 0 * 0
5! + 1! + 0! = 122 > 0 = 5 * 1 * 0
5! + 1! + 1! = 122 > 5 = 5 * 1 * 1
5! + 2! + 0! = 123 > 0 = 5 * 2 * 0
5! + 2! + 1! = 123 > 10 = 5 * 2 * 1
5! + 2! + 2! = 124 > 20 = 5 * 2 * 2
5! + 3! + 0! = 127 > 0 = 5 * 3 * 0
5! + 3! + 1! = 127 > 15 = 5 * 3 * 1
5! + 3! + 2! = 128 > 30 = 5 * 3 * 2
5! + 3! + 3! = 132 > 45 = 5 * 3 * 3
5! + 4! + 0! = 145 > 0 = 5 * 4 * 0
5! + 4! + 1! = 145 > 20 = 5 * 4 * 1
5! + 4! + 2! = 146 > 40 = 5 * 4 * 2
5! + 4! + 3! = 150 > 60 = 5 * 4 * 3
5! + 4! + 4! = 168 > 80 = 5 * 4 * 4
5! + 5! + 0! = 241 > 0 = 5 * 5 * 0
5! + 5! + 1! = 241 > 25 = 5 * 5 * 1
5! + 5! + 2! = 242 > 50 = 5 * 5 * 2
5! + 5! + 3! = 246 > 75 = 5 * 5 * 3
5! + 5! + 4! = 264 > 100 = 5 * 5 * 4
5! + 5! + 5! = 360 > 125 = 5 * 5 * 5

Here's the program I used to check the cases for x, y, z <= 9.  I used Perl.  Note the similarity to Zbyszek's solution.

```perl
use strict;

# initialize table of factorials with 0! = 1
my @fac_table = (1);

# loop executes 10 times
# interesting fact - solution remains unique even if you use higher bases
# (e.g., change 9 to 15 for hex)
for my $i (0 .. 9) {
    $fac_table[$i + 1] = ($i + 1) * $fac_table[$i];

    # loop executes 1 + 2 + ... + 9 + 10 = 55 times
    for my $j (0 .. $i) {
        # inner loop executes
        # 1 +
        # 1 + 2 +
        # ...
        # 1 + 2 + ... + 9 +
        # 1 + 2 + ... + 9 + 10 times
        #
        # Here's how I counted that up:
        # if $i, $j, and $k are all different,
        # there are (10 choose 3) = 10 * 9 * 8 / (1 * 2 * 3) = 120 ways
        # if $i, $j, and $k are all the same,
        # there are 10 ways
        # if $i and $k are different, but $j is the same as one or the other,
        # there are (10 choose 2) = 10 * 9 / (1 * 2) = 45 ways to pick $i and $k
        # and two choices in each of these for $j, so there are 90 ways
        # total = 120 + 10 + 90 = 220 iterations
        # (verified with a loop counter)
        for my $k (0 .. $j) {
            # note $i >= $j >= $k
            if ($fac_table[$i] + $fac_table[$j] + $fac_table[$k] == $i * $j * $k) {
                print qq($i! + $j! + $k! == $i * $j * $k\n);
            }
        }
    }
}
```

A cute little exercise.

• #### The largest prime ever - you saw it here first

The GIMPS project says they've found the largest prime number ever, but they're keeping quiet about what it is until they've verified it (they expect to be done in a couple of weeks.)

Pshaw.  I can tell you right now what their prime number is.

Since I'm a computer scientist I'll write it down in binary.

GIMPS' LARGEST PRIME NUMBER IS (scroll down / highlight to view:)

0b
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
...
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111

EDIT: On Saturday 9/6 another prime was found.  They're also keeping mum about what this one is.  But I know this one too...

EDIT2: Every new Mersenne prime also means a new perfect number is discovered; counting these two new Mersenne primes, there are now 46 known perfect numbers, all of them even.  (It is conjectured, but not proven, that all perfect numbers are even.)  To go from a Mersenne prime (which is of the form 0b111...11, with a prime number of 1s) to its corresponding perfect number, append one fewer 0 than there are 1s: e.g., 0b11 (3, the first Mersenne prime) becomes 0b110 (6, the first perfect number); 0b111 (7, the second Mersenne prime) becomes 0b11100 (28, the second perfect number); etc. (The proof that such numbers are perfect is simple; there is a more complicated proof that all even perfect numbers are of this form.)
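The Mersenne-to-perfect construction is easy to verify by brute force for the first few cases; a quick sketch:

```python
# Check the construction: a Mersenne prime with p ones in binary,
# followed by p - 1 zeros, is a perfect number (a number equal to the
# sum of its proper divisors).
def is_perfect(n):
    return n == sum(d for d in range(1, n) if n % d == 0)

for p in (2, 3, 5, 7):               # 2^p - 1 is prime for these p
    mersenne = (1 << p) - 1          # 0b11...1: p ones
    perfect = mersenne << (p - 1)    # append p - 1 zeros
    assert is_perfect(perfect)       # 6, 28, 496, 8128
```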

EDIT: I can now reveal that the number of 1s is 43,112,609.

• #### Why square waves have ears: Gibbs' phenomenon (Wilbraham's phenomenon)

In a recent post I sang the praises of square waves as a way to get a heckuva lot of power (3 dB more power than a sine wave) into a sample-range-limited signal.  It's time to take them down a notch now.

A problem with square waves is they're impossible to generate in the analog domain.  In fact, you can't even get close.

Signals generated in the analog domain are subject to physical laws regarding continuity (no teleportation) and whatnot. A common way to model these is to express (periodic) analog signals using a basis of sine waves with integral periods.  Suppose I want to generate the square wave:

f(x) = 1, for -π < x < 0
f(x) = -1, for 0 < x < π

Graphed below are some approximations of this function using sine waves as a basis.  Note that only odd values of n are used in the sums of sin(nx).

The sums converge to +/-1 quite well, but there are definite "ears" at x near 0 where there's overshoot.  This doesn't appear to die down.  A closeup of one of the "ears":

If anything, rather than dying down, the maxima of the partial sums converge upward to about 1.18 - an overshoot of about 9% of the "jump" from -1 to 1.  (Dym and McKean, in their 1972 book Fourier Series and Integrals, get the 9% right but incorrectly assert that the convergence is to 1.09.)

This mathematical phenomenon - Gibbs' phenomenon - is a good illustration of the difference between convergence of a series of functions and uniform convergence.

In this case, the series of partial sums pointwise converges to the square wave... for any given point x > 0 and ε > 0, the ear will eventually move to the left, and I can choose an N such that fn(x) is within ε of 1 for all n > N...

... but the series does not uniformly converge to the square wave.  The following assertion is false: "for any given ε > 0, I can pick an N such that fn(x) is within ε of 1 for all n > N and all x > 0."  This can be demonstrated by picking ε = 0.17, say.  For any n - even, say, n = 10^100 - there is an x close to 0 where fn(x) > 1.17.
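The stubborn overshoot is easy to observe numerically. The sketch below (my own; it uses the standard sign convention, +1 on (0, π), which only flips the picture) shows that however many terms you take, the partial sum peaks near 1.18 close to the jump:

```python
import math

# Partial sum of the Fourier series of a unit square wave (+1 on (0, pi));
# only odd harmonics contribute.
def f(n, x):
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, n + 1, 2))

# The "ear" near the jump does not die down as n grows: the peak of the
# partial sum stays at about 1.1789..., roughly 9% of the jump from -1 to 1.
for n in (101, 1001):
    # sample near the jump; the peak sits at about x = pi / (n + 1)
    peak = max(f(n, math.pi * i / (10 * n)) for i in range(1, 101))
    assert 1.17 < peak < 1.19
```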

• #### Minimal unsatisfiable regular expression, XPath query

Regular expressions are a tool for matching generic text.  XPath queries are a tool for matching chunks of XML.  Both are search technologies.

When using search technologies it is occasionally quite useful to have a query that will never match anything - for SQL, this would be something like "SELECT 1 WHERE 1 = 0".

My candidates for minimal unsatisfiable regular expression:

/a\bc/
\b is a zero-width assertion that matches a boundary at the beginning or end of a word - specifically, it is true between a word-ish character (\w) and a non-wordish character (\W).  Since literal "a" and "c" are both wordish characters, this will never match.

Or, if you allow Perl extensions:

/(?!)/
This is a negative lookahead for an empty string.  Since the empty string always matches everywhere, this will never match.
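Both candidates are easy to check against Python's re engine, which supports both constructs:

```python
import re

# /a\bc/ can never match: \b asserts a boundary between a word character
# (\w) and a non-word character (\W), but "a" and "c" are both word
# characters, so no boundary can sit between them.
for s in ("abc", "ac", "a c", "xacy"):
    assert re.search(r"a\bc", s) is None

# /(?!)/ is a negative lookahead for the empty string; the empty string
# matches everywhere, so the lookahead fails everywhere.
never = re.compile(r"(?!)")
for s in ("", "anything at all"):
    assert never.search(s) is None
```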

My candidate for minimal unsatisfiable XPath query:

/parent::*
This selects the parent of the root element.  Since, by definition, the root element has no parent, this will never match.

• #### Second cousins, cousins once removed; relationships by generations to common ancestor

Raymond Chen explains some common terms for blood relatives of varying distance across cultures in his blog post "What kind of uncle am I?"

He links to a diagram on genealogy.com that I felt was lacking something... so here's my version, with consanguinary colors.

Red means "marriage is almost certainly legally prohibited."

Yellow means "marriage may be legally prohibited - check your region's laws."

Green means "marriage is almost certainly legal."

Relationship by generations to common ancestor:

| # | 0 | 1 | 2 | 3 | 4 | ... m |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | self | father/mother | grand(father/mother) | great-grand(father/mother) | (great-)^2 grand(father/mother) | (great-)^(m-2) grand(father/mother) |
| 1 | son/daughter | brother/sister | aunt/uncle | grand(aunt/uncle) | great-grand(aunt/uncle) | (great-)^(m-3) grand(aunt/uncle) |
| 2 | grand(son/daughter) | niece/nephew | (first) cousin | first cousin, once removed | first cousin, twice removed | first cousin, (m-2) times removed |
| 3 | great-grand(son/daughter) | grand(niece/nephew) | first cousin, once removed | second cousin | second cousin, once removed | second cousin, (m-3) times removed |
| 4 | (great-)^2 grand(son/daughter) | great-grand(niece/nephew) | first cousin, twice removed | second cousin, once removed | third cousin | third cousin, (m-4) times removed |
| ... n | (great-)^(n-2) grand(son/daughter) | (great-)^(n-3) grand(niece/nephew) | first cousin, (n-2) times removed | second cousin, (n-3) times removed | third cousin, (n-4) times removed | (n == m) ? ((n-1)th cousin) : ((min(n, m)-1)th cousin, \|n-m\| times removed) |
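The cousin region of the table (two or more generations down on both sides) follows a simple rule, which can be sketched as a function. Names here are my own invention; n and m are the two parties' distances in generations from their nearest common ancestor:

```python
def ordinal(k):
    # good enough for small k (illustration only)
    suffix = {1: "st", 2: "nd", 3: "rd"}.get(k if k < 20 else k % 10, "th")
    return f"{k}{suffix}"

def cousinhood(n, m):
    # n, m: generations from each person down to the common ancestor;
    # only the cousin region of the table (min(n, m) >= 2) is handled here.
    assert n >= 2 and m >= 2
    degree = min(n, m) - 1       # which cousin
    removed = abs(n - m)         # how many times removed
    if removed == 0:
        return f"{ordinal(degree)} cousin"
    times = {1: "once", 2: "twice"}.get(removed, f"{removed} times")
    return f"{ordinal(degree)} cousin, {times} removed"

assert cousinhood(2, 2) == "1st cousin"
assert cousinhood(3, 2) == "1st cousin, once removed"
assert cousinhood(4, 3) == "2nd cousin, once removed"
assert cousinhood(5, 2) == "1st cousin, 3 times removed"
```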
• #### Bad Perl: locker problem

Bad Perl solution to the "print the open lockers" problem:

perl -e"print join', ',map{\$_*\$_}1..sqrt pop" 100

54 characters.  I prefer this to the 53-character solution obtained by omitting the space after the first comma.

EDIT: 49 characters:

perl -e"print map{\$_*\$_,' '}1..sqrt pop" 100

EDIT: 48:

perl -e"print map{\$_*\$_.\$/}1..sqrt pop" 100

EDIT: 47:

perl -e"map{print\$/.\$_*\$_}1..sqrt pop" 100

I still think "say" is cheating but it does afford this very short solution:

perl -E"map{say\$_*\$_}1..sqrt pop" 100

EDIT: Apparently I need to learn how to count. Counts above are off. Anyway, 41:

perl -e"print\$_*\$_,\$/for 1..sqrt pop" 100
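For reference, the problem being golfed is the classic locker puzzle: 100 closed lockers, pass k toggles every k-th locker, and the lockers left open are exactly the perfect squares, since only squares have an odd number of divisors. A straightforward simulation (in Python rather than Perl, for clarity) confirms it:

```python
def open_lockers(count):
    open_ = [False] * (count + 1)               # lockers 1..count, all closed
    for pass_k in range(1, count + 1):
        for locker in range(pass_k, count + 1, pass_k):
            open_[locker] = not open_[locker]   # pass k toggles multiples of k
    return [i for i in range(1, count + 1) if open_[i]]

# A locker ends open iff it is toggled an odd number of times, i.e. iff
# it has an odd number of divisors, i.e. iff it is a perfect square.
assert open_lockers(100) == [n * n for n in range(1, 11)]
```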

• #### Good Perl, Bad Perl

One of my favorite languages is Perl.  Perl has an ambivalent reputation; some people take to it, some accuse it of being a syntax-complete language.  (There's some truth to this.)

My view is that Perl gives you a very direct link into the mind of the programmer - much more so than other languages.  Perl is designed very much like a spoken language, perhaps because Larry Wall's background is linguistics.

There was a little girl,
Who had a little curl,
Right in the middle of her forehead.
And when she was good,
She was very, very good,
But when she was bad,
She was horrid.

(In an English accent, "forehead" and "horrid" actually rhyme.)

Two examples of my own Perl to illustrate my point.  This is in my email signature:

`perl -e "print join er,reverse',','l hack',' P','Just anoth'"`

And this little seasonal gem:

```perl
use strict;
use warnings;

sub receive($);

my @ordinals = qw(
    zeroth
    first second third fourth fifth sixth
    seventh eighth ninth tenth eleventh twelfth
);

my @gifts = reverse split /\n/, <<END_OF_LIST;
	Twelve drummers drumming;
	Eleven pipers piping;
	Ten lords a-leaping;
	Nine ladies dancing;
	Eight maids a-milking;
	Seven swans a-swimming;
	Six geese a-laying;
	Five golden rings;
	Four colly birds;
	Three French hens;
	Two turtle doves;
	A partridge in a pear tree.
END_OF_LIST

for (my $day = 1; $day <= 12; $day++) {
    receive($day);
}

sub receive($) {
    my $day = shift;
    print("On the ", $ordinals[$day], " day of Christmas, my true love sent to me:\n");
    for (my $i = $day; $i > 0; $i--) {
        my $gift = $gifts[$i - 1];
        if ($i == 1 && $day != 1) {
            $gift =~ s/^(\s*)A/$1And a/;
        }
        print $gift, "\n";
    }
    if ($day != 12) {
        print "\n";
    }
}
```

The latter kind of Perl I like to call "good Perl".  It's easy to read, I think.  There are a couple of idioms that take getting used to, just like with any new language, but well-written Perl is (I think) easier to read than any other language.

But flexibility has its dark sides as well.  Black Perl is the canonical example, but there are others such as Perl golf.  This kind of thing (the first sample above is an example) is responsible for at least part of Perl's reputation for opacity; its compatibility with shell scripting, and most particularly its embedded regular expression support, is responsible for much of the rest.

Exercise: duplicate the output of the second sample above using as short a Perl program as possible.

• #### The more experience I get the more I like to try new things

There is a general pattern in professional people.  We start out idealistic and adventurous, and as disasters inevitably accumulate we become more circumspect and pessimistic.

(Pre-emptive snarky response: "you say that like it's a bad thing.")

While pessimism is invaluable to a tester, it should be tempered with a just sense of hubris.  Larry Wall's saw about the three virtues of any great programmer has a historical antecedent from one of the canonical American authors:

We should be careful to get out of an experience only the wisdom that is in it -- and stop there; lest we be like the cat that sits down on a hot stove-lid. She will never sit down on a hot stove-lid again -- and that is well; but also she will never sit down on a cold one any more.
-- Mark Twain, Following the Equator

• #### Rules for making change

It happens a million times a day.  Somebody pays cash for something and gets change.  But there are rules, and Raymond's friend seems to like to break those rules.  Luckily for Raymond's friend, one of the rules is the customer is always right... or as Mr. Krabs would say, the money is always right.  This blog post is about a couple of the more mathematical rules.

Rule for the cashier: you must give "canonical" change.

Whether to honor requests for non-canonical change is a complicated morass I will not attempt to tackle in this blog post as too much depends on context.  But the initial offering must be canonical.

By "canonical change", I mean the cashier must give change using the smallest possible number of canonical coins and bills.  For example, (one quarter + one nickel) is a rendering of \$0.30 using only two coins, while (three dimes) is three coins, so a cashier must never give change that includes three dimes.

By "canonical coins and bills" I mean (in the U.S.) this precise list:

• penny
• nickel
• dime
• quarter
• \$1 bill
• \$5 bill
• \$10 bill
• \$20 bill
I discuss usage of non-canonical (unusual) coins and bills later.
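The "canonical change" rule amounts to a greedy algorithm: repeatedly hand over the largest canonical denomination that still fits. (For the U.S. denomination set, greedy is known to give the minimum number of pieces.) A sketch in Perl, with amounts in cents to avoid floating-point trouble:

```perl
use strict;
use warnings;

# canonical denominations, largest first, in cents
my @canonical = (
	[2000, '$20 bill'], [1000, '$10 bill'], [500, '$5 bill'], [100, '$1 bill'],
	[25, 'quarter'], [10, 'dime'], [5, 'nickel'], [1, 'penny'],
);

# greedy: take as many of each denomination as fit, largest first
sub canonical_change {
	my $cents = shift;
	my @change;
	for my $denom (@canonical) {
		my ($value, $name) = @$denom;
		while ($cents >= $value) {
			push @change, $name;
			$cents -= $value;
		}
	}
	return @change;
}
```

For example, `canonical_change(30)` yields (quarter, nickel) - never three dimes.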

Rule 1 for the customer: "Tap, tap, no takebacks"

The tender and canonical change cannot intersect.

Don't give the cashier anything they would just hand back to you directly.

This is the rule that Raymond's friend broke - it just wastes everyone's time to play musical chairs with the money.
Actually, this rule is just a corollary of the more general rule:

Rule 2 for the customer: "No side transactions."

No subset of the tender can have the same value as any subset of canonical change for that tender.

Don't give the cashier anything that they would just hand back to you in coalesced form.

If you want to consolidate your change, go to a bank or use one of those big machines.

Corollary 1: it is always OK to pay with a single large bill.
Corollary 2: it is always OK to give exact change.
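The "no side transactions" rule can be checked mechanically: enumerate the subset sums of the tender and of the change and look for a common value. A brute-force sketch (fine for a handful of coins; the helper names are mine):

```perl
use strict;
use warnings;

# all nonempty subset sums of a list of cent values, as a hash set
sub subset_sums {
	my @values = @_;
	my %sums;
	for my $mask (1 .. (1 << @values) - 1) {
		my $total = 0;
		$total += $values[$_] for grep { ($mask >> $_) & 1 } 0 .. $#values;
		$sums{$total} = 1;
	}
	return \%sums;
}

# true if some subset of the tender equals some subset of the change in value
sub violates_no_side_transactions {
	my ($tender, $change) = @_;    # array refs of cent values
	my $t = subset_sums(@$tender);
	my $c = subset_sums(@$change);
	return 0 < grep { $c->{$_} } keys %$t;
}
```

The "$5 bill + four quarters" example from the table below violates the rule (a quarter appears on both sides), while "$5 bill + quarter + nickel" for a dime of change does not.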

Unusual denominations:

The two-dollar bill, the dollar coin (in any of its forms), the half-dollar coin, and bills higher than \$20 are "unusual" denominations that complicate the situation slightly.

Rule for the cashier: Unusual denominations stay in the drawer.

Accept them (unless there are security rules at your place of business) but they are explicitly not part of any canonical change.

Rule for the customer: Go right ahead.

You can pay with an unusual denomination provided that you do not break the "no side transactions" rule or any large-denomination security rule at the business in question.

Examples:

In the following examples, "total" is the amount owed; "tender" is the amount the customer gives to the cashier; and "change" is what the cashier gives back to the customer.

| Total | Tender | Change | Notes |
|-------|--------|--------|-------|
| $5.20 | $5 bill + $2 bill | $1 bill + three quarters + one nickel | OK - cashier correctly accepted the $2 |
| $5.20 | $5 bill + $1 bill | two quarters + three dimes | Cashier gave out more coins than necessary |
| $5.20 | $5 bill + $1 bill | half-dollar + one quarter + one nickel | Cashier gave out non-canonical half-dollar |
| $5.20 | $5 bill + four quarters | three quarters + one nickel | Customer gave three "takeback" quarters |
| $5.20 | $5 bill + quarter + nickel | dime | OK |
| $5.20 | $5 bill + quarter + five pennies | dime | OK |
| $5.20 | $5 bill + quarter + ten pennies | dime + nickel | Customer got nickel for five pennies in "side transaction" |

In the last row, you can think of the "side transaction" as either five pennies for a nickel or ten pennies for a dime. I prefer the former.

• #### Perl one-liner: approximate pi via Monte Carlo method

>perl -e"print 4/(map{\$n+=rand()**2+rand()**2<1}1..pop)*\$n" 5000
3.1336

59 characters, plus arguments.
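Unpacked into readable Perl, the same estimator looks like this - throw random points at the unit square and count how many land inside the quarter circle:

```perl
use strict;
use warnings;

# estimate pi by sampling points in the unit square:
# P(x^2 + y^2 < 1) = (area of quarter circle) / (area of square) = pi/4
sub approx_pi {
	my $trials = shift;
	my $hits = 0;
	for (1 .. $trials) {
		$hits++ if rand() ** 2 + rand() ** 2 < 1;
	}
	return 4 * $hits / $trials;
}

print approx_pi(100_000), "\n";
```

Accuracy improves only as the square root of the number of trials, so expect roughly two correct digits from 100,000 samples.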

• #### Famous Watch Story

Thanks to Gerry F. in Tampa for raising this interesting question.  Given: a perfect watch with three sweep hands (hour, minute and second).  (By "sweep", I mean that the hands move continuously and do not "jump" every hour, minute, or second.)

Find: a list of all times at which the three hands divide the circle of the clock into three equal sections - that is to say, any pair of hands will form a 120° angle, or, in radians, an angle of 2π/3.  (For the rest of the post I will use radians.)

The remainder of this post is my proof of the surprising result.  If you'd like to try solving the problem yourself, stop reading now...

OK.  First, let's define some terms.

Let t be the time.  I'm going to make a rather strange choice for the units.  I have the obvious choices of hours, minutes, or seconds - instead, I'm going to choose a unit of half-days.  So 12:00 corresponds to t = 0 or 1; 4:00 corresponds to t = 1/3; 9:00 corresponds to t = 3/4, etc.  Note the clock has periodic behavior with a period of 1 (in units of t.)

Let H be the directed angle between 12 and the hour hand, increasing from 0 (at 12:00, t = 0) through π/2 (at 3:00) and all the way 'round to 2π at 12:00 again.  Note that at 9:00 H is defined to be 3π/2 and not merely π/2.  Note that H is also periodic with period 1.

Let M be the directed angle between 12 and the minute hand similarly.  Note M is periodic with period 1/12: the minute hand goes all the way 'round in an hour, which is 1/12 of a half-day.

Let S be the directed angle between 12 and the second hand.  Note S is periodic with period 1/720: the second hand goes 'round 720 times in a half-day.

We can now write explicit formulas for H, M, and S in terms of t:
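The formula images did not survive conversion; reconstructed from the definitions above (t in half-days, each angle reduced mod 2π), they read:

```latex
\begin{align*}
H &= 2\pi t        \bmod 2\pi \\
M &= 2\pi (12t)    \bmod 2\pi \\
S &= 2\pi (720t)   \bmod 2\pi
\end{align*}
```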

Define θxy as the directed angle between hand x and hand y.  In particular:
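Reconstructed likewise, by subtracting the slower hand's angle from the faster hand's:

```latex
\begin{align*}
\theta_{HM} &= M - H = 2\pi (11t)  \bmod 2\pi \\
\theta_{MS} &= S - M = 2\pi (708t) \bmod 2\pi \\
\theta_{HS} &= S - H = 2\pi (719t) \bmod 2\pi
\end{align*}
```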

The key point is that the angles between the hands are periodic.  The angle between the minute and the hour hands cycles from 0 through 2π exactly 11 times every half-day; the angle between the second and the minute hands, exactly 12*59 = 708 times; and the angle between the second and the hour hands, exactly (12*60 - 1) = 719 times.

Now that we have formulas for the three relevant angles, we can answer the question "when are these angles 2π/3 or 4π/3?"  In particular, define {txy} as the set of times at which the x and y hands make either a 2π/3 angle or a 4π/3 angle.

Let's consider the angle between the hour and minute hand first.
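The derivation image is lost; reconstructed, it sets θHM = 2π(11t) mod 2π equal to 2π/3 or 4π/3 and solves for t:

```latex
\begin{align*}
11t \bmod 1 &\in \left\{ \tfrac{1}{3},\ \tfrac{2}{3} \right\} \\
\Longrightarrow \quad t &= \frac{k}{3 \cdot 11} = \frac{k}{33}, \qquad k \not\equiv 0 \pmod{3}
\end{align*}
```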

So the angle is 2π/3 (or 4π/3) at multiples of t = 1/33 of a half-day, or every 12*60/33 = 21 and 9/11 minutes... except that at every third multiple of 21 and 9/11 minutes, the hands coincide, so those don't count.

But rather than think of it as "21 and 9/11 minutes", I'd like to urge you to think of it as "1/(3*11) of a cycle."

Now consider the angle between the minute and the second hand:
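Reconstructed as before, setting θMS = 2π(708t) mod 2π equal to 2π/3 or 4π/3:

```latex
\begin{align*}
708t \bmod 1 &\in \left\{ \tfrac{1}{3},\ \tfrac{2}{3} \right\} \\
\Longrightarrow \quad t &= \frac{k}{3 \cdot 708} = \frac{k}{2124}, \qquad k \not\equiv 0 \pmod{3}
\end{align*}
```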

The angle between the minute and the second hand is 2π/3 or 4π/3 at multiples of 1/(3*12*59) of a half-day (except that, as before, at every third multiple of 1/2124 the hands coincide instead of making a suitable angle.)

The following is a demonstration that {tHM} and {tMS} have no elements in common.  This implies that the hour/minute angle and the minute/second angle are never 2π/3 or 4π/3 simultaneously.  As this is a necessary condition for the three hands to divide the clock evenly, the three hands never divide the clock evenly.

Write the elements of {tHM} and {tMS} in lowest terms.  How many factors of 3 are there in the numerator of each fraction?  In the denominator?

The numerator is easy: it is never a multiple of 3 to begin with, and cancellation of common factors between numerator and denominator can never add one, so after cancellation there are (still) no factors of 3.

The denominator for each tHM has exactly one factor of 3 (33 = 3*11); the denominator for each tMS has exactly two factors of 3 (2124 = 2^2 * 3^2 * 59).

Thus, the two sets have no elements in common, and the clock is never evenly divided.  QED.
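For the skeptical, the emptiness of the intersection can also be brute-forced over one period - a quick sanity check (helper names and the exact-fraction comparison are mine):

```perl
use strict;
use warnings;

# {tHM} = { k/33 : 3 does not divide k }, {tMS} = { k/2124 : 3 does not divide k },
# taken over one half-day (0 <= t < 1).
my @hm = grep { $_ % 3 } 0 .. 32;
my @ms = grep { $_ % 3 } 0 .. 2123;

# k1/33 == k2/2124 exactly when k2 = k1 * 2124 / 33 is an integer
my %ms_lookup = map { $_ => 1 } @ms;
my @common = grep {
	my $k2 = $_ * 2124 / 33;
	$k2 == int($k2) && $ms_lookup{$k2};
} @hm;

print scalar(@common), " common elements\n";    # expect 0
```

(The only candidates are k = 11 and k = 22, which map to k2 = 708 and 1416 - both multiples of 3, hence excluded from {tMS}.)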

To illustrate the importance of the numerator being a nonmultiple of 3, let's calculate {tHS}:
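Reconstructed, setting θHS = 2π(719t) mod 2π equal to 2π/3 or 4π/3:

```latex
\begin{align*}
719t \bmod 1 &\in \left\{ \tfrac{1}{3},\ \tfrac{2}{3} \right\} \\
\Longrightarrow \quad t &= \frac{k}{3 \cdot 719} = \frac{k}{2157}, \qquad k \not\equiv 0 \pmod{3}
\end{align*}
```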

719 happens to be prime.

It is tempting to put forth the following fallacious argument:

Fallacious argument: "{tHS} consists of a bunch of fractions whose denominator contains the large prime factor 719.  Neither of the other sets {tHM} or {tMS} contain this large prime factor in their denominators... therefore {tHS} contains no elements in common with either {tHM} or {tMS}, QED."

The problem with this argument is that it attempts to prove something that is false.  True, the intersection of {tHS} and {tMS} is empty, but the intersection of {tHS} and {tHM} is not empty - in fact, it consists of precisely the two elements { 1/3, 2/3 } (which correspond to 4:00:00 and 8:00:00 respectively.)

What happened to the factor of 719 in the denominator?  It got canceled out by a multiple of 719 in the numerator.  In particular, 1/3 = (3*239 + 2)/(3*719), and 2/3 = (3*479 + 1)/(3*719).

• #### How to enumerate DirectSound DirectX Media Objects (DMOs) on your system

Source and binaries (amd64 and x86) attached.

Pseudocode:

GUID dmo_categories[] = { ... };
for (each guid in dmo_categories) {
    IEnumDMO *pEnum = NULL;
    DMOEnum(guid, DMO_ENUMF_INCLUDE_KEYED, ..., &pEnum);
    while (S_OK == pEnum->Next(1, &clsid, &szName, NULL)) {
        print clsid, szName;
    }
    pEnum->Release();
}

Output on my system:

>dmoenum.exe
-- Audio decoders ({57F2DB8B-E6BB-4513-9D43-DCD2A6593125}) --
WMAPro over S/PDIF DMO ({5210F8E4-B0BB-47C3-A8D9-7B2282CC79ED})
WMSpeech Decoder DMO ({874131CB-4ECC-443B-8948-746B89595D20})
MP3 Decoder DMO ({BBEEA841-0A63-4F52-A7AB-A9B3A84ED38A})

-- Audio effects ({F3602B3F-0592-48DF-A4CD-674721E7EBEB}) --
ParamEq ({120CED89-3BF4-4173-A132-3CB406CF3231})
AEC ({745057C7-F353-4F2D-A7EE-58434477730E})
WavesReverb ({87FC0268-9A55-4360-95AA-004A1D9DE26C})
Gargle ({DAFD8210-5711-4B91-9FE3-F75B7AE279BF})
Compressor ({EF011F79-4000-406D-87AF-BFFB3FC39D57})
Distortion ({EF114C90-CD1D-484E-96E5-09CFAF912A21})
Echo ({EF3E932C-D40B-4F51-8CCF-3F98F1B29D5D})
I3DL2Reverb ({EF985E71-D5C7-42D4-BA4D-2D073E2E96F4})
Chorus ({EFE6629C-81F7-4281-BD91-C9D604A95AF6})
Resampler DMO ({F447B69E-1884-4A7E-8055-346F74D6EDB3})

-- Audio encoders ({33D9A761-90C8-11D0-BD43-00A0C911CE86}) --
WM Speech Encoder DMO ({1F1F4E1A-2252-4063-84BB-EEE75F8856D5})
WMAudio Encoder DMO ({70F598E9-F4AB-495A-99E2-A7C4D3D89ABF})

-- Video decoders ({4A69B442-28BE-4991-969C-B500ADF5D8A8}) --
Mpeg4s Decoder DMO ({2A11BAE2-FE6E-4249-864B-9E9ED6E8DBC2})
WMV Screen decoder DMO ({7BAFB3B1-D8F4-4279-9253-27DA423108DE})
WMVideo Decoder DMO ({82D353DF-90BD-4382-8BC2-3F6192B76E34})
Mpeg43 Decoder DMO ({CBA9E78B-49A3-49EA-93D4-6BCBA8C4DE07})
MS ATC Screen Decoder 1 ({F1931D8E-51D3-496F-BE8A-3D08AEE9C9DB})
Mpeg4 Decoder DMO ({F371728A-6052-4D47-827C-D039335DFE0A})

-- Video effects ({D990EE14-776C-4723-BE46-3DA2F56F10B9}) --
Frame Rate Converter ({01F36CE2-0907-4D8B-979D-F151BE91C883})
TOC Generator ({4DDA1941-77A0-4FB1-A518-E2185041D70C})
Color Control ({798059F0-89CA-4160-B325-AEB48EFE4F9A})
Color Converter DMO ({98230571-0087-4204-B020-3282538E57D3})

-- Video encoders ({33D9A760-90C8-11D0-BD43-00A0C911CE86}) --
WMVideo8 Encoder DMO ({7E320092-596A-41B2-BBEB-175D10504EB6})
WMVideo9 Encoder DMO ({D23B90D0-144F-46BD-841D-59E4EB19DC59})
MSScreen 9 encoder DMO ({F7FFE0A0-A4F5-44B5-949E-15ED2BC66F9D})

-- Audio capture effects ({F665AABA-3E09-4920-AA5F-219811148F09}) --

• #### How to enumerate Audio Compression Manager (ACM) drivers on your system (spot the bug!)

Source and binaries (amd64 and x86) attached.

Pseudocode:

list acm_drivers;
acmDriverEnum(myCallbackFunction, &acm_drivers, ...);
// myCallbackFunction(driver, pacm_drivers) { pacm_drivers->add(driver); }
for (each driver in acm_drivers) {
    details = acmDriverDetails(driver);
    print details;
}

Output on my system - spot the bug!

ACM Drivers found: 6
-- ACM Driver Details: Microsoft IMA ADPCM --
cbStruct: 1804
fccType: 0x63647561 (audc)
fccComp: 0x00000000 (    )
wMid: 1
wPid: 34
vdwACM: 0x03320000 (3.50.12800)
vdwDriver: 0x04000000 (4.0.0)
fdwSupport: 0x00000001
ACMDRIVERDETAILS_SUPPORTF_CODEC
cFormatTags: 2
cFilterTags: 0
hicon: 0x0000000000000000
szLicensing: ""
szFeatures: "Compresses and decompresses IMA ADPCM audio data."

-- ACM Driver Details: Microsoft CCITT G.711 --
cbStruct: 1804
fccType: 0x63647561 (audc)
fccComp: 0x00000000 (    )
wMid: 1
wPid: 37
vdwACM: 0x03320000 (3.50.12800)
vdwDriver: 0x04000000 (4.0.0)
fdwSupport: 0x00000001
ACMDRIVERDETAILS_SUPPORTF_CODEC
cFormatTags: 3
cFilterTags: 0
hicon: 0x0000000000000000
szShortName: "Microsoft CCITT G.711"
szLongName: "Microsoft CCITT G.711 A-Law and u-Law CODEC"
szLicensing: ""
szFeatures: "Compresses and decompresses CCITT G.711 A-Law and u-Law audio data."

-- ACM Driver Details: Microsoft GSM 6.10 --
cbStruct: 1804
fccType: 0x63647561 (audc)
fccComp: 0x00000000 (    )
wMid: 1
wPid: 36
vdwACM: 0x03320000 (3.50.12800)
vdwDriver: 0x04000000 (4.0.0)
fdwSupport: 0x00000001
ACMDRIVERDETAILS_SUPPORTF_CODEC
cFormatTags: 2
cFilterTags: 0
hicon: 0x0000000000000000
szShortName: "Microsoft GSM 6.10"
szLongName: "Microsoft GSM 6.10 Audio CODEC"
szLicensing: ""
szFeatures: "Compresses and decompresses audio data conforming to the ETSI-GSM (European Telecommunications Standards Institute-Groupe Special Mobile) recommendation 6.10."

-- ACM Driver Details: MS-ADPCM --
cbStruct: 1804
fccType: 0x63647561 (audc)
fccComp: 0x00000000 (    )
wMid: 1
wPid: 33
vdwACM: 0x03320000 (3.50.12800)
vdwDriver: 0x04000000 (4.0.0)
fdwSupport: 0x00000001
ACMDRIVERDETAILS_SUPPORTF_CODEC
cFormatTags: 2
cFilterTags: 0
hicon: 0x0000000000000000
szLicensing: ""
szFeatures: "Compresses and decompresses Microsoft ADPCM audio data."

-- ACM Driver Details: MPEG Layer-3 Codec  --
cbStruct: 1804
fccType: 0x63647561 (audc)
fccComp: 0x00000000 (    )
wMid: 172
wPid: 9
vdwACM: 0x03320000 (3.50.12800)
vdwDriver: 0x01090191 (1.9.2305)
fdwSupport: 0x00000001
ACMDRIVERDETAILS_SUPPORTF_CODEC
cFormatTags: 2
cFilterTags: 0
hicon: 0x0000000067C507A5
szShortName: "MPEG Layer-3 Codec "
szLongName: "Fraunhofer IIS MPEG Layer-3 Codec (decode only)"
szLicensing: ""
szFeatures: "decoder only version"

-- ACM Driver Details: MS-PCM --
cbStruct: 1804
fccType: 0x63647561 (audc)
fccComp: 0x00000000 (    )
wMid: 1
wPid: 38
vdwACM: 0x03320000 (3.50.12800)
vdwDriver: 0x05000000 (5.0.0)
fdwSupport: 0x00000002
ACMDRIVERDETAILS_SUPPORTF_CONVERTER
cFormatTags: 1
cFilterTags: 0
hicon: 0x0000000000000000
szShortName: "MS-PCM"
szLongName: "Microsoft PCM Converter"