I am a Software Development Engineer in Test working for the Windows Sound team. You can contact me via email: mateer at microsoft dot com
Windows has a customizable soundtrack - when different things happen, sounds play. You can live with the default sounds, turn all the sounds off, or substitute your own sounds. All of this is configurable via the Sound control panel.
You can assign short or long, pleasant or annoying sounds to any event in the list. For example, you could rip a CD to .wav format and assign it to the "Maximize" event.
There are two events that are a little finicky about how long the sound is, though: the "Exit Windows" and "Windows Logoff" events. If you assign a sound that is too long to either of these events, it won't play. It plays when you click the "Test" button, but when you actually log off or shut down, the sound doesn't play.
How long is too long?
Before the answer, a quick review of .wav files.
You can have any number of frames per second (this is the "sample rate"). CDs use a sample rate of 44100 frames per second; DVDs use 48000 frames per second. The sample rate is usually one of 8000, 11025, 16000, 22050, 32000, 44100, 48000, 88200, 96000, 176400, or 192000.
You can have any number of samples in a frame: mono (1 channel), stereo (two channels), surround (four channels), 5.1 (six channels), 7.1 (eight channels), or a different number.
Each sample is a number. This can be a floating-point number (32 bits), an unsigned integer (8 bits), or a signed integer (16, 20, 24, or 32 bits). The 20-bit and 24-bit integers can be stored either in 24-bit containers or in 32-bit containers.
The fatness of the samples, and how many there are per second, determines how many bytes a second of .wav data will contain. The formula is:
bytes/second = (samples/frame, AKA channels) * (frames/second, AKA sample rate) * (bits/sample) / (bits/byte, always 8)
For performance reasons, Windows Vista limits the size of shutdown/logoff sounds to 4 MB.
So how long is 4 MB? Here's the answer in tabular form for the most common formats. Lengths are in seconds.
unsigned 8-bit int

sample rate    mono     stereo   surround   5.1      7.1
       8000   524.29   262.14    131.07    87.38    65.54
      11025   380.44   190.22     95.11    63.41    47.55
      16000   262.14   131.07     65.54    43.69    32.77
      22050   190.22    95.11     47.55    31.70    23.78
      32000   131.07    65.54     32.77    21.85    16.38
      44100    95.11    47.55     23.78    15.85    11.89
      48000    87.38    43.69     21.85    14.56    10.92
      88200    47.55    23.78     11.89     7.93     5.94
      96000    43.69    21.85     10.92     7.28     5.46
     176400    23.78    11.89      5.94     3.96     2.97
     192000    21.85    10.92      5.46     3.64     2.73

signed 16-bit int

sample rate    mono     stereo   surround   5.1      7.1
       8000   262.14   131.07     65.54    43.69    32.77
      11025   190.22    95.11     47.55    31.70    23.78
      16000   131.07    65.54     32.77    21.85    16.38
      22050    95.11    47.55     23.78    15.85    11.89
      32000    65.54    32.77     16.38    10.92     8.19
      44100    47.55    23.78     11.89     7.93     5.94
      48000    43.69    21.85     10.92     7.28     5.46
      88200    23.78    11.89      5.94     3.96     2.97
      96000    21.85    10.92      5.46     3.64     2.73
     176400    11.89     5.94      2.97     1.98     1.49
     192000    10.92     5.46      2.73     1.82     1.37

24-bit container: signed 20-bit int or signed 24-bit int

sample rate    mono     stereo   surround   5.1      7.1
       8000   174.76    87.38     43.69    29.13    21.85
      11025   126.81    63.41     31.70    21.14    15.85
      16000    87.38    43.69     21.85    14.56    10.92
      22050    63.41    31.70     15.85    10.57     7.93
      32000    43.69    21.85     10.92     7.28     5.46
      44100    31.70    15.85      7.93     5.28     3.96
      48000    29.13    14.56      7.28     4.85     3.64
      88200    15.85     7.93      3.96     2.64     1.98
      96000    14.56     7.28      3.64     2.43     1.82
     176400     7.93     3.96      1.98     1.32     0.99
     192000     7.28     3.64      1.82     1.21     0.91

32-bit container: signed 20-bit int, signed 24-bit int, or float32

sample rate    mono     stereo   surround   5.1      7.1
       8000   131.07    65.54     32.77    21.85    16.38
      11025    95.11    47.55     23.78    15.85    11.89
      16000    65.54    32.77     16.38    10.92     8.19
      22050    47.55    23.78     11.89     7.93     5.94
      32000    32.77    16.38      8.19     5.46     4.10
      44100    23.78    11.89      5.94     3.96     2.97
      48000    21.85    10.92      5.46     3.64     2.73
      88200    11.89     5.94      2.97     1.98     1.49
      96000    10.92     5.46      2.73     1.82     1.37
     176400     5.94     2.97      1.49     0.99     0.74
     192000     5.46     2.73      1.37     0.91     0.68
The most common audio format by far is 44.1 kHz / stereo / 16-bit; if your .wav file is in this format, you get just under 24 seconds before you hit the 4 MB limit.
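If you'd rather compute entries than read them off a table, the bytes/second formula translates directly into code. A minimal C++ sketch (the function name is mine, not part of any Windows API):

```cpp
#include <cassert>
#include <cmath>

// Seconds of audio that fit in Windows Vista's 4 MB (4,194,304 byte)
// shutdown/logoff sound limit, straight from the formula:
// bytes/second = channels * (frames/second) * (bits/sample) / 8
double secondsInFourMegabytes(int channels, int sampleRate, int bitsPerSample) {
    const double fourMegabytes = 4.0 * 1024.0 * 1024.0; // 4,194,304 bytes
    double bytesPerSecond = channels * sampleRate * (bitsPerSample / 8.0);
    return fourMegabytes / bytesPerSecond;
}
```

For example, secondsInFourMegabytes(2, 44100, 16) reproduces the 23.78-second CD-format entry.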
If you want to go over this limit, or if you want a multichannel logoff sound, you can get away with downsampling. If you shoehorn in a longer sound, though, you may run into this:
If you just wait, the sound will play to the end and logoff will continue.
If you click "Cancel", explorer.exe will exit anyway and you'll be logged in with no shell.
A trick to get the shell back: Ctrl-Shift-Esc to bring up Task Manager, and File | New Task (Run...) | explorer.exe to reinvoke the shell. This also allows you to determine which of the notification area icons are well-behaved. :-)
Vincent Tan picked up on Raymond Chen's "rotate a matrix" interview post.
As Vincent Tan points out, there is no way to create an n x n matrix R such that RA is a rotated version of A - for example, in the 2 x 2 case:
R [ a b ]   [ c a ]
  [ c d ] = [ d b ]
Vincent Tan points out that such an R would entail hidden assumptions but asks for a simpler proof.
Here it is. I'll go back to the n by n case, assuming n >= 2. (The 0 x 0 case and the 1 x 1 case are actually trivially solvable using the identity matrix.)
Assume there is such a matrix R = (r_{ij}), i, j ∈ {1...n}, which has the property that RA = B = (b_{ij}) is the rotated version of A for any n x n matrix A = (a_{ij}).
To see that this is impossible, consider the matrix with a 1 in the upper-left-hand corner and zeros everywhere else; that is, a_{1,1} = 1, and a_{ij} = 0 for all other combinations of i and j.
To be the rotated version of this matrix, B should have b_{n1} = 1 and b_{ij} = 0 for all other combinations of i and j. (My coordinates are such that b_{n1} is the number in the top right corner of the matrix.)
But how did that 1 get in the top right hand corner? The rules of matrix multiplication imply that b_{n1} = r_{11}a_{n1} + r_{21}a_{n2} + ... + r_{n1}a_{nn}. At first blush, this looks fine... until we realize that b_{n1} is 1, and all of the a_{nj} are 0... so we have the contradiction
1 = r_{11}·0 + r_{21}·0 + ... + r_{n1}·0 = 0
QED.
But this is not how rotation matrices are applied. Rotation matrices R are not applied as RA = B; instead, they're applied as RAR^{-1} = B. Not that this helps our hapless interviewee; the operation being asked of him ("rotate a matrix") can't be done by a rotation matrix anyway. That is, there is no magic matrix R that works under this operation either.
EDIT: A simple way to prove that RAR^{-1} = B doesn't work either (for n >= 2) is to think about what happens when you choose A to be the identity matrix I (the matrix with 1's on the main diagonal [i = j] and 0's everywhere else.)
RIR^{-1} = RR^{-1} = I; and for n >= 2, I is not its own "rotation."
We therefore arrive at the paradoxical result that you can't rotate a matrix via matrix rotation.
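The rotation itself, of course, is easy to do procedurally, without any magic matrix. A C++ sketch (the function name is mine), using the same convention as the proof above, where the top-left entry lands in the top-right corner:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Rotate an n x n matrix 90 degrees clockwise, procedurally.
template <typename T, std::size_t N>
std::array<std::array<T, N>, N> rotateClockwise(const std::array<std::array<T, N>, N> &m) {
    std::array<std::array<T, N>, N> b{};
    for (std::size_t i = 0; i < N; i++) {
        for (std::size_t j = 0; j < N; j++) {
            // row i, column j of the original lands in row j, column N - 1 - i
            b[j][N - 1 - i] = m[i][j];
        }
    }
    return b;
}
```

In the 2 x 2 case this takes [ a b / c d ] to [ c a / d b ], as in the example.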
The GIMPS project says they've found the largest prime number ever, but they're keeping quiet about what it is until they've verified it (they expect to be done in a couple of weeks.)
Pshaw. I can tell you right now what their prime number is.
Since I'm a computer scientist I'll write it down in binary.
GIMPS' LARGEST PRIME NUMBER IS (scroll down / highlight to view:)
0b111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111...111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
EDIT: On Saturday 9/6 another prime was found. They're also keeping mum about what this one is. But I know this one too...
EDIT2: Every new Mersenne prime also means a new perfect number is discovered; counting these two new Mersenne primes, there are now 46 known perfect numbers, all of them even. (It is conjectured, but not proven, that all perfect numbers are even.) To go from a Mersenne prime (which is of the form 0b111...11, where there are a prime number of 1's) to its corresponding perfect number, tack on one fewer number of 0's onto the end of the number: e.g., 0b11 (3, the first Mersenne prime) becomes 0b110 (6, the first perfect number;) 0b111 (7, the second Mersenne prime) becomes 0b11100 (28, the second perfect number) etc. (The proof that such numbers are perfect is simple; there is a more complicated proof that all even perfect numbers are of this form.)
EDIT: I can now reveal that the number of 1s is 43,112,609.
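The Mersenne-to-perfect construction above is easy to check mechanically for the small cases. A C++ sketch (function names are mine):

```cpp
#include <cassert>
#include <cstdint>

// Euclid's construction: a Mersenne prime 2^p - 1 (p ones in binary)
// yields the perfect number 2^(p-1) * (2^p - 1) -- in binary,
// p ones followed by p - 1 zeros.
uint64_t perfectFromMersenneExponent(unsigned p) {
    uint64_t mersenne = (uint64_t(1) << p) - 1; // 0b111...1, p ones
    return (uint64_t(1) << (p - 1)) * mersenne; // append p - 1 zeros
}

// brute force: n is perfect iff it equals the sum of its proper divisors
bool isPerfect(uint64_t n) {
    uint64_t sum = 0;
    for (uint64_t d = 1; d < n; d++) {
        if (n % d == 0) { sum += d; }
    }
    return sum == n;
}
```

perfectFromMersenneExponent(2), (3), and (5) give 6, 28, and 496, the first three perfect numbers.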
The part where a computer can't help is the uniqueness. Here is a mathematical proof that there are no solutions if any of x, y, or z are >= 6:
Assume without loss of generality that x >= y >= z. The following two inequalities hold trivially, since we're dealing with non-negative numbers:
x! + y! + z! > x!
x * y * z <= x^{3}
Here's where 6 comes in:
Lemma: for x >= 6, x! > x^{3} (proof momentarily.)

If this lemma is true, then we're done... for x >= 6,

x! + y! + z! > x! > x^{3} >= x * y * z
So a search for solutions for x! + y! + z! = x * y * z can be restricted to 5 >= x >= y >= z. This is computationally feasible. By inspection, there is a unique solution (see later in this post.)

Proof of lemma: first, look at the values of x! and x^{3} for x = 0 through 7:
0! = 1 > 0 = 0^{3}
1! = 1 = 1 = 1^{3}
2! = 2 < 8 = 2^{3}
3! = 6 < 27 = 3^{3}
4! = 24 < 64 = 4^{3}
5! = 120 < 125 = 5^{3}
6! = 720 > 216 = 6^{3}
7! = 5040 > 343 = 7^{3}
It's not immediately clear how to progress. The key idea, as often in geometric-ish progressions: look at successive ratios. The initial 0 messes things up a bit, so we'll start with x = 1.

On the left, numbers advance by a factor of:
(x + 1)! / x ! = x + 1
Note this ratio gets bigger and bigger.
On the right, numbers advance by a factor of
(x + 1)^{3} / x^{3} = ((x + 1) / x)^{3} = (1 + (1/x))^{3}

Note this rate of increase actually gets smaller.

The crossover point - where x! stops losing ground and begins to catch up - is x = 3:

1 + 1 = 2 < 8 = (1 + (1/1))^{3}
2 + 1 = 3 < 3.375 = (1 + (1/2))^{3}
3 + 1 = 4 > 2.370370... = (1 + (1/3))^{3}

By x = 6, the lost ground has been made up, and x! never looks back. QED.
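For a mechanical double-check of the lemma, a C++ sketch (the function name is mine; uint64_t holds x! exactly up through x = 20):

```cpp
#include <cassert>
#include <cstdint>

// Spot-check of the lemma: x! > x^3 for x >= 6.
bool factorialBeatsCube(unsigned x) {
    uint64_t factorial = 1;
    for (unsigned i = 2; i <= x; i++) { factorial *= i; }
    return factorial > uint64_t(x) * x * x;
}
```

factorialBeatsCube(5) is false (120 < 125); from 6 on it stays true.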
By the proof above, it only remains to look at the finite number of cases where x, y, and z are all <= 5. Here they all are: note that in addition to the unique solution for the equality

x! + y! + z! = x * y * z

there are only four solutions to the inequality

x! + y! + z! < x * y * z

All of the other cases have

x! + y! + z! > x * y * z.
Here's the program I used to check the cases for x, y, z <= 9. I used Perl. Note the similarity to Zbyszek's solution.
use strict;

# initialize table of factorials with 0! = 1
my @fac_table = (1);

# loop executes 10 times
# interesting fact - solution remains unique even if you use higher bases
# (e.g., change 9 to 15 for hex)
for my $i (0 .. 9) {
    $fac_table[$i + 1] = ($i + 1) * $fac_table[$i];

    # loop executes 1 + 2 + ... + 9 + 10 = 55 times
    for my $j (0 .. $i) {
        # inner loop executes
        # 1 +
        # 1 + 2 +
        # ...
        # 1 + 2 + ... + 9 +
        # 1 + 2 + ... + 9 + 10 times
        #
        # Here's how I counted that up:
        # if $i, $j, and $k are all different,
        # there are (10 choose 3) = 10 * 9 * 8 / (1 * 2 * 3) = 120 ways
        # if $i, $j, and $k are all the same,
        # there are 10 ways
        # if $i and $k are different, but $j is the same as one or the other,
        # there are (10 choose 2) = 10 * 9 / (1 * 2) = 45 ways to pick $i and $k
        # and two choices in each of these for $j, so there are 90 ways
        # total = 120 + 10 + 90 = 220 iterations
        # (verified with a loop counter)
        for my $k (0 .. $j) {
            # note $i >= $j >= $k
            if ($fac_table[$i] + $fac_table[$j] + $fac_table[$k] == $i * $j * $k) {
                print qq($i! + $j! + $k! == $i * $j * $k\n);
            }
        }
    }
}
A cute little exercise.
I stumbled on hmlee's algorithm chart quite by chance. A lot of good questions in there.
Question 53 caught my eye as a mathematician.
Q53: Say you have three integers between 0 - 9. You have the equation: A! + B! + C! = ABC (where ABC is a three-digit number, not A * B * C). Find A, B, and C that satisfy this equation.
Interesting. But I'd like to modify the question a little.
Q53': Say you have three integers between 0 - 9. You have the equation: A! + B! + C! = A * B * C. Find A, B, and C that satisfy this equation.
Astute readers will notice that A, B, and C are interchangeable. Nonetheless there is a unique solution (modulo swapping A, B, and C.)
I'm much less interested in the answer to this question (I know the answer) than I am in the quality of the program used to find the answer. (I tried finding the answer by hunt-and-peck, then gave up and wrote a program - I find the program to be more interesting than the answer.) Try to find the solution as efficiently as possible (without cheating.)
I'll post my program later.
If you see a fact, try to see it as intuitively as possible.
I suspect that the following even more general problem has the same unique solution, which would be very interesting indeed. This is the kind of thing where programs fail, and the mathematical mind becomes necessary again:
Q53'': Say you have three integers that are >= 0. You have the equation: A! + B! + C! = A * B * C. Find A, B, and C that satisfy this equation.
Sam Loyd was a wacky guy. His legacy of puzzles is well worth digging through. (I'd stay away from the 15/14 puzzle though.)
A sprinkling of his puzzles were chess-related, and plenty of them stand up well in the greater lexicon of chess puzzles. For example, his "Organ Pipes" mate-in-two problem is quite well known:
White to move and mate in two (Sam Loyd, 1859)
(If you haven't seen it before, go ahead and try it out. It's a cute little interference problem.)
OK, now we get to it.
From a technical point of view, there's a slight flaw here. Ideally, in a chess problem Black has very many options, and White has only a single path to victory in all variations.
This is not the case here. If the Bishop on f8 wanders away (1. ... Bg7, 1. ... Bh6) or is blocked by the Rook on e8 (1. ... Re7) then White has two mates: 2. Qb6# or 2. Qxb4#.
Luckily, the problem can be easily "fixed" - add a black pawn on a7:
White to move and mate in two(Sam Loyd, 1859; version by Matthew van Eerde, 2008)
This covers the b6 square. Now the only mate is to take on b4.
Beauty is a tricky beast though. This is now a more "technically correct" problem, but the position is no longer quite as "natural"-looking. (I quote "natural" because I've seen plenty of ugly over-the-board positions.) The pawn on a6 must have come from b7, and the pawn on b4 must have come from c7 (or perhaps from further away.)
I'm not sure which version I prefer, but it's nice that the flaw is not serious.
If you've bowled, you know the arrangement of the bowling pins forms a triangle.
(Image courtesy of the International Pumpkin Federation.)
If you've played eight-ball, you know the arrangement of the fifteen billiard balls forms a triangle.
Ten, fifteen... what other numbers form a triangle? The common arrangement of nine-ball and ninepins doesn't count because it's a diamond, not a triangle.
You can start with ten and add five balls to make a triangle of fifteen... then add six more to make a triangle of 21... then seven more to make a triangle of 28... and so on, with this sequence:
..., 10, 15, 21, 28, 36, 45, 55, 66, 78, 91, 105, 120, 136, 153, 171, 190, 210, 231...
Start looking for patterns. What do you see? Nothing jumps out right away. Are there any primes? Ones that end in 0, 2, 4, 6, or 8 are obviously even and therefore not primes... ones that end in 5 are divisible by 5 and therefore not primes... the remaining ones fall due to specific cases (21 = 3 x 7; 91 = 7 x 13; 153 = 3 x 3 x 17; 171 = 3 x 3 x 19; 231 = 3 x 7 x 11...)
In fact, gosh darn it, it seems like none of these numbers are prime, no matter how far we extend that "..." on the right! Can this be a coincidence?
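It's easy to let a program confirm the suspicion for the numbers listed above. A small C++ sketch (the helper name is mine):

```cpp
#include <cassert>

// A number is composite if it has a divisor strictly between 1 and itself.
bool isComposite(int n) {
    for (int d = 2; d * d <= n; d++) {
        if (n % d == 0) { return true; }
    }
    return false;
}
```

Building the triangle the way we did above - start at 10, add 5, then 6, then 7, and so on up to 231 - every number that comes out is composite.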
A few mental coruscations later I have an idea that it is not a coincidence... that triangular numbers, by their very nature, cannot be prime. In fact I'm even willing to call it a "proof." Here it is.
"Theorem": there are no triangular primes.
First we need to generate a formula for the triangular numbers. Note that if you take an n x (n + 1) rectangle and draw a zig-zag line like so, you get two triangles.
Each of these triangles is composed of n diagonals, the shortest of which is 1, and the longest of which is n. That is to say, each of the triangles is composed of t_{n} squares. So we know that the total number of squares in the rectangle is 2t_{n}.
But we also know that the total number of squares is the base times the height, or n(n + 1). This gives us 2t_{n} = n(n + 1), or t_{n} = n(n + 1)/2.
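A quick mechanical check that the closed form agrees with the running sum, in C++ (the function name is mine):

```cpp
#include <cassert>

// The n-th triangular number as the literal sum 1 + 2 + ... + n,
// to check against the closed form n(n + 1)/2 from the rectangle argument.
int triangularBySum(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) { sum += i; }
    return sum;
}
```

triangularBySum(4) gives the bowling pins (10) and triangularBySum(5) the billiard balls (15), matching n(n + 1)/2 in each case.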
The next part of the "proof" breaks down into case analysis. n can be odd (as in the diagram, where n is 5) or even.
Case where n is even:
n is an even positive number. Therefore n/2 is a positive integer (maybe odd, maybe even; doesn't matter.) t_{n} can be written as (n/2)(n + 1), and is therefore not prime, since it has at least four factors: 1, n/2, n + 1, and t_{n}.
Case where n is odd:
n is an odd positive number. Therefore n + 1 is an even positive number. Therefore (n + 1)/2 is a positive integer (maybe odd, maybe even; doesn't matter.) t_{n} can be written as (n)([n + 1]/2), and is therefore not prime, since it has at least four factors: 1, n, (n + 1)/2, and t_{n}.
Note that the "proof" that t_{n} is not prime inevitably concludes in a stronger result - that t_{n} has at least four factors... not only is t_{n} not prime, it can't even have as few as three factors. (Some numbers with three factors: 4, 9, 25, 49...)
Exercise: what kind of numbers have exactly three factors?
A beautiful proof. Perhaps the two cases can be elegantly folded together to normalize it a bit better. That is not a serious problem.
There is a serious problem.
The proof of our result is doomed.
Why?
Because the result does not hold! There is a triangular prime.
Exercise: find a triangular prime.
After having recovered from the shocking revelation that, our beautiful proof to the contrary, a triangular prime is so rude as to exist, a little self-examination is in order. What is wrong with the proof? This... and not the existence of triangular primes... is the lesson to be learned: that beauty is not always truth.
vbBretty on the audio forums asked how to tell if a given audio output and a given audio input were on the same physical audio card.
I found the exercise interesting enough to share here:
Ah... Casablanca.
Arguably the best movie of all time. And yet a blown line? How could I be so bold?
The line:
Play it [As Time Goes By] again, Sam. -- Casablanca
Wow... that line, blown? Am I way off base here? Arguably the best line of arguably the best movie... blown? How so?
Well, for two reasons.
The topic of this arguably-best-line of the arguably-best-movie was thus saved by a haircut.
I found a simple but nasty bug the other day in this implementation of the IMFOutputSchema interface.
The symptoms: running this code outside of a debugger caused the app to crash. Running it under a debugger (to catch the crash) caused the app to run clean.
// outputschema.h
HRESULT CreateTrustedAudioDriversOutputSchema(
    DWORD dwConfigData,
    GUID guidOriginatorID,
    IMFOutputSchema **ppMFOutputSchema
);
// outputschema.cpp
// ... various include files removed ...

// CMFAttributesImpl implements the IMFAttributes interface, minus the IUnknown methods
class CTrustedAudioDriversOutputSchema : public CMFAttributesImpl<IMFOutputSchema> {

friend HRESULT CreateTrustedAudioDriversOutputSchema(
    DWORD dwConfigData,
    GUID guidOriginatorID,
    IMFOutputSchema **ppMFOutputSchema
);

private:
    CTrustedAudioDriversOutputSchema(DWORD dwConfigData, GUID guidOriginatorID);
    ~CTrustedAudioDriversOutputSchema();

    ULONG m_cRefCount;
    DWORD m_dwConfigData;
    GUID m_guidOriginatorID;

public:
    // IUnknown methods
    HRESULT STDMETHODCALLTYPE QueryInterface(
        /* [in] */ REFIID riid,
        /* [out] */ LPVOID *ppvObject
    );
    ULONG STDMETHODCALLTYPE AddRef();
    ULONG STDMETHODCALLTYPE Release();

    // IMFOutputSchema methods
    HRESULT STDMETHODCALLTYPE GetConfigurationData(__out DWORD *pdwVal);
    HRESULT STDMETHODCALLTYPE GetOriginatorID(__out GUID *pguidOriginatorID);
    HRESULT STDMETHODCALLTYPE GetSchemaType(__out GUID *pguidSchemaType);
}; // CTrustedAudioDriversOutputSchema

HRESULT CreateTrustedAudioDriversOutputSchema(
    DWORD dwConfigData,
    GUID guidOriginatorID,
    IMFOutputSchema **ppMFOutputSchema
) {
    if (NULL == ppMFOutputSchema) { return E_POINTER; }
    *ppMFOutputSchema = NULL;

    CTrustedAudioDriversOutputSchema *pSchema =
        new CTrustedAudioDriversOutputSchema(dwConfigData, guidOriginatorID);
    if (NULL == pSchema) {
        LOG(eError, _T("new CTrustedAudioDriversOutputSchema returned a NULL pointer"));
        return E_OUTOFMEMORY;
    }

    *ppMFOutputSchema = static_cast<IMFOutputSchema *>(pSchema);
    return S_OK;
} // CreateTrustedAudioDriversOutputSchema

// constructor
CTrustedAudioDriversOutputSchema::CTrustedAudioDriversOutputSchema(
    DWORD dwConfigData, GUID guidOriginatorID
)
: m_dwConfigData(dwConfigData)
, m_guidOriginatorID(guidOriginatorID)
{}

// destructor
CTrustedAudioDriversOutputSchema::~CTrustedAudioDriversOutputSchema() {}

#define RETURN_INTERFACE(T, iid, ppOut) \
    if (IsEqualIID(__uuidof(T), (iid))) { \
        this->AddRef(); \
        *(ppOut) = static_cast<T *>(this); \
        return S_OK; \
    } else {} (void)0

// IUnknown::QueryInterface
HRESULT STDMETHODCALLTYPE CTrustedAudioDriversOutputSchema::QueryInterface(
    /* [in] */ REFIID riid,
    /* [out] */ LPVOID *ppvObject
) {
    if (NULL == ppvObject) { return E_POINTER; }
    *ppvObject = NULL;

    RETURN_INTERFACE(IUnknown, riid, ppvObject);
    RETURN_INTERFACE(IMFAttributes, riid, ppvObject);
    RETURN_INTERFACE(IMFOutputSchema, riid, ppvObject);

    return E_NOINTERFACE;
}

// IUnknown::AddRef
ULONG STDMETHODCALLTYPE CTrustedAudioDriversOutputSchema::AddRef() {
    ULONG uNewRefCount = InterlockedIncrement(&m_cRefCount);
    return uNewRefCount;
}

// IUnknown::Release
ULONG STDMETHODCALLTYPE CTrustedAudioDriversOutputSchema::Release() {
    ULONG uNewRefCount = InterlockedDecrement(&m_cRefCount);
    if (0 == uNewRefCount) {
        delete this;
    }
    return uNewRefCount;
}

// IMFOutputSchema::GetConfigurationData
HRESULT STDMETHODCALLTYPE CTrustedAudioDriversOutputSchema::GetConfigurationData(__out DWORD *pdwVal) {
    LOG(eInfo1, _T("IMFOutputSchema::GetConfigurationData called"));
    if (NULL == pdwVal) { return E_POINTER; }
    *pdwVal = m_dwConfigData;
    return S_OK;
}

// IMFOutputSchema::GetOriginatorID
HRESULT STDMETHODCALLTYPE CTrustedAudioDriversOutputSchema::GetOriginatorID(__out GUID *pguidOriginatorID) {
    LOG(eInfo1, _T("IMFOutputSchema::GetOriginatorID called"));
    if (NULL == pguidOriginatorID) { return E_POINTER; }
    *pguidOriginatorID = m_guidOriginatorID;
    return S_OK;
}

// IMFOutputSchema::GetSchemaType
HRESULT STDMETHODCALLTYPE CTrustedAudioDriversOutputSchema::GetSchemaType(__out GUID *pguidSchemaType) {
    LOG(eInfo1, _T("IMFOutputSchema::GetSchemaType called"));
    if (NULL == pguidSchemaType) { return E_POINTER; }
    *pguidSchemaType = MFPROTECTION_TRUSTEDAUDIODRIVERS;
    return S_OK;
}
Recently I was musing on Order vs. Chaos and toying with my Rubik's Cube. I wondered, as a simple exercise, whether it would be possible to strongly unsolve the cube. Obviously there is a single way to put the cube into a state of greatest possible order... and a multitude of ways to put it into a state of moderate to severe chaos... but is there a "most chaotic" state?
Well, I mused, the "greatest possible order" state is achieved by bringing all the orange squares together... and all the blue squares together... and, in general, all the squares of any given color together. How homogeneous... or xenophobic. Ew.
Might it not be interesting to intermingle the colors to as great an extent as possible? Can I put the cube in a state where no two orange squares are adjacent... and no two blue squares are adjacent... and, in general, no two adjacent squares are the same color?
Of course, I responded. In fact, I already know how to do that... I learned that trick before I even learned the Restore Order solution.
Twelve turns later, I had a solution to the Adjacent Squares Are Different problem.
Exercise: what are the twelve turns?
Well, that was quick.
But I wanted more.
Yeah, OK, no two adjacent squares are the same color, by the usual definition of adjacency (the squares share a common border.) But this is still a fairly homogeneous solution... each face (of nine squares) still consists of only two colors, and there's a very high incidence of diagonally-touching squares of the same color. Can't we diversify this even more?
That took me a couple of days. But here's the solution I came up with:
Note that, as desired, no two adjacent squares are the same color... even if you consider squares that touch only at a corner to be adjacent... even if that corner lies on an edge, and the two squares in question lie on different faces of the cube.
The method to achieving the solution was simple in the sense that it only requires two moves (starting from a solved cube) but is probably far from optimal in the "total number of turns" sense.
There are two independent steps which can be done in either order:
Each requires knowledge of a single move and a fair amount of courage.
First, some syntax.
Hold the cube facing you. I will name the six faces of the cube: Fore (F), Hind (H), Left (L), Right (R), Top (T), and Bottom (B).
Each face has a local definition of "clockwise"... this is the direction a clock painted on the face would turn.
Cubelet syntax
Move syntax: a letter means turn that face clockwise by 90 degrees. A letter with a superscripted -1 means turn it counterclockwise by 90 degrees. So the move "turn the Fore face clockwise, then turn the Left face counterclockwise, then turn the Top face counterclockwise" would be written:
F L^{-1} T^{-1}
Start from a solved cube.
Flip the orientation of all twelve of the "edge pieces"
Pick your favorite color - say, red - and have that be the top face.
The following move flips the orientation of the LT and TH edge pieces, and also disturbs the orientation of some corner pieces:
L^{-1} T^{-1} L^{-1} L^{-1}
F^{-1} L^{-1} F^{-1} F^{-1}
T^{-1} F^{-1} T^{-1} T^{-1}
All twelve of the edge pieces are now flipped. The corner pieces are still in their original positions, though they may be oriented incorrectly. That's OK.
Reposition all eight corner pieces to the opposite corner
The following move repositions FLT to TLH, TLH to TRH, and TRH to FLT:
L^{-1} T R T^{-1} L T R^{-1} T^{-1}
As a convenience, the following mirror-image move repositions FRT to TRH, TRH to TLH, and TLH to FRT:
R T^{-1} L^{-1} T R^{-1} T^{-1} L T
Strictly speaking, you can get away with only memorizing one of these moves - each move is equivalent to holding the cube in a different position and executing the other move twice.
Pick three faces that share a common corner - say, the faces whose center cubes are red, white, and blue. Position the cube so it is balancing on that corner. Note that the corner cubelets now occupy four distinct strata:
This part of unsolving the cube is completed in four distinct phases:
EDIT January 16 2012: Thanks to Dustin for pointing out that the diagram above shows some reds touching diagonally (near the topmost corner.) After some analysis I believe this is just due to an error on my part in making the image; specifically, the top corner is green-yellow-red, but so is the corner in the top left. Also, the green-yellow-orange corner is missing.
Both of these can be explained by changing the red in the top corner to orange. Updated image:
In a previous post I posed several mathematical problems... I'd like to go back and give some answers to them.
To reiterate, we take a sine wave period and wrap it around a cylinder, giving us this:
The problems are to:
Let's define the curve. Since we're going to be making analogies to planetary orbit, let's define the curve parametrically. The parametric point is the analog of a planet.
Peter piper picked a peck of pickled peppers. -- Mother Goose
Penny Pingleton, you know you are punished. From now on you're wearing a giant P on your blouse EVERY DAY to school so that the whole world knows that Penny Pingleton is permanently, positively, punished. -- Prudence Pingleton, Hairspray
OK, now that we have that out of the way... I want the parametric point to sweep its way around the unit cylinder with constant angular velocity, as viewed from above. So if we project the path of the point into the unit cylinder with z = 0, then r(t) = 1 for all t and θ(t) is linear in t. In fact let's have θ(t) be t, so at time t = 2π our parametric point has made a complete cycle.
Now we'll add the z component in. The curve is a wrapping of sin(t), so z = sin(t); we aren't affecting the vertical component of the paper by wrapping it horizontally around a cylinder.
We're actually, in a sense, done. We have the formalized equations:

r(t) = 1
θ(t) = t
z(t) = sin(t)
But let's convert to Cartesian coordinates, since we'll be dealing with planes. The relevant formulae are:

x = r cos(θ)
y = r sin(θ)
A little intuition, checked by back-substitution, gives us these for x and y:

x(t) = cos(t)
y(t) = sin(t)
Putting these together with our formula for z, we have:

x(t) = cos(t)
y(t) = sin(t)
z(t) = sin(t)
Note that, no matter what t is, y(t) = z(t)... that is, the parametric point travels entirely in the plane given by the equation y = z. This proves that the parametric path is planar. QED #1.
To tackle #2 we have to prove that this planar curve is, in fact, an ellipse. One way to do that would be to come up with x' and y' coordinates in the plane of the curve, transform the parametric equations into the new coordinate system, and verify that a(x')^{2} + b(y')^{2} = c^{2} for some a, b, and c... but when I came up with the problem I had a much simpler proof in mind. There's also a third way, where you note that the curve is the result of an affine transformation of the unit circle... and an affine transformation of a circle is demonstrably an ellipse... but I hadn't thought of that.
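The coordinate approach can at least be spot-checked numerically. In the plane y = z, take in-plane coordinates u = x and v = (y + z)/sqrt(2); a C++ sketch (the names u, v, and the function name are mine) verifying that sampled points satisfy u^2 + v^2/2 = 1:

```cpp
#include <cassert>
#include <cmath>

// Sample the curve (cos t, sin t, sin t); confirm every point lies in the
// plane y = z, and that in the in-plane coordinates u = x, v = (y + z)/sqrt(2)
// every point satisfies u^2 + v^2/2 = 1 -- an ellipse with semi-axes 1 and sqrt(2).
bool curveIsPlanarEllipse() {
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < 1000; i++) {
        double t = 2.0 * pi * i / 1000.0;
        double x = std::cos(t), y = std::sin(t), z = std::sin(t);
        if (std::fabs(y - z) > 1e-12) { return false; }                    // planarity
        double u = x, v = (y + z) / std::sqrt(2.0);
        if (std::fabs(u * u + v * v / 2.0 - 1.0) > 1e-9) { return false; } // ellipse
    }
    return true;
}
```

The semi-axes 1 and sqrt(2) make sense: the plane cuts the unit cylinder at 45 degrees.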
Another astronomer/mathematician, Apollonius of Perga, made a study of "conic sections"... that is, the intersection of a plane with a double-headed cone. He came up with an exhaustive list of the kinds of curves that resulted.
Basic conic sections:

- a circle
- an ellipse
- a parabola
- a hyperbola
For complicated reasons (to wit, that gravity diminishes as the square of the distance between two objects,) all heavenly bodies trace either an ellipse or a hyperbola as they go around (or past) another heavenly body. Parabolae are theoretically possible but would smack heavily of "design"; as yet none have been observed.
For completeness, there are other "designed" ways to intersect a double-headed cone with a plane in a coincidental fashion:
Degenerate conic sections:

- a single point
- a single line
- a pair of crossed lines
Since our curve is the intersection of a plane and a cylinder, we're done! QED #2. Well, not quite.
Apollonius' results hold for cones. We have a cylinder. Those aren't quite the same thing. Right?
They aren't, quite... but a cylinder can be viewed as a degenerate form of cone. To wit, a cylinder is a cone whose apex is infinitely far off. I'll get back to a "proof" of this in a minute, but if you allow this, then we really are done.
Instead of the seven intersections of a double-headed cone and a plane, there are only four ways a cylinder and a plane can intersect. Two of these are new.
Cylindrical conics:

1. an ellipse (including the circle as a special case)
2. a pair of parallel lines
3. a single line (the plane tangent to the cylinder)
4. no intersection at all
Now for the proof that cylindrical intersection #1 really is an ellipse. Place two double-headed cones coaxially with the cylinder. Arrange them so that:
Diagram:
Notice that the entire curve lies in between the two cones. Consider the intersection of the plane that determines the curve with each of the cones. We know both of these are ellipses; we know that the intersection of the plane with the "upper" cone lies wholly outside our unknown curve; and we know that the intersection of the plane with the "inner" cone lies wholly inside our unknown curve.
The final stage of the proof lies in pushing the apexes of both cones to infinity, which "squeezes" our unknown curve in between two known ellipses. Since the horizontal distance between the cones goes to zero as 2^{3/2} / tan(θ), our unknown curve cannot help but be elliptical-ish... to any degree of precision... and it is, thus, an ellipse. QED #2.
For the final problem, we must make a "Keplerian observation." The observation in question is that our parametric point, like the planets, sweeps out equal areas in equal times. What makes this interesting is that the parametric point does not move in the same way that a planet does... so it's a little odd that such a similar result should hold... but it does.
Let's talk a little about Kepler's second law. A planet moves around the sun in an elliptical orbit. Fine. The sun lies at one of the foci of the ellipse. An imaginary line between the planet and the sun will sweep out area at a constant rate. That is to say, when the planet is far from the sun, that line will be long... and it will move correspondingly slowly. When the planet is near to the sun, that line will be short... and it will move correspondingly quickly.
Conversely, our parametric point sweeps around the origin at constant angular velocity. So this is trivially "equal areas in equal times". Right?
Not quite. It's true that our parametric point appears to have constant angular velocity, if you project its path into the x-y plane... that is, if you view it from directly above, from a point on the z-axis.
But that's a silly way to look at things. To get an idea of the point's actual motion, it's far more natural to view it from an axis orthogonal to the plane of motion. So let's put an observer (Aranea) directly above the point, and another observer (Nellie) in the more natural location.
From Aranea's point of view, the motion is a simple constant-speed trip around a circle. But Nellie sees the parametric point's true velocity. The parametric point's velocity has two components... a constant "clockwise" component in the x-y plane, and a variable orthogonal component going "up" or "down". This is maximized when the curve crosses the x-y plane, and minimized when the curve is topping or bottoming out. So Nellie sees this:
It seems hard to believe, given that the parametric point is speeding up and slowing down according to different rules... and the notion of "center" is different... but the fact is, the parametric point also sweeps out equal areas in equal times!
To see this most easily, look at Aranea's point of view again. Have Aranea draw a grid on her picture of the system. She can count the area swept out in any given time by counting the number of squares.
Now consider what those squares look like to Nellie. They're rectangles. But they're all of equal area. Aranea and Nellie can label the squares according to some naming scheme, and will agree that the same set of squares/rectangles were swept out... but this means, since the rectangles are of equal area, that even in Nellie's point of view, equal areas are swept out in equal times! QED #3.
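The equal-areas fact can also be checked by brute force. Here's a minimal numerical sketch, assuming the curve is parametrized as P(t) = (cos t, sin t, sin t) (the sine wave wrapped around the unit cylinder): for a planar curve through the origin, the area swept about the origin is (1/2)∫|P × P'| dt, and here P × P' works out to the constant vector (0, -1, 1), so the sweep rate is the constant √2/2.

```cpp
#include <cmath>

// Area swept about the origin by P(t) = (cos t, sin t, sin t) between t1 and t2,
// via midpoint integration of (1/2)|P x P'| dt.
double SweptArea(double t1, double t2, int steps = 10000) {
    double area = 0.0;
    double dt = (t2 - t1) / steps;
    for (int i = 0; i < steps; i++) {
        double t = t1 + (i + 0.5) * dt;
        // P = (cos t, sin t, sin t); P' = (-sin t, cos t, cos t)
        double px = std::cos(t), py = std::sin(t), pz = std::sin(t);
        double vx = -std::sin(t), vy = std::cos(t), vz = std::cos(t);
        // cross product P x P' -- works out to (0, -1, 1) for every t
        double cx = py * vz - pz * vy;
        double cy = pz * vx - px * vz;
        double cz = px * vy - py * vx;
        area += 0.5 * std::sqrt(cx * cx + cy * cy + cz * cz) * dt;
    }
    return area;
}
```

Equal time slices anywhere on the curve sweep equal areas - the Keplerian observation in a few lines of arithmetic.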
In the first Matrix movie there's a very interesting character called Cypher. If you go along with the theory that the Matrix series is a rough retelling of the story of Christ, Cypher is the closest analog to Judas Iscariot, who is one of the earliest very-interesting-characters.
Unfortunately I personally found that the actor kind of got in the way of the character sometimes.
For example, in his scene with Neo, Cypher has this line
The image translators work for the construct program - but there's way too much information to decode the Matrix.
The actor delivers this line in such a way (the image translators work for the construct program) as to imply that there are these thingies called "image translators", and this other thingy called a "construct program". Furthermore, these "image translator" thingies are currently in the employ of the "construct program". The "construct program" is apparently a very nasty beast with close ties to the machine Gestapo. Therefore, if we were so foolish as to attempt to recruit the talents of the "image translators", the "construct program" would find out, and then we'd be in big trouble.
It follows then, as the night the day, that we are thrown on our own, limited, resources... and what with piloting the ship and arranging the menu and what-not, decoding the matrix (being a substantial task) is rather low on the priority list.
Sigh.
This would be an interesting reading, filled with wonderful foreshadowings of the inevitable discovery of Zion by the machines, except for one thing.
It is already firmly established that the "construct program" is the Holodeck-like-thing that the crew of the Nebuchadnezzar use to equip themselves with simulated equipment (and simulated skills) for use in the simulated reality of the Matrix. So the above reading is, of course, nonsense. The correct implication of the line is:
There are these thingies called "image translators." We throw bits of code at them, and they translate that code into electrical stimuli that our brain interprets as images. They can handle such-and-such amount of code.
The construct program is under our control, and we scale back the amount of code it generates to what our image translators can handle. That is why when you enter the construct program you can "see" your residual self-image and whatnot.
However, the Matrix is not under our control, so we can't scale the amount of code it generates. It so happens that the Matrix generates an awful lot of code, which overpowers our image translators... which is why these monitors just show a bunch of greenish symbols instead of pretty girls with hair of various colors.
A delivery that gets this across would be more like
The image translators work - for the construct program - but there's way too much information to decode the Matrix.
I'm a casual reader of Randall Munroe's xkcd web comic. I'm partial especially to the artistic episodes.
I was reading the "How it Works" episode, reproduced here:
Fine, a noble sentiment (women in the hard sciences are unfairly targeted) and well executed.
As is my wont, I then went for the extra bit of juiciness to the episode by hovering over the image, and what do I find?
It's pi plus C, of course.
... OK, fine. A good reference to an in-joke in the mathematical community. Well done.
... except I can't stomach the idea of "π plus a constant." It just doesn't sit right. And there's something wrong about the equation on the board. It's missing something... else.
The equation as it appears on the board:
... and the suggested improvement:
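(Reconstructing the two versions from the alt text and the discussion that follows - the board's equation, and the "plus C" improvement - they are presumably:)

```latex
\int x^2 \, dx = \pi
\qquad \text{vs.} \qquad
\int x^2 \, dx = \pi + C
```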
Yeah. I still don't buy it.
Under the premise that the best way to spoil a good joke is to analyze it, let's see if we can figure out what our characters are trying to do with this integral, shall we?
Well, there are integrals and integrals. There are at least three common uses of integrals:
The "plus a constant" comment applies exclusively to the first use. The function y = x^{2} is analytic everywhere and so is not likely to be interesting for a contour integral. But the second use seems to be very appropriate to this "integral equals a constant" equation:
The first thing to do is find the antiderivative. This can occasionally be very difficult, but in this case we have a textbook function which is the first recipe anyone memorizes:
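The recipe in question is the power rule:

```latex
\int x^p \, dx = \frac{x^{p+1}}{p+1} + C
```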
The above formula holds for all p except for p = -1. That one's a little tricky because the denominator above would be 0. For completeness, here's the case with p = -1:
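Namely, the natural logarithm:

```latex
\int \frac{dx}{x} = \ln x + C,
\qquad \text{where} \quad
\ln x \equiv \int_1^x \frac{dt}{t}
```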
A little faith is required if you haven't seen the derivation, but it works out. (This is as good a definition as any of ln x, FWIW.)
Indeed, the antiderivative is a whole family of functions which differ only by a constant. But if we're going to evaluate it at certain limits, we need to (eventually) know what the limits are. Let's call them a and b for now:
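That is:

```latex
\int_a^b x^p \, dx
= \left[ \frac{x^{p+1}}{p+1} + C \right]_a^b
= \frac{b^{p+1} - a^{p+1}}{p+1}
```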
(Note in particular that the evaluation doesn't depend on which of the family of functions we chose... C cancels out. This is my problem with the comment.)
Setting p = 2 we still have a single equation in two unknowns (a and b). Still need more information.
Well, let's think about it for a minute. Many functions have "natural" points of evaluation. For x^{2} this is 0. For 1/x it's 1. What if we set a = 0? The natural-ness of 0 as a base point of evaluation is made clear from a graph of y = x^{2}:
If we set p = 2 and a = 0, then the equation above reduces to the solvable problem:
The solution is now trivial:
So we can complete the original blackboard equation as follows:
or in fuller generality as
for any a.
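(Spelling out the arithmetic, which is presumably what the completed equations show: with p = 2 and a = 0, the definite integral is b^{3}/3 = π, so b = (3π)^{1/3}. Hence:)

```latex
\int_0^{\sqrt[3]{3\pi}} x^2 \, dx = \pi
\qquad \text{or, for any } a, \qquad
\int_a^{\sqrt[3]{a^3 + 3\pi}} x^2 \, dx = \pi
```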
(I'm sticking with the a = 0 solution, myself.)
I alluded to a bug in my last post but didn't explicitly say what it was. Hint: it's all about phase.
So let's talk about phase for a minute. Sine waves are periodic... although they are frequently plotted over very large intervals, their natural domain is a single period.
(original)
Exercise - The original version, when printed, makes a nice armband.
Advanced Exercise - generate a one-and-a-half period sine wave, print it on both sides of a piece of paper, and make a Möbius strip.
Well, that's a step, but it's not really getting at the nature of the sine wave. Yes, we notice that the beginning value (0) is the same as the ending value... and the beginning slope (+1) is the same as the ending slope... and there is, perhaps, a certain smoothness beyond even slope (in fact, all derivatives agree at the crossover point...)
"If you see a fact, try to see it as intuitively as possible." Do the first exercise above - print out the original image, cut it out along the lines, and make a cylinder by connecting the short ends together. Rotate the cylinder through various orientations, paying particular attention to the location of the sine wave at all times.
What you get is hard to render in a two-dimensional blog post, but here's a stab at it (Matlab script below:)
Notice that this representation of the sine wave is an ellipse.
Exercise - Prove that the red line in the above graph is planar (Hint: you can type the equation of the plane in three keystrokes.)
Advanced Exercise - Prove that it's an ellipse.
Advanced Exercise - Make a Keplerian observation. (Hint: note the angles at which the red line crosses the dotted black lines.)
So what does this have to do with phase?
Everything.
There's another hint in the documentation of double sin(double), which I also linked in my last post:
sin returns the sine of x [phase]. If x [phase] is greater than or equal to 2^{63}, or less than or equal to –2^{63}, a loss of significance in the result occurs. (my emphasis)
A final hint: again from my last post, "as a side effect, we get a kinda-sorta implementation of ϕ [phase]."
When you find yourself with a kinda-sorta implementation of something, that is occasionally a hint from above that your life will be easier if you do some self-examination... perhaps a full-blown implementation of that thing will be (a) easier and (b) more what you actually want.
No more hints. If you want to figure it out for yourself, stop reading now.
OK, here's the scoop. The thing you feed to sin(...) is a radian measurement. This is naturally between 0 and 2π (or, perhaps, -π to π if you prefer.) You can feed it things that are out of this range... say, sin(100.0)... but this is a little out of the natural and there are consequences.
double-type numbers, and floating point numbers in general, can represent very small numbers to a very high degree of precision... and they can represent very large numbers with the same relative degree of precision. See IEEE754 for gory details.
An illustration is in order. Richard Feynman, celebrity physicist, tells a story of his youth. He claimed to be able to solve, in one minute, to within 10% accuracy, any problem that could be stated in ten seconds. He did pretty well until someone came up with "tan(10^{100})".
If you feed bigger and bigger numbers to sin(...), eventually it starts giving you sloppy answers. It has to. It's not getting enough information to give you a good answer.
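The "not enough information" is quantifiable. Doubles get further apart as they get bigger; near 10^{15}, adjacent representable doubles are a full eighth of a radian apart, so the argument you pass to sin(...) is only known to within an eighth of a radian. A quick sketch:

```cpp
#include <cmath>
#include <limits>

// The gap between x and the next representable double ("one ulp") is the best
// possible precision of an argument passed to sin().
double UlpAt(double x) {
    return std::nextafter(x, std::numeric_limits<double>::infinity()) - x;
}
```

UlpAt(1.0) is about 2.2e-16 radians - plenty of precision. UlpAt(1.0e15) is 0.125 radians - sin can't possibly give a meaningful answer.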
I was able to test the tone generation function and measure the signal-to-noise ratio of the tones it was producing after varying lengths of sample time. It started off very well, but after very long times (days, weeks, years) the SNR would get slowly compromised until, after millions of years, the signal disappeared.
That is not the bug.
But it is a serious clue to the bug.
If the worst-case scenario for an implementation is that the signal will depreciate over a million years, ship it.
Take your cylinder that you made above. Place it in front of you so that the taped-together part is directly in front of your nose, and the part to the right has the red line going up. This is a sine wave.
Rotate it clockwise 90 degrees, so the part in front of your nose has the red line brushing the top. Now it is a cosine wave.
Rotate it counter-clockwise 60 degrees, so the part in front of your nose has the red line crossing the dotted line. Now it is a sine wave with a known phase (30 degrees.)
In fact, rotate it any angle (say, ϕ) and it's a sine wave with known phase ϕ.
We can't do that with our tone generator.
That is the bug. We're implementing a tweaked version of phase which has a forgivable side effect, but which is not a full implementation of phase.
Why do we care about implementing phase?
Well, for example, one problem I sometimes have when testing audio fidelity is I connect the stereo cables to the Audio Precision equipment the wrong way (left-to-right and right-to-left.) If I could generate the test tones out of phase by 90 degrees, with the left channel in front of the right channel, I could take a phase measurement and log a "you have the cables backwards" error. I send different frequencies over the cables, so the ifFrameOffset parameter is of little use here.
So what's the improved implementation?
Instead of
// given a buffer, fills it with sine wave data
HRESULT FillBufferWithSine(
    LPCWAVEFORMATEX pWfx, // includes sample rate
    double lfAmplitude,
    double lfFrequency,
    UINT64 ifFrameOffset,
    PBYTE pBuffer,
    UINT32 cbBytes
);
do
// given a buffer, fills it with sine wave data
HRESULT FillBufferWithSine(
    LPCWAVEFORMATEX pWfx, // includes sample rate
    double lfAmplitude,
    double lfFrequency,
    // in/out
    // On input, initial phase (0 to 2pi).
    // On output, phase of next sample.
    double *plfPhase,
    PBYTE pBuffer, // out - byte count is cbBytes
    UINT32 cbBytes
);
Now, if you want to generate a lot of contiguous sine tone buffers, you do something like this:
double lfPhase = 0.0; // start at 0... or somewhere else
while (...) {
    // ...
    // FillBufferWithSine takes care of updating lfPhase as necessary
    hr = FillBufferWithSine(pWfx, lfAmp, lfFreq, &lfPhase, pBuffer, cbBytes);
    // ...
}
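For illustration, here's a portable sketch of the idea (the name, the float-sample simplification, and passing the sample rate directly are mine; the real function parses pWfx and writes raw PCM bytes):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

const double kTwoPi = 6.283185307179586;

// Fills the buffer with sine samples, carrying phase across calls.
// Assumes frequency < sampleRate, so the per-sample step is less than 2 pi.
void FillBufferWithSineSketch(
    double sampleRate,
    double amplitude,
    double frequency,
    double *plfPhase, // in/out: phase of first sample in, phase of next sample out
    std::vector<float> &buffer)
{
    double step = kTwoPi * frequency / sampleRate;
    for (std::size_t i = 0; i < buffer.size(); i++) {
        buffer[i] = static_cast<float>(amplitude * std::sin(*plfPhase));
        *plfPhase += step;
        if (*plfPhase >= kTwoPi) {
            *plfPhase -= kTwoPi; // keep the phase in [0, 2pi) forever
        }
    }
}
```

Because the phase is carried across calls and kept reduced to [0, 2π), chunked generation matches one-shot generation sample for sample, and the argument to sin never grows.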
As a side effect of this new phase feature, the quality degradation non-issue goes away. So the generated tones will sound good forever... which is kind of neat... but not the motivation for this fix.
Why did this happen?
Because there are many ways to measure time. You can count samples... you can count milliseconds... you can count hundred-nanosecond intervals... you can count CPU flops... you can do so many different things. Some are more natural than others in different contexts.
The reason this happened is because a time measure that was very natural in one context (counting frames: playing an audio stream) was somewhat artificially shoved into another context (generating a sine wave) where there was another, more natural context (changing phase.) Alas, context switching is a frequent source of bugs.
Matlab script to generate the three-dimensional rendering of a sine wave:
One common task on the Windows Sound test team is taking a known-good sine wave, passing it through some chunk of code, and looking at what comes out. Is it a sine wave? Is it the same frequency that it should be? Etc., etc.
So the quality of the sine wave generator is important.
There's a commonly available sine wave generator: double sin(double). It's fairly simple to use this to generate a buffer, if you keep a few things in mind. But there are pitfalls too.
In full generality, a sine wave can be completely described by four measurements:
These can all be linked together by a single formula which tells you the mathematical value of a sine wave with these features at any given time. Let:
a = amplitude (0.0 to 1.0)
f = frequency (in cycles/sec)
c = dc (0.0 to 1.0; to avoid clipping, 0.0 to (1.0 - a))
ϕ = starting phase (0.0 to 2π)
t = the time of interest in seconds
Then
w(t) = a sin(2πft + ϕ) + c
Note in passing that w(0) = a sin((2πf)0 + ϕ) + c = a sin(ϕ) + c is independent of f.
So much for mathematics. When you want to realize a sine wave in a PCM wave format, you have to discretize the time dimension and the sample dimension. To discretize the time dimension you need a sample rate (usually 44100 samples per second) and to discretize the sample dimension you need to specify a sample format (usually 16-bit int).
Let's talk about discretizing the time dimension first. Let s = the sample rate, and i be the sample of interest. Now:
w(i) = a sin(2πfi / s + ϕ) + c
As a design simplification, we collect constants, and eliminate ϕ and c...
w(i) = a sin((2πf / s) i)
And we have an implementable function
// given a buffer, fills it with sine wave data
HRESULT FillBufferWithSine(
    LPCWAVEFORMATEX pWfx, // includes sample rate
    double lfAmplitude,
    double lfFrequency,
    PBYTE pBuffer,
    UINT32 cbBytes
);
This would start at i = 0, loop while i < cbBytes / pWfx->nBlockAlign, and set sample values for each i.
It is often necessary to get continuous sine wave data in chunks... say, 1 KB at a time. It is important that the phase of the sine wave match at chunk boundaries. This suggests another parameter:
// given a buffer, fills it with sine wave data
HRESULT FillBufferWithSine(
    LPCWAVEFORMATEX pWfx, // includes sample rate
    double lfAmplitude,
    double lfFrequency,
    UINT32 ifFrameOffset,
    PBYTE pBuffer,
    UINT32 cbBytes
);
... where ifFrameOffset is the count of frames to skip. Now a client can fill a 1 KB buffer with 256 frames of 16-bit int stereo sine wave, pass it off to someone else, fill another 1 KB buffer starting at frame offset 256 with 16-bit int stereo sine wave, etc., and the effect is an almost infinite continuous sine wave. (Oh, and as a side effect, we get a kinda-sorta implementation of ϕ.)
Almost.
Why almost?
Well, because I used UINT32 as the frame offset. Suppose you, as a client, use a WAVEFORMATEX with a nice high sample rate like 384 kHz. A UINT32 wraps at 2^{32}. At 384,000 samples per second, you'll burn through 2^{32} samples in (2^{32} / 384000) seconds... that's, um, 11184.8106666... seconds... three hours, six minutes, twenty-five seconds. This is uncomfortably close to reasonable.
At that point, one of two things happens. If you were clever in your chosen frequency, you will wrap back to a point in the sine wave where the phase is the same as what you want anyway... and there will be no observable artifact in the generated sine wave. Or, if you were unlucky, you will wrap back to a point in the sine wave where the phase is different - and there will be a discontinuity. This will probably result in an audible "pop" artifact in the generated tone.
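The size of the jump is easy to quantify. When the UINT32 offset wraps, frame 2^{32} is treated as frame 0, so the phase jumps by the fractional part of 2^{32} · f / s cycles. A sketch (the function name and the 440 Hz / 44100 Hz numbers are my own example):

```cpp
#include <cmath>

// Distance (in cycles, 0 to 0.5) between the phase just before the UINT32
// frame-offset wrap and the phase just after it.
double WrapGlitchCycles(double frequency, double sampleRate) {
    // 2^32 * f is exactly representable in a double for ordinary audio rates
    double cycles = 4294967296.0 * frequency / sampleRate;
    double frac = std::fmod(cycles, 1.0);
    // report the distance to the nearest whole cycle
    return frac < 0.5 ? frac : 1.0 - frac;
}
```

For 440 Hz at 44100 Hz the jump is about 0.41 of a cycle - a solid pop. A "clever" frequency is one where 2^{32} · f / s is a whole number of cycles, which wraps cleanly.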
Our test tone generator had this problem. We knew about it - we didn't care.
But sometimes little things nag at me and I have to fix them or I can't sleep at night. It's a fixation... a terrible disease... but there it is.
So I fixed it - almost. Now we use a UINT64 for the sample offset. That won't wrap until 2^{64} samples. Even at 384 kHz, that won't wrap until (2^{64}/384000) seconds - that's, um, 48038396025285.29 seconds... 1,521,800-odd years. I'll let a tester from that era file a bug then. :-)
Well, it turns out that fixing that bug uncovered another bug. We know about it - we don't care. (But sometimes little things nag at me...)
Exercise... what is the bug?
We figured they were more actual guidelines. -- Pirates of the Caribbean
Macros are wonderful things. I love macros. They're exciting and dangerous. Every time I write a macro I salivate at the power I am wielding. It's like using a blowtorch.
It is perhaps no coincidence that many coding standards put big warning signs around macroland. "Consider carefully before using macros", some say. "When considering using a macro, see if you can use an inline function instead," some say. "Macros are frequent sources of bugs," some say. "C++ has evolved to the point where, with inline functions and templates, macros are no longer necessary," some say.
Heresy.
All absolutely true, and yet heresy nonetheless. It is true that there are many bugs out there due to misuse of macros. And yet only a few simple tips are necessary to prevent misuse of macros: (The problem is that the tips don't always get followed).
She generally gave herself very good advice, (though she very seldom followed it) -- Alice in Wonderland
Let's talk about the "no expressions with side effects" rule a little more. It is OK to do things like
if (SUCCEEDED(CoInitialize(...))) { ... }
if (FAILED(CoCreateInstance(...))) { ... }
return HRESULT_FROM_WIN32(GetLastError());
... but for different reasons. SUCCEEDED(...) and FAILED(...) only evaluate their arguments once, as can be verified from their definitions in winerror.h. I'm using the Vista RTM version of winerror.h:
//
// Generic test for success on any status value (non-negative numbers
// indicate success).
//
#define SUCCEEDED(hr) (((HRESULT)(hr)) >= 0)

//
// and the inverse
//
#define FAILED(hr) (((HRESULT)(hr)) < 0)
Note the heavy (and correct) use of parentheses. I still personally don't think it's a good idea to rely on this, since macros have a habit of changing... but I won't ding anyone on a code review for it.
HRESULT_FROM_WIN32(GetLastError()) is OK for a totally different reason. HRESULT_FROM_WIN32(...), classically, does evaluate its argument multiple times... this time I'm using the Windows 2000 version of winerror.h:
#define HRESULT_FROM_WIN32(x) (x ? ((HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000)) : 0 )
... so this looks like a bug... until you realize that GetLastError() is idempotent. If you call it a bunch of times in a row on the same thread, it will (per its contract) always return the same value.
Note that the macro is not completely parenthesized! The first x (the one being tested by the ?) is missing its protective parentheses. This was fixed in a Windows 2000 service pack. For example, here's the Win2K SP3 version:
#define HRESULT_FROM_WIN32(x) ((HRESULT)(x) <= 0 ? ((HRESULT)(x)) : ((HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000)))
Much better.
I'm still personally leery of using side effects in HRESULT_FROM_WIN32 though. Why? Because it sets a bad precedent. It's easier to have a simple rule (no side effects in a macro call!) than to remember for an infinitude of macros and function calls which macros evaluate the arguments multiple times and which function calls really have side effects. You also get legitimate bugs like this.
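To make the hazard concrete, here's a portable mock-up of the macro shape quoted above, fed a deliberately non-idempotent argument (the _SKETCH names and NoisyError are mine; 5 is ERROR_ACCESS_DENIED, which maps to 0x80070005):

```cpp
#include <cstdint>

typedef int32_t HRESULT_SKETCH;
#define FACILITY_WIN32_SKETCH 7

// the same shape as the Win2K SP3 macro quoted above
#define HRESULT_FROM_WIN32_SKETCH(x) \
    ((HRESULT_SKETCH)(x) <= 0 ? ((HRESULT_SKETCH)(x)) : \
     ((HRESULT_SKETCH)(((x) & 0x0000FFFF) | (FACILITY_WIN32_SKETCH << 16) | 0x80000000)))

int g_evaluations = 0;

// stand-in for a call with side effects; GetLastError() escapes this trap
// only because it is idempotent
int32_t NoisyError() {
    ++g_evaluations;
    return 5; // ERROR_ACCESS_DENIED
}
```

One "call" of the macro evaluates NoisyError() twice - once in the test and once in the winning branch - which is exactly the kind of surprise the "no side effects in a macro call" rule prevents.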
Macros, macros, macros... you deviate just a little bit from the straight-and-narrow and what happens? You get blown out of the water. Wonderful things, macros.
Nasty little bugs arising from harmless-looking violations of the safety tips are the cause of the dirty secret I am about to reveal...
As of Vista RTM...
HRESULT_FROM_WIN32 is NO LONGER a macro!
This is one of those annoying little facts I try so hard to forget because they complicate life unnecessarily. From my point of view, there is no bad consequence of treating HRESULT_FROM_WIN32 as the macro it looks like (and was... and who knows, may be again.) The inline function it has become is uniformly safer.
This has apparently been a long time in coming. Here are some excerpts of winerror.h from various releases of Windows:
Windows 2000 RTM:
Windows XP RTM:
// __HRESULT_FROM_WIN32 will always be a macro.
// The goal will be to enable INLINE_HRESULT_FROM_WIN32 all the time,
// but there's too much code to change to do that at this time.
#define __HRESULT_FROM_WIN32(x) ((HRESULT)(x) <= 0 ? ((HRESULT)(x)) : ((HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000)))

#ifdef INLINE_HRESULT_FROM_WIN32
#ifndef _HRESULT_DEFINED
#define _HRESULT_DEFINED
typedef long HRESULT;
#endif
#ifndef __midl
__inline HRESULT HRESULT_FROM_WIN32(long x) { return x < 0 ? (HRESULT)x : (HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000);}
#else
#define HRESULT_FROM_WIN32(x) __HRESULT_FROM_WIN32(x)
#endif
#else
#define HRESULT_FROM_WIN32(x) __HRESULT_FROM_WIN32(x)
#endif
Whew. The net result is that HRESULT_FROM_WIN32 is a macro. Oh, unless you decide to be avant-garde and #define INLINE_HRESULT_FROM_WIN32 before you #include <winerror.h>. Which probably very few people did, except for build engineers running experiments. So this is an "opt-in" inline function.
Windows Vista RTM:
//
// HRESULT_FROM_WIN32(x) used to be a macro, however we now run it as an inline function
// to prevent double evaluation of 'x'. If you still need the macro, you can use __HRESULT_FROM_WIN32(x)
//
#define __HRESULT_FROM_WIN32(x) ((HRESULT)(x) <= 0 ? ((HRESULT)(x)) : ((HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000)))

#if !defined(_HRESULT_DEFINED) && !defined(__midl)
#define _HRESULT_DEFINED
typedef __success(return >= 0) long HRESULT;
#endif
#ifndef __midl
FORCEINLINE HRESULT HRESULT_FROM_WIN32(unsigned long x) { return (HRESULT)(x) <= 0 ? (HRESULT)(x) : (HRESULT) (((x) & 0x0000FFFF) | (FACILITY_WIN32 << 16) | 0x80000000);}
#else
#define HRESULT_FROM_WIN32(x) __HRESULT_FROM_WIN32(x)
#endif
The experiments appear to have been completed. Apparently no imaginable consequence of changing to an inline function was worse than the known consequence of bugs.
People who want the macro can still use __HRESULT_FROM_WIN32.
Why would you want to? Hmmm... maybe you're building a switch statement. Something like:
#define CASE_HRESULT(x) case x: return #x
#define CASE_ERR(x) case HRESULT_FROM_WIN32(x): return #x // need __HRESULT_FROM_WIN32 here?

LPCSTR FriendlyNameOf(HRESULT hr) {
    switch(hr) {
        CASE_HRESULT(S_OK);
        CASE_HRESULT(S_FALSE);
        ... many more...
        CASE_ERR(ERROR_SUCCESS);
        CASE_ERR(ERROR_INVALID_FUNCTION);
        ...
    }
    return "Unrecognized";
}
I haven't tried it out, but I suspect the compiler will reject the attempt to squeeze the result of an inline function into the value of a case, but will accept a complicated expression involving a bunch of constants (i.e., the macro will work.)
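The macro half of that suspicion is easy to try with a portable mock-up (the _SK names are mine; 5 is ERROR_ACCESS_DENIED, which maps to 0x80070005). The macro expands to an integral constant expression, so the compiler does accept it as a case label:

```cpp
#include <cstdint>
#include <cstring>

#define FACILITY_WIN32_SK 7
#define HR_FROM_WIN32_SK(x) \
    ((int32_t)(x) <= 0 ? ((int32_t)(x)) : \
     ((int32_t)(((x) & 0x0000FFFF) | (FACILITY_WIN32_SK << 16) | 0x80000000)))

const char *FriendlyNameOfSketch(int32_t hr) {
    switch (hr) {
        case 0: return "S_OK";
        // the macro is a constant expression, so it's legal as a case label;
        // it constant-folds to (int32_t)0x80070005
        case HR_FROM_WIN32_SK(5): return "ERROR_ACCESS_DENIED";
        default: return "Unrecognized";
    }
}
```

A call to a plain inline function in the same position is not a constant expression, so the compiler rejects it; C++11's constexpr functions are the modern escape hatch.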
Insight is a tricky thing. To a certain degree people are born with it (or without it) - but if you are gifted with a measure of it, you can develop it (as though it were a muscle) by working at it.
The late mathematician George Pólya had a number of helpful problem-solving mottos, one of which is the title of this post. There's a nice echo in the chess world, where the maxim "If you see a good move, wait - look for a better one!" is popular (as is the simpler version, "Sit on your hands!")
The granddaddy of them all is William of Ockham's "entia non sunt multiplicanda praeter necessitatem" ("entities must not be multiplied beyond necessity")... AKA, Occam's [sic] Razor.
KISS, as they say.
What am I going on about?
In the Computer Science world, it happens very often that your first idea... though it works... is inferior to your second idea. A great tragedy that is played over and over is:
I'm exaggerating a bit by calling this a great tragedy. It's also a perfectly good development strategy to begin coding with the knowledge that you will probably come back and redo a large chunk of what you're doing as you become more familiar with the problem domain. The key words here are "with the knowledge"... if you know that what you're coding is experimental, you can avoid a lot of scheduling Sturm und Drang.
Bottom line: it happens frequently that a good idea has a better one as a sequel. Keep your eyes open... are you solving a specific case when you could, with less work, solve the general problem? Look at your fixed code fresh - are you introducing another problem with the fix? Is there a more appropriate way to fix the problem? Is it better to fix the symptom instead of the disease? Stop and think. Even if you don't find a better way, it's good exercise for your noodle.
There are several ways to get a number that represents "the time right now." Some of them wrap; some of them don't.
Some that wrap:
Some that don't:
A common programming meme is to start an operation and then wait for it to complete... but give up after a certain preset "timeout." There are multi-threaded ways of doing this that are perhaps better, but for better or worse, there is a fair amount of code out there that looks like this:
StartSomething();
while ( ! Done() ) {
    if ( BeenWaitingTooLong(Timeout) ) {
        break;
    }
    WaitABit();
}
if ( Done() ) {
    Great();
} else {
    Darn();
}

For this post I want to focus on the BeenWaitingTooLong(Timeout) check in the loop above. If the timing function being used wraps, it is all too easy to get this to work most of the time... when it's just as easy to get it to work all of the time. Alas, the consequences of software that works most of the time tend to be more severe than software that never works.

I wouldn't last long in the banking business being accurate most of the time! -- The Music Man

I'll use GetTickCount() for these examples, but I want to emphasize that the same malady affects all of the wrappy time counters.
Here are some different ways to do it:
StartSomething();
DWORD msTimeout = 10 * 1000; // give up after ten seconds
DWORD msAtStart = GetTickCount();
// assume Done() always returns within, say, a day
while ( ! Done() ) {
    if (
        // most of the time, these are the same.
        // which one will always work?
        // GetTickCount() - msAtStart > msTimeout
        // GetTickCount() - msTimeout > msAtStart
        // GetTickCount() > msAtStart + msTimeout
    ) {
        break;
    }
    // assume that WaitABit() always returns within, say, a day
    WaitABit();
}
if ( Done() ) {
    Great();
} else {
    Darn();
}
The correct answer is:
GetTickCount() - msAtStart > msTimeout
The other two will work most of the time, but will occasionally fail.
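The failures are easy to exhibit with the tick counter mocked as a plain uint32_t (the function names are mine; BrokenA and BrokenB are the second and third candidates from the snippet above):

```cpp
#include <cstdint>

// the interval rule: subtract the instants first, then compare intervals
bool TimedOutInterval(uint32_t now, uint32_t start, uint32_t timeout) {
    return (uint32_t)(now - start) > timeout; // wraps correctly
}

// GetTickCount() - msTimeout > msAtStart
bool TimedOutBrokenA(uint32_t now, uint32_t start, uint32_t timeout) {
    return (uint32_t)(now - timeout) > start;
}

// GetTickCount() > msAtStart + msTimeout
bool TimedOutBrokenB(uint32_t now, uint32_t start, uint32_t timeout) {
    return now > (uint32_t)(start + timeout);
}
```

BrokenA fires early whenever fewer than timeout ticks have elapsed since the counter was near zero (now - timeout wraps to a huge value). BrokenB never fires when now has wrapped past zero but the computed deadline hasn't.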
There's an easy rule I use to always remember the right one.
When dealing with a clock that wraps, only compare intervals.
Allow me to redecorate my variable names using Joel Spolsky's "Making Wrong Code Look Wrong" technique.
StartSomething();
DWORD intervalTimeout = 10 * 1000; // give up after ten seconds
DWORD instantStart = GetTickCount();
// assume Done() always returns within, say, a day
while ( ! Done() ) {
    if (
        // which one will always work?
        // GetTickCount() - instantStart > intervalTimeout
        // GetTickCount() - intervalTimeout > instantStart
        // GetTickCount() > instantStart + intervalTimeout
    ) {
        break;
    }
    // assume that WaitABit() always returns within, say, a day
    WaitABit();
}
if ( Done() ) {
    Great();
} else {
    Darn();
}
Some thought experiments immediately fall out of this.