Larry Osterman's WebLog

Confessions of an Old Fogey
• So where DOES the mass of a tree come from?

Yesterday, I asked where the mass comes from in a tree.

The answer is actually really simple: carbon.  Photosynthesis takes CO2 from the air (and a bit of H2O), keeps the carbon, and releases the O2.

It turns out that if you ask new Harvard graduates this question, the vast majority of them answer some variant of "It comes from the soil".  When people think of photosynthesis, they don't think about the carbon that's left behind.  They'll usually be puzzled as they answer, because in their hearts, they realize that "the soil" doesn't actually work as an answer, but they can't quite put all the pieces together.

If you ask 7th graders the same question right after they've finished their photosynthesis unit, they come up with a variant of the same answer.  Or they say it comes from the water the plant absorbs.  Soil and water have mass, and that seems to be the determining factor in their answer.

You see, 7th graders don't seem to get the idea that air contains mass - it's just the stuff that's around them, it doesn't "weigh" anything.  Since they have always been exposed to the weight of air, it doesn't occur to them that it has any real properties at all.  If you show them a block of dry ice, and ask them what it is, they'll say "It's ice". If you ask them to weigh it, they get that part.  It's not until you follow that up and ask "So what's happening to the ice?" and they realize that the fog it's generating is disappearing into the air that they'll start figuring out what's happening - that the mass of the dry ice is being absorbed into the air.  The very smartest of the kids will then put the pieces together and realize that air DOES have mass.

Yesterday's "quiz" netted over 70 correct answers and about 15 incorrect answers, I've got to say that I'm pretty impressed. The reality is that since I only let the incorrect answers through, it biased the sample towards correctness (several people mentioned that they'd read the other comments and realized that their first thought was wrong).

Valorie had one more question: Could those of you who got the answer right, and who are under 30, and who were educated under the American education system post a comment below?

• Going gaga over XGL

Chris Pirillo's been making a ton of noise over a video he posted showing off a YouTube video of a demo of the XGL desktop running on KDE.

He then turns around and asks "Why can't Vista look like this?".  I'm not a UX (user experience) guy, but I have watched the video and I've got some pretty strong opinions about it.

First off, he's right - this is a pretty amazing demo.  It has TONS of eye candy.  The "bouncy" effects on the windows are very pretty.  The rotating cube is cool, as is the "windows bump into each other" effect.  Having said all that, there's a TON of distance between a cool demo (or proof of concept, or whatever it is you call something like this) and a product that ships to millions of consumers.

For instance, the bouncy windows make you seasick after a while.  And the cube desktop, while slick, has some serious issues - for instance, you've got a strong potential for "losing" your windows (because they're on a face of the cube that's obscured).

The key thing to realize is that it's relatively easy to make a cool UI.  I've seen the most amazing proof of concepts for Windows UI coming from our advanced UX team.  Really compelling stuff, that just knocks your socks off.

And not one of them has ever seen the light of day outside of Microsoft (to my knowledge).

Why is this?  Because making a good user experience is HARD.  It's easy to make a cool user experience; it's REALLY hard to make one that's good, and that works for millions of users.  There are a ton of things you need to consider.  You need to consider usability, accessibility, localizability (yeah, it matters - Right-To-Left languages may have different visual conventions than Left-To-Right languages), and all sorts of other *bilities.  I've been through (and read) enough UX reviews over the redesigned multimedia control panel in Vista to realize the complexity of the things that these guys have to deal with.  It's a lot harder than you think.  John Gruber over at Daring Fireball has a classic post entitled "Ronco Spray-On Usability" where he talks about some of the issues.

Take floppy windows, for example.  The Shell Fit & Finish dude (Dave Vroney) just put out a post explaining why they disabled floppy windows: they significantly reduce the usability of the system.  They may be cool, but they get really annoying really quickly.

And, of course, Vista is only V1 of the DWM.  This release is about doing the heavy lifting of building a new desktop compositing engine.  Future releases are likely to have a ton more cool stuff coming from the UI wizards, now that they have a platform on which they can do really cool things.

• Why is Control-Alt-Delete the secure attention sequence (SAS)?

When we were designing NT 3.1, one of the issues that came up fairly early was the secure attention sequence - we needed to have a keystroke sequence that couldn't be intercepted by any application.

So the security architect for NT (Jim Kelly) went looking for a keystroke sequence he could use.

It turned out that the only keystroke combination that wasn't already being used by a shipping application was control-alt-del, because that was used to reboot the computer.

I've got to say that the first time that the logon dialog went into the system, I pressed it with a fair amount of trepidation - I'd been well trained that C-A-D rebooted the computer and....

• Farewell to one of the great ones

Yesterday was the last day at Microsoft for David Weise.  I've written about David (in passing) in the past, but never in detail.

David started at Microsoft in 1986, when Microsoft acquired Dynamical Systems Research.  Before founding DSR, he was a member of the ORIGINAL MIT blackjack team - not the latecomers that you see in all the movies, but the original team, back in the 1970s.  According to Daniel Weise (David's twin brother), they ran it like an investment company - MIT people could invest money in the blackjack team, and the blackjack team would divide their winnings up among them.  Apparently RMS was one of the original investors; during David's going away party, Daniel joked that the FSF was founded on David's blackjack winnings :)

After leaving Princeton with a PhD in molecular biophysics, David, Chuck Whitmer, Nathan and Cameron Myhrvold, and a few others founded DSR to create a "Topview" clone.  For those new to the industry, Topview was a text-based multitasking shell that IBM created, which was going to totally revolutionize the PC industry - it would wrest control of the platform from Microsoft and allow IBM to maintain its rightful place as leader of the PC industry.  Unfortunately for IBM, it was an utter flop.

And, as Daniel pointed out, it was unfortunate for DSR.  Even though their product was twice as fast as IBM's and 2/3rds the size, when you base your business model on being a clone of a flop, you've got a problem.

Fortunately, at the time, Microsoft was also worried about Topview, and they were looking for a company that understood the Topview environment so that if it was successful, Microsoft would have the ability to integrate Topview support into Windows.

Finding DSR may have been one of the best acquisitions that Microsoft ever made.  Not only did they find the future CTO (and founder of Microsoft Research) Nathan Myhrvold, but they also hired David Weise.

You see, the DSR guys were wizards, and David was a wizard's wizard.  He looks at programs and makes them smaller and faster.  It's absolutely magical to watch him at his work.

I (and others) believe that David is single-handedly responsible for making Microsoft over a billion dollars.  He's also (IMHO) the person most responsible for the success of Windows 3.0.

Everywhere David worked, he dramatically improved the quality of the product.  He worked on the OS/2 graphics drivers and they got faster and smarter.  He (and Chuck) figured out tricks that even the designers of the hardware didn't realize could be done.

And eventually, David found himself in the Windows group with Aaron Reynolds, and Ralph Lipe (and several others).

David's job was to move the graphics drivers in Windows into protected mode on 286 and better processors (to free up precious memory below 640K for Windows applications).  He (and Chuck) had already figured out how to get normal Windows applications to use expanded memory for their code and data, but now he was tackling a harder problem - the protected mode environment is subtler than expanded memory: if you touched memory that wasn't yours, you'd crash.

David succeeded (of course).  But David, being David, didn't stop with the graphics drivers.

He (along with Murray Sargent, creator of the SST debugger) also figured out how to get normal Windows applications running in protected mode.

Which totally and utterly and irrevocably blew apart the 640K memory barrier.

I remember wandering over to the Windows group over in Building 3 to talk to Aaron Reynolds about something to do with the MS-DOS redirector (I was working on DOS Lan Manager at the time).  I ran into David, and he called me into his office "Hey, look at what I've got working!".

He showed me existing windows apps running in protected mode on the 286.  UNMODIFIED Windows 1.0 applications running in protected mode.

He then ran me around the rest of the group, and they showed me the other stuff they were working on.  Ralph had written a new driver architecture called VxD.  Aaron had done something astonishing (I'm not sure what).  They had display drivers that could display 256 color bitmaps on the screen (the best OS/2 could do at the time was 16 colors).

My jaw was dropping lower and lower as I moved from office to office.  "Oh my goodness, you can't let Steve see this, he's going to pitch a fit" (those aren't quite the words I used, but this is a family blog).

You see, at this time, Microsoft's systems division was 100% focused on OS/2 1.1.  All of the efforts of the systems division were totally invested in OS/2 development.  We had invested literally tens of millions of dollars in OS/2, because we knew that it was the future for Microsoft.  OS/2 at that point could run only a single DOS application at a time, and it had only just recently gotten a GUI (in 1989).  It didn't have support for many printers (only about 5, all made by IBM, plus (I believe) the HP Laserjet).

And here was this little skunkworks project in building three that was sitting on what was clearly the most explosive product Microsoft had ever produced.  It was blindingly obvious, even at that early date - Windows 3.0 ran multiple DOS applications in virtual x86 machines.  It ran Windows applications in protected mode, breaking the 640K memory barrier.  It had a device driver model that allowed for development of true 32bit device drivers.  It supported modern displays with color depths greater than had been available on PC operating systems.

There was just no comparison between the two platforms - if they had to compete head-to-head, Windows 3.0 would win hands down.

Btw, David had discussed it with Steve (I just learned that yesterday).  As David put it, he realized that this was potentially an issue, so he went to Steve, and told him about it.  Steve asked Dave to let him know when he'd made progress.  That night, David was up until 5AM working on the code, he got it working, and he'd left it running on his machine.  He left a note on SteveB's office door saying that he should stop by David's office.  When David got in the next day (at around 8AM), he saw that his machine had crashed, so he knew that Steve had come by and seen it.

He went to Steve's office, and they had a chat.  Steve's only comment was that David should tell his manager and his manager's manager so that they'd not be surprised at the product review that was going to happen later that day.  At that product review, Steve and Bill greenlighted the Windows 3.0 project.  My tour was apparently a couple of days after that - it was finally ok to let people know what the Windows 3.0 team was doing.

The rest was history.  At its release, Windows 3.0 was the most successful software project in history, selling more than 10 million copies a month, and it's directly responsible for Microsoft being where it is today.

And, as I mentioned above, David is responsible for most of that success - if Windows 3.0 hadn't run Windows apps in protected mode, then it wouldn't have been the unmitigated success it was.

David's spent the last several years working in linguistics - speech generation, etc.  He was made a Distinguished Engineer back in 2002, in recognition of his contribution to the industry.  Distinguished Engineer is the title to which all Microsoft developers aspire; being named a DE is literally the pinnacle of a developer's career at Microsoft.  Other DEs include Dave Cutler, Butler Lampson, Jim Gray, and Anders Hejlsberg.  This is unbelievably rarefied company - these are the people who have literally changed the world.

And David absolutely belongs in their company.

David's leaving to learn more about the state of molecular biology today - he wants to finally be able to use his PhD.  The field has changed so much since he left it, and it's amazing what's happening in it these days.

As I said as I was leaving his goodbye party:

"Congratulations, good luck, and, from the bottom of my heart, thank you".

Bonne Chance David, I wish you all the best.  When you get your Nobel Prize, I'll be able to say "I knew him back when he worked at Microsoft".

Edit: Corrected David's PhD info based on Peter Woit's blog post here.  Sorry David, and thanks Peter.

Edit2: Grey->Gray :)  Thanks Jay

• How do I divide fractions?

Valorie works as a teacher's aid in a 6th grade classroom at a local elementary school.

They've been working on dividing fractions recently, and she spent about two hours yesterday working with one student trying to explain exactly how division of fractions works.

So I figured I'd toss it out to the blogosphere to see what people's answers are.  How do you explain to a 6th grader that 1/2 divided by 1/4 is 2?

Please note that it's not sufficient to say: Division is the same as multiplication by the inverse, so when you divide two fractions, you take the second one, invert it, and multiply.  That's stating division of fractions as an axiom, and not a reason.

In this case in particular, the teacher wants the students to be able to graphically show how it works.

I can do this with addition and subtraction of numbers (both positive and negative) using positions on a number line.  Similarly, I can do multiplication of fractions graphically - you have a whole, divide it into 2 halves.  When you multiply the half by a quarter, you are quartering the half, so you take the half, divide it into four parts, and one of those parts is the answer.

But how do you do this for division?

My wife had to type this part because we have a bit of, um, discussion, about how simple this part is....

How can you explain to 9-11 year old kids why you multiply by the reciprocal without resorting to the axiom? It's easy to show graphically that 1/2 divided by 1/4 is 2 quarters because the kids can see that there are two quarters in one half. Equally so, the kids can understand that 1/4 divided by 1/2 is 1/2 of a half because the kids can see that only half of the half is covered by the original quarter. The problem comes in when their intuition goes out.  They can solve it mathematically, but the teacher is unwilling to have them do the harder problems "on faith" and the drawing is really confusing the kids. Having tried to draw 5/8 divided by 3/10, I can assure you, it is quite challenging. And no, the teacher is not willing to keep the problems easy. And no, don't get me started on that aspect of this issue.

I'm a big believer in the idea that if one method of instruction isn't working, you should find another way to explain the concept. I visited my usual math sites and found that most people don't try to graph this stuff until 10th grade or adulthood. Most of the sites just have this "go on faith" response (show the kids the easy ones, and let them "go on faith" that it will hold true for all cases). I really wish I could figure out a way to show successive subtraction, but even that gets difficult on the more complicated examples.

What I am hoping is that someone out there can provide me with the "aha!" I need to come up with a few more ways to explain this. What this has been teaching me is that I've been doing this "on faith" most of my life and never stopped to think about why myself.

Any ideas/suggestions would be much appreciated.

• Microsoft just doesn't get Security - NOT!

I was reading Robert Scoble’s post on “Longhorn Myths”, and I noticed this comment from “Dave” in his comments thread:

Most outlandish Longhorn myth? I mean this with all due respect, and say it with complete sincerity.... it will be one that MS will in fact say: that Longhorn will be a very secure system.

Yes, it will be much more secure than any other version of Windows. Yes, it will be as secure as MS can possibly make it. But try as they might, a few factors come into play that will make it next to impossible for Longhorn to be a very secure system.

(1) Longhorn, being a Microsoft product and a popular product, is destined to be targeted by hackers around the world. If there's a hole to be found, they'll find it. And nobody can make a system 100% secure.

(2) MS still places a higher emphasis on new forms of functionality/interaction than they do on security. Yes, they have a greater emphasis on security than even one year ago, but their concern - at this point in the Longhorn product life cycle - is more on getting things to work and work well than it is to play devil's advocate and find all the security holes they can find.

My response (updated and edited): Um... compared to what? Linux? Hands down, Longhorn will be more secure out-of-the-box than any Linux distribution available at the time.

There will be holes found in Longhorn, absolutely. But Microsoft GETS security nowadays. In general, the Linux/Open Source community doesn't yet (the OpenBSD guys appear to get it, but I’ve not seen any indications of this level of scrutiny associated with the other distributions).

The Linux guys will eventually, but they don't get it yet.

If you're going to argue that Linux/OSX is somehow safer because they're not popular, then that's relying on security by obscurity. And that dog don’t hunt :)

Even today, I'd stack a Win2K3 machine against ANY Linux distribution out there on the internet. And Longhorn's going to be BETTER than Win2K3.  After all, Longhorn’s starting with an amalgam of Win2K3 and XP SP2, and we’re enhancing system security even beyond what’s gone into the previous releases.

“Dave’s” comment #2 is the one I wanted to write about though.  Microsoft doesn’t place a higher emphasis on new forms of functionality than they do on security.  Security is an integral part of every one of our development processes here at Microsoft.  This hits every aspect of a developer’s life.  Every developer is required to attend security training, starting at New Employee Orientation, continuing through annual security refreshers.

Each and every new feature that’s added to the system has to be thoroughly threat-modeled - we need to understand every way that any new component can be attacked, and the kind of compromise that can result from a failure of each system.  If there’s a failure mode, then we need to understand how to defend against it, and we need to design mitigations against those threats.

Every test group at Microsoft is required to have test cases written that attempt to exploit ALL of our interfaces, by various means.  Our test developers have all gone through the same security training that the other developers have gone through, with an intensive focus on how to test for security holes.

Every line of code in the system is code reviewed before it’s checked into the mainline source tree, to check for security problems, and we’ve got a security push built-into the schedule where we’ll go back and re-review all the code that was checked in during the lifetime of the project.

This is a totally new way of working, and it’s incredibly resource intensive, but the results are unmistakable.  Every system we’ve released since we started implementing these paradigms has been significantly more secure than the previous ones; Longhorn will be no different.

I’m not saying that Longhorn will be security hole free, it won’t be.  We’re human, we screw up.  But it’ll be orders of magnitude better than anything out there.

By the way, I want to be clear: I'm not trying to denigrate the entire open source community.  There ARE people who get it in the open source community.  The OpenBSD link I mentioned above is a good example of a team that I believe DOES understand what's needed these days.

I just don't see the same level of rigor being applied by the general community.  Maybe I'm just not looking in the right places.  Believe me, I'd LOVE to be proven wrong on this one.

Edit: Replaced thread-modeled with threat-modeled :)

• What's wrong with this code, part 21, a psychic debugging example

Over the weekend, one of the developers in my group sent me some mail - he was seeing one of the registers in his code getting corrupted across a procedure call.  He was quite surprised to see this, and asked me for any suggestions.

With the help of the info he gave me, I was able to figure out what had gone wrong with his code, and I realized that it'd make a great "what's wrong with this code" example.

There are three parts to the code associated with this "what's wrong".  The first is an interface definition:

`class IPsychicInterface {`
`public:`
`    virtual bool DoSomeOperation(int argc, _TCHAR *argv[]) = 0;`
`};`

Next, you have a tiny test application:

`int _tmain(int argc, _TCHAR* argv[])`
`{`
`    register int value1 = 1;`
`    IPsychicInterface *psychicInterface = GetPsychicInterface();`
`    register int value2 = 2;`

`    psychicInterface->DoSomeOperation(argc, argv);`
`    assert(value1 == 1);`
`    assert(value2 == 2);`
`    return 0;`
`}`

The failure happened when the caller returned from psychicInterface->DoSomeOperation - upon the return, the ESI register, which is supposed to be preserved, got trashed.  Further debugging showed that the reason ESI was trashed was that the stack was imbalanced after the call to DoSomeOperation.

There's one more piece of information that I was given that let me immediately realize the root cause of the problem.

I know that if I include that information, what went wrong should be blindingly obvious, so I'm going to be mean and ask you to tell me what that one additional piece of information was.  The reason the other developer in my group didn't find it was simply that he was looking at too much data - if I had pointed out that one additional piece of data, he'd have instantly figured it out too.

So the "answer" to this part of the "What's wrong" problem is "What single additional piece of information was I given that made this problem simple to solve?"

• Why do people think that Windows is "easy"?

Every once in a while, someone sends me mail (or a pointer to a blog post) and asks "Why can't you guys do something like that?".  The implication seems to be that Windows would be so much better if we simply rewrote the operating system using technology <foo>.

And maybe they're right.  Maybe Windows would be better if we threw away the current kernel and rewrote it using <pick your favorite operating environment>.  I don't know, and I doubt that I'll ever find out.

The reason is that making any substantial modifications to an operating system as large and as successful as Windows is hard.  Really, really, really hard.  You can see this with Vista - in the scheme of things, there were relatively few changes made to existing elements of the operating system (as far as I can tell, the biggest one was the conversion from the XP display driver model to the Vista display driver model), but even those changes have caused a non-trivial amount of pain for our customers.

Even relatively small modifications can cause pain to customers - one of the changes I made to the legacy multimedia APIs was to remove support for NT4 style audio drivers from winmm.  This functionality has been unsupported since 1998, and we were unaware of any applications that actually used it.  Shortly after Beta2 shipped, we started receiving bug reports from the field - people reported that some call center applications had stopped working.  We started digging and discovered that these call centers were using software that depended on the NT4 style audio drivers.  These call centers didn't have the ability to upgrade their software (the vendor had gone out of business, and the application worked just fine for their needs).  So we put the support for NT4 drivers back, because that was what our customers needed to have happen.

Windows is an extraordinarily complicated environment - as a result, it's extremely unlikely that any changes along the line of "throw away the kernel and replace it with <foo>" are going to happen.   Of course, I've been wrong before :).

• IE Code quality commentary...

I just saw this post by Michal Zalewski on BugTraq.  From the post:

It appears that the overall quality of code, and more importantly, the
amount of QA, on various browsers touted as "secure", is not up to par
with MSIE; the type of a test I performed requires no human interaction
and involves nearly no effort. Only MSIE appears to be able to
consistently handle [*] malformed input well, suggesting this is the
only program that underwent rudimentary security QA testing with a
similar fuzz utility.

I'm wondering when Michal's post will show up on slashdot.

Edit: Corrected Michal's name - Sorry about that.

• Where do you go to get answers to your technical questions?

One of the things I'm currently working on is analyzing our community efforts, so I'd like to turn the blog around and ask:

When you have a technical question about a product, where do you go to look for answers?

Places I know about (in no particular order):

• My blog :).
• Other Microsoft people's blogs :).
• The MSDN Support forums.
• The Microsoft Newsgroups.
• Paid Support.
• Mailing lists (wdmauddev is a great example of this)

I'm not just looking for programming questions - even questions like "where do I get a driver for my <whatever> card" or "how do I do <blah>" count.

Any and all answers would be appreciated - I'm just trying to understand the landscape right now.

Edit to add: Btw, for those of you proposing "Google" as the generic answer, what happens when the answer isn't on the search engines?

• The purpose of an operating system, redux

Something changed in the stress mix last week, and I've been swamped with stress failures - that's what caused the lack of posts.  But I've spent a fair amount of time while driving into work thinking about my last post (on the purpose of an operating system).  I've not actually read most of the comments (I've really been busy), so if I'm echoing comments made on the other post, forgive me for poaching your ideas - it really is a coincidence.  I'm a little surprised it's taken me this long to have this idea gel in its current form - it's blindingly obvious, but I never put the pieces together in this way before.

I still stand by my original premise.  The purpose of an operating system is to shield the application from the hardware.

But the thing that you purchase in the store (or buy with your new computer, or download from RedHat) ISN'T an operating system.  It's a platform.

And a platform does more than just isolate the user from the hardware, a platform is something on which you build applications.  This is a key distinction that it seems many people have a really hard time making.

Let me take Linux as an example, even though I'm far from a Linux expert.  The Linux operating system exists to isolate the user from the hardware.  It hosts drivers and presents a fundamental API layer on which to build systems.  But someone receiving a machine with just a copy of Linux would be rather upset, because it's not really very useful.  What makes Linux useful is that it's a platform on which to build applications.  In the FOSS community, the "platform" is called a "distribution"; it contains a set of tools (Apache, Perl, etc.).  Those tools make up the platform on which you write applications.  Btw, RMS has been saying this for years now in insisting that people call the "OS" known as Linux "GNU/Linux" - he's explicitly making the differentiation between Linux the operating system and GNU/Linux the platform.

Similarly, Windows isn't an operating system.  Windows is a platform.  Nowadays, the Windows platform runs on the NT operating system; in previous years, the Windows platform ran on the Windows VxD operating system, and before that it ran on the MS-DOS operating system.  OSX is also a platform; it runs on an operating system that is (I believe) a derivative of the Mach OS, running with a BSD personality layer (I may be wrong on this; I'm not enough of an OSX expert to know the subtleties of its implementation).

For convenience's sake, we refer to the Windows platform as an operating system, just as people refer to OSX or Linux as operating systems - it's simply easier to present it that way to users.

If you make the paradigm shift to considering the "operating system" as an operating system plus a development platform, it makes a heck of a lot more sense why the platform contains things like an HTML renderer, a sockets layer, a TCP/IP stack, an HTTP server, a directory service, a network filesystem, etc.  These aren't things that shield an application from the hardware, but they ARE things that provide value to applications that want to run on that platform.

As an easy example, consider an application like the game Neverwinter Nights.  Because the developers of Neverwinter Nights knew that there was an HTML renderer built into the platform, it meant that they could leverage that renderer in their launcher application (or their update application, I forget which of them used the MSHTML control).  Because they knew that the platform contained a multimedia stack with WAV file rendering, they didn't have to build a WAV renderer into the game.  Because they knew the platform had built-in support for video rendering, they didn't have to include a video renderer.  They might have had to include a video codec along with their application, because the platform didn't necessarily include that, but it's orders of magnitude easier to write (or license) a video codec than it is to write or license an entire multimedia pipeline.

A rich platform means that applications can depend on that platform, which, in turn, makes the platform more attractive to applications.  Everything that the platform does that an application doesn't have to do is one less thing the application needs to worry about.  Of course, the challenge when enhancing the platform is to ensure that the platform provides the right level of capabilities, and the right ease of use for those capabilities; otherwise applications won't use the platform's implementation, but will choose to roll their own.

• Mother, Can I?

Recently someone posted the attached screen shot on the internal self hosting alias.

What's wrong with this English?

It's the use of "Can" instead of "May". This happens to be one of my minor pet peeves with common English usage.  The difference between "Can" and "May" can be quite subtle, and most people don't catch it.  "Can" reflects the ability to do something; "May" requests permission to do something.

To use my kids as an example, a dialog with them might go something like:

"Dad, can I go to the store?"  "Absolutely you can - it's just down the street, so it's not a big deal."

"Dad, may I go to the store?" "No you may not, without a parent accompanying you."

The first question asks if the kid asking has the ability to go to the store - of course they do, it's nearby.  The second question asks permission to go to the store.

Unfortunately I didn't notice this until yesterday, so it's too late to get it fixed for Vista, but it'll be fixed in a subsequent release.  And it's going to annoy the heck out of me every time I see it (which shouldn't be that often :->).

Edit: Fixed typo pointed out by Peter Ritchie (I love the power of the edit button to make me look less stupid) :)

• What's wrong with this code, part 6

Today, let's look at a trace log writer.  It's the kind of thing you'd find in many applications; it simply does a printf and writes its output to a log file.  In order to have maximum flexibility, the code re-opens the file every time the application writes to the log.  But there's still something wrong with this code.

This “what’s wrong with this code” is a little different.  The code in question isn’t incorrect as far as I know, but it still has a problem.  The challenge is to understand the circumstances in which it doesn’t work.

`/*++`
` * LogMessage`
` *      Trace output messages to log file.`
` *`
` * Inputs:`
` *      FormatString - printf format string for logging.`
` *`
` * Returns:`
` *      Nothing`
` *     `
` *--*/`
`void LogMessage(LPCSTR FormatString, ...)`
`#define LAST_NAMED_ARGUMENT FormatString`
`{`
`    CHAR outputBuffer[4096];`
`    LPSTR outputString = outputBuffer;`
`    size_t bytesRemaining = sizeof(outputBuffer);`
`    ULONG bytesWritten;`
`    bool traceLockHeld = false;`
`    HANDLE traceLogHandle = NULL;`
`    va_list parmPtr;                    // Pointer to stack parms.`
`    EnterCriticalSection(&g_TraceLock);`
`    traceLockHeld = TRUE;`
`    //`
`    // Open the trace log file.`
`    //`
`    traceLogHandle = CreateFile(TRACELOG_FILE_NAME, FILE_APPEND_DATA, FILE_SHARE_READ, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);`
`    if (traceLogHandle == INVALID_HANDLE_VALUE)`
`    {`
`        goto Exit;`
`    }`
`    //`
`    // printf the information requested by the caller onto the buffer`
`    //`
`    va_start(parmPtr, FormatString);`
`    StringCbVPrintfEx(outputString, bytesRemaining, &outputString, &bytesRemaining, 0, FormatString, parmPtr);`
`    va_end(parmPtr);`
`    //`
`    // Actually write the bytes.`
`    //`
`    DWORD lengthToWrite = static_cast<DWORD>(sizeof(outputBuffer) - bytesRemaining);`
`    if (!WriteFile(traceLogHandle, outputBuffer, lengthToWrite, &bytesWritten, NULL))`
`    {`
`        goto Exit;`
`    }`
`    if (bytesWritten != lengthToWrite)`
`    {`
`        goto Exit;`
`    }`
`Exit:`
`    if (traceLogHandle)`
`    {`
`        CloseHandle(traceLogHandle);`
`    }`
`    if (traceLockHeld)`
`    {`
`        LeaveCriticalSection(&g_TraceLock);`
`        traceLockHeld = FALSE;`
`    }`
`}`

One hint: The circumstance I’m thinking of has absolutely nothing to do with out of disk space issues.

As always, answers and kudos tomorrow.

• Moore's Law Is Dead, Long Live Moore's Law

Herb Sutter has an insightful article that will be published in Dr. Dobb's in March, but he's been given permission to post it to the web ahead of time.  IMHO, it's an absolute must-read.

In it, he points out that developers will no longer be able to count on the fact that CPUs are getting faster to cover their performance issues.  In the past, it was ok to have slow algorithms or bloated code in your application because CPUs got exponentially faster - if your app was sluggish on a 2GHz PIII, you didn't have to worry, the 3GHz machines would be out soon, and they'd be able to run your code just fine.

Unfortunately, this is no longer the case - the CPU manufacturers have hit a wall, and are (for the foreseeable future) unable to make dramatically faster processors.

What does this mean?  It means that (as Herb says) the free lunch is over. Intel (and AMD) isn't going to be able to fix your app's performance problems, you've got to fall back on solid engineering - smart and efficient design, extensive performance analysis and tuning.

It means that using STL or other large template libraries in your code may no longer be acceptable, because they hide complexity.

It means that you've got to understand what every line of code is doing in your application, at the assembly language level.

It means that you need to investigate to discover if there is inherent parallelism in your application that you can exploit.  As Herb points out, CPU manufacturers are responding to the CPU performance wall by adding more CPU cores - this increases overall processor power, but if your application isn't designed to take advantage of it, it won't get any faster.

Much as the financial world enjoyed a 20 year bull market that recently ended (ok, it ended in 1999), the software engineering world enjoyed a 20 year long holiday that is about to end.

The good news is that some things are still improving - memory bandwidth continues to increase, and hard disks are continuing to get larger (although not faster).  CPU manufacturers are also going to keep adding more on-chip cache to their CPUs, and that will continue to help.

Compiler writers are also getting smarter - they're building better and better optimizers, which can do some really quite clever analysis of your code to detect parallelism that you didn't realize was there.  Extensions like OpenMP (in VS 2005) also help here.

But the bottom line is that the bubble has popped and now it's time to pay the piper (I'm REALLY mixing metaphors today).  CPUs aren't going to be getting any faster anytime soon, and we're all going to have to deal with it.

This posting is provided "AS IS" with no warranties, and confers no rights.

• Did you know that OS/2 wasn't Microsoft's first non-Unix multi-tasking operating system?

Most people know about Microsoft's official timeline for its operating-system-like products:

1. Xenix - Microsoft's first operating system, a version of UNIX that we did for microprocessors.

2. MS-DOS/PC-DOS, a 16-bit operating system for the 8086 CPU.

3. Windows (not really an operating system, but it belongs in the timeline).

4. OS/2, a 16-bit operating system developed jointly with IBM.

5. Windows NT, a 32-bit operating system for the Intel i386 processor, the MIPS R8800, and the DEC Alpha.

But most people don't know about Microsoft's other multitasking operating system: MS-DOS 4.0 (not to be confused with PC-DOS 4.0).

MS-DOS 4.0 was actually a version of MS-DOS 2.0 that was written in parallel with MS-DOS 3.x (DOS 3.x shipped while DOS 4 was under development, which is why it skipped a version).

DOS 4 was a preemptive real-mode multitasking operating system for the 8086 family of processors.  It had a boatload of cool features, including movable and discardable code segments, movable data segments (the Windows memory manager was a version of the DOS 4 memory manager).  It had the ability to switch screens dynamically – it would capture the foreground screen contents, save it away and switch to a new window.

Bottom line: DOS 4 was an amazing product.  In fact, for many years (up until Windows NT was stable), one of the DOS 4 developers continued to use DOS 4 on his desktop machine as his only operating system.

We really wanted to turn DOS 4 into a commercial version of DOS, but...   Microsoft at the time was a 100% OEM shop - we didn't sell operating systems directly to customers; we sold them to hardware vendors, who shipped them with their hardware.  And in general, the way the market worked in 1985 was that no computer manufacturer was interested in a version of DOS if IBM wasn't interested.  And IBM wasn't interested in DOS 4.  They liked the idea of multitasking, however, and were very interested in working with that - in fact, one of their major new products was "TopView", a character-mode window manager much like Windows.  They wanted an operating system that had most of the capabilities of DOS 4, but that ran in protected mode on the 286 processor.  So IBM and Microsoft formed the Joint Development Program, which shared development resources between the two companies.  And the DOS 4 team went on to become the core of Microsoft's OS/2 team.

But what about DOS 4?  It turns out that there WERE a couple of OEMs that had bought DOS 4, and Microsoft was contractually required to provide the operating system to them.  So a skeleton crew was left behind to work on DOS and to finish it to the point where the existing DOS OEMs were satisfied with it.

Edit: To fix the title which somehow got messed up.

• What's wrong with this code, part 3

This time, let’s consider the following routine used to determine if two strings are equal (case insensitively).  The code’s written in C# if it’s not obvious.

static bool CompareStrings(String string1, String string2)
{
    //
    //    Quick check to see if the strings' lengths are different.  If the lengths are different, the strings are different.
    //
    if (string1.Length != string2.Length)
    {
        return false;
    }

    //
    //    Since we're going to be doing a case insensitive comparison, let's upper case the strings.
    //
    string upperString1 = string1.ToUpper();
    string upperString2 = string2.ToUpper();

    //
    //    And now walk through the strings comparing the characters to see if they match.
    //
    for (int i = 0 ; i < string1.Length ; i += 1)
    {
        if (upperString1[i] != upperString2[i])
        {
            return false;
        }
    }
    return true;
}

Yes, the code is less efficient than it could be, but there’s a far more fundamental issue with the code.  Your challenge is to determine what is incorrect about the code.

Answers (and of course kudos to those who found the issues) tomorrow.

• Beep Beep

What's the deal with the Beep() API anyway?

It's one of the oldest Windows APIs, dating back to Windows 1.0.  It's also one of the few audio APIs that my team doesn't own.  The Beep API actually has its own dedicated driver (beep.sys).  The reason for this is that the Beep() API works totally differently from any other audio API in the system.

Back when IBM built the first IBM PCs, they realized that they needed to have the ability to do SOME level of audio, even if it wasn't particularly high quality.  So they built a speaker into the original PC hardware.

But how do you drive the speaker?  It turns out that the original PC hardware used an 8253 programmable interval timer to control the system hardware timer.  The 8253 was a pretty cool little chip - it could operate in six different modes: interrupt on terminal count, one-shot, rate generator, square wave generator, software strobe, or hardware strobe.  It also contained three independent counters - counter 0 was used by the operating system, and counter 1 was reserved for the hardware.  The third counter, counter 2, was special.  The IBM hardware engineers tied the OUT2 line from the 8253 to the speaker line, and they programmed the timer to operate in square wave generation mode.

What that means is that whenever counter 2 of the 8253 counted down to 0, it would toggle the output of the OUT2 line.  This gave the PC a primitive way of generating very simple tones.

The original Windows Beep() API simply fiddled the controls on the 8253 to cause it to generate a square wave with the appropriate frequency, and that's what Beep.sys continues to do.  Legacy APIs can be hard to remove sometimes :)

Nowadays, the internal PC speaker is often also connected to the PC's audio solution, which allows the PC to produce sound even when no external speakers are connected to the machine.

In addition to the simple beep, some very clever people figured out how to use the 8253 to generate honest-to-goodness audio.  I'm not sure how they managed it, but I remember that someone had a PC-speaker-based sound driver for DOS at one point - it totally killed your PC's performance, but it DID play something better than BEEEEEEP.

Edit: s/interrupt conroller/interval timer/

Edit2: fixed description of channel 1 (in case someone comes along later and decides to depend on my error).

• Why don't I agree with Bruce Schneier all the time :)

Friday's post about security blogs apparently stirred up a bit of unintended controversy.

When describing Bruce Schneier's blog, I said "I don't agree with a lot of what he says".  Apparently this is heresy in some parts, although I don't understand why.  Bruce is unquestionably a very, very smart man (and an excellent writer, I simply loved Applied Cryptography), but he's no Chuck Norris :)

On most topics - security architecture, crypto design, threat analysis, etc, Bruce is remarkable.  I find most of what he writes to be insightful.

But Bruce seems to have a complete blind spot when it comes to Microsoft.  To my knowledge, even though essentially every other serious security analyst has acknowledged that Microsoft has done a staggering amount of work to improve the security of its products, Bruce still maintains that Microsoft has no clue when it comes to security.  That stings.

The #2 hit in a search for Bruce Schneier Microsoft is: http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1011474,00.html which includes: " Microsoft is certainly taking it more seriously than three years ago, when they ignored it completely. But they're still not taking security seriously enough for me. They've made some superficial changes in the way they approach security, but they still treat it more like a PR problem than a technical problem".  This couldn't be farther from the truth (the #1 hit is Schneier's FAQ about the PPTP analysis he did where he neglected to acknowledge the work that Microsoft did to rectify the issues he found after his analysis).

And then there was this gem (from February of this year): http://www.schneier.com/blog/archives/2007/02/drm_in_windows.html.  He took Peter Gutmann's article and accepted it as the gospel truth, even though Gutmann had absolutely no factual basis for his speculation - Gutmann hadn't verified a single one of his claims, heck he hadn't even installed Vista at the time he wrote his paper.

On the basis of one paper from someone who had never even RUN Vista, Schneier leapt to the conclusion that Microsoft had embedded DRM into all levels of the operating system and that was a reason to avoid Vista.

For the following 5 paragraphs, please note: I AM NOT A LAWYER.  I AM NOT GIVING A LEGAL OPINION, THESE ARE JUST MY THOUGHTS.

I also believe that he hasn't fully thought out his position on holding companies financially liable for security holes in their products.  At first blush the idea is attractive, but I firmly believe that its consequences would totally destroy the Internet as we know it today.

It's also entirely possible that it would kill the open source movement (talk about unintended consequences).  Let's say that there's a security vulnerability found.  If the vulnerability is found in a closed source product (or in proprietary code), then the corporation would be the only one that could be held liable for the damages - the individual developer would be protected by the corporate liability shield.

But for open source projects, often there is no such corporate liability shield (I could imagine scenarios where a corporate liability shield might apply, but I don't think they apply in general).  So who pays up if a vulnerability is found in an open source project?  The only likely target is the individual developer (or developers) who introduced the defect (I suspect that those involved in the distribution that contained the vulnerable code would also be targeted).

This means that it's highly likely that the individual contributors to open source projects would be held personally financially liable for security vulnerabilities they introduce.  So to contribute to open source projects, you'd have to have many millions of dollars of personal liability insurance (or run the risk of financial ruin if a mistake is found in your code).  That is highly likely to result in a stifling of the open source movement, and there's no easy way to work around it.

It's also likely to decrease the likelihood that a corporation would adopt an OSS solution.  Consider the situation where a bank (or major retailer) is worried about having its customer records hacked.  Since the bank/retailer is going to be held responsible for its security breaches, then the bank/retailer has to factor that risk when it chooses a vendor for its database solution.  If the bank/retailer thinks it can sue the software developer who developed the database solution in the event of a breach, and it has two choices for a database vendor, one of them developed by a bunch of people who don't have any real assets and the other comes from a company with insurance and assets, it would be crazy to choose the one where you have no one to sue.

Those are a couple of reasons why I disagree with Bruce Schneier on occasion.

• What’s wrong with this code, part 22 – Drawing Text…

Recently I’ve been working on something that I’ve never done before in my almost 24 years at Microsoft.

For the past 23ish years, I’ve been a plumber – all the work I’ve done has been under the covers.  But for the next version of Windows, I decided to stretch my boundaries a bit and try some UI programming.  I’ve just spent the past few days working on a cool change to the volume control (it’s not important what it is, and most people will never know about the change, but those that do will probably agree with me :)).

As part of the change, I needed to measure the dimensions of a text string.  This is a dummy version of some code I wrote; it simply calls DrawText with the DT_CALCRECT flag on a memory DC that I created.

```
BOOL InitInstance(HINSTANCE hInstance, int nCmdShow)
{
    HWND hWnd;

    hInst = hInstance; // Store instance handle in our global variable

    hWnd = CreateWindow(szWindowClass, szTitle, WS_OVERLAPPEDWINDOW,
        CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, hInstance, NULL);

    if (!hWnd)
    {
        return FALSE;
    }

    // <BEGIN LARRYS CODE>
    HDC hdc = CreateCompatibleDC(NULL);

    RECT rcText = {0, 0, 88, 34};

    DrawText(hdc, L"My Text String", -1, &rcText, DT_CENTER | DT_END_ELLIPSIS | DT_EDITCONTROL | DT_WORDBREAK | DT_NOPREFIX | DT_CALCRECT);

    CAtlString string;
    string.Format(L"Text String occupies: %d x %d pixels", rcText.right - rcText.left, rcText.bottom - rcText.top);
    MessageBox(hWnd, string, L"String Size", 0);
    // <END LARRYS CODE>

    ShowWindow(hWnd, nCmdShow);
    UpdateWindow(hWnd);

    return TRUE;
}
```

This is just the code that Visual Studio generates when you create a Windows Win32 project, with my code inserted between "BEGIN LARRYS CODE" and "END LARRYS CODE".  The meat of it is just three lines of code.

Even though there’s almost no code here, it still has a bug in it that was quite subtle and took me several hours to find.

• Does Visual Studio make you stupid?

Charles Petzold recently gave this speech to the NYC .NET users group.

I've got to say, having seen Daniel's experiences with Visual Basic, I can certainly see where Charles is coming from.  Due partly to the ease of use of VB, and (honestly) a lack of desire to dig deeper into the subject, Daniel's really quite ignorant of how these "computer" thingies work.  He can use them just fine, but he has no understanding of what's happening.

More importantly, he doesn't understand how to string functions/procedures together to build a coherent whole - if it can't be implemented with a button or image, it doesn't exist...

Anyway, what do you think?

• What's up with Audio in Windows Vista?

Steve Ball (the GPM for the MediaTech group (of which Windows Audio is a part)) discussed some of these changes in the Windows Audio Channel 9 video, but I'd like to spend a bit more time talking about what we've done.

A lot of what I'm discussing is on the video, but what the heck - I've got a blog, and I need to have some content to fill in the white space, so...

The Windows audio system debuted in Windows 3.1 with the "Multimedia Extensions for Windows", or MME APIs.  Originally, only one application at a time could play audio, because the original infrastructure had no support for tracking or mixing audio streams (this is also why old audio apps like sndrec32 pop up an error indicating that another device is using the audio hardware when they encounter any error).

When Windows 95 (and NT 3.1) came out, the MME APIs were stretched to 32 bits, but the basic infrastructure didn't change - only one application could play audio at one time.

For Windows 98, we deployed an entirely new audio architecture, based on the Windows Driver Model, or WDM.  As a part of that architectural change, we added the ability to mix audio streams - finally you could have multiple applications rendering audio at the same time.

There have been numerous changes to the audio stack over the years, but the core audio architecture has remained the same until Vista.

Over the years, we've realized that there were three major problem areas with the existing audio infrastructure:

1. The amount of code that runs in the kernel (coupled with buggy device drivers) makes the audio stack one of the leading causes of Windows reliability problems.
2. It's also become clear that while the audio quality in Windows is just fine for normal users, pro-audio enthusiasts are less than happy with the native audio infrastructure.  We've made a bunch of changes to the infrastructure to support pro-audio apps, but those were mostly focused around providing mechanisms for those apps to bypass the audio infrastructure.
3. We've also come to realize that the tools for troubleshooting audio problems aren't the greatest - it's just too hard to figure out what's going on, and the UI (much of which dates back to Windows 3.1) is flat-out too old to be useful.

Back in 2002, we decided to make a big bet on Audio for Vista and we committed to fixing all three of the problems listed above.

The first (and biggest) change we made was to move the entire audio stack out of the kernel and into user mode.  Pre-Vista, the audio stack lived in a bunch of different kernel mode device drivers, including sysaudio.sys, kmixer.sys, wdmaud.sys, redbook.sys, etc.  In Vista and beyond, the only kernel mode drivers for audio are the actual audio drivers (and portcls.sys, the high level audio port driver).

The second major change we made was a totally revamped UI for audio.  Sndvol32 and mmsys.cpl were completely rewritten (from scratch) to include new, higher quality visuals, and to focus on the common tasks that users actually need to do.  All the old functionality is still there, but for the most part, it's been buried deep below the UI.

The infrastructure items I mentioned above are present in Vista Beta1; unfortunately, the UI improvements won't be seen by non-Microsoft people until Vista Beta2.

• My office guest chair

Microsoft's a big company and, like all big companies, has all sorts of silly rules about what you can have in your office.   One of them covers office furniture; you get:

1. A desk chair
2. One PED (sort of a mobile filing cabinet)
3. One curved desk piece (we have modular desk pieces with adjustable heights)
4. One short straight desk piece
5. One long straight desk piece
6. One white board
7. One cork board
8. One or two hanging book shelves with THREE shelves (not 4)
9. One guest chair.

If you're a manager, you can get a round table as well (presumably to have discussions at).

In my case, most of my office stuff is pretty stock - except that I got my manager to requisition a round table for his office for me (he already had one).  I use it to hold my manipulative puzzles.  I also have two PEDs.

But I'm most proud of my guest chair.  I have two of them.  One's the standard Microsoft guest chair.  But the other one's special.  You see, it comes from the original Microsoft campus at 10700 Northup Way, and is at least 20 years old.

I don't think that it's the original chair I had in my original office way back then - that was lost during one of my moves, but I found the exact match for the chair in a conference room the day after the move and "liberated" it.

But I've had this particular chair since at least 1988 or so.  The movers have dutifully moved it with me every time.

Daniel loves it when he comes to my office since it's comfy - it's padded and the standard guest chairs aren't.

Edit: Someone asked me to include a picture of the chair:

• Young Turks

Ok, this is a bit of a rant.  I recently encountered an email exchange from someone I respect where the person in question asked (more-or-less) "I can't, for the life of me, see why on earth this particular piece of functionality exists in Windows".

Now this person is somewhat younger than I (ok, most everyone in the industry is somewhat younger than I), but he is a super smart guy.

The thing is, he has NO CLUE about how the personal computer world operated back in the early 80's when Windows was designed.  Windows was designed to run on machines with 512K of RAM and a 10M hard disk.  In addition, the CPU on which Windows was intended to run didn't support memory protection, so the concept of "separation of privilege" was meaningless.  MS-DOS (on which Windows 1.0 was built) had a long history of putting critical OS information into an application's data space.  For Windows, things were no different - the line between application and system was often blurred.

Whenever there was a possibility of offloading potentially optional functionality onto the running application, Windows took it.  Instead of having a preemptive scheduler, Windows used a cooperative scheduler.  That meant that applications never had to deal with ugly issues like synchronization of data, etc.  The consequence of this cooperation was that a single errant Windows application could hang all the running applications.

But that was ok, because the overhead of the infrastructure to FIX the problem (per-application message queues, etc) would have meant that Windows wouldn't be able to run on its target systems.  And adding all that extra stuff really wouldn't make that much of a difference since the applications were all running in the same address space (along with Windows and the operating system).

So it's not surprising that there were a lot of things present in the early versions of Windows that would make people cringe today.  Sometimes this isn't a problem, but one of the key values of the Windows platform is that Microsoft very rarely intentionally breaks applications.  We'll break applications when they depend on a security flaw, and sometimes applications will break when there's a fundamental architectural shift occurring (we already know that some multimedia apps are broken in Vista because they depend on being able to call multimedia APIs during DLL initialization, which only worked by luck in XP).

But barring that, Microsoft has made a strong commitment to not breaking customers' applications.  The good thing is that it means the Windows platform is remarkably stable.  Many applications written for Windows 1.0 still run on Windows Vista.  It means that corporations that have made an investment in technology aren't going to lose that investment by moving to a newer version of Windows.  It also means that every version of Windows carries forward the designs from previous versions.

If there was any "mistake" made, it was Microsoft's unceasing commitment to backwards compatibility.  And I personally believe that a huge part of the reason for Windows success in the marketplace IS that commitment.  If we didn't have it, people would have moved onto other platforms long ago.

So when someone starts questioning why ancient stuff exists in Windows, they really need to understand the environment in which those decisions were made.  Part of the value of being a young turk is that they challenge the decisions that were made by their elders.  But before you decide to challenge an earlier decision, you need to understand the environment in which the decision was made.  Sometimes what no longer makes sense did at one time.

Btw, before people start claiming that this was somehow "Microsoft's" fault: the original Mac OS had many of the same issues.  It was designed to run on a machine with 128K of RAM that didn't even HAVE a hard disk - it only supported a 400K floppy disk.  The designers of the Mac OS made many of the same decisions that the Windows designers did (Mac OS was also a cooperative multitasking environment).  In addition, the Mac designers went even further and put significant parts of the OS into the system ROMs on the Mac, further blurring the line between application and system.

• Seen on the TV in "Larry's Lounge"

Just outside my office is a little "lounge" area - it's a sitting area with a couch and a couple of comfy chairs that's used for meetings.  It's sort-of become known as "Larry's Lounge" or "The Larry Lounge".

In the "Larry Lounge", there's a 42" flat screen TV connected to a media center box (and a newly added xbox 360).  Yes, there ARE perks to working in multimedia.

Both Daniel and Sharron have had fun with the TV when they're in the office - when the lounge isn't being used for meetings, they watch videos on the TV (and the DVD team has a huge video library).

Anyway we were futzing with the TV yesterday and Steve Ball noticed that the volume control on the TV goes from 0 to 63.

We both saw that and laughed.  Our best guess is that some engineer specified that the volume control should go from 0..99 but wrote it as 0x00..0x63.

And the UX person handling the request didn't figure out what the 0x thingy meant and left it off :)

• Looking for new skillz (turning the blog around)…

Just for giggles, I went looking at the various job listings within Microsoft and outside Microsoft (no, I’m not going anywhere, I was just curious).  While looking, I realized that I had absolutely no marketable skills :).  Nobody seems to be hiring an OS developer these days.

To repeat and be even more clear: I’m *not* leaving Microsoft.  I’m *not* leaving Windows.

I’m just looking for a book or two to read to improve my skills (I do this regularly – most of my recent reading has either been on Security or WPF and to be honest, I’m kinda bored of those topics so I’m interested in branching out beyond security and UI topics)…

I could run out and browse the bookstores (and I might just do that) but I figured “Hey, I’ve got a blog, why don’t I ask the folks who read my blog?”.  So let me turn the blog around and ask:

If I wanted to go out and learn web development, which books should I read?

I've already read "JavaScript: The Good Parts" and it was fascinating, but it's more of a language book (and a very good one) than a web development book.  So what books should I read to learn web development?
