April, 2013

  • The Old New Thing

    The problem with adding more examples and suggestions to the documentation is that eventually people will stop reading the documentation


    I am a member of a peer-to-peer discussion group on an internal tool for programmers which we'll call Program Q. Every so often, somebody will get tripped up by smart quotes or en-dashes or ellipses, and they will get an error like

    C:\> q select table –s “awesome table”
    Usage: q select table [-n] [-s] table
    Error: Must specify exactly one table.

    After it is pointed out that they are a victim of Word's auto-conversion of straight quotes to slanted quotes, there will often be a suggestion, "You should treat en-dashes as plain dashes, smart quotes as straight quotes, and fancy-ellipses as three periods."

    The people who support Program Q are members of this mailing list, and they explain that unfortunately for Program Q, those characters have been munged by internal processing to the point that when they reach the command line parser, they have been transformed into characters like ô and ö, so the parser doesn't even know that it's dealing with an en-dash or smart-quote or fancy-ellipsis.

    Plus, this is a programming tool. Programmers presumably prefer consistent and strict behavior rather than auto-correcting guess-what-I-really-meant behavior. One of the former members of the Program Q support team recalled,

    It might be possible to detect potential unintended goofiness and raise an error, but that creates the possibility of false positives, which in turn creates its own set of support issues that are more difficult to troubleshoot and resolve. Sometimes it's better to just let a failure fail at the point of failure rather than trying to be clever.

    There was a team that had a script that started up the Program Q server, and if there was a problem starting the server, it restored the databases from a backup. Automated failure recovery, what could possibly go wrong? Well, what happened is that the script decided to auto-restore from a week-old backup and thereby wiped out a week's worth of work. And it turns out that the failure in question was not caused by database corruption in the first place. Oops.

    "Well, if you're not going to do auto-correction, at least you should add this explanation to the documentation."

    The people who support Program Q used to take these suggestions to heart, and when somebody said, "You should mention this in the documentation," they would, more often than not, go ahead and add it to the documentation.

    But that merely created a new phenomenon:

    I can't get Program Q to create a table. I tried q create -template awesome_template awesome_table, but I keep getting the error "Template 'awesome_template' does not exist in the default namespace. Check that the template exists in the specified location. See 'q help create -template' for more information." What am I doing wrong?

    Um, did you check that the template exists in the specified location?

    "No, I haven't. Should I?"


    After some troubleshooting, the people on the discussion group determine that the problem was that the template was created in a non-default namespace, so you had to use a full namespace qualifier to specify the template. (I'm totally making this up, I hope you realize. The actual Program Q doesn't have a template-create command. I'm just using this as a fake example for the purpose of storytelling.)

    After this all gets straightened out, somebody will mention, "This is explained in the documentation for template creation. Did you read it?"

    "I didn't read the documentation because it was too long."

    If you follow one person's suggestion to add more discussion to the documentation, you end up creating problems for all the people who give up on the documentation because it's too long, regardless of how well-organized it is. In other words, sometimes adding documentation makes things worse. The challenge is to strike a decent balance.

    Pre-emptive snarky comment: "TL;DR."

  • The Old New Thing

    On giving a name at the register to be called when your order is ready


    Shultzy's Sausage describes itself as "Seattle's Wurst Restaurant since 1989!" It's a local hangout for sausage, beer, chili, and advanced dishes like sausage with beer or sausage with chili.

    In the early 1990's, Shultzy's expanded to a second location just a few blocks from Microsoft's main campus in Redmond, at a location known to my circle of friends as the location of death.

    Many neighborhoods have a location of death: It's the location where there's a restaurant that can never manage to stay open. First it's a gyro restaurant. After a few months the gyro restaurant shuts down and a crêpe restaurant opens in its place. After another few months, the crêpe restaurant closes and a tea shop opens. You get the idea. It's not that we want the restaurants to fail. It's just that the location appears to be haunted.

    Anyway, we wanted Shultzy's to succeed, so we made a point of going there semi-regularly. One evening, we went to the register and placed our orders, and the person behind the counter asked for a name to call when the order was ready.

    Sometimes, when we went out to eat as a group and were feeling particularly whimsical, we'd all give the same name (Dave). Each time the name Dave was called, whoever felt lucky went to the counter to pick up the order. If the order was theirs, then they "won." It was cheap amusement, I admit.

    Anyway, at this particular visit to Shultzy's, we did not play the Dave game. Each of us just gave our names. Except for Bob, who decided to be a bit of a smart aleck. When the person behind the counter asked, "What's your name?", Bob shot back, "What's your name?"

    The person behind the counter calmly replied, "Shultzy."

    Bob realized that he had just sassed the restaurant's owner and meekly replied, "My name's Bob."

    Shultzy was gracious. "Thanks, Bob. I'll call you when your order's ready."

    Even though Bob was only in his early 30's, he was a relative old-timer by Microsoft standards of the early 1990's. He perhaps should have recognized Shultzy, because Shultzy himself was featured in a full-page Microsoft advertisement a few years earlier touting how a small business uses Microsoft Excel to manage its day-to-day operations.

    The location of death took another victim. The Shultzy's location near Microsoft main campus shut down after a few months. But the reason wasn't lack of business. Shultzy said that business at the new location was fine. Rather, he came to the conclusion that it was too much work to manage two locations, so he scaled his business back to its original size.

    Today's article is in celebration of Seattle Restaurant Week.

  • The Old New Thing

    Another meaning of the word leptoceratops


    I dreamed that a number of the form (10ⁿ−1)/9 was called a "leptoceratops." And it had to be tied up with a squid.

  • The Old New Thing

    The managed way to retrieve text under the cursor (mouse pointer)


    Today's Little Program is a managed version of the text-extraction program from several years ago. It turns out that it's pretty easy in managed code because the accessibility folks sat down and wrote a whole framework for you, known as UI Automation.

    (Some people are under the mistaken impression that UI Automation works only for extracting data from applications written in managed code. That is not true. Native code can also be a UI Automation provider. The confusion arises because the name UI Automation is used both for the underlying native technology as well as for the managed wrappers.)

    using System;
    using System.Windows;
    using System.Windows.Forms;
    using System.Windows.Automation;

    class Program
    {
     static Point MousePos {
      get { var pos = Control.MousePosition;
            return new Point(pos.X, pos.Y); }
     }

     public static void Main()
     {
      for (;;) {
       AutomationElement e = AutomationElement.FromPoint(MousePos);
       if (e != null) {
        Console.WriteLine("Name: {0}",
         e.GetCurrentPropertyValue(AutomationElement.NameProperty));
        Console.WriteLine("Value: {0}",
         e.GetCurrentPropertyValue(ValuePattern.ValueProperty));
       }
       System.Threading.Thread.Sleep(1000);
      }
     }
    }

    We use the From­Point method to locate the automation element under the current mouse position and print its name and value.

    Well that was pretty simple. I may as well do something a little more challenging. Since the feature is known as UI Automation, I'll try automating the Run dialog by programmatically entering some text and then clicking OK.

    using System.Windows.Automation;

    class Program
    {
     static AutomationElement FindById(AutomationElement root, string id)
     {
      return root.FindFirst(TreeScope.Children,
       new PropertyCondition(AutomationElement.AutomationIdProperty, id));
     }

     public static void Main()
     {
      var runDialog = AutomationElement.RootElement.FindFirst(
       TreeScope.Children,
       new PropertyCondition(AutomationElement.NameProperty, "Run"));
      if (runDialog == null) return;
      var commandBox = FindById(runDialog, "12298");
      var valuePattern = commandBox.GetCurrentPattern(ValuePattern.Pattern)
                         as ValuePattern;
      valuePattern.SetValue("calc");
      var okButton = FindById(runDialog, "1");
      var invokePattern = okButton.GetCurrentPattern(InvokePattern.Pattern)
                         as InvokePattern;
      invokePattern.Invoke();
     }
    }

    The program starts by looking for a window named Run by performing a children search on the root element for an element whose Name property is equal to "Run".

    Assuming it finds it, the program looks for a child element whose automation ID is "12298". How did I know that was the automation ID to use? The documentation for UI Automation suggests using a tool like UI Spy to look up the automation IDs.

    Mind you, since I am automating something outside my control, I have to accept that the automation ID may change in future versions of Windows. (It's not like they check with me before making changes.) But this is a Little Program, not a production-level program, so that's a limitation I will accept, since I'm the only person who's going to use this program, and if it stops working, I know who to talk to (namely, me).

    Anyway, after we find the command box, I ask for its Value pattern. Automation elements can support patterns which expose additional properties and methods specific to particular uses. In our case, the Value pattern lets us get and set the value of an editable object, so we use the Set­Value method to set the text in the Run dialog to calc.

    Next, we look for the OK button, which UI Spy told me had automation ID 1. We ask for the Invoke pattern on the button and then call the Invoke method. The Invoke pattern is the pattern for objects that do just one thing, and Invoke means "Do that thing that you do."

    Open the Run dialog and run this program. It should programmatically set the command line to calc, then click OK. Hopefully, this will run the Calculator.

    Just for fun, here's another program that just dumps the automation properties and patterns for whatever object is under the mouse cursor:

    using System;
    using System.Windows;
    using System.Windows.Forms;
    using System.Windows.Automation;

    class Program
    {
     static Point MousePos {
      get { var pos = Control.MousePosition;
            return new Point(pos.X, pos.Y); }
     }

     public static void Main()
     {
      for (;;) {
       AutomationElement e = AutomationElement.FromPoint(MousePos);
       if (e != null) {
        foreach (var prop in e.GetSupportedProperties()) {
         object o = e.GetCurrentPropertyValue(prop);
         if (o != null) {
          var s = o.ToString();
          if (s != "") {
           var id = o as AutomationIdentifier;
           if (id != null) s = id.ProgrammaticName;
           Console.WriteLine("{0}: {1}", Automation.PropertyName(prop), s);
          }
         }
        }
        foreach (var pattern in e.GetSupportedPatterns()) {
         Console.WriteLine("Pattern: {0}", Automation.PatternName(pattern));
        }
       }
       System.Threading.Thread.Sleep(1000);
      }
     }
    }

  • The Old New Thing

    How do I wait until all processes in a job have exited?


    A customer was having trouble with job objects, specifically, the customer found that a Wait­For­Single­Object on a job object was not completing even though all the processes in the job had exited.

    This is probably the most frustrating part of job objects: A job object does not become signaled when all processes have exited.

    The state of a job object is set to signaled when all of its processes are terminated because the specified end-of-job time limit has been exceeded. Use Wait­For­Single­Object or Wait­For­Single­Object­Ex to monitor the job object for this event.

    The job object becomes signaled only if the end-of-job time limit has been reached. If the processes exit without exceeding the time limit, then the job object remains unsignaled. This is a historical artifact of the original motivation for creating job objects, which was to manage batch style server applications which were short-lived and usually ran to completion. The original purpose of job objects was to keep those processes from getting into a runaway state and consuming excessive resources. Therefore, the interesting thing from a job object's point of view was whether the process being managed in the job had to be killed for exceeding its resource allocation.

    Of course, nowadays, most people use job objects just to wait for a process tree to exit, not for keeping a server batch process from going runaway. The original motivation for job objects has vanished into the mists of time.

    In order to wait for all processes in a job object to exit, you need to listen for job completion port notifications. Let's try it:

    #define UNICODE
    #define _UNICODE
    #define STRICT
    #include <windows.h>
    #include <stdio.h>
    #include <atlbase.h>
    #include <atlalloc.h>
    #include <shlwapi.h>

    int __cdecl wmain(int argc, PWSTR argv[])
    {
     CHandle Job(CreateJobObject(nullptr, nullptr));
     if (!Job) {
      wprintf(L"CreateJobObject, error %d\n", GetLastError());
      return 0;
     }

     CHandle IOPort(CreateIoCompletionPort(INVALID_HANDLE_VALUE,
                                           nullptr, 0, 1));
     if (!IOPort) {
      wprintf(L"CreateIoCompletionPort, error %d\n",
              GetLastError());
      return 0;
     }

     JOBOBJECT_ASSOCIATE_COMPLETION_PORT Port;
     Port.CompletionKey = Job;
     Port.CompletionPort = IOPort;
     if (!SetInformationJobObject(Job,
           JobObjectAssociateCompletionPortInformation,
           &Port, sizeof(Port))) {
      wprintf(L"SetInformation, error %d\n", GetLastError());
      return 0;
     }

     PROCESS_INFORMATION ProcessInformation;
     STARTUPINFO StartupInfo = { sizeof(StartupInfo) };
     PWSTR CommandLine = PathGetArgs(GetCommandLine());
     if (!CreateProcess(nullptr, CommandLine, nullptr, nullptr,
                        FALSE, CREATE_SUSPENDED, nullptr, nullptr,
                        &StartupInfo, &ProcessInformation)) {
      wprintf(L"CreateProcess, error %d\n", GetLastError());
      return 0;
     }

     if (!AssignProcessToJobObject(Job,
             ProcessInformation.hProcess)) {
      wprintf(L"Assign, error %d\n", GetLastError());
      return 0;
     }

     ResumeThread(ProcessInformation.hThread);
     CloseHandle(ProcessInformation.hThread);
     CloseHandle(ProcessInformation.hProcess);

     DWORD CompletionCode;
     ULONG_PTR CompletionKey;
     LPOVERLAPPED Overlapped;
     while (GetQueuedCompletionStatus(IOPort, &CompletionCode,
              &CompletionKey, &Overlapped, INFINITE) &&
              !((HANDLE)CompletionKey == Job &&
               CompletionCode == JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO)) {
      wprintf(L"Still waiting...\n");
     }
     wprintf(L"All done\n");
     return 0;
    }

    The first few steps are to create a job object, then associate it with a completion port. We set the completion key to be the job itself, just in case some other I/O gets queued to our port that we aren't expecting. (Not sure how that could happen, but we'll watch out for it.)

    Next, we launch the desired process into the job. It's important that we create it suspended so that we can put it into the job before it exits or does something else that would mess up our bookkeeping. After it is safely assigned to the job, we can resume the process's main thread, at which point we have no use for the thread and process handles.

    Finally, we go into a loop pulling events from the I/O completion port. If the event is not "this job has no more active processes", then we just keep waiting.

    Officially, the last parameter to Get­Queued­Completion­Status is lpNumber­Of­Bytes, but the job notifications are posted via Post­Queued­Completion­Status, and the parameters to Post­Queued­Completion­Status can mean anything you want. In particular, when the job object posts notifications, it puts the notification code in the "number of bytes" field.

    Run this program with, say, cmd on the command line. From the nested cmd prompt, type start notepad. Then type exit to exit the nested command prompt. Observe that our program is still waiting, because it's waiting for Notepad to exit. When you exit Notepad, our program finally prints "All done".

    Exercise: The statement "Not sure how that could happen" is a lie. Name a case where a spurious notification could arrive, and how the code can protect against it.

  • The Old New Thing

    Don't forget, the fourth parameter to ReadFile and WriteFile is sometimes mandatory


    The Read­File and Write­File functions have a parameter called lp­Number­Of­Bytes­Read, which is documented as

      __out_opt LPDWORD lpNumberOfBytesRead,
    // or
      __out_opt LPDWORD lpNumberOfBytesWritten,

    "Cool," you think. "That parameter is optional, and I can safely pass NULL."

    My program runs fine if standard output is a console, but if I redirect standard output, then it crashes on the Write­File call. I verified that the handle is valid.

    int __cdecl main(int, char **)
    {
      // error checking removed for expository purposes
      HANDLE hStdOut = GetStdHandle(STD_OUTPUT_HANDLE);
      WriteFile(hStdOut, "hello", 5, NULL, NULL);
      return 0;
    }

    The crash occurs inside the Write­File function trying to write to a null pointer.

    But you need to read further in the documentation for Write­File:

    lp­Number­Of­Bytes­Written [out, optional]

    A pointer to the variable that receives the number of bytes written when using a synchronous hFile parameter. Write­File sets this value to zero before doing any work or error checking. Use NULL for this parameter if this is an asynchronous operation to avoid potentially erroneous results.

    This parameter can be NULL only when the lp­Over­lapped parameter is not NULL.

    That second paragraph is the catch: The parameter is sometimes optional and sometimes mandatory. The annotation language used in the function head is not expressive enough to say, "Sometimes optional, sometimes mandatory," so it chooses the weakest annotation ("optional") so as not to generate false positives when run through static code analysis tools.

    With the benefit of hindsight, the functions probably should have been split into pairs, one for use with an OVERLAPPED structure and one without. That way, one version of the function would have a mandatory lp­Number­Of­Bytes­Written parameter and no lp­Over­lapped parameter at all; the other would have a mandatory lp­Over­lapped parameter and no lp­Number­Of­Bytes­Written parameter at all.

    The crash trying to write to a null pointer is consistent with the remark in the documentation that the lp­Number­Of­Bytes­Written is set to zero before any work is performed. As for why the code runs okay if output is not redirected: Appearing to succeed is a valid form of undefined behavior. It appears that when the output handle is a console, the rule about lp­Number­Of­Bytes­Written is not consistently enforced.

    At least for now.

  • The Old New Thing

    How can I move an HTREEITEM to a new parent?


    Suppose you have a TreeView control, and you created an item in it, and you want to move the HTREEITEM to a new parent. How do you do that?

    You can't, at least not all in one motion.

    You will have to delete the HTREEITEM and then re-create it in its new location.

    If you want to move an HTREEITEM within the same parent (say, to reorder it among its siblings), then you can use Tree­View_Sort­Children­CB and pass a custom sort function that rearranges the children into the order you want.

  • The Old New Thing

    Where did the research project RedShark get its name?


    Project code names are not arrived at by teams of focus groups who carefully parse out every semantic and etymological nuance of the name they choose. (Though if you read the technology press, you'd believe otherwise, because it turns out that taking a code name apart syllable-by-syllable searching for meaning is a great way to fill column-inches.) Usually, they are just spontaneous decisions, inspired by whatever random thoughts jump to mind.

    Many years ago, there was an internal user interface research project code named RedShark. Not Red Shark but RedShark, accent on the Red. Where did this strange name come from?

    From a red shark, of course.

    When the project started up, the people in charge were sitting around and realized they needed to give the project a name. It so happened that the office they were sitting in belonged to a team member who collected a lot of strange toys. One of those toys was a small inflatable red shark.

    Somebody looked around the room and spotted the red shark. "Let's call it RedShark." Nobody else had a better idea, so the name passed by default.

    That small inflatable red shark became their mascot and hung from the ceiling in the hallway.

    No deep, hidden meaning. Just a $3 cheap plastic toy that happened to be in the right place at the right time.

  • The Old New Thing

    The joke's on you, because PATH goes to Penn Station, not Grand Central!


    I dreamed that I was asked to develop a hill-avoiding bike route from my childhood home. Along the way, I rode through a daycare playroom (twice, due to a spiral path), met Madonna, the ghost of Alec Baldwin's wife, a team from The Amazing Race trying to figure out which PATH train goes to Grand Central (trick question!) and waited for the next train with a supervillain who demonstrated his powers by using his stretchy arms to punch one of my colleagues who was standing ten feet away and who electrocuted another colleague's pacemaker by force of will.

    Yes, I know that Alec Baldwin is not a widower, and my colleague does not have a pacemaker, but I guess my subconscious doesn't care.

  • The Old New Thing

    Some trivia about the //build/ 2011 conference


    Registration for //build/ 2013 opens tomorrow. I have no idea what's in store this year, but I figured I'd whet your appetite by sharing some additional useless information about //build/ 2011.

    The internal code name for the prototype tablets handed out at //build/ 2011 was Nike. I think we did a good job of keeping the code name from public view, but one person messed up and accidentally let it slip to Mary-Jo Foley when they said that the contact email for people having tax problems related to the device is nikedistⓐmicrosoft.com.

    The advance crew spent an entire week preparing those devices. One of the first steps was unloading the devices from the pallets. This was done in a disassembly line: The boxes were opened, the devices were fished out, then removed from the protective sleeve. At the end of this phase, you had one neat stack of boxes and one neat stack of devices.

    The advance crew also configured the hall so they would be ready to start once Redmond sent down the final bits of the Developer Preview build. The hall was divided into sections, and each section consisted of eight long tables. Four of the tables were arranged in a square, and the other four tables were placed outside the square, one parallel to each side, forming four lanes.


    Along the inner tables, there were docking stations, each with power, wired access to a private network, and a USB thumb drive. Along the outer tables, there were desk organizers ready to hold several devices in a vertical position, and next to each organizer was a power strip with power cables at the ready.

    In this phase of the preparation, the person working the station would take a device, pop it into a docking station, and power it on with the magic sequence to boot from USB. The USB stick copied itself to a RAM drive, then ran scripts to reformat the hard drive and copy all the setup files from the private network onto the hard drive, then it installed the build onto the machine, installed Visual Studio, installed the sample applications, flashed the firmware, and otherwise prepared the machine for unboxing. (Not necessarily in that order; I didn't write the scripts, so I don't know what they did exactly. But I figure these were the basic steps.) Once the setup files were copied from the private network, the rest of the installation could proceed autonomously. It didn't need any further access to the USB stick or the network. Everything it needed was on the RAM drive or the hard drive.

    The scripts changed the screen color based on which step of the process they were in, so that the person working the station could glance over all the devices to see which ones needed attention. Once all the files were copied from the network, each device was unplugged from the docking station and moved to the vertical desk organizer. There, it got hooked up with a power cable and left to finish the installation. Moving the device to the second table freed up the docking station to accept another device.

    Assuming everything went well, the screen turned green to indicate that installation was complete, and the device was unplugged, powered down, and placed in the stack of devices that were ready for quality control.

    The devices that passed quality control then needed to be boxed up so they could be handed out to the conference attendees. Another assembly line formed: The devices were placed back in the protective sleeves, nestled snugly in their boxes, and the boxes closed back up.

    Now, I'm describing all this as if everything ran perfectly smoothly. Of course problems arose, some minor and some serious, and the process got tweaked as the days progressed, either to make things more efficient or to address a problem that had been discovered.

    For example, the devices were labeled preview devices, but shortly before the conference was set to begin, the manufacturer registered their objection to the term, since preview implies that the device will actually turn into a retail product. They insisted that the devices be called prototype devices. This meant that mere days before the conference opened, a rush print job of 5000 stickers had to be shipped down to the convention center in order to cover the word preview with the word prototype. A new step was added to the assembly line: place sticker over offending word.

    Another example of problem-solving on the fly: The SIM chip for the wireless data plan was preinstalled in the device. The chip came on a punch-out card, and the manufacturer decided to leave the card shell in the box. Okay, I guess, except that the card shell had the SIM card's account number printed on it. Since the reassembly process didn't match up the devices with the original boxes, you had all these devices with unmatched card shells. In theory, somebody might call the service provider and give the account number on the shell rather than the number on the SIM card. To fix this, a new step was added to the assembly line: Remove the card shells. All the previously-assembled boxes had to be unpacked so the shells could be removed. (At some point, somebody discovered that you could extract the shells without removing the foam padding if you held the box at just the right angle and shook it, so that saved a few seconds.)

    Now about the devices themselves: They were a very limited run of custom hardware, and they were not cheap. I think the manufacturing cost was in the high $2000s per unit, and that doesn't count all the sunk costs. I found it amusing when people wrote, "What do you mean a free tablet? Obviously they baked that into the cost of the conference registration, so you paid for it anyway." Conference registration was $2,095 (or $1,595 if you registered early), which came nowhere near covering the cost of the device.

    Some people whined that Microsoft should have made these devices available to the general public for purchase. First of all, these are developer prototypes, not consumer-quality devices. They are suitable for developing Windows 8 software but aren't ready for prime time. (For one thing, they run hot. More on that later.) Second of all, there aren't any to sell. We gave them all away! It's not like there's a factory sitting there waiting for orders. It was a one-shot production run. When they ran out, they ran out.¹

    Third, these devices, by virtue of being prototypes, had a high infant mortality rate. I don't know exactly, but I'm guessing that maybe a quarter of them ended up not being viable. One of the things that the advance crew had to do was burn in the devices to try to catch the dead ones. I remember the team being very worried that the hardware helpdesk at the conference would be overwhelmed by machines that slipped through the on-site testing. Luckily, that didn't happen. (Perhaps they were too successful, because everybody ended up assuming that pumping out these puppies was a piece of cake!)

    Doing a little back-of-the-envelope calculation: say the machines cost around $2,750 to produce, and a quarter of them failed burn-in; that puts the cost per working device at roughly $3,667. Add on top of that a 25% buffer for administrative overhead, and you're looking at a cost-per-device of over $4,500. I doubt there would be many people interested in buying one at that price.

    Especially since you could buy something very similar for around $1100 to $1400. It won't have the hardware customizations, but it'll be close.

    The hardware glitches that occurred during the keynote never appeared during rehearsals in Redmond. But when rehearsing in Anaheim, the hardware started flaking out like crazy and eventually self-destructing. (And like I said, those devices weren't cheap!) One of my colleagues got a call from Los Angeles: "When you come down here, bring as many extra Nikes as you can. We're burning through them like mad!" My colleague ended up pissing off everybody in the airport security line behind her when she got to the X-ray machine and unloaded nine devices onto the conveyor belt. "Great, I just put tens of thousands of dollars' worth of top-secret hardware on an airport X-ray machine. I hope nothing happens to them."

    Why did the devices start failing during rehearsals in Anaheim when they ran just fine in Redmond? Because in Anaheim, the devices were being run at full brightness all the time (so they showed up better on camera), they were driving giant video displays, and they were sitting under hot stage lights for hours on end. On top of that, I'm told that the HDMI protocol is bi-directional, so it's possible that the giant video displays at the convention center were feeding data back into the devices in a way they couldn't handle. Put all that together, and you can see why the devices would start overheating.

    What made it worse was that in order to cram all the extra doodads and sensors into the device, the intestines had to be rearranged, and the touch processor chip ended up being placed directly over the HDMI processor chip. That meant that when the HDMI chip overheated, it caused the touch processor to overheat, too. If you watched the keynote carefully, you'd have seen that, shortly before the machine on stage blew up, the touch sensor flipped out and generated phantom touches all over the screen. That was the clue that the machine was about to die from overheating, and that it would be in the presenter's best interest to switch to another machine quickly. (The problem, of course, is that the presenter is looking out into the audience giving the talk, not staring at the device's screen the whole time. As a result, this helpful early warning signal typically goes unnoticed by the very person who can do the most about it.)

    The day before the conference officially began, Jensen Harris did a preview presentation to the media. One of the glitches that hit during his presentation was that the system started hallucinating an invisible hand that kept swiping the Word Hunt sample game back onto the screen. Jensen quipped, "This is our new auto-Word Hunt feature. We want to make sure you always have Word Hunt when you need it. We've moved beyond touch. Now you don't even need to touch your PC to get access to Word Hunt."

    Jensen's phenomenal calm in the face of adversity also manifested itself during his keynote presentation. You in the audience never noticed it, but at one point, one of the demo applications hit a bug and hung. Jensen spotted the problem before it became obvious and smoothly transitioned to another device and continued. What's more, while he was talking, he went back to the first device, surreptitiously called up Task Manager, killed the hung application, and prepared the device for the next demo. All this without skipping a beat.

    We are all in awe of Jensen.

    When he stopped by the booth, Jensen said to me, "I don't know how you can stand it, Raymond. Now I can't walk down the hallway without a dozen people coming up to me and wanting to say something or shake my hand or get my autograph!" (One of the rare times we are both in the same room.)

    Welcome to nerd celebrity, Jensen. You just have to smile and be polite.

    Bonus chatter: What happened to the devices that failed quality control? A good number of them were rejected for cosmetic reasons (scuff marks, mostly). As a thank-you gift to the advance crew for all their hard work, everybody was given their choice of a scuffed-up device to take home. The remaining devices that were rejected for purely cosmetic reasons were taken back to Redmond and distributed to the product team to be used for internal testing purposes.

    ¹ My group had one of these scuffed-up devices that we used for internal testing. Somebody dropped it, and a huge spiderweb crack covered the left third of the screen, so you had to squint to see what was on the screen through the cracks. We couldn't order a replacement because there was nowhere to order replacements from. We just had to continue testing with a device that had a badly cracked screen.
