Random Disconnected Diatribes of a p&p Documentation Engineer
Nobody could accuse me of being posh, and compared to most of the developers I work with here at Microsoft I'm probably not the brightest button in the box. But I did study mathematics in the past, including matrix theory. I just never got to pronounce it right.
It all came rushing back to me as I was watching a presentation about using singular value decomposition (SVD) to identify textual semantic spaces in a Big Data solution. I guess with a title like that I should have known better, but it did sound interesting. And it would probably be really useful if I could understand any of it. Mind you, it was the end of a particularly stressful day and I was trying to get some other jobs finished at the same time as it played on the second screen. Maybe an early start straight after a couple of cups of strong coffee will help next time.
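Having since had that coffee, I think the core trick the presenter was describing boils down to something like this - a toy Python sketch using NumPy (the tiny term-document matrix is my own invention, nothing from the actual presentation):

```python
import numpy as np

# Toy term-document matrix: rows are terms, columns are documents.
# Real systems build this from word counts across thousands of documents.
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

# SVD factors A into U * diag(s) * Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the top k singular values to get a low-rank "semantic space".
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Each document's coordinates in the reduced space. Documents that land
# close together here are semantically similar, even when they share
# few exact terms.
doc_coords = np.diag(s[:k]) @ Vt[:k, :]
print(doc_coords.shape)  # (2, 4)
```

Or at least, that's my understanding of why throwing away most of the matrix somehow makes the meaning easier to find.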
But what struck me was that I remember everyone at school and college pronounced "matrix" with an "a" sound like in "apple", not an "ay" sound like in "say". Perhaps that's where being a rough English Northerner rather than a posh Home Counties softie comes into play. We say "grass" with the same "a as in apple" sound rather than "grahhss".
And even when the movie "The Matrix" came out and everyone called it "The May-tricks", I never thought about the different pronunciation. I suppose because, in my day job, you don't come across many matrices. Though I will accept that if you pronounce the plural as "may-tress-ease" rather than "mat-ress-ease" you're less likely to get confused when visiting a bed shop.
Though it did remind me of the story about an Englishman and an American who were trying to set up a meeting at a mutually convenient time:
Englishman: "You don't pronounce it like that! The proper way is 'shed-yool'."
American: "Really? Is that what they taught you at shool..."
Sometimes I think I'm the only person who takes Wi-Fi security seriously. Unlike all of my neighbors, I run my Wi-Fi access point with a hidden SSID so that nobody casually browsing the available networks will be tempted to try to connect to it. I also run it at half power, which is more than sufficient to reach all round the house and garden without exposing it all along the street.
Of course, I also have it set to use WPA2-PSK, and it has a long and complex non-dictionary password. On top of that I enabled MAC authentication so that only known devices can connect. Yes, I know that most of these features can be cracked by determined attackers, but all the good books say that defense in depth is the best approach, and the more layers of protection I have enabled the lower the risk.
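For reference, on a Linux-based access point running hostapd the same layered setup maps onto config options something like these - a sketch only, with invented names and paths, not my router's actual configuration (and the half-power setting lives elsewhere in most firmware, not in this file):

```ini
# /etc/hostapd/hostapd.conf - illustrative fragment only
interface=wlan0

# Invented SSID; the next line stops it being broadcast in beacons
ssid=MyHiddenNetwork
ignore_broadcast_ssid=1

# WPA2-PSK with a long non-dictionary passphrase (use your own!)
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=correct-horse-battery-staple-42

# MAC filtering: only addresses listed in the accept file may associate
macaddr_acl=1
accept_mac_file=/etc/hostapd/accepted_macs
```

As I said, a hidden SSID and MAC filtering only deter the casual browser - anyone with a packet sniffer can see both - which is why the WPA2 passphrase is the layer doing the real work.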
Should I actually worry about anybody connecting to my internal network through Wi-Fi? There are several other computers and devices on the internal network, although they are all secured with user names and passwords different from the wireless router credentials, and all sensitive folders and shares are locked down to the network admin account. But I really don't fancy having somebody I don't know wandering around my network.
Plus, anyone who did connect could get out onto the Internet through my proxy server, absorbing my bandwidth and exposing me to the risk of action if they do anything illegal over my connection. And I have to pay for my bandwidth, so why should I let other people soak it up browsing Facebook, playing games, and viewing doubtful content?
So it seems like my security approach is sensible. Unfortunately, Google doesn't agree. I recently bought my wife a Google Nexus 7 tablet so that she can soak up my expensive bandwidth browsing Facebook, playing games, and viewing pictures of cats. All the reviews I read said it's really easy to set up - you just choose your locale and your network connection, enter your Google account details, and (as we say over here, though I don't know why) "Bob's your uncle."
Yeah, you reckon? At step two you have to choose an existing wireless network and connect to it, or select "Add a network" if you use a hidden SSID. That's fine, but if I don't enter the device's MAC address into the wireless router's configuration I can't connect. At this point the screen just says "Not in range" and you can't do anything about it.
Usually, when setting up any other computer, I skip the network setup and then go into the device information page to find the MAC address (that's what I had to do with our HTC Android phones). But Android on a tablet is obviously paranoid about not being able to talk to its Do No Evil home because there's no option to set up a network later. I guess they think that nobody would ever dream of using a tablet (where you can read books, watch videos, and listen to music) if there's no Internet connection.
And just to make matters worse, when you set up a new connection and don't get it exactly correct (such as the wrong letter case in the SSID, or an incorrect password) you can't edit it. The only options are "Connect" and "Forget It" - you have to remove the connection and then start all over again. And the dialog quite happily closes without saving the settings or warning you they'll be lost if your finger wavers a little on the onscreen keyboard.
So the only remedy to finish the setup seemed to be to go into the router's configuration and turn off MAC authentication while the tablet connected. Then, after setup is complete, find the MAC address in the tablet's system information pages, add it to the list in the router, and then turn MAC authentication back on. Assuming, of course, that turning off MAC authentication didn't lose the list of existing permitted addresses (I suggest you take a screenshot or copy them into Notepad first).
Thankfully, however, after three attempts I finally got everything right in the tablet's connection dialog, and my wireless router's configuration page (after I turned MAC authentication off) detected that some unknown device was trying to connect and displayed its MAC address for me to add to the permitted clients list. After that I could turn MAC authentication back on and it worked. So completing the tablet's three-page setup wizard only took the best part of an hour. Including swearing time.
It was only then that I discovered why I had so much trouble with the connection settings dialog - the tablet was suffering from the "phantom keystrokes" issue several other people have encountered (search the web for "nexus 7 phantom typing" for more details). So the next day it was back to the store to swap it for another one. From a different batch. And go through the whole MAC authentication thing again, because the MAC address is different.
And now I just need to figure out how to get it to talk to my wife's Exchange Server email account - which is exposed as a service over HTTP by our remote email hosting provider. And convert all the music her indoors wants putting onto it from WMA to MP3 format. Perhaps I'll need to take a holiday and stock up on new swear words before then...
So here's a question: why aren't our European masters hounding a certain well-known company to stop them installing unwanted software on our computers? Every time a hole in the Flash plugin is fixed they insist on fiddling with people's computers in a way that, if not actually illegal, seems to cause some users no end of hassle. If Microsoft included an update in every Patch Tuesday that changed the user's default web browser to Internet Explorer, I'm sure there would be a huge outcry.
I mean, here in the People's Republic of Europe our faceless and unaccountable despots insist that I put my company's registration number in every email message I send, apply for a license before I can save somebody's email address in a database, and I even have to ask visitors to my website if they mind me sending them a cookie. Yet they do nothing about a company that tricks people into installing browser toolbars, and even whole web browsers.
Yes, it's a rant, and mainly because - yet again - I've had calls from friends and colleagues who have discovered that their computer has "gone funny". One even thought it was a virus, and is now too frightened to use the computer at all. And one call was from a relative whose computer I "fixed" just last month by resetting Internet Explorer as the default browser after the previous Flash player update.
I know you can argue that there's a checkbox you can un-tick if you don't want your system interfered with, but most inexperienced users won't dare do that in case they "break the computer" - as an industry we regularly impress on users that they should not fiddle with settings unless they know what they are doing.
And, yes, you could argue that the option is clearly shown with a description of what it does. But why is it set by default? If I want a new web browser, then surely I should have to say yes - rather than forgetting (or being too frightened of breaking something) to say no. If your local supermarket required you to tell them every time you didn't want some extra items automatically added to your shopping basket, you'd soon be writing to the local newspaper to complain. So at least try and persuade me to tick "yes" by telling me how wonderful the new browser is, rather than hoping I won't notice you decided "yes" was the default.
But I suppose that, if you want to win the browser wars, maybe one way is to pay some other company to surreptitiously install it on everyone's computer as part of a routine update...
You'd think that, after all the years I've been writing guidance for Microsoft technologies and tools, I'd have at least grasped how to organize the structure of a guide ready to start pouring content into it. But, just as we're getting into our stride on the Windows Azure HDInsight project here at p&p, it turns out that Big Data = big problem.
Let me explain. When I worked on the Enterprise Library projects, it was easy to find the basic structure for the guides. The main subdivision is around the individual application blocks, and for each one it seems obvious that all you need to do is break it down into the typical scenarios, the solutions for each one, the practical implementation details, and a guide to good practices.
In the more recent guide for migrating existing applications to Windows Azure (see Moving Applications to the Cloud) we identified the typical stages for moving each part of the application to the cloud (virtual machines, hosted services, cloud database, federated authentication, Windows Azure storage, etc.) and built an example for each stage. So the obvious subdivision for the guide was these migration stages. Again, for each one, we document the typical scenarios, the solutions for each one, the practical implementation details, and a guide to good practices.
In the case of our other Windows Azure cloud guides (Developing Multi-tenant Applications and Building Hybrid Applications) we designed and built a full reference implementation (RI) sample that showcases the technologies and services we want to cover. So it made sense to subdivide the guides around the separate elements of the technologies we are demonstrating - the user interface, the data model, the security mechanism, the communication patterns, deployment and administration, etc.
But none of these approaches seems to work for Big Data and HDInsight. At first I thought I'd just lost the knack of seeing an obvious structure appear as I investigate the technology. I couldn't figure out why there seemed to be no instantly recognizable subdivisions on which to build the chapter and content structure. And, of course, I wasn't alone in struggling to see where to go. The developers on the team were suddenly faced with a situation where they couldn't provide the usual type of samples or RI (or, to use the awful marketing terminology, "an F5 experience").
The guidance structure problem, once we finally recognized it, arises because Big Data is one of those topics that - unlike designing and building an application - doesn't have an underlying linear form. Yes, there is a lifecycle - though I hesitate to use the term "ALM" because what most Big Data users do, and what we want to document, is not actually building an application. It's more about getting the most from a humungous mass of tools, frameworks, scenarios, use cases, practices, and techniques. Not to mention politics, and maybe even superstition.
So do we subdivide the guide based on the ethereal lifecycle stages? After collecting feedback from experts and advisors it looks as though nobody can actually agree what these stages are, or what order you would do them in even if you did know what they are. The only thing they seem to agree on is that there really isn't anything concrete you can put into a "boxes-and-arrows" Visio diagram.
What about subdividing the guide on the individual parts of the overall technology? Perhaps a chapter on Hive, one on custom Map/Reduce component theory and design, one on configuring the cluster and measuring performance, and one on visualizing the results. But then we could easily end up with an implementation guide and documentation of the features, rather than a guide that helps you to understand the technology and make the right choices for your own scenario.
Another approach might be to subdivide the guide across the actual use cases for Big Data solutions. We spent quite some time trying to identify all of these and then categorize them into groups, but by the time we'd got past fifteen (and more were still appearing) it seemed like the wrong approach as well. Perhaps what's really big about Big Data is the amount of paper you need to keep scrawling a variety of topic trees and ever-changing content lists.
What becomes increasingly clear is that you need to keep coming back to thinking about what the readers actually want to know, and how best you can present this as a series of topics that flow naturally and build on each other. In most previous guides we could take some obvious subdivision of content and use it to define the separate chapters, then define a series of flowing topics within each chapter. But with the whole dollop of stuff that is Big Data, the "establishing a topic flow" thing needs to be done at the top level rather than at individual chapter level. Once we figured that out, all the other sections fell naturally into place in the appropriate chapters.
So where did we actually end up after all this mental gyration? At the moment we're going with a progression of topics based on "What is it and what does it do?", "Where and why would I use it?", "What decisions must I make first?", "OK, so basically how do I do this?", and "So now I know how to use it, how does it fit in with my business?" Then we'll have four or five chapters that walk through implementing different scenarios for Big Data such as simple querying and reporting, sentiment analysis, trend prediction, and handling streaming data. Plus some Hands-on Labs and maybe a couple of appendices describing the tools and the Map/Reduce patterns.
Though that's only today's plan...
So it's true. Senility has obviously settled in, and my addled brain can no longer maintain even the simplest items of information, such as a two-fingered keyboard combination. It seems that in future I'll be wandering aimlessly around my server room dribbling helplessly onto the network switches, muttering profanities in response to the strange symbols appearing on the monitors, and talking into the mouse.
What's brought me to this late stage of realization? Could it be because my habitual dabs at AltGr (the right-hand Alt key) and Delete no longer bring up the login page in my Hyper-V hosted machines? For some weeks I've been confounded by the fact that my ailing brain seemed to remember this always working before, but now it doesn't. Even poking around in Hyper-V Manager and the properties of the VMs didn't reveal anything useful.
In fact, things got so bad that I actually had to look up the Hyper-V key combinations on TechNet after I got fed up restoring down the VM's window and clicking the Ctrl-Alt-Delete icon in the top menu bar. It seems that what you need is Ctrl-Alt-End, but how could I have forgotten that when most days I'm administering the servers?
However, after some Binging it turns out that I might have a few more months before I finally turn into a doddering and disoriented wreck. According to the Virtual PC and Virtual Server help pages, the equivalent of Ctrl-Alt-Delete in a virtual machine is HOSTKEY and Delete. Of course, it took ages more to find out that the default HOSTKEY is AltGr (you can change it), which was obviously maintained in Hyper-V. Probably so that the world's systems admins wouldn't all decide to retire in the same week.
As far as I can tell, some recent update must have removed this backwards compatibility - though I can't find any mention of it on the web. Maybe it's just me...? Did I break something...?
A few influential people in our little world of Developer Guidance here at Microsoft have recently been avoiding the word "scenario". It seems that it's now so overused, and has so many apocryphal meanings, as to render it useless in terms of determining users' documentation requirements and for planning the creation of product guidance.
As my job description includes "creating scenario-focused guidance" and "exploring typical customer scenarios" this could be a bit of a problem (maybe I've become overused and apocryphal as well). Perhaps, in line with the current trend to make guidance simpler and less formal by using common words and "talking to the user", we should replace "scenario" with "needs" so that I can just "explore typical customer needs".
However, my US-English thesaurus doesn't list "needs" as an equivalent to "scenario", but it does list "situation", "state", "set-up", "picture", and "development" - none of which feels quite right. If I created "state-focused guidance" people would probably ask if it applied only to developers in Wisconsin, and "set-up focused guidance" wouldn't seem to be much use after you'd finished installing the application.
But where we had a struggle this week, as we continue to develop the structure and plan for our upcoming guide on Big Data and HDInsight, is with the difference between "scenario" and "case study". We want to create some examples of using HDInsight that correspond to typical users' requirements, covering different types of data and query approaches. For example, as well as the old chestnut of analyzing web log files, we want to do something with numerical data and social media content.
We have an outline of the examples, but I still need to decide how to present them. If I phrase each one as though it had been done by some fictitious corporation (yes, you guessed: Contoso) and show how they did it, it seems like it will be a "case study" that is specific to that organization's needs. But can it really be a case study if the organization doesn't actually exist?
If I phrase it as a step-by-step explanation of how you would do it yourself then it seems like it's an "example". And the code that we provide for download will be a "sample". The "scenario", meanwhile, looks rather like the umbrella under which all of this occurs. Maybe I'm just a parenthesis short of a lambda expression, but to me it appears as though there's a hierarchy of things here - something like:
Scenario -> Case Study -> Example -> Sample
where a scenario describes the requirements and a case study provides the solution by including an example of how the sample code was used.
The problem is that many people seem to be put off by case studies because the natural initial response is that it will be specific to somebody else's requirements rather than their own. But while this may be true of case studies that show real life implementations, such as how [insert name of global company here] saved 30% on pizza and cola by adopting Hyper-V, we're inventing a case study to resolve a scenario that we also made up.
However, we made up the scenario based on feedback from real users and advisory boards, so it must apply to a lot of people. Therefore the solution should also be relevant, especially where we explore different options and show alternative implementations - together with, of course, guidance on which to choose based on your own specific needs. So it can't be a case study because now we're covering several cases.
I was going to say "catering for several cases" there, but I was worried it would just make readers think about pizza and cola again.
So do we need a new word to replace the possibly deprecated "scenario", and what should it be? Obviously it's not "case study", and "example" just sounds too minimal. We could try falling back on the old technique of combining words, though "scexample" sounds a little dubious. Mind you, it could be worse. Once the marketing people get started on this we'll end up with some action-based, solution-oriented, brainstorm-generated word to replace "scenarios".
I'll probably have to call them "opportunities" instead...
I'm increasingly seeing how big the disconnect is between people who use computers occasionally, just because they need to do stuff on the Internet, and those of us who live and breathe computing. And we're not talking about stupid people here; I see it most weeks with friends and acquaintances who are fully capable of managing almost any other technological domestic device.
It's both interesting and worrying. Interesting because I'm part of a group within Microsoft working on a project that will help to discover more quickly and more accurately the issues people typically face when using our products. While here in p&p we are tilted towards the needs of developers, software designers, and system admins, I'm interested to learn how you can make operating systems and user-oriented software easier for home users to grasp. After all, we spend inordinate amounts of time and effort making them intuitive, and providing pop-up help pages and tips.
And it's worrying because, for most of the non-technical population, having to spend time learning how to use technology is a thing of the past. We want instant gratification. Few technological devices come with proper manuals these days anyway (it's all on a CD that gets lost within minutes of opening the box), and instead these devices have intuitive UIs that mean you don't need to resort to the help file. Although I have to admit that some do seem to make common tasks difficult - our new kitchen oven has so many knobs and buttons, and different cooking settings, that my wife keeps the instruction book handy just to decipher the strange symbols.
Coming back to computers, though, this week I encountered a perfect illustration of the issue. Some friends called round, complete with laptop and a list of questions, seeking my help. They could no longer print anything because the Print button and menu bar had disappeared from their web browser. Plus, the desktop shortcut that opened their email now just showed the Google search engine. And they were concerned that they'd lose all of their precious photos if the computer broke down or was lost, because the backup software couldn't find them.
It took only a cursory examination to discover that Internet Explorer seemed to have disappeared, and now all their web shortcuts had a Chrome icon - which I assumed was down to the recent Adobe Flash upgrade (see Not So Shiny). Rather than uninstalling Chrome I just fired up Internet Explorer and used the Programs tab in the Internet Options dialog to make IE the default browser again. This meant that they now had a Print button on the toolbar, though I still had to mess about resetting the Home page and the link to Hotmail.
But they still couldn't figure out how to get the menu bar to appear in the browser. They had never realized that you could click the little down arrow next to the Print button to see more printing options, and had always used the File option on the menu bar instead. I explained that they just had to press the Alt key to see the menu bar; but, other than mumbling something about extra room for the content of the page, I couldn't answer the subsequent question "Why?" So I reset the menu bar to be there all the time. Yet all of these operations are explained in the help file - if only I could persuade them to press the F1 key!
And then we got to the question of backing up their photos. Some while ago I'd given them an old USB thumb drive and copied their photos onto it. But now, every time you plug it in, the computer just displays a dialog saying "No more pictures found to import". It turns out that, when they bought a new printer a while ago, another friend had installed it - along with all of the accessory programs that came with it.
One of these programs was a utility that scanned for photos and displayed them for printing, and this program had helpfully set the autoplay option for USB thumb drives to run itself. Obviously it had imported all the photos from the thumb drive the first time it ran, and so there were no new ones. They thought the program was saving the photos onto the thumb drive, but examining it revealed that it contained only those I'd copied to it a year ago. None of their later photos from several trips abroad had been backed up.
To sort this out I had to go into the Autoplay settings in Control Panel to set it back to "Ask me every time", and then create a simple batch file in the root of the drive to copy the new photos. But, of course, there wasn't room on the thumb drive for all the new photos so we fetched a 500GB USB disk from a local store and I set up the batch file on that. Now all they need to do is run the batch file after loading new photos, music, videos, or documents onto the computer and they'll all get backed up automatically.
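The batch file itself was just a couple of copy commands, but the "only copy what's new" idea is worth spelling out. Here's the same logic as a short Python sketch (the function name is mine, and the real thing was a one-line batch file in the root of the drive, not this):

```python
import shutil
from pathlib import Path

def backup_new_files(source: Path, dest: Path) -> int:
    """Copy any file under source that is missing from dest (or newer
    than the copy already there), preserving the folder structure.
    Returns the number of files copied."""
    copied = 0
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        target = dest / src.relative_to(source)
        if not target.exists() or src.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)  # copy2 preserves the timestamp
            copied += 1
    return copied
```

Run it again and it copies nothing unless something has changed - which is exactly why a single double-clickable script is about as much "backup software" as most home users will ever tolerate.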
Except that, until you come to show someone how to do this, you don't realize how unintuitive it all is. Plug in the drive and wait for the autoplay dialog. Select "Open folder to view the files". Double-click on BackUpMyFiles (the name I gave to the BAT file), wait until the black window disappears, close the window showing the files on the USB drive, click the icon in the notification area that looks like a tall thin box with a tick on it, select "Disconnect storage drive D:", wait for the confirmation message, and finally unplug the drive. And if you forget to close the file window first you get an error that the device is still in use, but no indication of what to do about it.
OK, so this is Windows Vista, and thankfully Windows 8 can do all this through the cloud much more easily. But, despite my pleas to upgrade, they're unlikely to do so any time soon (probably only when the computer breaks down and has to be replaced). And they are the exception - most home user help requests I get are still for Windows XP.
Perhaps when I make my fortune and become a philanthropist my calling will be to upgrade everyone I know to Windows 8 for free, though whether it will install on my neighbor's ten year old HP tower computer (which doesn't even have a built-in CD drive) is questionable. And we'll probably be on Windows 23 by then anyway...
There's something rather disturbing about sitting on the sofa looking at a large brown water stain on the lounge ceiling while the mains electricity circuit breaker is occasionally tripping out. Somewhere in the back of your mind is the worrying thought that the two events might be connected. And that another stream of tradesmen, who will tear up the floor and bash holes in the ceiling, is imminent. And that's after three months of visits from men with toolboxes, and startlingly rapid deflation of my bank balance, during our recently completed house modernization saga.
Before I ramble on any further, however, I guess I should apologize both for the overuse of bad song titles (see also UPS Outside Your Head), and the strange focus on boxes with a big battery inside. As you probably guessed by now, we're back in what's turning out to be "interruptible-power-supply" land. And something I've never seen happen before with a UPS.
I've been using APC UPSs for more years than I like to remember, and generally they do what you expect. After a while, or when they get a bit too warm, the battery inside starts to expand and stops providing backup power, but for the rest of the time they just sit there - maybe flickering a few LEDs now and then, and beeping contentedly when the mains power goes off.
There are four 1000W ones in the server cabinet in my garage, powering all of the hi-tech stuff you need to connect a few servers to the Internet and an internal network. One of the servers is a cold-swap backup Hyper-V host that can take over all the VMs if (or, more likely, when) the main host server dies. I power up the backup server once a week so it can sync Active Directory (and to check that it still works - you know what computers are like). When it's off I usually turn off its UPS as well, though that's still connected to the mains supply so the battery is kept charged.
But last week when I pressed the "1" button to turn it on (which automatically boots the server) it immediately tripped out the overload protector in the main fuse box for that ring main circuit. Could this be the cause of our mains electricity problem? But the server powered up fine running on battery so the inverter is obviously working, and the overload switch on the UPS was not tripped. Maybe it's a problem with the battery, though that seemed unlikely as it was powering the server, but I replaced it anyway and everything seemed to work again.
At least it did for a while. After scurrying repeatedly (often in complete darkness) for the fuse box reset button at various intervals during the next two days, and several other experiments trying to isolate the fault (isn't it amazing how many things you have plugged in around the house when you are trying to figure out which one is playing up), I decided it needed more decisive action.
When I disconnected and removed the UPS from the cabinet it was quite warm, and an examination of the internal gubbins looking for stray wires or that familiar burning smell offered no clue. Could it just be the cold wet weather we've been having for a couple of weeks causing condensation inside? Yet we have cold wet weather every year (often it's the default climate setting here in England), so why should it suddenly happen this year?
However, after a day next to a radiator in a warm kitchen, I tested the UPS and it was working fine again. It looks like it didn't take well to having a hurricane of freezing cold and damp air blown over it for a week while the server was powered down and not drawing any current. Though it did start tripping the circuit breaker again a few days later after I reinstalled it, even though I left it turned on this time. But I suppose it makes a welcome change from the more usual situation of trying to keep everything in the server cabinet from melting during the rest of the year. Maybe it's time to invest in temperature-controlled fans?
And the large brown water stain on the lounge ceiling? I do seem to remember having a bathing accident a while ago, at the time when the bath panel had been removed to lay the bathroom floor, so I'm fervently hoping that was the cause...
Footnote: After the UPS tripped the circuit breaker again I gave up and replaced it (there are good deals on reconditioned ones around at the moment). Perhaps I'll give this one another try in the summer to see if it's got over its bad habit.