Random Disconnected Diatribes of a p&p Documentation Engineer
A couple of weeks ago I was ruminating on how somebody in our style guidance team here at Microsoft got a new Swiss army knife as a holiday-time gift, and instead of a tool for removing stones from horses' hooves it has one for removing capital letters and hyphens from documentation. Meanwhile the people in the development teams obviously got handkerchiefs or a pair of slippers instead because they are still furiously delivering capital letters whenever they get the chance.
As you will probably have noticed, the modern UI style for new products uses all capital letters in top-level navigation bars and menus. I guess your view of this is based on personal preference combined with familiarity with the old-fashioned initial-capital style; I've seen a plethora of comments and they seem to be fairly balanced between like and dislike. Personally I quite like the new style, especially in interfaces such as the new Windows Azure Preview Management Portal. It looks clean and smart, and fits in really well.
Meanwhile my editor and I have been pondering on how we cope with this in our documentation. No doubt some official style guidance will soon surface to resolve our predicament, but in the meantime I've been experimenting with possibilities for our Hands-on Labs. I started out with the obvious approach that matches the way we currently document steps that refer to UI elements (bearing in mind the accessibility guidelines described in It Feels Like I've Been Snookered).
Choose +NEW, select CLOUD SERVICE, and then choose QUICK CREATE.
But written down on virtual paper that does look a bit awkward and "shouty". Perhaps I should just continue to use the initial capitalized form:
Choose New, select Cloud Service, and then choose Quick create.
However, that doesn't match the UI and one of the rules is that the text should reflect what the UI looks like to make it intuitive and easy for users. Maybe I can just use ordinary words instead, in a kind of chatty informal way, so that they don't actually need to match the UI:
Choose new, select cloud service, and then choose quick create.
But that looks wrong and may even be confusing. Perhaps I should just abandon any attempt to specify the actual options:
Create a new cloud service without using a custom template.
Though that just seems vague and unhelpful. Of course, you might assume that a user would already know how to create a new cloud service, so it's redundant anyway. But something more complicated may not be so obvious without more specific guidance about where to start from:
Open the management window for your Windows Azure SQL Database.
I did suggest to my editor that we simply run with something like:
Choose the part of the window that contains what appears to be some text that would say "cloud services" if it was all lowercase, and then...
Ahh, but wait! In a non-web-based application UI I can use shortcut keys, like this:
Press Alt-F, then N, then press P.
Oh dear, that violates the accessibility rules, and doesn't work in a web page anyway. Maybe I'll just go with:
Get the person sitting next to you to show you how to create a new cloud service.
And, as a bonus, this approach may even foster team cohesiveness and encourage agile paired programming. Though you probably can't call it guidance...
You may not remember, but when ASP.NET was in the early stages of development it was called ASP+. I wonder if we'll see history repeating itself so that, when it finally clambers out of beta, Google+ will actually be called Google.NET. Probably not. I guess there are too many hard-up lawyers around at the moment looking for work. But it does seem that, in many spheres of life, we never get the hang of the notion that history repeats itself.
In politics we go through regular cycles of left-leaning (run out of money) and right-leaning (no new taxes, maybe) government. With the environment we can't make up our mind whether we want nuclear (clean but dangerous) or fossil-fuelled (safe but global warming). As for the financial crisis, we're torn between printing money (more debt) and cutting spending (less growth). It's almost like history doesn't teach us anything.
So what about my own little corner of the world: technical guidance and developer documentation? It sometimes seems like the process for this is built around the concept of conveniently ignoring the lessons of history. Here at p&p, our goal is to provide guidance for developers, system architects, administrators, and decision makers using Microsoft technologies - our tag line is, after all, "proven practices for predictable results". But designing projects to achieve this sometimes seems to be history repeating itself. And not always in a good way.
The problem tends to center around how to figure out what guidance users require, and how to go about creating it. My simple take is that you just need to discover exactly what the users need, what you want to guide users towards (the technologies, scope, and technical level that is appropriate), and the format of the guidance (such as book, online, video, hands-on labs, FAQ, and more). From that you can figure out what the TOC looks like and what example code you need, or what frameworks or applications you must build to support this. So, as usual, it's all about scenarios and user stories.
In the majority of cases, this is exactly how we plan and then start work on our projects. But the problem is that it's very easy to start by throwing a bunch of highly skilled developers at a new technology and letting them play for a while. OK, so they need to explore the technology to find out how it works, and figure out what it can do. They go through the process of spiking to discover the capabilities and issues early on so that their findings can help to shape the decision process for designing the guidance and defining the scope.
However, it's very easy for developers to be influenced by the capabilities of the technology rather than the scenarios. As a writer, I ask questions such as "Is this feature useful; and, if so, when, where, and why?", "What kind of application or infrastructure will users have, and how do the technology features fit in with this?", and "Does the technology solve the problems they have, and add value?" In other words, are we looking at the technology from a user requirements point of view, or are we just reveling in the amazing new capabilities it offers?
Here's an example. When you read about Windows Azure Service Bus, it seems like an amazing technology that you could use for almost any communication requirement. It gives you the opportunity to do asynchronous and queued reliable messaging between on-premises and cloud-based applications, and supports an eventing publish/subscribe model. It can use a range of communication and messaging protocols and patterns to provide delivery assurance, can scale to accommodate varying loads, and can even be integrated with on-premises BizTalk Server artifacts.
The Microsoft BizTalk Server Roadmap talks about future integration with Windows Azure but it seems as though you could support many of the BizTalk scenarios with Azure right now using Azure Service Bus, as well as extending BizTalk to integrate with cloud-based applications. But what are the realistic scenarios for real-world users? Will developers try to retrofit Service Bus capabilities into existing applications, or does it make sense only when building new applications? Will they attempt to extend BizTalk applications using Service Bus, or aim to replace part or all of their BizTalk infrastructure with it?
And what Service Bus capabilities are most useful and practical to implement in real-world scenarios and on existing customer infrastructures? Are most developers desperate to implement a distributed publish/subscribe mechanism between on-premises and multiple cloud-based applications, or do they mainly want to implement queuing and reliable message communication? Will it mean completely reorganizing their network and domain to get code that works OK in development spikes to execute on their systems? Is there a danger that experimental development spikes not planned with specific scenarios in mind will end up being completely unrealistic in terms of being applied to today's typical application implementations and infrastructure?
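The distinction between those two wish-list items is worth pinning down. The sketch below is not the Azure Service Bus SDK; it's a minimal in-memory illustration, with made-up class names, of the two messaging patterns the questions above contrast: a queue, where each message is consumed exactly once by whichever worker gets there first, and a topic, where every subscriber receives its own copy of every message.

```python
from collections import defaultdict, deque

class Queue:
    """Point-to-point messaging: each message is delivered to exactly one receiver."""
    def __init__(self):
        self._messages = deque()

    def send(self, message):
        self._messages.append(message)

    def receive(self):
        # Competing consumers each take the next undelivered message.
        return self._messages.popleft() if self._messages else None

class Topic:
    """Publish/subscribe messaging: every subscription gets a copy of each message."""
    def __init__(self):
        self._subscriptions = defaultdict(deque)

    def subscribe(self, name):
        self._subscriptions[name]  # creates the subscription's own queue

    def publish(self, message):
        for queue in self._subscriptions.values():
            queue.append(message)

    def receive(self, name):
        q = self._subscriptions[name]
        return q.popleft() if q else None

# Queue: two workers compete, and each order is processed once.
q = Queue()
q.send("order-1")
q.send("order-2")
print(q.receive())  # order-1
print(q.receive())  # order-2

# Topic: both subscribers see every published message.
t = Topic()
t.subscribe("audit")
t.subscribe("billing")
t.publish("order-1")
print(t.receive("audit"), t.receive("billing"))  # order-1 order-1
```

The design question in the paragraph above is really which of these delivery guarantees a customer's existing application actually needs, not which one the technology makes possible.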
I can see that playing with the technology is one way to find this out, but it's also an easy way to build example code and applications that don't reflect real-world scenarios and requirements. They aren't solutions; they can turn out to be just demonstrations of features and capabilities. This is why we expend a great deal of effort in p&p on vision and scope meetings, customer and advisory group feedback, and contact with product groups to get these kinds of decisions right from the start.
And let's face it, there are several thousand other people in EPX here at Microsoft writing product documentation, each focused on their own corner of the technology map and concentrating on their own specific product features and capabilities. So it's left to our merry little band here in a forgotten corner of the campus, and scattered across the world, to look at it from an outsider's point of view and discover the underlying scenarios that make the technology shine. Finding the real-world scenarios first can help to prevent the dreaded disease of feature creep, and the wild exuberance of over-complexity, from burying the guidance in a morass of unnecessary technological overkill. And that's before I can even start writing it...
And just in case, as a writer, you are feeling this pain, here are some related scenario-oriented diatribes:
Once again I'm at one of those gloriously satisfying stages in my p&p working life when I'm trying to define the structure for a new guide. We know what technologies we want to cover, how we will present the guidance, and the kind of sample that we'll provide to demonstrate the all-encompassing wonderfulness of the technologies on offer. But after two weeks of watching videos, perusing technical documents, consulting experts, and developing RSI from repeated spells of vicious Visioing, I'm still floundering in a cloud of Azure confusion.
The target for the project is simple enough: explore the opportunities for building hybrid applications that run across the cloud/on-premises boundary, and provide good practice guidance on implementing such applications. It obviously centers on integration between the various remotely located bits, the customers and partners you interact with, and the stuff running in your own datacenter; and there is a veritable feast of technologies available in Azure targeted directly at this scenario.
So why is it so difficult to get started? Surely we can toss a few components such as Web and Worker roles, databases, applications, and services into a virtual food mixer and pour out a nice architectural schematic that shows how all the bits fit together. I wish. Even with bendy arrows and very small text I still can't fit the result onto a single Visio page.
Obviously you need a list of the technologies you want to use. In our case, the first things going into the plastic jug are ingredients such as Azure Service Bus (with its myriad and still growing set of capabilities), Azure Connect, Virtual Network Manager, Access Control Service, Data Sync, Business Intelligence, Data Market, and Azure Cache. Then add to that a pinch of frameworks such as Enterprise Library Extensions for Azure and StreamInsight.
Yet every connection between the parts raises different questions. Where do I put the databases (cloud or on-premises) to resemble real-world scenarios but still show technologies such as Connect and Data Sync in action? Do I use Service Bus Queues or Topics and Rules to communicate between the cloud application and the suppliers? If I use ACS for authentication, when and where do I match the unique customer ID with their data in the Customers database? What's the most realistic location for the stock database, and do I replicate it to SQL Azure or just cache the minimum required content in the cloud instances? Does SQL Federation fit my scenarios, or is that a whole different kettle of fish that deserves a separate recipe book?
And, most confusing of all, how do I cope with multiple geographical locations for the Azure datacenters and the warehouse partners who fulfill the orders? Do I allow customers to place orders that will be fulfilled from any warehouse (with the associated problem of delivery costs), or do I limit them to ordering only from their local warehouse? And if I take the second option (assuming I have a warehouse partner in both the East and West US), what happens if somebody in New York wants to place an order for delivery to California?
And after you decide that, look what happens when you factor in Azure Traffic Manager. If you use it to minimize response times and protect against failures, the customer in New York might end up being routed to the California datacenter. That's fine if they want the goods delivered to California, but most likely they'll want them delivered to New York and so the order needs to go to that warehouse. Unless, of course, the New York warehouse is out of stock but they have some in the California warehouse.
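The warehouse-selection tangle above boils down to a small piece of business logic that has to live somewhere, independently of where Traffic Manager happens to route the web request. Here's a minimal sketch of one plausible policy (all the names, regions, and stock figures are hypothetical, invented purely for illustration): prefer the warehouse local to the delivery address, and fall back to any other warehouse with stock, accepting the extra delivery cost.

```python
# Hypothetical stock levels per warehouse region (invented for illustration).
WAREHOUSES = {
    "East US": {"widget": 5},
    "West US": {"widget": 0},
}

def choose_warehouse(delivery_region, item, quantity=1):
    """Prefer the warehouse local to the delivery address; otherwise fall
    back to any other warehouse that can fulfill the order."""
    local = WAREHOUSES.get(delivery_region, {})
    if local.get(item, 0) >= quantity:
        return delivery_region
    for region, stock in WAREHOUSES.items():
        if region != delivery_region and stock.get(item, 0) >= quantity:
            return region
    return None  # out of stock everywhere

# A New York order is fulfilled from the East US warehouse while it has stock...
print(choose_warehouse("East US", "widget"))  # East US

# ...and from West US once East US sells out, regardless of which
# datacenter served the customer's web request.
WAREHOUSES["East US"]["widget"] = 0
WAREHOUSES["West US"]["widget"] = 3
print(choose_warehouse("East US", "widget"))  # West US
```

The point of writing it out is that the routing decision is driven by the delivery address and stock, not by the datacenter the request landed in; which is exactly why the Traffic Manager routing and the order-fulfillment logic have to be designed as separate concerns.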
Of course, the whole concept of integrating applications and services is not new. Enterprise Application Integration (EAI) is a big money-spinner for many organizations, and everybody has their own favored theory accompanied by a different architectural layer model. And don't forget BPM (Business Process Management) and BPR (Business Process Reengineering). I read a dozen different reports and guides and none of them had the same layers, or recommended the same process.
And, in reality, building a hybrid application (or adapting an existing on-premises application into a hybrid application) is not EAI, BPM, BPR, or any of the myriad other TLAs. It's a communication and workflow thing. Surely the core questions focus on how you get service calls, messages, and management functions to work securely across the boundaries, and how you manage processes that require these service calls and messages to work in the correct order, and make decisions at each step of the process. Yes you can match these questions to layers in many of the EAI models, but that doesn't really help with the overall architecture.
What went wrong with the whole design process was that we started with a list of technologies rather than a business scenario that required a solution. We went down the route of trying to design an application that used all of the technologies in our list, but used each one only once (otherwise we'd be introducing unnecessary duplication). We'd effectively taken ingredients at random from the cupboard and expected the food mixer to turn them into a palatable, attractive, and satisfying beverage. It's obviously not going to work, especially if you keep the cat food in the same cupboard as the bananas.
In the real world people start out with a problem that the technologies can help to solve, not a predefined list of technologies chosen because they have tempting names and capabilities. If you want to build a public-facing website with an on-premises management and reporting application, you wouldn't start by buying 100 copies of Microsoft Train Simulator and a refrigerator. You'd design the application based on requirements analysis and recommended architectural styles, then order the stuff in the resulting Visio schematic. Somewhere along the line the choice of technologies would be based on the application requirements, rather than the other way round.
So at the moment we're tackling the issues from all three ends at once, and hoping for some central convergence. On our mental whiteboard there's a big circle containing the list of required technologies, another containing EAI and other TLA layer models, and a third containing the possible real-world scenarios. I'm just hoping that, like an Euler diagram, there will be a tiny triangle in the middle where they overlap.
But that's enough rambling. The pains in my fingers are starting to recede, so I need to get Visioing again. I reckon I've still got some bendy arrows left that I can squeeze in somewhere...
There's lots of comment at the moment about the "post-PC age". Seemingly everyone will just use some Internet tablet or device that installs the O/S and applications from the cloud, keeps all of the data in the cloud, and uses only services running in the cloud. No need for a fast processor, hard drive, or tons of memory because it's just a web browser and display for applications running somewhere else. The thin client for the 21st century.
However, plenty of people dispute this assertion, citing the need to run powerful and complicated applications and to store data locally. And, of course, to maintain control. If your whole life is held by some huge and faceless cloud-based corporation (not mentioning any names), what happens when they accidentally lose your account? Or decide you are no longer welcome and remove you from their system? Supposedly it's already happened to people who have made some unwelcome comment about their provider, or been mistakenly accused of being a hacker and forcibly ejected.
For most of these reasons, and others, I'm staying with my combination of PCs, servers, and various back-up devices. Yes I do keep a backup of my important data and photos in SkyDrive; though (no doubt due to my well-publicized paranoia) it's all in compressed PGP-encrypted files. But I reckon I've discovered not so much the "post-PC age", but a "same-PC age". Maybe this is as much a problem for PC suppliers as the flood of tablets and smartphones now swamping the world.
The "same-PC age" is a simple concept. Instead of buying the latest, greatest, fastest new machine every couple of years, you just keep the old one. In the past this hasn't really been an option unless you were prepared to turn it on the day before you wanted to use it, and stop for coffee each time you paged down in a document. But recently it's become clear that older PCs can just keep on working.
For example, my wife's four-year-old Dell XPS laptop with Vista was starting to show the signs of being ready for replacement with something a bit snappier. Yet a simple FDISK and a fresh install of Windows 7 brought it back to life so that it feels like a brand new machine. It's responsive, starts quickly, and handles everything she throws at it.
Even better, a friend's six-year-old Dell laptop (a huge and ugly beast that originally ran XP) was equally transformed by FDISK and Windows 7 into something that is a pleasure to use. My friend tells me that it's faster now than it was with XP, though I suspect he's being a little optimistic. Of course, it doesn't support Aero, but he never had that anyway so it's no loss. What he is mourning is the lack of scroll support for the trackpad - it seems there's no driver for it that works in Windows 7.
Update: After some experimentation, it turns out that the latest ALPS driver from the Dell website does work with Windows 7.
I suppose that's the problem. Dell is hardly likely to create Windows 7 drivers for a machine that was designed to run XP. It would be like expecting Ford to provide a fuel pipe to connect up a 3 litre BMW engine you shoe-horned into your Focus. And, anyway, my friend is less concerned now after I pointed out that there are Page Up and Page Down keys on the keyboard. I suspect that, until the hard drive dies or he graduates to a tablet, the laptop will continue to serve its "same-PC age" functions.
But the biggest "same-PC age" issue I have at the moment is with my working-day laptop. When I'm not trapped in front of the workstation and huge screens upstairs in the office I use a rather nice, four-year-old Dell Latitude laptop for everything work-related. It's fast, has a wonderful LED-backlit matte screen, loads of disk space, a superb keyboard, excellent battery life, and still looks prettier than any other laptop out there (including the Apple ones). It runs every piece of complex software I need for my day job, including acting as my office telephone.
But it won't be long before I'm forced to do something about the O/S. Amazingly it's still running the original installation of Vista, but pretty soon company policy will remove Vista from the list of supported operating systems on the corporate VPN. At that point I'll need to make the decision on either Windows 7 or Windows [whatever Windows 8 will be called]. Ah, I hear you say, why not just do the same as with the other machines and hit it with the FDISK/Win7 thing now?
Well I'd love to, but there's a major problem here. To be allowed onto the company network in Windows 7, I have to enable BitLocker. Yes, it's a great idea, but the machine doesn't have a TPM module so it seems I'd need to plug in a thumb drive every time I log on. As the policies applied by the domain force the password-enabled screensaver after 10 minutes, this will happen regularly throughout every day. If I leave the thumb drive plugged in I'm sure to break it and the socket at some point as I wander aimlessly around seeking guidance-creation inspiration. If I take it out every time, there's almost no doubt I'll spend the first hour of every day searching for it, or lose it altogether. Either way, I'm destined to regular cycles of FDISK and reinstall. Can I buy a plug-in TPM module I wonder?
Anyway, in preparation, I ran the Windows 7 Upgrade Advisor. It says all of my applications will work without problems! Great! However, it also listed all the devices and drivers that won't work in Windows 7. OK, the built-in camera never did work from new, but as I never use it that's not a problem. But when I ordered the machine I specified a built-in smart card reader and fingerprint reader. It even came with a proximity card reader. It's true I never managed to get the terrible clunky device setup software to recognize any of these devices (I assumed it was a Vista issue), and when I did find a driver for the smart card reader it just told me that my corporate smart card was "not a recognized format" so I've been using a separate plug-in card reader instead. And a separate plug-in fingerprint reader because the built-in one seems to be there only for decoration rather than for any functional reason.
So I suppose I shouldn't expect Windows 7 to work with any of these devices either. But I can't make up my mind which is the most annoying outcome of all this investigative effort. Is it that I'll end up junking an otherwise fully-usable machine that cost a lot of money (over 1500 pounds or 2000 dollars)? Or that I'll spend my remaining working days hunting for lost thumb drives and then reinstalling everything? Or, maybe most annoying of all, it reminded me that I paid good money for features that never worked?
If you'd bought a typical consumer device with all the bells and whistles and discovered that several of them didn't actually bell or whistle, you'd soon be back at the store with the box under your arm. How come we computer users accept that only part of the hugely expensive kit we buy will actually work? Perhaps, after all, there is a case for the ubiquitous Internet tablet or device that "just works"...
Don't you hate it when someone says "Do you want the good news or the bad news?" and then, when asked for the good news, replies "There isn't any". I really do try so hard to avoid inflicting this on people I know, but sometimes it's inevitable. And usually it's when they've asked me to look at their computer which is "playing up", "running very slowly", or simply won't start at all. I really should look in the mirror sometime to see what I look like when smiling sweetly at the same time as gritting my teeth.
So there I was, a few days earlier, sitting in front of a Packard Bell mini-tower that proudly wears its "Pentium 4 Inside" and "Designed for Windows XP" badges. According to the manufacturer's label on the back it's just over five years old, and it's even got a blue LED on the front so it looks reasonably respectable sitting alongside my more contemporary machines. But, inside, it's severely screwed up. You can tell that because the only program that will run is Internet Explorer; and it takes three minutes to struggle onto the screen. Any other .exe pops up the "Choose a program to open this file with" dialog, and all of the Control Panel applets just display an error message that Rundll can't be found.
It looks like it's suffering from at least the W32.Sircam virus, or something similar, and no doubt others as well since none of the four different anti-virus software programs that have been installed during its lifetime are running now. And this was probably confirmed when the owner, a friend of my wife, revealed she'd had a phone call from "a foreign-sounding gentleman" who said he was "associated with Microsoft", had been alerted that her computer had a virus, and that he could repair it over the phone for only 65 pounds (to be pre-paid by credit card). Needless to say I advised her against taking up his offer.
So what to do? I can't get Regedit or any of the utilities on my home-made rescue/repair CD to run. If I boot into safe mode I have no keyboard - neither I nor the computer owner has one with the old PS/2 connector, and it doesn't recognize a USB one at boot time. So I can't use the boot menu options, and my original plan of simply stopping the boot loading of drivers and running some scans to remove malicious software is in tatters. Do I want to take the drive out and put it in one of my working machines to scan it? Probably not.
Of course, a quick phone call to the owner reveals that it has all of her photos, letters, and other never-been-backed-up-and-irreplaceable files on it. And, as expected, she "didn't get any discs with it", and there seems to be no rescue mechanism installed either. Windows Explorer won't run, but I can get Internet Explorer to show the disk contents by typing "C:\" in the address bar. So at least I can rescue those valuable files onto a thumb drive.
But as to the operating system, what do I suggest she do? With no rescue or O/S disk, I can't reinstall XP. I could suggest she buy a copy of Windows 7, but I have no idea if it will work on this machine and I can't run the Upgrade Advisor. A full version costs more than the machine is worth, and I don't know if an upgrade version will work (there is no Windows Key sticker on the machine, so I don't know how valid the installed O/S is). In either case, it's going to cost somewhere north of 70 pounds to buy Windows 7 and it may not work afterwards.
In reality, the advice has to be to dispatch the machine to the great God of recycling and buy something more up to date. I can rescue the precious files, and there's nothing inside the box in terms of hardware that's worth saving. She'll end up with something that's much faster and more responsive, more resilient to malware, nicer to use, and has a lot more capabilities. But it means finding 250 to 300 pounds that she probably didn't want to spend.
Yet, only a couple of weeks ago, I was raving about how Windows 7 brought several old computers back to life. However, the problem machine was obviously a bargain basement version compared to the various Dell machines mentioned in that post. The beast I'm looking at here seems to use technology from the 2002 - 2003 era, even though it was built in 2006.
Maybe this is the real issue. Most people I talk to still think of a computer as a "thing" that is the same no matter where it comes from or how much it costs. The same people would realize that a TV costing 100 pounds would be very unlikely to have a 48 inch high-definition screen and a full surround sound system, or that paying the price of a budget motorcar would get them a Ferrari.
Perhaps the issue is that almost any computer you buy, even those at the bottom end of the price range, works just fine out of the box. It's only when you actually get to use it for real over a long period, or upgrade it in a few years' time, that you discover you bought something that was effectively out of date when it was new. Oh well, as they say, you get what you pay for...
One of the nice things about working for a UK company but being on permanent assignment to a US one is that you get twice as many public holidays. While I'm not sure we want a Black Friday here in Little Olde England, maybe we could come up with some excuse for celebrating Thanksgiving. Perhaps without the turkey. Even though it's a moveable feast (the fourth Thursday of November) it usually coincides with our wedding anniversary, so it's a great opportunity for a few days away.
This year it was South Wales, and I managed to avoid the usual "going to Wales" joke (Q: How do you get two whales in a Mini? A: Across the Severn Bridge). Talking of the Severn Bridge, you have to pay £5.70 for the privilege of crossing it to get into Wales, but they say it’s not so bad because there's no charge to get out again. Except we came back a different route via Monmouth, so where do I go to get my £2.85 refunded?
What did surprise me, though, is how few Welsh people there seem to be in South Wales. When I booked the hotel, the website said it was "in the heart of Welshest Wales". Yet we'd got past Swansea before I heard a Welsh accent. And you can't even stop to look at the sea in Swansea without climbing over a huge wall. I suspect they built it especially so that all the English visitors have to pay to park their car. And that's after charging me just to get into Wales...
But once you get past Swansea to somewhere like Mumbles and Knab Rock, the whole outlook changes. Old-fashioned towns and villages with superb views across the estuary, and the wonderful hospitality (though the waiter in the cafe was Polish, so still no Welsh accent).
Then, the next day, one of the most beautiful places I've ever been in the UK. Llanstefan is like somewhere time forgot. Wide open and deserted beaches and mud flats, incredible views towards the valleys, and a sense of peace and tranquility that you never expected after going through places such as Port Talbot and Cardiff.
There wasn't a breath of wind that morning, and all you could hear was a farm tractor slowly plodding up and down a distant field. What an amazing place. OK, so I suppose you can't expect the only shop and tea-room to be open in late November, even though there was a huge sign saying "Open All Year" (it didn't say which year).
Next stop, Saundersfoot. No, I have no idea where the name came from. Strangely, the further west you go in South Wales, the less Welsh the place names get (think Pembroke, Haverfordwest, and Fishguard). Maybe we passed through Welshest Wales and out of the other side. But Saundersfoot is a nice little seaside resort where we were told to go and try the local delicacy named "Cawl". Supposedly it's a kind of mutton stew, though the place we were recommended serves it with beef and cheese bread. It was nice, but cost about the same as a three-course meal here in the wilds of Derbyshire where I live.
But the highlight of the trip was to see Tenby. We watched an edition of Grand Designs on TV some weeks ago that featured a couple who had converted the old lifeboat station into an amazing house. The town and beach looked so nice in the program that we thought it was worth a visit, and we weren't disappointed.
The old town is quaint, though it would have been even quainter if the tide hadn't been out, and if the local council hadn't been digging up all the cobbled streets that week. However, the views from the cliff are wonderful, and the deserted South beach provides an opportunity for a pleasant stroll along the coast. Preferably in the same direction as the 40 mph wind is blowing.
Coming home, we motored into the valleys planning to see the scenery and the Brecon Beacons, but were defeated by the sudden change to cold, wet, and misty weather. I managed to shoehorn in a brief diversion to the Brecon Mountain Railway, in the delightfully named village of Pant, but other than the cafe it was closed for the winter. The one highlight was in Neath Valley when the weather calmed down for a short while. A brisk and refreshing walk up a beautiful valley to the local waterfalls proved well worthwhile. OK, so it's not quite Victoria or Niagara, but it's a beautiful place.
I wonder if I can use the left-over half of my Severn Bridge fee to go back to Wales in the summer...
My wife will tell you that I'm really not very good at getting the point of things. I mean, when it comes to making typically vital choices such as whether I want brown sauce or ketchup on my sausage sandwiches, I can't see the point of long-winded pondering and tortuous decision making. Just put brown on one half and ketchup on the other. In fact if there was a competition for getting the point, and she made me enter, I probably wouldn't even get the point.
The week after I published this post I was amazed to see the same question appear on the quiz show Million Pound Drop: "According to a recent survey, which do men prefer on their sausage sandwiches, brown sauce or tomato sauce?" The answer was brown sauce...
Yet there are so many other things out there in the real world (which don't involve sausages) that it's hard to see the point of. I watched a main evening news broadcast and noticed that three of the reports included footage from "on the spot" reporters. One stood in the rain outside the House of Commons telling us about this week's faux pas by some Government minister, but he didn't speak to anyone, or even walk purposefully into the building while talking, or actually move at all. There were plenty of people milling around in the background, but nobody I recognized. Why bother? Why not have him stand in front of a photo in the nice warm studio, or even let the rather scary news-anchor lady just read it out?
And then there was one about a huge car pile-up on the motorway, which they think might have been caused by smoke from a golf club fireworks display. There was the intrepid roving reporter standing on the side of a country lane explaining the intricacies of the event. OK, so there was an empty police car parked behind him, but nobody else in sight. And you couldn't see the cars involved, or the motorway itself, or the golf club buildings, or any fireworks, or even any smoke. He might as well have been standing outside our house (maybe he was) for all the point of doing a "live from the scene" report.
I guess all this is done just to try and keep people's attention for the massive fifteen-minute duration of the program. It's almost like they don't expect the people who tune in to actually be interested in the news, so they have to make it exciting with lots of different scenes and people. And, of course, they have to tell you what's in the program at the start, and then keep telling you "what's coming up" between each item. Wow, I really do want to hear about the lady whose cat had to be rescued from a tree, and I need you to keep telling me that you haven't forgotten about it.
Can you imagine trying to create technical documentation based on pointlessness like you see every day in TV news broadcasts? I'd have to recruit dozens of writers who could travel the country writing paragraphs in appropriate locations. Send Fred, together with a huge support team of laptop preparation operators, maintenance engineers, Microsoft Word technical support staff, desk light electricians, and office furniture assembly operatives to sit in our server room and write the part about minimizing server peak load.
Meanwhile Christina would be dispatched, along with half a dozen security staff and experts in the use of pizza- and cola-proof protective clothing, to sit with the development team when writing the paragraph that describes how developers can use Visual Studio to add WIF authentication features to their applications.
And, of course, not forgetting Ravi, who would begin the long journey to the local telephone exchange accompanied by around 50 specially trained health and safety experts, telecommunications jargon translators, public relations staff, company policy compliance advisors, facilitation collaborators (and, hopefully, his laptop) in order to provide the vital paragraph about ADSL networking reliability.
Then, when we come to assemble it into the final book format for release, we'll have to remember to include an "upcoming chapters list" every fifth page, and an index after every first-level heading, so people don't get fed up halfway through - or start to panic that we might have missed out the bit they were really looking forward to.
Just imagine how exciting this kind of technical documentation will be to read...
My new pet snake is installed, working, and really flies. Deathly silent, yet it instantly responds to every command. It's like somebody speeded up the world. Or at least speeded up my television. And, yes, this is a follow-on from last week's rambling post about our new "Mamba" Media Center box from QuietPC.com. In fact, even the title continues the not-quite-a-song theme.
The long and sometimes tortuous setup and installation is over. It's nestled neatly in the TV cabinet, and after a few days' use it really does seem to be a superb machine - and a significant upgrade from the old I-US Media Center box. OK, so most of the setup hassle was my fault (more later) because I wanted it to be on my local domain and integrated with the network. It needs to have remote Event Log access turned on, my "failed recording" monitor service installed, a custom screensaver, auto logon, and a few other tweaks.
What surprised me, though, were the benefits from the new TV cards. The old box had only one PCI slot, whereas most modern tuner cards are PCI-E only these days, so I had to choose between terrestrial (DVB-T) and satellite (DVB-S). And none supported HD. The new Mamba has a dual DVB-T2 (HD) and a dual DVB-S2 (HD) card. And, amazingly, Media Center accepted both, and tuned both of them, so that we now get all of the terrestrial and the satellite channels. You can still record from only two tuner instances concurrently (either on the same tuner card or one from each tuner card) and watch a previously recorded program at the same time. But it's wonderful to get back some old favorite channels that aren't on satellite, and to finally be able to get all the HD channels.
Of course, the actual tuning process is still a pain, and really does need to come closer to the capabilities offered by ordinary TVs that can detect broadcast update signals and automatically retune channels that move around. Media Center has the facility to add new channels, but it never seems to fully work. In the past, when they moved channels around, I had to do a complete re-setup of all the channels - which means getting back the 500+ I don't want and had removed from the guide, and having to go through the laborious process of finding listings for channels where the channel name and the listing name are slightly different. Though maybe in the Windows 8 version of Media Center it will work better. No doubt I'll find out in time.
The final setup process was made infuriatingly slow by a couple of unexpected hitches. For some reason, Media Center no longer has an option to start automatically when the system restarts from cold or when a user logs on. I have no idea why this option was removed, and it seems from a web search that lots of people are annoyed about it and have found an equally large number of kludges to fix it, including creating a profile and using a batch file in the \ProgramData\Microsoft\Windows\Start Menu\Programs\Startup folder. However, another solution seems obvious. Create a scheduled task that runs at logon and executes the file %windir%\ehome\ehshell.exe, and set the taskbar to auto-hide.
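As a sketch of that scheduled-task approach (the task name here is my own invention, and you'd run this from an elevated command prompt), the built-in schtasks tool can create the logon task without touching the Startup folder:

```shell
:: Create a task that launches Media Center whenever this user logs on.
:: "StartMediaCenter" is an arbitrary name; /RL HIGHEST runs it elevated
:: so it doesn't trip over UAC prompts at logon.
schtasks /Create /TN "StartMediaCenter" /TR "%windir%\ehome\ehshell.exe" /SC ONLOGON /RL HIGHEST
```

Combined with auto logon, that gets the box from power-on to the Media Center interface with no remote-control gymnastics. (This is a configuration fragment for Windows, shown as-is.)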
But the most annoying quirk was that my custom screensaver that displays details of photos never appeared. All I got was a nausea-inducing scrolling, panning, and zooming screenful of black and white photos with odd ones occasionally appearing in colour - despite the Lock screen slideshow being turned off and my screensaver properly configured in Windows Personalization settings. I played with this for ages before finally searching the web for solutions. Most of which are totally confusing because they say to turn on the slideshow and then turn off the option to "show the lock screen instead of turning off the screen".
I even followed the advice on one site to use gpedit to disable the Lock screen altogether, but it made absolutely no difference. After I finally gave up and went back to configuring Media Center I found the screensaver option within the Media Center interface. Which is helpfully turned on by default. The Lock screen slideshow I was trying to get rid of wasn't actually the Lock screen at all. No wonder I had problems! After turning the Media Center screensaver off my own screensaver works fine. Doh!
I'm still not sure I'd recommend Media Center as a replacement for a normal TV to my non-technical friends, but it really is a superb system if you know something about computers, are prepared to fiddle with it, and accept the few shortcomings such as the usual need for updates and other maintenance tasks. Even the smart TVs I've seen can't compete with the full range of capabilities and flexibility of a powerful computer driving a big wall screen.
But I have to run. Now that I've got the "Dave" channel back again, there's ten episodes of "The Professionals" from 1978 I need to watch...