Random Disconnected Diatribes of a p&p Documentation Engineer
At one time you had to work in a museum to be a curator, but thanks to the wonders of information technology we can now all exhibit our technical grasp of complicated topics and enlighten the general population by identifying the optimum resources that help to answer even the most complex of questions.
I'm talking about the new Curah! website here. The idea is simple: a resource that gathers together the questions most commonly asked about computing topics, each paired with a carefully and lovingly crafted set of links to the most useful blogs, reference documents, tools, and other information that offers a solution to the question.
Anyone can register and create a curation, and the site is optimized for search engines to make it easy to find answers. It's still in beta as I write this, but it already has hundreds of answers to common questions. The great thing is that the curations are not just a set of links like you'd get from the usual search engines, which tend to rank the list based on keywords in the resources, the number of links to them from other pages, and the newness of the content. None of these factors can provide the same level of usefulness as a list compiled by an expert in the relevant topic area, someone who creates and uses that kind of information every day.
My interest in the Curah! site also comes about partly because I am part of the group that defined the original vision and got it started. I've also added a few curations of my own, which are centered on the topic area that I now seem to have been permanently assigned to - Windows Azure application design and deployment. My regular reader will probably have noticed this from the rambling posts on this blog in the past.
However, one point that concerned me was that, having created my own curations, I am now responsible for maintaining them. As I plan to create more in the future, I was beginning to wonder if I would end up spending every Monday just checking and updating them as the target resources move or disappear, or as I discover new ones. What I needed was some type of automated tool that would make this job easier. So I built one.
CurahCheck is a simple console-based utility that checks one or more views on the Curah! site by testing all of the links in each curation ID you specify. It can display the curation title and the linked page titles to confirm that each curation is valid and that all of the linked resources are still available. It can be run interactively, or automatically from a scheduled task.
The utility generates a log file containing details of the checks and any errors found. It can also generate an HTML page for your website that shows the results of the most recent check and the contents of the log file. If you have access to an email server, the utility can send warning email messages when an error is detected in any of the views it scans.
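I won't bore you with the full source here, but the heart of it is nothing more exotic than firing an HTTP request at each link and seeing what comes back. Here's a minimal sketch of that check; the class and method names are illustrative rather than the actual code, which reads its curation IDs and settings from the configuration file:

    // Minimal sketch of the kind of link check CurahCheck performs.
    // Names are illustrative; the real utility reads its list of
    // curation IDs and settings from the configuration file.
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Net;

    class LinkChecker
    {
        // Returns true if the URL responds with a success status code.
        static bool IsAlive(string url)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "HEAD";      // headers are all we need
                request.Timeout = 10000;      // don't hang on dead servers
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    return (int)response.StatusCode < 400;
                }
            }
            catch (WebException)
            {
                return false;                 // moved, missing, or timed out
            }
        }

        static void Main()
        {
            var links = new List<string> { "http://example.org/some-resource" };
            using (var log = File.AppendText("CurahCheck.log"))
            {
                foreach (var url in links)
                {
                    log.WriteLine("{0}\t{1}\t{2}", DateTime.Now,
                        IsAlive(url) ? "OK" : "FAILED", url);
                }
            }
        }
    }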
If you are a Curah! curationist you can download the utility from here, and use and modify it as you wish. The source project and code for Visual Studio 2012 are included. Before you use it, you'll need to edit the settings in the configuration file to suit your requirements - the file contains full details of the settings required and their effect on program behavior.
Of course, the usual terms and conditions about me not being responsible for any side-effects of using the program, such as your house falling down, your children being eaten by a dinosaur, or your computer bursting into flames, still apply...
No, I'm not talking about the clever "please mend my computer" tools that you can run on Microsoft's website. I'm talking about my regular tasks trying to fix all the things that break in our house, seemingly one per day at the moment. It's usually a wide and assorted selection of jobs; this week comprising a table lamp, a DECT phone, my fishpond filter, SQL Database, and Windows 8.
Invariably my fixit jobs fall into two categories: "not worth the effort" and "fingers stuck together with superglue again." The second category tends to be associated with jewellery and ornament repairs (the latter typically being my fault), but the table lamp repair was in the first category - an IKEA lamp that originally cost twelve pounds new, but came from a jumble sale at a cost of two pounds. My wife had bought six new 12V halogen bulbs for it that cost more than the lamp did originally, but it refused to work. The transformer is a sealed unit, and the wiring is sealed inside the lamp. Do I spend three hours breaking it open just to see what's wrong?
But I had better luck with the DECT phone: a couple of new triple-A rechargeable batteries brought it back to life. And I managed to fix my fishpond filter using the typical handyman technique of a few bits of bent wire. These are the satisfying kind of repair jobs where you can carry around a big toolbox, and look like you know what you're doing. It works even better if you haven't shaved for a couple of days beforehand.
What's most annoying, however, and offers little scope for appearing butch and manly, is the so-far-unmentioned third category of repair jobs: broken software. A year or so ago I parcelled up my locally-hosted websites and dispatched them to heaven - or, to be more precise, to live in the clouds of Windows Azure. Amongst them is our local village website, which is reasonably complex because it handles news, events, and photos, and has a user registration facility.
Yes, I know I should have adapted it to use claims-based authentication and Windows Azure Active Directory, but I just never got round to it; instead it has an ageing "aspnetdb" database sitting in SQL Database. There's only one role instance, and it works fine. Well, almost. Yes I did do comprehensive testing after deployment before the site went live, checking that I could add and edit all the items on the site, sign in and change my password, view lists of registered users, and see the error lists in the admin section of the site.
I even made sure the site could create and remove users, and allow them to edit their details. But it turns out that my test coverage was a little less than perfect. For the first time since deployment I needed to use the functions of RoleManager to change the roles for a registered user. And everything broke. Even going into the SQL Database management portal and deleting the row in the data view of the table failed. As did executing a SQL DELETE statement in the query window.
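For the record, the operation that fell over was nothing more exotic than the standard ASP.NET role manager API. Paraphrased (the user and role names here are invented), my admin page was doing something like this:

    // Paraphrased from my admin page; assumes the role provider is
    // configured in web.config and pointing at the aspnetdb database.
    using System.Web.Security;

    public static class RoleAdmin
    {
        public static void MoveUser(string userName)
        {
            // Both calls end up in the aspnet_UsersInRoles_* stored
            // procedures that turn out to be the culprits.
            Roles.RemoveUserFromRole(userName, "Contributor");
            Roles.AddUserToRole(userName, "Editor");
        }
    }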
It took some searching based on the error message about SQL collation to find the answer. And the fix is so simple that it should be painted in six-inch-high letters on a big piece of wood and nailed to the Azure portal. Simply open the stored procedure named aspnet_UsersInRoles_AddUsersToRoles and insert the text COLLATE Latin1_General_CI_AS into the DECLARE @tbNames... line as shown below, and then do the same with the aspnet_UsersInRoles_RemoveUsersFromRoles stored procedure.
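From memory, the relevant line looks something like this before and after the edit (check your own copy, because the membership scripts have varied a little between versions):

    -- Before: the table variable picks up the database's default collation
    DECLARE @tbNames table(Name nvarchar(256) NOT NULL PRIMARY KEY)

    -- After: force a matching collation to avoid the conflict
    DECLARE @tbNames table(Name nvarchar(256) COLLATE Latin1_General_CI_AS NOT NULL PRIMARY KEY)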
To get to the stored procedures, open the database management page from the main Windows Azure portal and choose the Manage icon at the bottom of the page. Sign in to the SQL Database server and choose Design in the left-hand navigation bar, then choose Stored Procedures in the tabs near the top of the page. Choose the Edit icon next to the stored procedure in the list and make the edit. Then choose the Save icon on the toolbar. Repeat with the other stored procedure.
Meanwhile, ever since I upgraded my trusty Dell E4300 to Windows 8 I've been plagued by wandering cursor disease. I'll be typing away quite happily and suddenly the letters appear in the middle of the next paragraph, or halfway along the next line. It's amazing how much something likeum. this can screw up your finely crafted and perfectly formatted text. It really is a pain in the b
Of course, the answer is the same as most Windows 8 problems with older hardware. The jazzy new all-singing and all-dancing hardware drivers that come with Windows 8 don't always do the same as the wheezing and arthritic ones that came on a disk with the computer when you bought it. Thankfully, plenty of other people are having the same variable input position issue as me, and their posts led me to the updated Alps touchpad driver on Dell's website.
Not that it fixes the problem - my touchpad still seems to think it's morphed into an Xbox Kinect - if I wave a hand anywhere near it, the insertion point cursor leaps madly around in my Word document. But the Dell driver can detect when you plug in a proper mouse, and disables the touchpad automatically. Problem solved.
Now I just need to prise my finger and thumb apart so I can mend a pair of my wife's earrings...
According to Reader's Digest, there's a dyslexic agnostic insomniac out there somewhere who lies awake all night pondering the meaning of dog. Thing is, it really should be "doG", not "dog". But it seems that, according to our most recent style guide here at Microsoft, capital letters are fast becoming obsolete, although it's probably not so that jokes like this will be more accurate.
The aims of the style guide updates are constructive and practical. We should provide help and guidance only where it will be useful, and phrase it in such a way that it appears friendly, open, and easy to assimilate. All very sensible aims, though I've yet to discover how simplifying content to the point where it's harder to use, or sprinkling it with exclamation marks and apostrophe-shortened words (such as "don't" and "shouldn't"), is advantageous.
But behind all this is an undercurrent of practical mechanisms for actual word styles and structure. For example, in the previous paragraph I committed the sin of "excess-hyphenation" (or, to be more exact, "excess hyphenation"). I'm sure Microsoft style gurus aren't aiming to purge documentation of all hyphens, though it sometimes feels like it. I now have to use "bidirectional" instead of "bi-directional", "rerouting" instead of "re-routing", and "cloud hosted" instead of "cloud-hosted". Though it seems I can still get away with "on-premises", as in "deployed to an on-premises server". Perhaps "onpremises" is just one step too far. Though I do like that they ban the use of "on-premise" because a premise is, of course, a proposition upon which an argument is based or from which a conclusion is drawn.
And then there's the race to remove capital letters from as much of the content as possible. Things that used to be a "Thing", such as "Windows Azure Storage" are now just a "thing", such as "Windows Azure storage". Tables are now tables, and Queues are now queues. I did think that "BLOB" might survive the cull because it's an acronym for Binary Large Object, but sadly no. It's now a "blob" in the same way as a patch of spilled custard is.
Mind you, there are other rules that make it hard to create guidance that reads well. I still haven't found a solution for the repetitiveness in phrases such as "stored in an Active Directory directory", or the fact that the Access Control Service (ACS) is no longer a service; it's just "Access Control" and can't even have "service" after it. So "authenticated by ACS" now becomes "authenticated by AC", which seems too vague so I end up with "authenticated by Windows Azure Access Control" and repetitive strain injury.
I get that over-capitalization and excess hyphenation can make the text harder to read and assimilate, and that we need to provide friendly guidance that uses familiar words and styles. And thankfully I have wonderful editors who apply all the rules to my randomly capitalized and hyphenated text (though, sadly, not to my blog posts). But I wonder if we'll soon need to start writing our guidance in txt speak. imho well mayb need to 4go sum rules b4 its 2 l8...
I discovered this week why builders always have a tube of Superglue in their pockets, and how daft our method of heating houses here in the UK is. It's all connected with the melee of activity that's just starting to take over our lives at chez Derbyshire as we finally get round to all of the modernization tasks that we've been putting off for the last few years.
I assume that builders don't generally glue many things together when building a house - or at least not using Superglue. More likely it's that "No More Nails" adhesive that sticks anything to anything, or just big nails and screws. However, the source of my alternative adhesive information was - rather surprisingly - a nurse at the Accident and Emergency department of our local hospital.
While doing the annual autumn tidying of our back garden I managed to tumble over and poke a large hole in my hand on a nearby fence post. As I'm typically accident prone, this wasn't an unexpected event, but this time the usual remedy of antiseptic and a big plaster dressing didn’t stop the bleeding so I reluctantly decided on a trip to A&E.
It being a Sunday, I expected to be waiting for hours. However, within ten minutes I was explaining to the nurse what I'd done, and trying to look brave. Last time I did something similar, a great many years ago and involving a very sharp Stanley knife, I ended up with several stitches in my hand. However, this time she simply sprayed it with some magic aerosol and then lathered it with Superglue. Not what I expected.
But, as she patiently explained, they use this method now for almost all lacerations and surgery. It's quicker, easier, keeps the wound clean and dry, heals more quickly, and leaves less of a scar than stitches. She told me that builders and other tradesmen often use the same technique themselves. Obviously I'll need to buy a couple of tubes for future use.
Meanwhile, back at the hive of building activity and just as the decorator has started painting the stairs, I discover that the central heating isn't working. For the third time in twelve years the motorized valve that controls the water flow to the radiators has broken. Another expensive tradesman visit to fix that, including all the palaver of draining the system, refilling it, and then patiently bleeding it to clear the airlocks.
Of course, two of the radiators are in the wrong place for the new kitchen and bathroom, so they need to be moved. Two days later I've got a different plumber here draining the system again, poking pipes into hollow walls, setting off the smoke alarms while soldering them together, randomly swearing, and then refilling and bleeding the system again.
But what on earth are we doing with all this pumping hot water around through big ugly lumps of metal screwed to the walls anyway? Isn’t it time we adopted the U.S. approach of blowing hot air to where it's needed from a furnace in the basement? When you see the mass of wires, pipes, valves, and other stuff needed for our traditional central heating systems you have to wonder.
Mind you, on top of all the expense, the worst thing is the lump on my arm the size of a tennis ball where the nurse decided I needed a tetanus shot...
A couple of weeks ago I was ruminating on how somebody in our style guidance team here at Microsoft got a new Swiss army knife as a holiday-time gift, and instead of a tool for removing stones from horses' hooves it has one for removing capital letters and hyphens from documentation. Meanwhile the people in the development teams obviously got handkerchiefs or a pair of slippers instead because they are still furiously delivering capital letters whenever they get the chance.
As you will probably have noticed, the modern UI style for new products uses all capital letters in top-level navigation bars and menus. I guess your view of this is based on personal preference combined with familiarity with the old-fashioned initial-capital style; I've seen a plethora of comments and they seem to be fairly balanced between like and dislike. Personally I quite like the new style, especially in interfaces such as the new Windows Azure Preview Management Portal. It looks clean and smart, and fits in really well.
Meanwhile my editor and I have been pondering on how we cope with this in our documentation. No doubt some official style guidance will soon surface to resolve our predicament, but in the meantime I've been experimenting with possibilities for our Hands-on Labs. I started out with the obvious approach that matches the way we currently document steps that refer to UI elements (bearing in mind the accessibility guidelines described in It Feels Like I've Been Snookered).
Choose +NEW, select CLOUD SERVICE, and then choose QUICK CREATE.
But written down on virtual paper that does look a bit awkward and "shouty". Perhaps I should just continue to use the initial capitalized form:
Choose New, select Cloud Service, and then choose Quick Create.
However, that doesn't match the UI, and one of the rules is that the text should reflect what the UI looks like so that it's intuitive and easy for users to follow. Maybe I can just use ordinary words instead, in a kind of chatty informal way, so that they don't actually need to match the UI:
Choose new, select cloud service, and then choose quick create.
But that looks wrong and may even be confusing. Perhaps I should just abandon any attempt to specify the actual options:
Create a new cloud service without using a custom template.
Though that just seems vague and unhelpful. Of course, you might assume that a user would already know how to create a new cloud service, so it's redundant anyway. But something more complicated may not be so obvious without more specific guidance about where to start from:
Open the management window for your Windows Azure SQL Database.
I did suggest to my editor that we simply run with something like:
Choose the part of the window that contains what appears to be some text that would say "cloud services" if it was all lowercase, and then...
Ahh, but wait! In a non-web-based application UI I can use shortcut keys, like this:
Press Alt-F, then N, then press P.
Oh dear, that violates the accessibility rules, and doesn't work in a web page anyway. Maybe I'll just go with:
Get the person sitting next to you to show you how to create a new cloud service.
And, as a bonus, this approach may even foster team cohesiveness and encourage agile paired programming. Though you probably can't call it guidance...
You may not remember, but when ASP.NET was in the early stages of development it was called ASP+. I wonder if we'll see history repeating itself so that, when it finally clambers out of beta, Google+ will actually be called Google.NET. Probably not. I guess there are too many hard-up lawyers around at the moment looking for work. But it does seem that, in many spheres of life, we never get the hang of the notion that history repeats itself.
In politics we go through regular cycles of left-leaning (run out of money) and right-leaning (no new taxes, maybe) government. With the environment we can't make up our mind whether we want nuclear (clean but dangerous) or fossil-fuelled (safe but global warming). As for the financial crisis, we're torn between printing money (more debt) and cutting spending (less growth). It's almost like history doesn't teach us anything.
So what about my own little corner of the world: technical guidance and developer documentation? It sometimes seems like the process for this is built around the concept of conveniently ignoring the lessons of history. Here at p&p, our goal is to provide guidance for developers, system architects, administrators, and decision makers using Microsoft technologies - our tag line is, after all, "proven practices for predictable results". But designing projects to achieve this sometimes seems to be history repeating itself. And not always in a good way.
The problem tends to center around how to figure out what guidance users require, and how to go about creating it. My simple take is that you just need to discover exactly what the users need, what you want to guide users towards (the technologies, scope, and technical level that are appropriate), and the format of the guidance (such as book, online, video, hands-on labs, FAQ, and more). From that you can figure out what the TOC looks like and what example code you need, or what frameworks or applications you must build to support this. So, as usual, it's all about scenarios and user stories.
In the majority of cases, this is exactly how we plan and then start work on our projects. But the problem is that it's very easy to start by throwing a bunch of highly skilled developers at a new technology and letting them play for a while. OK, so they need to explore the technology to find out how it works, and figure out what it can do. They go through the process of spiking to discover the capabilities and issues early on so that their findings can help to shape the decision process for designing the guidance and defining the scope.
However, it's very easy for developers to be influenced by the capabilities of the technology rather than the scenarios. As a writer, I ask questions such as "Is this feature useful; and, if so, when, where, and why?", "What kind of application or infrastructure will users have, and how do the technology features fit in with this?", and "Does the technology solve the problems they have, and add value?" In other words, are we looking at the technology from a user requirements point of view, or are we just reveling in the amazing new capabilities it offers?
Here's an example. When you read about Windows Azure Service Bus, it seems like an amazing technology that you could use for almost any communication requirement. It gives you the opportunity to do asynchronous and queued reliable messaging between on-premises and cloud-based applications, and supports an eventing publish/subscribe model. It can use a range of communication and messaging protocols and patterns to provide delivery assurance, can scale to accommodate varying loads, and can even be integrated with on-premises BizTalk Server artifacts.
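And if you haven't tried it, the programming model is disarmingly simple, which is part of the temptation. Here's a minimal sketch of queued messaging using the brokered messaging client library of the day; the connection string and queue name are placeholders for values you'd get from the portal:

    // Minimal sketch of queued messaging with the Windows Azure Service Bus
    // client library (Microsoft.ServiceBus.dll). The connection string and
    // queue name are placeholders.
    using System;
    using Microsoft.ServiceBus.Messaging;

    class QueueSketch
    {
        static void Main()
        {
            var connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=[key]";
            var client = QueueClient.CreateFromConnectionString(connectionString, "orders");

            // Reliable, queued delivery: the receiver need not be online now.
            client.Send(new BrokeredMessage("Order #42"));

            // The receiving side (cloud or on-premises) reads the same queue.
            BrokeredMessage message = client.Receive();
            if (message != null)
            {
                Console.WriteLine(message.GetBody<string>());
                message.Complete();   // remove the message from the queue
            }
        }
    }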
The Microsoft BizTalk Server Roadmap talks about future integration with Windows Azure but it seems as though you could support many of the BizTalk scenarios with Azure right now using Azure Service Bus, as well as extending BizTalk to integrate with cloud-based applications. But what are the realistic scenarios for real-world users? Will developers try to retrofit Service Bus capabilities into existing applications, or does it make sense only when building new applications? Will they attempt to extend BizTalk applications using Service Bus, or aim to replace part or all of their BizTalk infrastructure with it?
And what Service Bus capabilities are most useful and practical to implement in real-world scenarios and on existing customer infrastructures? Are most developers desperate to implement a distributed publish/subscribe mechanism between on-premises and multiple cloud-based applications, or do they mainly want to implement queuing and reliable message communication? Will it mean completely reorganizing their network and domain to get code that works OK in development spikes to execute on their systems? Is there a danger that experimental development spikes not planned with specific scenarios in mind will end up being completely unrealistic in terms of being applied to today's typical application implementations and infrastructure?
I can see that playing with the technology is one way to find this out, but it's also an easy way to build example code and applications that don't reflect real-world scenarios and requirements. They aren't solutions; they can turn out to be just demonstrations of features and capabilities. This is why we expend a great deal of effort in p&p on vision and scope meetings, customer and advisory group feedback, and contact with product groups to get these kinds of decisions right from the start.
And let's face it, there are several thousand other people in EPX here at Microsoft writing product documentation, each focused on their own corner of the technology map and concentrating on their own specific product features and capabilities. So it's left to our merry little band here in a forgotten corner of the campus, and scattered across the world, to look at it from an outsider's point of view and discover the underlying scenarios that make the technology shine. Finding the real-world scenarios first can help to prevent the dreaded disease of feature creep, and the wild exuberance of over-complexity, from burying the guidance in a morass of unnecessary technological overkill. And that's before I can even start writing it...
And just in case, as a writer, you are feeling this pain, here are some related scenario-oriented diatribes:
Once again I'm at one of those gloriously satisfying stages in my p&p working life when I'm trying to define the structure for a new guide. We know what technologies we want to cover, how we will present the guidance, and the kind of sample that we'll provide to demonstrate the all-encompassing wonderfulness of the technologies on offer. But after two weeks of watching videos, perusing technical documents, consulting experts, and RSI from repeated spells of vicious Visioing, I'm still floundering in a cloud of Azure confusion.
The target for the project is simple enough: explore the opportunities for building hybrid applications that run across the cloud/on-premises boundary, and provide good practice guidance on implementing such applications. It obviously centers on integration between the various remotely located bits, the customers and partners you interact with, and the stuff running in your own datacenter; and there is a veritable feast of technologies available in Azure targeted directly at this scenario.
So why is it so difficult to get started? Surely we can toss a few components such as Web and Worker roles, databases, applications, and services into a virtual food mixer and pour out a nice architectural schematic that shows how all the bits fit together. I wish. Even with bendy arrows and very small text I still can't fit the result onto a single Visio page.
Obviously you need a list of the technologies you want to use. In our case, the first things going into the plastic jug are ingredients such as Azure Service Bus (with its myriad and still-growing set of capabilities), Azure Connect, Virtual Network Manager, Access Control Service, Data Sync, Business Intelligence, Data Market, and Azure Cache. Then add to that a pinch of frameworks such as the Enterprise Library Extensions for Azure and StreamInsight.
Yet every connection between the parts raises different questions. Where do I put the databases (cloud or on-premises) to resemble real-world scenarios but still show technologies such as Connect and Data Sync in action? Do I use Service Bus Queues or Topics and Rules to communicate between the cloud application and the suppliers? If I use ACS for authentication, when and where do I match the unique customer ID with their data in the Customers database? What's the most realistic location for the stock database, and do I replicate it to SQL Azure or just cache the minimum required content in the cloud instances? Does SQL Federation fit my scenarios, or is that a whole different kettle of fish that deserves a separate recipe book?
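To make that Queues-or-Topics question concrete, here's a rough sketch of the Topics and Rules option, where each warehouse partner sees only the orders stamped for it. The topic, subscription, and property names are invented for illustration, and the connection string is a placeholder:

    // Rough sketch of the Topics and Rules option; names are invented.
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class TopicSketch
    {
        static void Main()
        {
            var connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=[key]";
            var manager = NamespaceManager.CreateFromConnectionString(connectionString);

            if (!manager.TopicExists("orders"))
                manager.CreateTopic("orders");

            // Each warehouse partner gets a filtered subscription.
            if (!manager.SubscriptionExists("orders", "west-warehouse"))
                manager.CreateSubscription("orders", "west-warehouse",
                    new SqlFilter("Region = 'West'"));

            // The sender stamps each order with the routing property.
            var client = TopicClient.CreateFromConnectionString(connectionString, "orders");
            var message = new BrokeredMessage("Order #42");
            message.Properties["Region"] = "West";
            client.Send(message);
        }
    }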
And, most confusing of all, how do I cope with multiple geographical locations for the Azure datacenters and the warehouse partners who fulfill the orders? Do I allow customers to place orders that will be fulfilled from any warehouse (with the associated problem of delivery costs), or do I limit them to ordering only from their local warehouse? And if I take the second option (assuming I have a warehouse partner in both the East and West US), what happens if somebody in New York wants to place an order for delivery to California?
And after you decide that, look what happens when you factor in Azure Traffic Manager. If you use it to minimize response times and protect against failures, the customer in New York might end up being routed to the California datacenter. That's fine if they want the goods delivered to California, but most likely they'll want them delivered to New York and so the order needs to go to that warehouse. Unless, of course, the New York warehouse is out of stock but they have some in the California warehouse.
Of course, the whole concept of integrating applications and services is not new. Enterprise Application Integration (EAI) is a big money-spinner for many organizations, and everybody has their own favored theory accompanied by a different architectural layer model. And don't forget BPM (Business Process Management) and BPR (Business Process Reengineering). I read a dozen different reports and guides and none of them had the same layers, or recommended the same process.
And, in reality, building a hybrid application (or adapting an existing on-premises application into a hybrid application) is not EAI, BPM, BPR, or any of the myriad other TLAs. It's a communication and workflow thing. Surely the core questions focus on how you get service calls, messages, and management functions to work securely across the boundaries, and how you manage processes that require these service calls and messages to work in the correct order, and make decisions at each step of the process. Yes you can match these questions to layers in many of the EAI models, but that doesn't really help with the overall architecture.
What went wrong with the whole design process was that we started with a list of technologies rather than a business scenario that required a solution. We went down the route of trying to design an application that used all of the technologies in our list, but used each one only once (otherwise we'd be introducing unnecessary duplication). We'd effectively taken ingredients at random from the cupboard and expected the food mixer to turn them into a palatable, attractive, and satisfying beverage. It's obviously not going to work, especially if you keep the cat food in the same cupboard as the bananas.
In the real world people start out with a problem that the technologies can help to solve, not a predefined list of technologies chosen because they have tempting names and capabilities. If you want to build a public-facing website with an on-premises management and reporting application, you wouldn't start by buying 100 copies of Microsoft Train Simulator and a refrigerator. You'd design the application based on requirements analysis and recommended architectural styles, then order the stuff in the resulting Visio schematic. Somewhere along the line the choice of technologies would be based on the application requirements, rather than the other way round.
So at the moment we're tackling the issues from all three ends at once, and hoping for some central convergence. On our mental whiteboard there's a big circle containing the list of required technologies, another containing EAI and other TLA layer models, and a third containing the possible real-world scenarios. I'm just hoping that, like a Euler diagram, there will be a tiny triangle in the middle where they overlap.
But that's enough rambling. The pains in my fingers are starting to recede, so I need to get Visioing again. I reckon I've still got some bendy arrows left that I can squeeze in somewhere...
There's lots of comment at the moment about the "post-PC age". Seemingly everyone will just use some Internet tablet or device that installs the O/S and applications from the cloud, keeps all of the data in the cloud, and uses only services running in the cloud. No need for a fast processor, hard drive, or tons of memory because it's just a web browser and display for applications running somewhere else. The thin client for the 21st century.
However, plenty of people dispute this assertion, citing the need to run powerful and complicated applications and to store data locally. And, of course, to maintain control. If your whole life is held by some huge and faceless cloud-based corporation (not mentioning any names), what happens when they accidentally lose your account? Or decide you are no longer welcome and remove you from their system? Supposedly it's already happened to people who have made some unwelcome comment about their provider, or been mistakenly charged with being a hacker and forcibly ejected.
For most of these reasons, and others, I'm staying with my combination of PCs, servers, and various back-up devices. Yes I do keep a backup of my important data and photos in SkyDrive; though (no doubt due to my well-publicized paranoia) it's all in compressed PGP-encrypted files. But I reckon I've discovered not so much the "post-PC age", but a "same-PC age". Maybe this is as much a problem for PC suppliers as the flood of tablets and smartphones now swamping the world.
The "same-PC age" is a simple concept. Instead of buying the latest, greatest, fastest new machine every couple of years, you just keep the old one. In the past this hasn't really been an option unless you were prepared to turn it on the day before you wanted to use it, and stop for coffee each time you paged down in a document. But recently it's become clear that older PCs can just keep on working.
For example, my wife's four-year-old Dell XPS laptop with Vista was starting to show the signs of being ready for replacement with something a bit snappier. Yet a simple FDISK and a fresh install of Windows 7 brought it back to life so that it feels like a brand new machine. It's responsive, starts quickly, and handles everything she throws at it.
Even better, a friend's six-year-old Dell laptop (a huge and ugly beast that originally ran XP) was equally transformed by FDISK and Windows 7 into something that is a pleasure to use. My friend tells me that it's faster now than it was with XP, though I suspect he's being a little optimistic. Of course, it doesn't support Aero, but he never had that anyway so it's no loss. What he is mourning is the lack of scroll support for the trackpad - it seems there's no driver for it that works in Windows 7.
Update: After some experimentation, it turns out that the latest ALPS driver from the Dell website does work with Windows 7.
I suppose that's the problem. Dell is hardly likely to create Windows 7 drivers for a machine that was designed to run XP. It would be like expecting Ford to provide a fuel pipe to connect up a 3 litre BMW engine you shoe-horned into your Focus. And, anyway, my friend is less concerned now after I pointed out that there are Page Up and Page Down keys on the keyboard. I suspect that, until the hard drive dies or he graduates to a tablet, the laptop will continue to serve its "same-PC age" functions.
But the biggest "same-PC age" issue I have at the moment is with my working-day laptop. When I'm not trapped in front of the workstation and huge screens upstairs in the office I use a rather nice, four-year-old Dell Latitude laptop for everything work-related. It's fast, has a wonderful LED-backlit matte screen, loads of disk space, a superb keyboard, excellent battery life, and still looks prettier than any other laptop out there (including the Apple ones). It runs every piece of complex software I need for my day job, including acting as my office telephone.
But it won't be long before I'm forced to do something about the O/S. Amazingly it's still running the original installation of Vista, but pretty soon company policy will remove Vista from the list of supported operating systems on the corporate VPN. At that point I'll need to make the decision on either Windows 7 or Windows [whatever Windows 8 will be called]. Ah, I hear you say, why not just do the same as with the other machines and hit it with the FDISK/Win7 thing now?
Well, I'd love to, but there's a major problem here. To be allowed onto the company network with Windows 7, I have to enable BitLocker. Yes, it's a great idea, but the machine doesn't have a TPM module, so it seems I'd need to plug in a thumb drive every time I log on. As the policies applied by the domain force the password-enabled screensaver after 10 minutes, this will happen regularly throughout every day. If I leave the thumb drive plugged in I'm sure to break it and the socket at some point as I wander aimlessly around seeking guidance-creation inspiration. If I take it out every time, there's almost no doubt I'll spend the first hour of every day searching for it, or lose it altogether. Either way, I'm destined for regular cycles of FDISK and reinstall. Can I buy a plug-in TPM module, I wonder?
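For anyone else facing the same thing: as far as I can tell, the no-TPM route means ticking "Allow BitLocker without a compatible TPM" under the "Require additional authentication at startup" group policy setting, and then turning on encryption with a startup key held on the thumb drive; something like this (drive letters are just examples):

    REM Enable BitLocker on the system drive with a startup key stored
    REM on a USB drive (E: here); requires the group policy change first.
    manage-bde -on C: -StartupKey E:\ -RecoveryPassword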
Anyway, in preparation, I ran the Windows 7 Upgrade Advisor. It says all of my applications will work without problems! Great! However, it also listed all the devices and drivers that won't work in Windows 7. OK, the built-in camera never did work from new, but as I never use it that's not a problem. But when I ordered the machine I specified a built-in smart card reader and fingerprint reader. It even came with a proximity card reader. It's true I never managed to get the terrible clunky device setup software to recognize any of these devices (I assumed it was a Vista issue), and when I did find a driver for the smart card reader it just told me that my corporate smart card was "not a recognized format" so I've been using a separate plug-in card reader instead. And a separate plug-in fingerprint reader because the built-in one seems to be there only for decoration rather than for any functional reason.
So I suppose I shouldn't expect Windows 7 to work with any of these devices either. But I can't make up my mind which is the most annoying outcome of all this investigative effort. Is it that I'll end up junking an otherwise fully-usable machine that cost a lot of money (over 1500 pounds or 2000 dollars)? Or that I'll spend my remaining working days hunting for lost thumb drives and then reinstalling everything? Or, maybe most annoying of all, it reminded me that I paid good money for features that never worked?
If you'd bought a typical consumer device with all the bells and whistles and discovered that several of them didn't actually bell or whistle, you'd soon be back at the store with the box under your arm. How come we computer users accept that only part of the hugely expensive kit we buy will actually work? Perhaps, after all, there is a case for the ubiquitous Internet tablet or device that "just works"...