Random Disconnected Diatribes of a p&p Documentation Engineer
If an article I read in the paper this week is correct, you need to immediately uninstall Arial, Verdana, Calibri, and Tahoma fonts from your computer, and instead use Comic Sans, Boldini, Old English, Magneto, Rage Italic, or one of those semi-indecipherable handwriting-style script fonts for all of your documents. According to experts, it will also be advantageous to turn off the spelling checker, and endeavour to include plenty of unfamiliar words and a sprinkling of tortuous grammatical constructs.
It seems researchers funded by Princeton University have discovered that people are 14% less likely to remember things they read when they are written in a clean and easy-to-read font and use a simple grammatical style. By making material "harder to read and understand" they say you can "improve long term learning and retention." In particular, they suggest, reading anything on screen - especially on a device such as a Kindle or Sony Reader that provides a relaxing and easy-to-read display - makes that content instantly forgettable. In contrast, reading hand-written text, text in a non-standard font, and text that is difficult to parse and comprehend provides a more challenging experience that forces the brain to remember the content.
There's a lot more in the article about frontal lobes and dorsal fins (or something like that) to explain the mechanics of the process. As they say in the trade, "here comes the science bit". Unfortunately it was printed in a nice clear Times Roman font using unchallenging sentence structure and grammar, so I've forgotten most of what it said. Obviously the writer didn't practice what they preached.
But this is an interesting finding. I can't argue with the bit about stuff you read on screen being instantly forgettable. After all, I write a blog that definitely proves it - nobody I speak to can remember what I wrote about last week (though there's probably plenty of other explanations for that). However, there have been several major studies that show readers skip around and don't concentrate when reading text on the Web, often jumping from one page to another without taking in the majority of the content. It's something to do with the format of the page, the instant availability, and the fundamental nature of hyperlinked text that encourages exploration; whereas printed text on paper is a controlled, straight line, consecutive reading process.
From my own experience with the user manual for our new all-singing, all-dancing mobile phones, I can only concur. I was getting nowhere trying to figure out how to configure all of the huge range of options and settings for mail, messaging, synchronization, contacts, and more despite having the laptop next to me with the online user manual open. Instead, I ended up printing out all 200 pages in booklet form and binding them with old bits of string into something that is nothing like a proper manual - but is ten times more useful.
And I always find that proof-reading my own documents on screen is never as successful as when I print them out and sit down in a comfy chair, red pen in hand, to read them. Here at p&p we are actively increasing the amount of guidance content that we publish as real books so that developers and software architects can do the same (red pen optional). The additional processes required for hard-copy printed materials (such as graphic artists, indexers, additional proof readers, layout, and the nagging realization that you only have one chance to get it right) also seem to hone the material to an even finer degree.
So what about the findings of those University boffins? Is all this effort to get the content polished to perfection and printed in beautifully laid out text actually reducing its usefulness or memorability? We go to great lengths to make our content easy to assimilate, using language and phrasing defined in our own style manuals and passing it through multiple rigorous editing processes. Would it be better if we just tossed it together, didn't bother with any editing, and then photo-copied it using a machine that's nearly run out of toner? It should, in theory, produce a more useful product that you'd remember reading - though perhaps not for the desired reasons.
Taking an excerpt from some recent guidance I've created, let's see if it works. I wrote "You can apply styles directly within the HTML of each page, either in a style section in the head section of the page or by applying the class attribute to individual elements within the page content." However, before I went back and read through and edited it, it could well have said something like "The style section in the head section, or by decorating individual elements with the class attribute in the page, can be used to apply styles within the HTML or head of each page within the page content."
Is the second version likely to be more memorable? I know that my editor would suspect I'd acquired a drug habit or finally gone (even more) senile if I submitted the second version. She'd soon have it polished up and looking more like the first one. And, no doubt, apply one of the standard "specially chosen to be easily readable" fonts and styles to it - making readers less likely to recall the factual content it contains five minutes after they've read it.
But perhaps a typical example of the way that a convoluted grammar and structure makes content more memorable is with the well-known phrase taken from a mythical motor insurance claim form: "I was driving past the hole that the men were digging at over fifty miles per hour." So that could be the answer. Sentences that look right, but then cause one of those "Hmmm .... did I actually read that right?" moments.
At the end of the article, the writer mentioned that he asked Amazon for their thoughts on the research in terms of its impact on Kindle usage, but they were "unable to comment". Perhaps he sent the request in a nice big Arial font, and the press guy at Amazon immediately forgot he'd read it...
A few weeks ago I was trying to justify why software architects and developers, instead of politicians, should govern the world. Coincidentally, I watched one of those programs about huge construction projects on TV this week, and it brought home even more the astonishing way that everything these days depends on computers and software. Even huge mechanical machines that seem to defy the realms of possibility.
In the program, P&H Mining Equipment was building a gigantic mechanical excavator. Much of the program focused on the huge tracks, the 100 ton main chassis, machining the massive gears for the transmission system, and erecting the jib that was as tall as a 20-storey building. Every component was incredibly solid and heavy, and it took almost superhuman effort to assemble (though you can't help wondering how much of the drama was down to judicious editing of the video).
However, according to the program the new excavator contains brand new computing technology that allows it to dig faster and avoid stalling in heavy conditions, and frees the operator from a raft of tasks and responsibilities (though they did manage to avoid using the phrase "as easy to drive as a family car"). Every part of the process of driving and digging is controlled by computers through miles of cable and hundreds of junction boxes and power distribution systems. It even automatically finds the truck it's supposed to be tipping into. I don't know about you, but I wouldn't want to be sitting in the truck when it automatically detects where I am and tips several hundred tons of rock. You never know if the operator has chosen that moment to switch it to auto-pilot (or auto-dig) and wander off for lunch.
And then later on, when it came time to test it and nothing seemed to work, there was no sign of a gang of oil-spattered brute force workmen - just a guy in shirt sleeves with a laptop computer, and a couple of electricians. Getting it to finally work just required a geek to edit a single line of code in the central operating system. I guess it's a lot more satisfying when a successful debugging session results in some mammoth lump of machinery suddenly rumbling into action, compared to just getting a "Pass" message in your test runner.
Yet Ransomes & Rapier here in England built an equally huge excavator named Sundew way back in 1957, which you have to assume contained nothing even remotely resembling a computer. And it worked until 1984, including walking across country from one open-cast mine to another 18 miles away (a journey that took nearly three months). I wonder if, in 40 years' time, there will still be somebody around who knows how to debug programs in whatever language they used for the new excavator operating system. Or if, in the meantime, someone will figure out a way to hack into it through Wi-Fi and make it run amok digging huge holes all over the place.
And do they have to connect it to the 'Net once a month to download the latest patches...?
Perhaps I can blame the Christmas spirit (both ethereal and in liquid form) for the fact that I seem to have unwarily drifted out of the warm and fuzzy confines of p&p, and into the stark and unfamiliar world of our EPX parent. A bit like a rabbit caught in the headlights, I guess. I keep looking round to see what's different from the more familiar world I'm used to, but - rather disconcertingly - it all seems to be much the same. I'm even anticipating somebody telling me what "EPX" actually stands for...
It turns out that the reason I've virtually and temporarily wandered north from Building 5 to some other (as yet unidentified) location on campus is to work on introductory guidance for new-to-the-web developers, based around the new version of Web Matrix. Of course, as I actually work from home here in Ye Olde England I'm not, physically, going anywhere - so the lack of building number precision is not a problem. And some would even say that, at my stage of life, I'm probably not "going anywhere" anyway.
Still, if you are a fan of classic rock music you'll no doubt be familiar with the Australian band AC/DC. I reckon their 1990 comeback album "The Razor's Edge" is their best work, and their greatest public appearance must be the Toronto Rocks concert in 2003. So what about Web Matrix? Coincidentally, in the same year as the Toronto Rocks concert, I co-wrote a book called "Beginning Dynamic Websites with ASP.NET Web Matrix". Unfortunately, I can't find any corresponding event from 1990 that I can stretch into an allegory with Web Matrix. But I can confirm that the greatest public appearance for a brand new web development technology, currently code-named "Razor", is its on-stage performance within the latest version of Web Matrix.
OK, so perhaps I need to temper the enthusiasm a little, especially the use of contentious statements such as "brand new". In fact, at first, I rather tended to agree with a comment from someone on a blog discussing Razor when they said "You must be joking - are we going back to 1997?" So what is Razor? Quoting from the Web Matrix site, it is "the new inline syntax for ASP.NET Web pages". Just when I thought that we'd seen the back of inline code in web pages, and that everyone and their dog would be using MVC, MVP, and other similar patterns - where even code-behind is considered a heinous crime against technology (if not humanity).
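For anyone who hasn't yet seen what "inline syntax" means in this context, here's a minimal sketch of a Razor page; the variable names and content are invented for illustration, but the @ syntax is the real thing:

```cshtml
@{
    // A Razor code block: ordinary C# that runs when the page is rendered
    var visitorName = "World";
    var pageTitle = "A Minimal Razor Page";
}
<!DOCTYPE html>
<html>
  <head>
    <title>@pageTitle</title>
  </head>
  <body>
    <!-- The @ character switches from markup back into code inline -->
    <h1>Hello, @visitorName!</h1>
    <p>The server time is @DateTime.Now - and not a <% %> delimiter in sight.</p>
  </body>
</html>
```

It's undeniably inline code in a web page, which is exactly what prompted the "back to 1997" comments - but the terse @ transitions make it a lot less cluttered than the old delimiter-based syntax.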
And there's more. What about the new Text Template Transformation Toolkit (T4) stuff in Visual Studio 2010? I began my web development career with that wonderful templating technology named IDC (the Internet Database Connector), which was the forerunner to Denali/Active Server Pages 1.0. In those days, we thought that templates and inline code were the height of sophistication, and even went to conferences in T-shirts bearing the slogan <%=life%> to prove that we knew how to get a life.
Of course, the world (and inline coding with VBScript and JScript) changed as Active Server Pages gradually morphed into COM+2 (I still have the set of engraved poker chips), then into ASP+ (I still have the baseball cap), and then - at PDC in Orlando in 2000 - into ASP.NET. And web developers gradually morphed from script kiddies into real developers. We even started to learn proper languages such as Visual Basic and C#.
So it was with some initial trepidation that I started to delve into the wonders of Razor and Web Matrix again. Yes, the version I've been using so far is the beta, and some of the limitations of the IDE and quirks of the syntax are annoying. Yet, within a couple of weeks the team (or, to be more accurate, Christof and I) have completed a draft of the first stage of the guidance, and the beginnings of what will (when complete) be a fairly comprehensive sample application. It's based on one of the Web Matrix templates, and uses jQuery within the UI to provide some snazzy sliding/expanding sections.
What's clear is that, even though I still tend to gravitate towards the pattern-based approach drilled into me after some years at p&p, Web Matrix and Razor really do make web development easy and quick for beginners. Yes, we were doing sliding and expanding UI stuff with reams of JScript and VBScript in 1997 in the book "Instant Dynamic HTML" for IE4 and Netscape Communicator (still available on Amazon!), and inline code with proper programming languages in 2002 ("Professional ASP.NET 1.0"). And then, of course, with tools such as the original Web Matrix and - later - Visual Web Developer (VWD); all the time moving further and further away from inline code and towards using a wider range of increasingly complex server-side controls.
Maybe it's just a reflection of the fact that, as the complexity of our programming technologies increases, the bar to entry and the steep learning curve may tend to inhibit newcomers to the field. Perhaps this is why PHP (also supported in Web Matrix) continues to be a significant web development technology. In the 90's we worried that inline code (especially script code) was inefficient, and searched for new ways to improve performance. Yet even the fully compiled approach of Visual Studio Web Classes (remember them?) failed to solve the issues. I can remember a web site built using these that required the web server to load a 750KB assembly to generate every page. You were lucky to get a response within 2 minutes some days as the server struggled to shuffle memory around and generate some output for IIS to deliver.
But I suppose that the power and available resources in modern servers means the clever technologies that lie behind Razor can generate pages without even blinking. In fact, I've long been of the opinion that the computer should be able to do all of the really difficult stuff and let us write the minimum code to specify what we want, without having to define every single aspect of how it should do it. Maybe, in terms of web development, Web Matrix is another step down that long and winding road...
I suppose I could start with a bad pun by saying that you've had your hols, and now it's time for some HOLs instead! Of course, this assumes you understand that "hols" is English shorthand for "holidays" and that "HOLs" is IT guidance shorthand for "Hands-On-Labs" (I wonder if people in the US have a shorthand word for "vacations", or if - seeing they are typed rather than written by hand these days - that should be a shorttype word, or even just a shortword).
Anyway, enough rambling, where I'm going with all this is that p&p just released the Hands-On-Labs for Windows Phone 7. This set of practical exercises and associated guidance is part of the patterns & practices Windows Phone 7 Developer Guide project, which resulted in a book (also available online) and associated guidance aimed at helping developers to create business applications for Windows Phone 7.
As part of this guidance, the team released a sample application that runs in Windows Azure and is consumed by Windows Phone 7. The application is based on a fictional corporation named Tailspin that carries out surveys for other companies by distributing them to consumers and then collating the results. It's a fairly complex application, but demonstrates a great many features of both Azure and Windows Phone 7. The Azure part came from a previous associated project (patterns & practices Windows Azure Guidance) with some added support for networking with the phone. The bits that run on the phone are (obviously) all new, and demonstrate a whole raft of capabilities including using the camera and microphone to collect picture and voice answers to questions.
What's been interesting to discover while working on the HOLs is how the design and structure of the application really does make it much easier to adapt and maintain it over time. When some of the reviewers first saw the mobile client application for the Windows Phone 7 Developer Guide project, they commented that it seemed over-engineered and over-complicated. It uses patterns such as Dependency Injection and Service Locator; as well as a strict implementation of the Model-View-ViewModel (MVVM) pattern that Silverlight developers love so much.
So, yes, it is complex. There is no (or only a tiny bit of) code-behind, and instead all interaction is handled through the view models. It uses components from the Prism Library for Windows Phone (see patterns & practices: Prism) to implement some of this interaction, such as wiring up application bar buttons and displaying non-interruptive notifications. View models are instantiated through a view model locator class, allowing them to be bound to a view but created through the dependency injection container. Likewise, services are instantiated and their lifetime managed by the container, and references to these services are injected into the constructor parameters of the view models and other types that require them. Changes to the way that the application works simply require a change to the container registrations.
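The shape of that constructor-injection approach can be sketched in a few lines of C#. The interface and class names here are invented for illustration - they are not the actual Tailspin types - but the pattern is the one described above:

```csharp
// Hypothetical names for illustration only - not the real Tailspin code.
public interface ISurveyService
{
    void SubmitAnswer(string answer);
}

public class HttpSurveyService : ISurveyService
{
    public void SubmitAnswer(string answer)
    {
        // Post the answer to the remote service over the wire.
    }
}

public class SurveyViewModel
{
    private readonly ISurveyService surveyService;

    // The view model never constructs its own dependencies; the
    // container injects them through the constructor.
    public SurveyViewModel(ISurveyService surveyService)
    {
        this.surveyService = surveyService;
    }
}

// At application start-up, the concrete implementation is registered
// with the dependency injection container (one line, roughly):
//     container.RegisterType<ISurveyService, HttpSurveyService>();
// The view model locator then resolves SurveyViewModel from the
// container, and the view simply binds to the locator - so swapping
// HttpSurveyService for, say, an encrypting proxy is purely a
// registration change; no view or view model code is touched.
```

Which is exactly why the "change the wire format" and "add a question type" tasks later on turn out to be so painless: everything reaches its collaborators through an interface resolved from the container.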
In addition, there are custom types that wrap and abstract basic tasks such as storing data and application settings, managing activation and deactivation, and synchronizing data with the remote services. There is a lot in there to understand, and one of the tasks for the Hands-On-Labs was to walk developers through the important parts of the process, and even try to justify the complexity.
But where it really became clear that the architecture and design of the application was "right" was when we came to plan the practical step-by-step tasks for the Hands-On-Labs. We wanted to do something more than just "Hello Mobile World", and planned to use a modified version of the example Tailspin Surveys application as the basis for the user's tasks. So I asked the wizard dev guy (hi Jose) who was tasked with making it all work if we could do some really complicated things to extend and modify the application (bearing in mind that each task should take the user less than 30 minutes to complete):
Me: Can we add a new UI page to the application and integrate it with the DI and service locator mechanism? Jose: Easy. We can do the whole MVVM thing including integration with the DI mechanism and application bar buttons in a few steps.
Me: OK, so how about adding a completely new question type to the application, which uses a new and different UI control? And getting the answer posted to the service? Jose: No problem. The design of the application allows you to extend it with new question types, new question views, and new question view models. They'll all seamlessly integrate with the existing processes and services. Twenty minutes max.
Me: Wow. So how about changing the wire format of the submitted data, and perhaps encrypting it as well? Jose: Yep, do that simply by adding a new implementation of the proxy. Everything else will just work with it.
As you can see, good design pays dividends when you get some awkward customer like me poking about with the application and wanting it to do something different. It's a good thing real customers aren't like that...
It's customary to imagine that the most unpopular establishments in modern society are solicitors and estate agents (in the US, I guess the equivalent is lawyers and realtors; though I can't testify to their level of popularity from here). However, I reckon that the growing use and capabilities of mobile phones has paved the way for a whole new group of industrial charlatans. Aided, no doubt, by the possibilities offered by computerization and automation - something for which we, as developers, are partly responsible.
If you've tried to buy one of those fancy new smartphones recently, you've no doubt encountered some of the ingenious ways that the mobile phone service companies (the people who connect your amazing new device to the outside world) strive to confuse, obscure, bewilder, complicate, and generally befuddle us. It should be really easy: choose a phone, and then decide how much you want to pay each month for the specific matrix of services and allowances that best suit your requirements and usage patterns.
But, of course, it's not. You can choose to get the phone for free and pay a higher rate per month, or pay a bit towards the phone and pay a slightly lower monthly fee, or buy the phone outright and choose a package at a much lower price. In theory, if you do the "free phone" thing with a fixed-term contract, you'd think that the phone would be their responsibility throughout the life of the contract, just like if you rent a car or some other item. But that's not the case - if the phone goes wrong, it's your problem unless you pay extra for insurance. Yet you still have to pay the rest of the rental for the contract term.
It wouldn't be so bad, but all of the comparison web sites show that the free phone deals actually cost more over the contract term than buying the phone yourself. And, of course, the terms and conditions clearly state that the monthly fee and inclusive allowances for the contract can change during the term. And you can't replace or upgrade your phone until the end of the contract period either. That is, of course, if they can actually provide the phone and contract that you agreed to buy. Having spent three weeks wandering between delivery depots during the bad weather, my long-awaited Windows Phone finally arrived. Or rather, a package that I didn't order (and is, as you'd expect, a lot more expensive) arrived. Can they just "change it on the computer" to the one I ordered? No, it has to go back to them. Oh well, I suppose I can't actually go anywhere in this weather, so I don't really need a phone...
However, my wife (an intrepid traveller) just decided to equip herself with a new phone. She can't find anything like the old Motorola V8 that she loved so much, and instead went for a rather nice HTC Desire that does all the modern bells and whistles stuff. And rather than mess about trying to figure out which service package she needed, and switch her number from our current provider, she just bought the phone outright to use with the SIM-only package we already have. Quick, easy, and pain-free, you'd think? No chance. First off, when you buy a "phone-only" package you have to, as the sales guy so eloquently explained, "have it on pay and go". That's what we want - pay for the phone and go home. Oh no, what it means is that you have to have a "pay and go" SIM card with it, and you have to buy at least ten pounds top-up to get the SIM card for free. Despite the fact that we don't want or need a SIM card (and didn't actually get one), we still had to pay the ten pounds charge for a "pay and go top-up" that we can't actually use.
The service package we have has a "Web Daily" inclusive feature, which - according to the web site blurb - "allows occasional access to data services for email and web browsing without paying for a fixed data allowance". Yes it does, but at 3 pounds ($ 4.50) per MB. OK, so the maximum charge per day is capped at one pound, but as you'll obviously access it every day when the phone automatically synchronizes your email, that's starting to look like an expensive deal. No problem: for five pounds per month you can add a data service to your package that has a 500 MB allowance. And you can add it over the web without having to listen to Greensleeves (or the latest hit from some unknown boy band) on the phone for half an hour.
I can understand that, when you add the data package, it removes the free "Web Daily" facility. However, it also removes the "Free Calls to Landlines" facility (where calls to non-mobile numbers are not subtracted from your "free minutes" allowance). Nobody I spoke to at the supplier can explain why paying more for the service means that you get less from it. Perhaps they imagine that everyone will send emails through the data package to people who they previously used to call, so they won't need to make non-mobile calls any more. Or maybe it's just another surreptitious way for them to make a bit more profit.
However, having got it all working, I'd have to admit that I'm amazed at what the phone can do. It took only a marginal effort to get it to work with our Wi-Fi, and talk to the hosted Exchange Server we use for email. It even synchronizes contacts with Outlook, and lets you upload and download music and photos from the phone as though it was a disk drive. As it contains an SD card, I suppose I shouldn't be surprised, but it's a refreshing change after fighting with the awful synchronization application that Motorola provided for the V3 (and which doesn't run on Windows Vista or Windows 7).
But, best of all, it actually sets the date and time automatically. My old Motorola V3 has never once managed that, so every time I turn it on it shows the date and time I last used it (which is sometimes a month or more ago). I wonder if I can justify buying a new phone just so I don't have to set the date and time manually...?