Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

  • Writing ... or Just Practicing?

    Take Two Aspirins And Call Me In The Morning

    I seem to have spent a large proportion of my time this month worrying about health. OK, so a week of that was spent in the US where, every time I turned on the TV, it scared me to death to see all the adverts for drugs to cure the incredible range of illnesses I suppose I should be suffering from. In fact, at one stage, I started making a list of all the amazing drugs I'm supposed to "ask my doctor about", but I figured if I was that ill I'd probably never have time to take them all. They even passed an "assisted suicide" law while I was there, and I can see why they might need it if everyone is so ill all of the time.

    And, of course, it rained and hailed most of the week as well. No surprise there. They even said I might be lucky enough to see some snow. Maybe it's all the drugs they've been asking their doctor about that makes snow seem like a fortuitous event. Still, I did get to see the US presidential election while I was there. Or, rather, I got to see the last week of the two year process. It seems like they got 80% turnout. Obviously, unlike Britain where we're lucky to get 40% turnout, they must think that voting will make a difference. Here in the People's Republic of Europe, we're all well aware that, if voting actually achieved anything, they'd make it illegal. I wonder if the result still stands if nobody actually turns up to vote?

    Mind you, it does seem surreal in so many ways. You have to watch four different news channels if you want to get a balanced opinion. And one of the morning newsreaders seemed to have a quite noticeable lisp so that I kept hearing about the progreth of the Republicanth and the Democratth. A bit like reading a Terry Pratchett novel. Or maybe it was just the rubbish TV in the hotel. And they didn't have enough voting machines so in some places people were queuing for four hours in the rain to cast their votes. Perhaps it's because there are around 30 questions on the ballot paper where you get to choose the president, some assorted senators and governors, a selection of judges, and decide on a couple of dozen laws you'd like to see passed. Obviously a wonderful example of democracy at work.

    Anyway, returning to the original topic of this week's vague expedition into the blogosphere, my concerns over health weren't actually connected to my own metabolic shortcomings. It was all to do with the Designed for Operations project that I've been wandering in and out of for some number of years. The organizers of the 2008 patterns & practices Summit had decided that I was to do a session about health modeling and building manageable applications. In 45 minutes I had to explain to the attendees what health models are, why they are important, and how you use them. Oh, and tell them about Windows Eventing 6.0 and configurable instrumentation helpers while you're at it. And put some jokes in because it's the last session of the day. And make sure you finish early 'cos you'll get a better appraisal. You can see that life is a breeze here at p&p...

    So what about health modeling? Do you do it? I've done this kind of session three or four times so far and never managed to get a single person to admit that they do. I'm not convinced that my wild ramblings, furious arm waving, and shower of psychedelically colored PowerPoint graphics (and yes, Dave, they do have pink arrows) ever achieve anything other than confirm to the audience that some strange English guy with a funny accent is about to start juggling, and then probably fall off the stage. Mind you, they were all still there at the end, and only one person fell asleep. I suppose as there was no other session to go to, they had no choice.

    What's interesting is trying to persuade people that it's not "all about exception handling". I have one slide that says "I don't care about divide by zero errors; I just want to know about the state changes of each entity". Perhaps it's no wonder that the developers in the audience thought they had been confronted by some insane evangelist of a long-lost technical religion. The previous session presented by some very clever people from p&p talked about looking for errors in code as being "wastage", and there I was on next telling people all about how they should be collecting, monitoring, and displaying errors.

    But surely making applications more manageable, reporting health information, and publishing knowledge that helps administrators to verify, fix, and validate operations post deployment is the key to minimizing TCO? An application that tells you when it's likely to fail, tells you what went wrong when it does fail, and provides information to help you fix it, has got to be cheaper and easier to maintain. One thing that came out in the questions afterwards was that, in large corporations, many developers never see the architect, and have no idea what network administrators and operators actually do other than sit round playing cards all day. Unless they all talk to each other, we'll probably never see much progress.

    At least they did seem to warm to the topic a little when I showed them the slide with a T-shirt that carried that well-worn slogan "I don't care if it works on your machine; we're not shipping your machine!" After I rambled on a bit about deployment issues and manageable instrumentation, and how you can allow apps to work in different trust levels and how you can expose extra debug information from the instrumentation, they seemed to perk up a bit. I suppose if I achieved nothing other than making them consider using configurable instrumentation helpers, it was all worthwhile.
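    In case "reporting state changes rather than exceptions" still sounds abstract, here's a back-of-the-envelope sketch of what I mean. It's in Python purely for brevity - it isn't the Enterprise Library instrumentation helpers or the Windows Eventing 6.0 API, and the names (InstrumentationHelper, OrderQueue) are just ones I've invented for illustration. The point is simply that each entity publishes its health state transitions through a helper whose verbosity an administrator can configure at deployment time, instead of waiting for the first unhandled exception to hit a log file:

        from enum import Enum
        from datetime import datetime, timezone

        class HealthState(Enum):
            HEALTHY = "Healthy"
            DEGRADED = "Degraded"
            FAILED = "Failed"

        class InstrumentationHelper:
            """A configurable sink for health events: the verbosity and output
            channel are chosen by the administrator at deployment time, not
            hard-coded by the developer."""
            def __init__(self, verbose=False, sink=print):
                self.verbose = verbose
                self.sink = sink

            def report_state_change(self, entity, old, new, detail=None):
                stamp = datetime.now(timezone.utc).isoformat()
                self.sink(f"{stamp} {entity}: {old.value} -> {new.value}")
                if self.verbose and detail:
                    self.sink(f"  detail: {detail}")

        class OrderQueue:
            """An application entity that reports its health state changes,
            rather than individual divide-by-zero style errors."""
            def __init__(self, helper, max_backlog=100):
                self.helper = helper
                self.max_backlog = max_backlog
                self.backlog = 0
                self.state = HealthState.HEALTHY

            def _set_state(self, new, detail=None):
                if new is not self.state:
                    self.helper.report_state_change("OrderQueue", self.state, new, detail)
                    self.state = new

            def enqueue(self, order):
                self.backlog += 1
                if self.backlog > self.max_backlog:
                    self._set_state(HealthState.FAILED, "backlog exceeded the configured limit")
                elif self.backlog > self.max_backlog // 2:
                    self._set_state(HealthState.DEGRADED, f"backlog at {self.backlog}")

        # Whoever is watching the events sees "Healthy -> Degraded" long before
        # anything actually fails, which is the whole point of a health model.
        helper = InstrumentationHelper(verbose=True)
        queue = OrderQueue(helper, max_backlog=10)
        for order in range(12):
            queue.enqueue(order)

    The only "clever" part is that the state machine, not the individual error, is what gets surfaced - which is exactly the sort of information an operations team can act on without reading the source code.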

    I even managed to squeeze in a plug for the Unity dependency injection stuff, thus gaining a few brownie points from the Enterprise Library team. In fact, they were so pleased they gave me a limited edition T-shirt. So my 10,000 mile round trip to Redmond wasn't entirely wasted after all. And, even better, if all goes to plan I'll be sitting on a beach in Madeira drinking cold beer while you've just wasted a whole coffee break reading this...

  • Writing ... or Just Practicing?

    Top 10 Tips for New or Nervous Computer Users

    It may seem like this week's disjointed ramblings follow on from last week's topic, in some lexicographically eerie and unexpected way. I can assure you that this wasn't intentional - the capability to avoid straying off topic during the course of a single short article has so far always eluded me, and I see no reason for that situation to have changed. After all, there's no sign yet that I'm actually getting the hang of this blogging thing. Still, at least I'm not frightened of computers, as are some people in my age group...

    No, what actually prompted this week's ramble was something I came across recently while researching obscure software design patterns. Part way through one document, I found this enlightening and interesting line: "Use a common data format to perform translations between two desperate data formats." Now, I'd have to admit that I don't do that much actual programming these days, but I do tend to deal with quite a lot of data. And, thankfully, none has yet come to my attention as being "desperate". I'd go with "wrong", "strange", and even "useless" as descriptions of the data I encounter, but none of it has shown any signs of criminality, or suicidal tendencies.

    However, maybe it's just that I don't notice. Perhaps there are some Int32s in there that are really, really keen to escape into 64 bits. Or lumps of serialized binary that just can't bear not being an XML document. Could it be that they actually mean "desperate" as in "criminal"? Is there a gangster-style DataSet roaming through the layers of my applications stealing rows and leaving a trail of empty tables? Or perhaps the author of the page I was looking at actually meant "disparate", not "desperate".

    But, coming back to people being frightened of computers, just imagine how much more frightened they'd be if they thought there really was desperate data flying round inside. So, to put such people's minds at rest, I've been touring the Internet and have assembled a list of the Top 10 things that you may not know about computers:

    1. Experts will tell you that you cannot "break the computer" just by pressing the wrong key. However, bear in mind that there is one key (the one with the funny "~" squiggle on it that nobody knows what to use for) that connects directly to the mains power supply and injects 5000 volts into the main computer chips when you press it. You should only use this key when a program stops responding.
    2. Computer keyboards work the same way as remote controls (as discussed last week). The harder you press the keys, the faster your computer will work. When it doesn't seem to be responding, and the "~" key makes no difference, the chances are that you haven't pressed the keys enough times, or you are not pressing hard enough. Mouse buttons also work this way.
    3. Your computer will occasionally download and install "updates" from the Internet. This is required because the programs you use most often wear out more quickly than those you don't use much, and so they need to be replaced. You can tell when a program is showing excessive signs of wear, and is ready for replacement, by looking at the window. Ones that are nearly worn out have the corners rounded off.
    4. Most computers have a "USB" socket. "USB" stands for "Unexpected System Behavior". When you plug something into this socket, the computer will display a series of helpful messages designed to lead you to believe that it knows what you plugged in. After a few minutes, it will tell you that it has found your elephant, and loaded the correct drivers. When you remove the plug, the computer will warn you that you should have turned your elephant off first, and that it may now have lost its memory.
    5. In computer terminology, the word "minute" (as used in the previous item) bears no relation to the 60-second periods measured by your watch or kitchen clock. "Minute" is an ethereal measure based on scientific principles related to internal clock speed, bus width, memory configuration, and the proliferation of highly-efficient CPU registers. To the layman, what this means is that a message such as "This may take a few minutes..." is actually a suggestion that you leave the computer turned on and come back in the morning.
    6. If you use a laptop computer, you will have noticed how hot the desk (or your lap) becomes after a while. This is because file names, especially those containing spaces, are particularly fragile and prone to melting at the high temperatures found inside a computer. After a while, they tend to seep out of the bottom of the computer and heat up the desk (or lap) below. This also explains why you cannot find the files that you saved yesterday.
    7. After a while, computers get full up of data. You can usually tell when they are full by looking to see if the sides have started to bulge out. Or, if you use a laptop, you may find that the lid does not close as easily as it used to. Be aware that, if you have purchased a new digital camera that has more than "6 megapixels", the pictures are very large and must be stored in the computer diagonally. This means that they take up a lot more space.
    8. After a few weeks, your computer will know everything about you. It does this by attracting dust onto the screen and then running a special program when it "boots up" that converts this dust into greasy fingerprints. It can then identify you from your fingerprints. You can tell this is true because, when you visit an online book store, it knows that you need to buy an electric toothbrush, a garden spade, and a DVD of the latest movie with Hugh Grant and Keira Knightley in it.
    9. Some years ago, the people who own the Internet made it illegal to have more than nine things in a list. This was because they discovered that over 30% of Internet traffic was caused by people reading "Top 10" lists of things. Since the ban, there has been a 37.48% reduction in traffic on the Internet. This is why, these days, it is so fast.

    You may like to print out this list and keep it somewhere handy ready for when your son or daughter gives their old computer to your mother. Or for when you become senile. Meanwhile, if you have any useful tips for new or nervous computer users, you can send them to us.

  • Writing ... or Just Practicing?

    Rusty 0s and Broken 1s

    So I was on-site at a dev shop the other day watching three guys fighting with a printer. It seems they needed some particular project report to send to a customer, and the printer was refusing to play ball. I watched them try various combinations of "press-and-hold" buttons on the printer, check the network cable, try printing from another program, try printing from another computer, ping the printer, and reinstall the printer drivers a couple of times.

    Just then, the receptionist appeared and asked when the report would be ready. When told about the printer problem, she flipped open the side cover, pulled out the jammed sheet of paper, slammed the cover shut, and within a few seconds the report appeared. Meanwhile, she stood there with that "Are all men useless?" look, which - if you are a man - you will no doubt recognize.

    So why is it that men seem to be able to make even the simplest technical jobs seem complicated? Or is it just an IT industry thing? A friend of mine, who definitely falls into the "user-not-programmer" group, sometimes phones and asks me to go over and "look at the computer, and bring your toolbox". Now, he knows that the contents of my toolbox include a large hammer, a selection of bent screwdrivers, and several rusty spanners that never fit anything. But I'm sure he doesn't actually expect me to put a drip pan underneath the machine, clean the spark plugs, and adjust the tappets - yet it seems like he naturally assumes it's some mechanical problem. Even though he knows that a computer is just a box full of odd-shaped bits with wires sticking out.

    Likewise when the freezer in our kitchen stopped working last time, my wife made a pretty fair assumption that it wasn't a software glitch - it was down to me using a carving knife to scrape the ice off last time I defrosted it. And when our son phones to report yet another "in need of repair" scenario, he makes no distinction between a broken wardrobe door handle and the mobile phone he dropped into the bath. OK, so I have a hammer that fits the screws on his wardrobe door, but it's pretty unlikely that any of my spanners will fit his mobile phone.

    Maybe this isn't a comprehensive survey of behavior, but it does suggest that perhaps it's a "computer programmer" thing. Could it be that us geeks are so involved in the nebulous vagaries of software that we automatically assume the problem is complicated? For example, we use Media Center for our TV, and occasionally it throws a hissy fit so my wife can't watch Coronation Street. I know that the answer is BRST (big red switch time), and it generally sorts itself out after a reboot. Yet I still can't convince my wife that pressing the remote control buttons harder (a mechanical solution) will have any effect other than breaking the remote control.

    It's rather like that old story (stop me if you've heard it) about the physicist, chemist, and computer programmer touring Switzerland in a car. Halfway down a winding mountain road, the brakes fail and the car races down the road bouncing off rocks and crash barriers until it finally comes to rest at the bottom of the hill. Shaken but not hurt, the three guys get out and survey the situation. The physicist says "I reckon there's a problem with the braking system. We should measure the available braking force and calculate the resulting pressure on the calipers." The chemist disagrees, saying "No, I'm sure it's a problem with the brake pads. We should analyze the asbestos content, and compare it to industry recommendations." "I've got a better idea", says the computer programmer, "let's push it back to the top of the hill and see if it happens again."

    Isn't it a bit worrying that we seem to automatically assume any problem with something more technical than a light bulb is most likely to be a software issue? Does it show how little faith we, who know about this stuff, really have in our art? I mean, when you think about it, mechanical stuff is surely the most likely culprit in most situations. Metal bits go rusty, wear, bend out of shape, and seize up if you forget to oil them. Plastic bits invariably snap, or melt when they get hot. Yet software is just 1s and 0s. I've yet to find any rusty 0s lying in the bottom of a broken computer, or broken 1s jamming the cooling fan.

    So here's a suggestion. Next time Microsoft Word gets confused, or Windows Explorer can't find your USB drive, take the lid off your computer and apply some mechanical diagnostics instead. I've got a large hammer you can borrow.

  • Writing ... or Just Practicing?

    Tragile Documentation

    I just discovered last week that I'm supposed to be able to "move quickly and lightly", be "as sleek and agile as a gymnast", and be "fleet of foot". Either that or I'm supposed to be an X-ray and Gamma ray astronomical satellite belonging to the Italian Space Agency. Not much hope of any of these happening, I guess. Probably I shouldn't have decided to search the Web and see what "agile" actually means (and, in case you are wondering, the Italian satellite is called Astrorivelatore Gamma ad Immagini LEggero - see Carlo Gavazzi Space).

    All this comes about because one of the projects I'm on at the moment is effectively exploring the techniques for agile document creation. Agile is well-known in the software development arena, and it seems - from what I've seen here at p&p - to work well. OK, so it has its downsides from the documentation engineer's perspective (as I know only too well), but it does appear to produce great results very quickly, with fewer "issues" than the waterfall approach.

    So, can documentation be agile as well? A lot of the supposedly related stuff on the Web seems to be concerned mainly with creating dynamic content for Web sites, or creating content that you can display on different devices (such as mobile clients). However, there are a couple of people talking seriously about the issues. You'll find great articles from Scott Ambler (see Agile/Lean Documentation and Can Documentation Be Agile?) and Ron Jeffries (see Where's the Spec, the Big Picture, the Design?).

    Much of the focus appears to be on documentation in terms of the requirements of development teams (project documentation, such as requirements and design decisions) and operations (dependencies, system interactions, and troubleshooting). However, they do talk a bit about "customer" or "user" documentation. What's worrying is that all of the articles seem to recommend that writing user documentation should wait until the product stabilizes, be delayed as long as possible, and be written as close as possible to the product's release. Working on end-user documentation early in the project is, they suggest, just a waste of resources.

    Yet, if you don't start early and "run with" an agile dev team, how can you hope to have comprehensive user documentation available for the release? In theory, the team will capture all of the information required in models, sketches, and user stories that can be polished up to provide the user documentation. But is that enough? Should the whole design and structure of the documentation be based on developer's conversations and implementation specifications? Telling users that it "works like this" and "can do this" does not always produce useful guidance.

    For example, how many times have you opened up the help file for some complex server product and seen sections titled "Configuration", "Administration", and "Reference"? Maybe you want to do something simple, such as add a new account. You find the section in the administration topic called "Adding a new account" and it explains which buttons to click, which textboxes you need to fill in, and how to select an option for the "operations category". But it doesn't tell you why you need to do this stuff. Or what will happen if you do (or don't) select an operations category. What the heck is an "operations category" anyway? Maybe it's a fundamental new feature that got added to the product right at the end of the dev cycle, after the docs for creating a new account were completed. And so there wasn't time to do the fundamental reorganization of the content required to match the new conceptual approach you now need to take when creating an account.

    Agile development, they say, means that "working software is more important than comprehensive documentation". But all the articles and references seem to end up talking generally about documentation when what they really mean is "project documentation" - stuff for the team and project managers to use to monitor and manage the project. You continually get the impression that customer-focused documentation is neither a requirement nor important. Yet, without it, isn't there a chance that nobody will actually use the software? Or that those who do will never get the maximum benefit from it?

    Getting back to the question of whether you actually can do agile documentation for a software project, how might agile work? For a developer, a task might be something like rewriting an interception algorithm to improve performance, or changing part of the UI to make it easier to use. For a writer, a task is usually to produce a document that describes some feature. If that feature is sufficiently compartmentalized, such as describing how to use some part of the UI or explaining how to write some plug-in component, then it is an ideal candidate for agile. You can focus on just that task, and then move on to the next one.

    You can pair with another writer, or - even better - pair with the developer to help you understand the feature, and then use your "professional writer's abilities" (hopefully you have some) to get it down in a form that makes sense to users. And, because you don't have the same internal insight into the feature as the developer does, you can ask "user-level" questions and document the answers. Where it gets hard is when you are trying to grasp the "whole concept" view, especially from dev teams that know the code base inside out, instinctively understand the overall concepts of the product, and can't figure why, for example, anybody would find it hard to understand what an "operations category" is.

    I guess lots of people have carried out scientific investigations into all this, but my simple and humble take is:

    • Prioritization. If everything is gradual iterations towards the end goal, but the end goal is not clearly defined at the start, how do you plan the documentation requirements and prioritize the stages so you work on the right stuff at the right time? How do you know if what seems important today will no longer be important in a week's time, and might even disappear altogether in a month's time? To be effective when creating documentation, you need to be able to base it on an overall plan that has definable stages.
    • Concentration. Documentation development requires a similar approach to code development, in terms of planning the architecture and linking the parts together. It also requires concentration, and is not a task that you can easily accomplish when paired, when time-limited, and when constantly shifting focus. It's fine for collating facts and information, but organizing and writing is still, I reckon, a "one-person" and "in a quiet room" task.
    • Flexibility. Trying to flit from one task to another is tough enough for developers, but it's often much more difficult for the documentation engineer where changes to the product are occurring all the time. There are no tools that allow you to automatically find and update references and content elsewhere in a documentation set.
    • Churn. While developers search for better ways to decouple their code components, in documentation you inevitably need close coupling. Topics can't link to some decoupled interface class - they have to link to a specific topic. Move, change, or delete the topic and it all breaks. Even a minor change to the functionality of a component in a large project can have a huge impact on the documentation. One way to tell a writer is to see if the labels have worn off the Page Up and Page Down keys on their keyboard.
    • Feedback. Inevitably the writer does not have the same level of insight into the product as the people who are writing the code, and in most cases doesn't have the same level of programming skill (if we did, would we be writers?). And, in most cases, not having the same insight is good because it means that the writer can act as the customer's advocate. But it does require constant feedback and input from the dev team, throughout the project if the documentation is being developed alongside the product.

    However, while these issues tend to arise when documenting a piece of physical software under development, the whole approach is different for the kind of task I'm currently engaged in. It involves producing guidance for technologies that are already released and are (generally) mainstream; for example, architectural guides and documentation that help you to use existing products. This is where you can actually work with something that is stable and finished (although they'll probably release an updated version just after you publish the guidance). Maybe agile works better here?

    What's interesting, and initially prompted this rant, is that I'm discovering how the traditional (i.e. not quite so agile) approach still seems to have some fundamental advantages. Documents I create from scratch to achieve a specific aim, working on my own in a quiet place, seem to come out better than those that get passed around from one person to another with everyone "adding their bit" but not seeing the whole picture. Being able to prioritize work myself means that I can spend time on the important tasks without being worried that there is some more urgent task waiting. And seeing the whole picture means I can write with more accuracy and less duplication of effort.

    I guess, in the end, I'm still struggling to see how you can use agile methods for end-user or consumer documentation. Maybe it needs to be some custom blend of techniques, concentrating on the features of agile and traditional methods that work best. Use the agile features of pairing for collating information, flexibility for maximizing effort, and feedback to keep it on target. Combine this with traditional approaches to prioritization, concentration, and an overall plan. Maybe it's a whole new paradigm? Can I trade-mark "tragile documentation"?

  • Writing ... or Just Practicing?

    On The Road To Nowhere?

    Back in June, when I signed my life away and made my pact with the blue badge, it seemed like a good idea to restrain my exuberance in one or two areas. Wandering aimlessly around the world attending conferences, and pleading with Web site editors to buy my articles, were obvious first steps. And a quick sanitization of my own Web site seemed like a good idea - trying to tempt any unwary dev shop I could find to give me a job was probably not a good idea either. And, in particular, losing the PowerPoint presentation that grumbled about the lack of inspiration and direction in the Web world seemed like a really positive move. Especially as it was robust enough to need words with asterisks in.

    Then, at the p&p Summit a few weeks ago, I watched Billy Hollis do a wonderful presentation that echoed so many of my own subversive tendencies. His session, called "Drowning in Complexity", revealed through audience participation how few people actually know about, understand, and use the huge number of new technologies coming out of Microsoft. I mean, I work in a fairly small division and I have only a passing familiarity with many of our products, never mind the mass of stuff coming out of the many other divisions.

    But what really made me smile were his comments on the way we seem to be piling more and more stuff on top of a document collaboration mechanism that was already revealing its failings way back in the late 1990's. Yep, what on earth are we still doing playing with Web browsers when we're trying to implement rich, interactive, and accessible user interfaces for business applications? I have absolutely no intention of getting involved here in a discussion of Flash, Chrome, and the like. What I want to know is: where is the next Tim Berners-Lee?

    Over the years, I've bored almost to distraction a significant number of people by rambling on about how we need something new to decouple our dependence on HTML and Web browsers, where everything you want to do seems to involve inventing some new kludge, or several hundred lines of JavaScript. I mean, if you were given a choice of any language to use when developing your next major application, would you choose JavaScript? And run it inside an interface that stops you using many useful key combinations, allows the user to view a cached copy of the previous screen in your carefully crafted process flow any time they like, and prevents you using many of the obvious UI niceties you expect in a "proper" application?

    And where is the comprehensive managed security framework that allows you to properly interact with the hardware and the user? Or the mechanism to handle data locally in a sensible way? Or, and here comes that awful cliché, "write once, run anywhere". It seems that the only reason we battle on with Web browsers is because there is no other "write once" that does "run anywhere". Or is there? PDF seems to work fine, and it was developed by a single manufacturer. You can use a nice lightweight (and free) PDF reader such as Foxit, or you can use the full-blown (dare I say "overblown") real thing from Adobe. And there's no shortage of tools you can buy to create and edit PDFs, or do most anything else you want with them.

    So I guess this diatribe has finally reached the point where I can no longer avoid the "S" word. Here at Microsoft, we're hiding it behind the exciting concept of the Rich Internet Application (RIA). But at the heart of it are, of course, XAML and WPF. One of the announcements by Scott Guthrie at the Summit was the aim of continuing convergence of Silverlight and WPF to achieve the "write once, run anywhere" nirvana. WPF that runs on a desktop, on a mobile device, on a tablet, and everywhere else as well; and with full support for ink, stylus, touch, rich media, and interaction with all the bells and whistles of the hardware.

    Where I still have an issue is with Silverlight. Yes, it gives us a bridge to the ubiquitous Web browser to maximize reach. But we can do pure WPF and XAML in a host on Windows, just like we do with PDFs. Maybe the fact that WPF and XAML come from Microsoft will mean that it can't get the support from other platform manufacturers and the open source community that HTML did so many years ago. So are we forever condemned to using another browser plug-in? Another layer of "stuff" on top of the already inappropriate "stuff"?

    There's an old story about the guy driving through some small village in the middle of nowhere, and he stops to ask a local yokel the way to his planned destination. The yokel replies "If I was going there, I wouldn't start from here". I just hope that we haven't gone so far down the road to that dead-end village that we can't actually get to where we want to be.
