Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

    Get Your Hands On Our Labs

    I suppose I could start with a bad pun by saying that you've had your hols, and now it's time for some HOLs instead! Of course, this assumes you understand that "hols" is English shorthand for "holidays" and that "HOLs" is IT guidance shorthand for "Hands-On-Labs" (I wonder if people in the US have a shorthand word for "vacations", or if - seeing they are typed rather than written by hand these days - that should be a shorttype word, or even just a shortword).

    Anyway, enough rambling, where I'm going with all this is that p&p just released the Hands-On-Labs for Windows Phone 7. This set of practical exercises and associated guidance is part of the patterns & practices Windows Phone 7 Developer Guide project, which resulted in a book (also available online) and associated guidance aimed at helping developers to create business applications for Windows Phone 7.

    As part of this guidance, the team released a sample application that runs in Windows Azure and is consumed by Windows Phone 7. The application is based on a fictional corporation named Tailspin that carries out surveys for other companies by distributing them to consumers and then collating the results. It's a fairly complex application, but demonstrates a great many features of both Azure and Windows Phone 7. The Azure part came from a previous associated project (patterns & practices Windows Azure Guidance) with some added support for networking with the phone. The bits that run on the phone are (obviously) all new, and demonstrate a whole raft of capabilities including using the camera and microphone to collect picture and voice answers to questions.

    What's been interesting to discover while working on the HOLs is how the design and structure of the application really does make it much easier to adapt and maintain over time. When some of the reviewers first saw the mobile client application for the Windows Phone 7 Developer Guide project, they commented that it seemed over-engineered and over-complicated. It uses patterns such as Dependency Injection and Service Locator, as well as a strict implementation of the Model-View-ViewModel (MVVM) pattern that Silverlight developers love so much.

    So, yes, it is complex. There is no (or only a tiny bit of) code-behind; instead, all interaction is handled through the view models. It uses components from the Prism Library for Windows Phone (see patterns & practices: Prism) to implement some of this interaction, such as wiring up application bar buttons and displaying non-interruptive notifications. View models are instantiated through a view model locator class, allowing them to be bound to a view but created through the dependency injection container. Likewise, services are instantiated and their lifetime managed by the container, and references to these services are injected into the constructor parameters of the view models and other types that require them. Changes to the way that the application works simply require a change to the container registrations.
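    To give a flavour of what this means in practice, here is a minimal sketch of the view model locator idea. The names and the toy container are hypothetical; the real application uses its own dependency injection container and the Prism components:

        using System;
        using System.Collections.Generic;

        // Hypothetical service abstraction; the container decides which
        // implementation the rest of the application actually receives.
        public interface ISurveyStore { }
        public class RemoteSurveyStore : ISurveyStore { }

        public class SurveyListViewModel
        {
            private readonly ISurveyStore store;

            // Dependencies arrive through the constructor, so changing how
            // the application works only needs a change to the registrations.
            public SurveyListViewModel(ISurveyStore store)
            {
                this.store = store;
            }
        }

        // A toy stand-in for the real dependency injection container.
        public class TinyContainer
        {
            private readonly Dictionary<Type, Func<object>> registrations =
                new Dictionary<Type, Func<object>>();

            public void Register<T>(Func<object> factory)
            {
                registrations[typeof(T)] = factory;
            }

            public T Resolve<T>()
            {
                return (T)registrations[typeof(T)]();
            }
        }

        // Declared as a XAML resource; each view binds its DataContext to a
        // property of the locator, so view models come from the container
        // rather than from code-behind.
        public class ViewModelLocator
        {
            public static readonly TinyContainer Container = new TinyContainer();

            static ViewModelLocator()
            {
                Container.Register<ISurveyStore>(() => new RemoteSurveyStore());
                Container.Register<SurveyListViewModel>(
                    () => new SurveyListViewModel(Container.Resolve<ISurveyStore>()));
            }

            public SurveyListViewModel SurveyList
            {
                get { return Container.Resolve<SurveyListViewModel>(); }
            }
        }

    A view then sets its DataContext with a binding such as {Binding SurveyList, Source={StaticResource Locator}}, and swapping RemoteSurveyStore for a mock in the registrations changes the behavior everywhere without touching the views.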

    In addition, there are custom types that wrap and abstract basic tasks such as storing data and application settings, managing activation and deactivation, and synchronizing data with the remote services. There is a lot in there to understand, and one of the tasks for the Hands-On-Labs was to walk developers through the important parts of the process, and even try to justify the complexity.

    But where it really became clear that the architecture and design of the application was "right" was when we came to plan the practical step-by-step tasks for the Hands-On-Labs. We wanted to do something more than just "Hello Mobile World", and planned to use a modified version of the example Tailspin Surveys application as the basis for the user's tasks. So I asked the wizard dev guy (hi Jose) who was tasked with making it all work if we could do some really complicated things to extend and modify the application (bearing in mind that each task should take the user less than 30 minutes to complete):

    Me: Can we add a new UI page to the application and integrate it with the DI and service locator mechanism?
    Jose: Easy. We can do the whole MVVM thing including integration with the DI mechanism and application bar buttons in a few steps.

    Me: OK, so how about adding a completely new question type to the application, which uses a new and different UI control? And getting the answer posted to the service?
    Jose: No problem. The design of the application allows you to extend it with new question types, new question views, and new question view models. They'll all seamlessly integrate with the existing processes and services. Twenty minutes max.

    Me: Wow. So how about changing the wire format of the submitted data, and perhaps encrypting it as well?
    Jose: Yep, do that simply by adding a new implementation of the proxy. Everything else will just work with it.

    As you can see, good design pays dividends when you get some awkward customer like me poking about with the application and wanting it to do something different. It's a good thing real customers aren't like that...

    Battery Phishing?

    If you're a Douglas Adams fan, you'll know all about the fabulously beautiful planet named Bethselamin. The ten billion tourists who visit it each year cause so much erosion that the authorities introduced a rule whereby any net imbalance between the amount you eat and the amount you excrete whilst on the planet is surgically removed from your bodyweight when you leave (and therefore, as Douglas mentioned in the book, it is vitally important to get a receipt every time you go to the lavatory).

    Yes, it's a well-worn quote, but I'm beginning to get nervous that it's starting to come true here on our own little blue-green planet (located, of course, in the uncharted backwaters of the unfashionable end of the western spiral arm of the Milky Way galaxy). And all because I needed a couple of new batteries for my uninterruptible power supplies.

    We have a wonderful battery supplier just a mile or three down the road from here who seems to stock every kind of battery that human endeavor has managed to create, including sealed batteries for every APC UPS made in the last 20 years. That just about covers all the various models I have scattered around my house and office. What's more, they are considerably cheaper than buying online, there's no shipping cost, and they take the old ones back for recycling.

    OK, so the APC ones do come with a label so you can ship them back to APC, but the address on the label is New Jersey USA. The nice man at our local post office worked out that it would be cheaper to fly over myself and take them back rather than sending them by post - though, of course, I'm not allowed to take batteries on a plane any more...

    So, anyway, there I am with a couple of dead RBC2 batteries and a bag of assorted other used AAs and the like, when the young assistant asks me for my name, address, postal code, phone number, and where I obtained the original batteries. As he'd been very pleasant and helpful so far, telling him to mind his own business seemed a bit strong, so instead I politely inquired as to why he wanted to know. I had deposited on the counter a selection of coins and notes of the realm by way of payment, so surely all he needed was to grab the cash, stuff it in the till, and I'd be on my way?

    No, it seems that he needs to give me a receipt for the old batteries that are going for recycling. Maybe if there's not quite enough lead in them when they get round to breaking them up, they'll send me a bill for the balance? Or perhaps I'm only allowed to ecologically dispose of a specific quota of murky sulphuric acid each year and he's worried I'm approaching the limit? Aha! - more likely it's in case I stole them so I could sell them for the scrap value (which seems a little perverse when I'm giving them away free for recycling).

    It's plainly all part of a secretive scheme connected with the move to weigh and analyse the contents of our dustbins, film and record the registration number of our cars when we go to the local rubbish tip, and remotely monitor our energy consumption every hour with the new smart electricity meters. It will all be fed into some huge computer that will send out letters once a month with demands for the requisite body parts to make up the imbalance between our resources input and recycled output.

    So it's probably a good idea to make sure you do get a receipt every time...

    When Did the Web Get a Small "w"?

    Here in England, the well-known and much loved actress Emma Thompson recently started a debate about the use by kids of slang terms that only serve to make them sound stupid. She cites things such as "yeahbut", "like", "innit" (perhaps an abbreviated form of "I'm a nitwit"), and use of the word "well" in phrases such as "I'm well tipsy". Or even "I'm well ill". Somebody even wrote to the newspaper to say that, as his train was pulling into Sevenoaks station, one of a group of teenage girls sitting opposite asked "Is this like Sevenoaks station?" - to which he replied "Yes, though the amazing similarity is probably due to the fact that it IS Sevenoaks station..."

    But it does seem as though our vocabulary is changing at an increasing rate these days. At one time we had a "World Wide Web", and then we just had a "Web", and now - according to the latest version of the Microsoft Manual of Technical Style that all us technical writers have to abide by - we just have a "web". And they even make us miss out the spaces now, so we have a "website" rather than a "Web site". But I suppose it saves on paper and network bandwidth.

    And I guess it just mirrors the way that "the Internet" became "the 'Net", then just "the net". And "E-mail" morphed to "e-mail" and then to "email". It's happened in the past to things like "Biro" (invented by Ladislas Biro) and to "Hoover" (named after William H. Hoover, who started the company that perfected earlier designs). And then, once they get to be lower-cased and spaceless, we start creating our verb derivations - such as "emailing" and "hoovering". So how long will it be before we hear about people "webbing" and "netting"? The fact that many of these newly-derived words already have a proper meaning seems to be irrelevant these days.

    One wonderful line from some obscure poem I read ages ago, supposedly about the noises that emanate from an upstairs apartment, was "The moving hoover hooveth, and having hoov'd moves on".

    But I do wonder whether the real reason we had to get rid of "World Wide Web" was that it started with the wrong letters. Having to repeatedly say "doubleyew doubleyew doubleyew dot..." each time you read out a website URL was a pain. Though, thanks to George Dubya Bush, we can now generally get away with just "dub dub dub dot...". Mind you, I always find that hearing it conjures up visions of a nursery rhyme.

    Perhaps if Tim Berners-Lee had started out by calling it the "Random Resources Repository", we'd still have a proper name for it. I mean, it would be easy to say "arrrrrrrgh dot...", and everyone would understand that the extended length of the arrrrrrrgh signified three of them. And I doubt that even our linguistic regulation people here at Microsoft would decide we had to call the stuff on servers "reposites". Plus, we could have arrrrrrrghlogs and arrrrrrrghmail and arrrrrrrghpplications (though that just sounds like somebody posh saying "application").

    But I suppose it would only encourage more pirate jokes (what did the doctor say to the pirate with a bad leg? "It looks like you've got arrrrrrrghthritis")...

    Invoking the Dark Side (using a cassette player)

    I previously thought that the reason you used to see miles of cassette tape entwined in the bushes at the side of motorways was because the driver discovered that his or her mother had accidentally put a "James Last Plays Christmas Carols" tape back in the driver's Black Sabbath or Pink Floyd cassette box. Needless to say, that kind of disaster when hurtling along the M1 could obviously engender a malicious and wholly illegal "chuck it out of the window" process. But it seems I was mistaken.

    According to the Surrey Advertiser newspaper (a local rag available in the southern part of England) nefarious devil worshippers are intentionally recording dark and dangerous spells on cassette tape and then diligently encircling sections of the Queen's Highway with it to initiate a huge crop of accidents and other unfortunate occurrences. In particular, at junction 9 of the M25 they have prompted a noticeable increase in the number of motoring accidents and suicides from people jumping off the footbridge. A group of believers from the Pioneer Engage Church even organized regular vigils and prayer meetings to "cleanse the benighted interchange".

    Now I'm fairly open to wild new theories and improbable suggestions. I mean, I even accept that information technology can create a paperless office. And that sat-navs actually do use satellites circling way above the earth rather than reading the location from barcodes in the white lines on the road. But I'm finding it hard to believe that the magnetic properties of cassette tape are sufficient to overcome the Earth's residual magnetism at a level sufficient to affect human behavior. Especially as we're repeatedly told that holding a powerful microwave transmitter right next to your ear is not the least bit dangerous.

    As an aside, I just watched a program by Stephen Hawking about how time works that said the clocks in the GPS satellites need to be manually adjusted as they gain around 38 millionths of a second every day. This is, it seems, because the effects of gravity make time run slower close to the Earth than at the satellites' altitude. Supposedly, if they didn't do it, your sat-nav would be out by six miles every day. Maybe the reason I still get lost some days is because the guy who winds the knobs took a day off sick.
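    The six miles figure roughly checks out, too (give or take a mile):

        \[ \Delta d = c \, \Delta t \approx (3 \times 10^{8}\ \text{m/s}) \times (38 \times 10^{-6}\ \text{s}) \approx 11\ \text{km} \approx 7\ \text{miles per day} \]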

    So, anyway, the Surrey Advertiser soon received several disbelieving responses to its article, including one from a reader who (perhaps through intimate knowledge of the practice) pointed out that most evil spirits these days use MP3 players because they are far smaller and more convenient to carry when crossing into the dark realm and back again. More likely is that, when our intrepid band of believers analyzed the contents of the tape, they accidentally played it backwards and by sheer luck hit upon the bit in Led Zeppelin's Stairway to Heaven that supposedly includes satanic phrases recorded in reverse.

    But maybe it's the reason that people hang old CDs on strings to scare birds away from their vegetable plots. It probably works best with heavy rock music ones. I suppose I could subject the theory to scientific test by burying a hard drive containing all the satanic album tracks I've collected over the last forty or so years under the hedge at the front of our garden, and counting the number of people who fall over as they pass by...

    Blocking Malware Domains in ISA 2006

    As in many households, several regular and occasional computer users take advantage of my connection to the outside world. I use ISA Server 2006 running as a virtual Hyper-V instance for firewalling and connection management (I'm not brave enough to upgrade to Forefront yet), and all incoming ports are firmly closed. But these days the risk of picking up some nasty infection is just as great from mistaken actions by users inside the network as from the proliferation of malware distributors outside.

    Even when running under limited-permission Windows accounts and with reasonably strict security settings on the browsers, there is a chance that less informed users may allow a gremlin in. So I decided some while ago to implement additional security by blocking all access to known malware sites. I know that the web browser does this to some extent, but I figured that some additional layer of protection - plus logging and alerts in ISA - would be a useful precaution. So far it seems to have worked well, with thankfully only occasional warnings that a request was blocked (and mostly to an ad server site).

    The problem is: where do you get a list of malware sites? After some searching, I found the Malware Domain Blocklist site. They publish malware site lists in a range of formats aimed at DNS server management and use in your hosts file. However, they also provide a simple text list of domains called JustDomains.txt that is easy to use in a proxy server or ISA. Blocking all web requests for the listed domains provides some additional protection against ingress, and against the effects of any malware that might otherwise find its way onto a machine, and you will see the blocked requests in your log files.

    They don’t charge for the malware domain lists, but if you decide to use them please do as I did and make a reasonable donation. Also be aware that malware that connects using an IP address instead of a domain name will not be blocked when you use just domain name lists.

    To set it up in ISA 2006, you need the domain list file to be in the appropriate ISA-specific format. It's not available in this format, but a simple utility will convert it. You can download a free command line utility I threw together (the Visual Studio 2010 source project is included so you can check the code and recompile it yourself if you wish). It takes the text file containing the list of malware domain names and generates the required XML import file for ISA 2006 using a template. There's a sample supplied but you'll need to export your own configuration from the target node and edit that to create a suitable template for your system. You can also use a template to generate a file in any other format you might need.
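    In essence the utility just performs a template merge. Here is a simplified sketch of the idea; the {{DOMAINS}} placeholder and the element name are illustrative, and you should copy the real XML fragment from the dummy entry in your own exported configuration:

        using System;
        using System.IO;
        using System.Linq;
        using System.Text;

        // Simplified sketch: read the JustDomains.txt list, wrap each domain
        // in the XML fragment the exported ISA template expects, and inject
        // the result at a {{DOMAINS}} placeholder in the template file.
        class DomainListConverter
        {
            static void Main()
            {
                string template = File.ReadAllText("MalwareDomains.template.xml");

                var fragments = File.ReadLines("JustDomains.txt")
                    .Select(line => line.Trim())
                    .Where(line => line.Length > 0 && !line.StartsWith("#"))
                    // Illustrative element only; take the real one from the
                    // dummy entry in your exported Domain Name Set.
                    .Select(domain => string.Format(
                        "<dns_name StringValue=\"{0}\"/>", domain));

                string output = template.Replace("{{DOMAINS}}",
                    string.Join(Environment.NewLine, fragments));

                File.WriteAllText("MalwareDomains.import.xml", output, Encoding.UTF8);
                Console.WriteLine("Import file written.");
            }
        }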

    [Screenshot: ISA 2006 Toolbox]

    To configure ISA, open the Toolbox list, find the Domain Name Sets node, right-click it, and select New Domain Name Set. Call it something like "Malware Domains". Click Add and add a dummy temporary domain name to it, then click OK. The dummy domain will be removed when you import the list of actual domain names. Then right-click your new Malware Domains node, click Export Selected, and save the file as your template for this node. Edit it to insert the placeholders the utility requires to inject the domain names, as described in the readme file and sample template provided.


    [Screenshot: Malware Domains List]

    After you generate your import file, right-click on your Malware Domains node, click Import to Selected, and locate the import file you just created from the list of domain names. Click Next, specify not to import server-specific information, and then click Finish. Open your Malware Domain set from the Toolbox and you should see the list of several thousand domain names.


     Now you can configure a firewall rule for the new domain name set. Right-click the Firewall Policy node in the main ISA tree view and click New Rule. Call it something recognizable such as "Malware Domains". In the Action tab select Deny and turn on the Log requests matching this rule option. In the Protocols tab, select All outbound traffic. In the From tab, click Add and add all of your local and internal networks. In the To tab click Add and add your Malware Domains domain name set. In the Content Types tab, select All content types. In the Users tab select All users, and in the Schedule tab select Always. Then click OK, click Apply in the main ISA window, and move the rule to the top of the list of rules.

    [Screenshot: ISA Block Rule]

    You can test your new rule by temporarily adding a dummy domain to the Domain Name Set list and trying to navigate to it. You should see the ISA server page indicating that the domain is blocked.

    If you wish, you can create a list of IP addresses of malware domains and add this set to your blocking rule as well so that malware requests that use an IP address instead of a domain name are also blocked. The utility can resolve each of the domain names in the input list and create a file suitable for importing into a Computer Set in ISA 2006. The process for creating the Computer Set and the template is the same as for the Domain Name Set, except you need to inject the domain name and IP address of each item into your import file. Again, a sample template that demonstrates how is included, but you must create your own version as described above.

    Be aware that some domains may resolve to internal or loopback addresses, which may affect operation of your network if blocked. The utility attempts to recognize these and remove them from the resolved IP address list, but use this feature with care and check the resolved IP addresses before applying a blocking rule.
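    The resolution side is straightforward enough to sketch. This is simplified, and the IsSafeToBlock filter is my illustration of the kind of checking involved rather than the utility's actual code:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;

        // Simplified sketch: resolve each listed domain and discard loopback
        // and private (RFC 1918) results that could disrupt the local
        // network if they were blocked.
        class DomainResolver
        {
            static void Main()
            {
                var addresses = new SortedSet<string>();

                foreach (string domain in File.ReadLines("JustDomains.txt"))
                {
                    try
                    {
                        foreach (IPAddress ip in Dns.GetHostAddresses(domain.Trim()))
                        {
                            if (ip.AddressFamily == AddressFamily.InterNetwork
                                && IsSafeToBlock(ip))
                            {
                                addresses.Add(ip.ToString());
                            }
                        }
                    }
                    catch (SocketException)
                    {
                        // Many listed domains have no mapped address, so
                        // resolution failures are expected and ignored.
                    }
                }

                File.WriteAllLines("MalwareAddresses.txt", addresses);
            }

            // Reject loopback and private ranges; blocking those would
            // affect machines inside your own network.
            static bool IsSafeToBlock(IPAddress ip)
            {
                if (IPAddress.IsLoopback(ip)) return false;
                byte[] b = ip.GetAddressBytes();
                if (b[0] == 10) return false;                              // 10.0.0.0/8
                if (b[0] == 172 && b[1] >= 16 && b[1] <= 31) return false; // 172.16.0.0/12
                if (b[0] == 192 && b[1] == 168) return false;              // 192.168.0.0/16
                return true;
            }
        }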

    Another issue is the time it takes to perform resolution of every domain name, and investigations undertaken here suggest that only about one third of them actually have a mapped IP address. You'll need to decide if it's worth the effort, but you can choose to have the utility cache resolved IP addresses to save time and bandwidth resolving them all again (though this can result in stale entries). If you do create a Computer Set, you simply add it to the list in the To tab of your blocking rule along with your Domain Name Set. Of course, you need to regularly update the lists in ISA, but this just involves downloading the new list, creating the import file(s), and importing them into your existing Domain Name Set and Computer Set nodes in ISA.

    In England's White and Pleasant Land

    I received an invitation to attend a "highly recommended" course this week on how to maximize use of Office Communicator, Live Meeting, and the Office Conferencing System. Specially timed, no doubt, to coincide with the rather interesting weather we are currently experiencing here in Britain. And obviously a really vital event to reduce the need for travel during this period of meteorological uncertainty.

    Great, I thought, really useful - must sign up and watch the remote presentation! Except that, oddly, it's only being held in London and Reading (both some 180 miles away from me) and is not being networked. So I have to travel to see a presentation about reducing the need to travel. I suppose they assume that, until you see it, you won't know how to view it remotely. Maybe these photos of our usually green and pleasant land will indicate why I'm likely to miss it...

    [Photos: Derbyshire snow]

    Can Writers Dance The Agile?

    It's probably safe to say that only a very limited number of the few people who stroll past my blog each week were fans of the Bonzo Dog Doo Dah Band. Or even, while they might recall their 1968 hit single "I'm the Urban Spaceman" (which, I read somewhere, was produced by Paul McCartney under the pseudonym of Apollo C. Vermouth), are aware of their more ground-breaking works such as "The Doughnut In Granny's Greenhouse". So this week's title, based on their memorable non-hit "Can Blue Men Sing The Whites", is pretty much guaranteed to be totally indecipherable to the majority of the population. Except for the fact that the BBC just decided to use it as the title of a new music documentary.

    But, as usual, I'm wandering aimlessly away from the supposed topic of this week's verbal perambulation. Which is, once again, about agileness (agility?) in terms of documenting software under development. No, really, I have actually put some solid thought into this topic over the past several months, and even had one or two of the conclusions confirmed by more learned people than me, so they are not (wholly) the kind of wild stabs into the dark that more commonly make up the contents of these pages.

    Where's The Plan, Dan?

    One of the major factors in agile development is "stories". Users supposedly provide a list of features they would like to see in the product; the development team evaluates these and draws up a final list of things that they think the software should include. They then sort the list by "nice-to-haveness" (taking into account feasibility and workload), and produce a development plan. But they don't know at that stage how far down the list they will actually get. Most products are driven by a planned (or even fixed) release date, so this approach means that the most important stuff will get done first, and the "nice to haves" will be included only if there is time.

    It would be interesting if they applied agile methods in other real world scenarios. Imagine taking your car to the dealer to get it serviced. Their worksheet says a service takes two hours, and there is a list of things they're supposed to look at. You'd like to think that if the mechanic doesn't get them all done in the allocated time they would actually finish them off (even if you had to pay a bit more) rather than leaving the wheel nuts loose or not getting round to putting oil in the engine. Or maybe not having time to check if you need new brake pads.

    Of course, every sprint during the dev cycle should produce a releasable product, so multiple revisions of the same section of code can often occur. So how do you plan documentation for such an approach? You can assume that some of the major features will get done, but you have no idea how far down the list they will get. Which means you can't plan the final structure or content of the docs until they get to the point where they are fed up fiddling with the code and decide to freeze for final testing. You end up continually reorganizing and reworking sections and topics as new features bubble to the surface.

    But whilst the code may just need some semi-mechanized refactoring and tidying up to accommodate new features, the effect on the docs may require updates to feature overviews, links, descriptions, technical details, tables of methods, schematics, code samples, and the actual text - often in multiple locations and multiple times. The burden increases when the doc set is complex, contains many links, or may need to support multiple output formats.

    What's the Story, Jackanory?

    Each feature in the list of requirements is a "story", so in theory you can easily document each one by simply reading what the developers and architects say it does. And you can look at the source code and unit tests to see the way it works and the outcomes of new features. Or, at least, you can if you can understand the code. Modern techniques such as dependency injection, patterns such as MVC, and language features such as extension methods and anonymous typing mean that - unless you know what you are looking for and where to find it - it can be really hard to figure what stuff actually does.

    In addition, the guys who write the unit tests don't have clarity and education as objectives - they write the most compact (and unrealistic in terms of "real world" application) code possible. OK, so you can often figure out what some feature does from the results it produces, but getting an answer to simple questions that are, in theory, part of the story is not always easy. I'm talking about things like "What does it actually do (in two sentences)?", "Why would I use this feature?", "How does it help users?", and "When and how would you recommend it be used?".

    Even a demo or walkthrough of the code (especially from a very clever developer who understands all of the nuances and edge cases) can sometimes be only of marginal help - theory and facts whooshing over the top of the head is a common feeling in these cases. Yes, it showcases the feature, but often only from a technical implementation point of view. I guess, in true agile style, you should actually sit down next to the developer as they build the feature and continually ask inane questions. They might even let you press a few keys or suggest names for the odd variable, but it seems a less than efficient way to create documentation.

    And when did you last see a project where there were the same number of writers as developers? While each developer can concentrate on specific features, and doesn't really need to understand the nuances of other features, the writer has no option but to try and grasp them all. Skipping between features produces randomization of effort and workload, especially as feedback usually comes in after they've moved on to working on another feature.

    Is It Complete, Pete?

    One of the integral problems with documenting agile processes is the incompatible granularity of the three parts of the development process. When designing a feature, the architect or designer thinks high level - the "story". A picture of what's required, the constraints, the objectives, the overall "black boxes on a whiteboard" thing. Then the developer figures out how to build and integrate the feature into the product by breaking it down into components, classes, methods, and small chunks of complicated stuff. But because it's agile, everything might change along the way.

    So even if the original plan was saved as a detailed story (unlikely in a true agile environment), it is probably out of date and incorrect as the planned capabilities and the original nuances are moulded to fit the concrete implementations that are realistic. And each of the individual tasks becomes a separate technical-oriented work item that bears almost no direct relationship to the actual story. Yet, each has to be documented by gathering them all up and trying to reassemble them like some giant jigsaw puzzle where you lost the box lid with the picture on.

    And the development of each piece can be easily marked complete, and tested, because they are designed to fit into this process. But when, and how, do you test the documentation and mark it as complete? An issue I've come up against time and time again. If the three paragraphs that describe an individual new feature pass review and test, does that mean I'm done with it? How do I know that some nuance of the change won't affect the docs elsewhere? Or that some other feature described ten pages later no longer works like it used to? When you "break the build", you get sirens and flashing icons. But how do you know when you break the docs?

    Is It A Bug, Doug?

    So what about bugs? As they wax and wane during the development cycle, you can be pretty sure that the project management repository will gradually fill up with new ones, under investigation ones, fixed ones, and ones that are "by design". I love that one - the software seems to be faulty, or the behavior is unpredictable, but it was actually designed to be like that! Though I accept that, sometimes, this is the only sensible answer.

    Trouble is that some of these bugs need documenting. Which ones? The ones that users need to know about that won't get fixed (either because it's too difficult, too late, or they are "by design")? The ones that did get fixed, but change the behavior from a previous release? Those that are impossible to replicate by the developers, but may arise in some esoteric scenario? What about the ones that were fixed, yet don't actually change anything? Does the user need to be told that some bug they may never have come across has been fixed?

    And what if the bug was only there in pre-release or beta versions? Do you document it as fixed in the final release? Surely people will expect the beta to have bugs, and that these would be fixed for release. The ones you need to document are those that don't get fixed. But do you document them all and then remove them from the docs as they get fixed, or wait till the day before release and then document the ones they didn’t get time to fix? I know which seems to be the most efficient approach, but it's not usually very practical.

    Can I Reduce the Hurt, Bert?

    It's OK listing "issues", but a post like this is no help to anyone if it doesn't make some suggestions about reducing the hurt, in an attempt to get better docs for your projects. And, interestingly, experience shows that not only writers benefit from this, but also the test team and other non-core-dev members who need to understand the software. So, having worked with several agile teams, here's my take (with input from colleagues) on how you can help your writers to create documentation and guidance for your software:

    • Write it down and take photos. Make notes of the planned features, identifying the large ones that have a big impact on the docs (even if, from a dev perspective, they seem relatively simple). And while you're snapping those pictures of the team for your blog, take photos of the whiteboards as well.

       

    • Plan to do the large or high-impact tasks first. A task that has a huge impact on the docs, particularly for an update to an existing product, needs to be firmed up at least enough for the rewriting and reorganization that will take place to take into account the upcoming changes (and save doing all this twice or three times).

       

    • Create a simple high-level description for the major or high-impact tasks that at least answers the questions "What does it do?", "When, how, and why would customers use this feature?", and "What do I want customers to know about it (such as the benefits and constraints)?" A mass of technical detail, test code, change logs, or references to dozens of dev tasks, is generally less than useful.

       

    • Organize your task repository to simplify work for the test and doc people. Consider a single backlog item with links to all of the individual related dev tasks, and a link to a single doc/test task that contains the relevant details about the feature (see previous bullet).

       

    • Annotate bugs based on the doc requirements. Those that won’t be fixed, and those from a previous public release that were fixed and change the behavior of the product, are "known issues" that must be documented. Others that were fixed from a previous public release and don’t affect behavior go into the "changes and fixes" section hidden away somewhere in the docs. Those that are only applicable during the dev cycle for this release (ones that are found in dev and fixed before release) do not need documenting. Help your writer differentiate between them.

       

    • Think about the user when you review. Docs and sample code need to be technically accurate, and at a level that suits the intended audience, but having them reviewed only by the really clever people who wrote the code in the first place may not ensure they are appropriate for newcomers to your product, or meet the needs of the "average" developers who just want to use the features. Especially if there is no "story" that helps the writer correctly position and describe the feature.

       

    • Make time at the end of the dev cycle. You can tweak code and add stuff right up to the time you need to build the installer and ship, but docs have a built-in delay that means they will always trail behind your latest code changes. They need reviewing, editing, testing, proof reading, indexing, and various production tasks. And stuff like videos and walk-throughs can often only be created once the software stops changing.

    While the core tenets of agile may work well in terms of getting better code out of the door, an inflexible "process over product" mentality doesn’t work well for user documentation. Relying on individuals and interactions over processes and tools, working software over comprehensive documentation, and responding to change over following a plan, can combine to make the task of documentation more difficult.

    Common effects are loss of fluidity and structure, and randomization. One colleague mentioned how, at a recent Scrum workshop for content producers, she came across the description: "We're in a small boat with fogging goggles riding in the wake of an ocean liner that is the engineering team".

    Was It A Success, Tess?

    I was discussing all these issues with a colleague only the other day, and he made an interesting comment. Some teams actually seem to measure their success, he said, by the way that they embraced agile methods during development. The question he asked was "Is the project only a success if you did agile development really well?" If analysis of all the gunk in your project management repository tells you that the agile development process went very well, but the software and the documentation sucks, is it still a triumph of software development?

    Or, more important, if you maybe only achieved 10% success in terms of agile development methods but everyone loves the resulting software, was the project actually a failure...?

    How to Avoid a Speeding Ticket

    I was reading a story (a.k.a. urban myth) this week about an eminent quantum physicist who was stopped for speeding in his car. When told by the traffic cop that he was doing 63 miles per hour, he responded by asking if this was an accurate measurement. On being told that it was, he explained that, therefore, they could not definitively determine whether he was inside the 35 miles per hour zone at the time. Alternatively, if they were sure that he was within the zone, it was physically impossible - due to the fundamental laws of quantum mechanics - for the speed measurement to be accurate.

    OK, so the story was phrased a little more colloquially than this. When asked if he knew how fast he was travelling, the eminent physicist replied "No, but I know where I am". Of course, our eminent physicist was simply explaining that, according to the Heisenberg uncertainty principle, it is impossible to simultaneously determine the precise values of certain pairs of physical properties, such as position and momentum, of an object.
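    For anyone who wants the formal version of the excuse, the principle puts a hard lower bound on the product of the two uncertainties:

        \[ \Delta x \, \Delta p \ \ge\ \frac{\hbar}{2} \]

    so the more precisely you pin down the position, the less you can know about the momentum (and hence the speed).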

    It's an interesting theory, and you may want to ponder whether, when you measure the transfer rate of your high speed Internet connection or the performance of a new super fast hard disk, the results actually apply to your neighbor's DSL line or your wife's laptop instead. Although in reality this effect only reveals itself when you are dealing with very small or very fast things. Perhaps if I write this blog post using 6 point Arial, and type fast, there's a danger that the text might end up in a different paragraph, or even a different document.

    And as for really small and fast things, you don't get much better than the electrons that make up the ones and zeros of your latest and greatest computer programs. So there's definitely a danger that, if you make your new enterprise application run too quickly, you won't be able to tell which server it's running on when you come to do performance testing.  Probably a good argument for installing a server farm.

    And does this uncertainty extend to other pairs of physical properties? Can I argue that my spelling is so bad because it's impossible to determine the actual alphabetic letter and its position within a word at the same time? Or insist that my BMI must be around 25 because it's impossible to simultaneously measure my weight and height? Perhaps if I was also running very fast at the time it would work.

    However, what's really disappointing is that Heisenberg's principle also puts paid to my idea for becoming a millionaire by patenting a device for removing the space junk that I keep reading is increasingly endangering satellites and space craft in Earth orbit. I'd just finished the design for a rocket fitted with a large magnet when I read that most of the junk is travelling at six miles per second - which probably means that I can't determine how big the chunks are. So how would I know what size magnet I'd need?
