Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

  • Writing ... or Just Practicing?

    Can Writers Dance The Agile?

    • 2 Comments

    It's probably safe to say that only a very limited number of the few people who stroll past my blog each week were fans of the Bonzo Dog Doo Dah Band. Or even, while they might recall their 1968 hit single "I'm the Urban Spaceman" (which, I read somewhere, was produced by Paul McCartney under the pseudonym of Apollo C. Vermouth), are aware of their more ground-breaking works such as "The Doughnut In Granny's Greenhouse". So this week's title, based on their memorable non-hit "Can Blue Men Sing The Whites" is pretty much guaranteed to be totally indecipherable to the majority of the population. Except for the fact that the BBC just decided to use it as the title of a new music documentary.

    But, as usual, I'm wandering aimlessly away from the supposed topic of this week's verbal perambulation. Which is, once again, about agileness (agility?) in terms of documenting software under development. No, really, I have actually put some solid thought into this topic over the past several months, and even had one or two of the conclusions confirmed by more learned people than me, so they are not (wholly) the kind of wild stabs into the dark that more commonly make up the contents of these pages.

    Where's The Plan, Dan?

    One of the major factors in agile development is "stories". Users supposedly provide a list of features they would like to see in the product; the development team evaluates these and draws up a final list of things they think the software should include. They then sort the list by "nice-to-haveness" (taking into account feasibility and workload), and produce a development plan. But they don't know at that stage how far down the list they will actually get. Most products are driven by a planned (or even fixed) release date, so this approach means that the most important stuff will get done first, and the "nice to haves" will be included only if there is time.

    It would be interesting if they applied agile methods in other real world scenarios. Imagine taking your car to the dealer to get it serviced. Their worksheet says a service takes two hours, and there is a list of things they're supposed to look at. You'd like to think that if the mechanic doesn't get them all done in the allocated time they would actually finish them off (even if you had to pay a bit more) rather than leaving the wheel nuts loose or not getting round to putting oil in the engine. Or maybe not having time to check if you need new brake pads.

    Of course, every sprint during the dev cycle should produce a releasable product, so multiple revisions of the same section of code can often occur. So how do you plan documentation for such an approach? You can assume that some of the major features will get done, but you have no idea how far down the list they will get. Which means you can't plan the final structure or content of the docs until they get to the point where they are fed up fiddling with the code and decide to freeze for final testing. You end up continually reorganizing and reworking sections and topics as new features bubble to the surface.

    But whilst the code may just need some semi-mechanized refactoring and tidying up to accommodate new features, the effect on the docs may require updates to feature overviews, links, descriptions, technical details, tables of methods, schematics, code samples, and the actual text - often in multiple locations and multiple times. The burden increases when the doc set is complex, contains many links, or may need to support multiple output formats.

    What's the Story, Jackanory?

    Each feature in the list of requirements is a "story", so in theory you can easily document each one by simply reading what the developers and architects say it does. And you can look at the source code and unit tests to see the way it works and the outcomes of new features. Or, at least, you can if you can understand the code. Modern techniques such as dependency injection, patterns such as MVC, and language features such as extension methods and anonymous typing mean that - unless you know what you are looking for and where to find it - it can be really hard to figure out what stuff actually does.

    In addition, the guys who write the unit tests don't have clarity and education as objectives - they write the most compact (and unrealistic in terms of "real world" application) code possible. OK, so you can often figure out what some feature does from the results it produces, but getting an answer to simple questions that are, in theory, part of the story is not always easy. I'm talking about things like "What does it actually do (in two sentences)?", "Why would I use this feature?", "How does it help users?", and "When and how would you recommend it be used?"

    Even a demo or walkthrough of the code (especially from a very clever developer who understands all of the nuances and edge cases) can sometimes be only of marginal help - theory and facts whooshing over the top of the head is a common feeling in these cases. Yes, it showcases the feature, but often only from a technical implementation point of view. I guess, in true agile style, you should actually sit down next to the developer as they build the feature and continually ask inane questions. They might even let you press a few keys or suggest names for the odd variables, but it seems a less than efficient way to create documentation.

    And when did you last see a project where there were the same number of writers as developers? While each developer can concentrate on specific features, and doesn't really need to understand the nuances of other features, the writer has no option but to try and grasp them all. Skipping between features produces randomization of effort and workload, especially as feedback usually comes in after they've moved on to working on another feature.

    Is It Complete, Pete?

    One of the integral problems with documenting agile processes is the incompatible granularity of the three parts of the development process. When designing a feature, the architect or designer thinks high level - the "story". A picture of what's required, the constraints, the objectives, the overall "black boxes on a whiteboard" thing. Then the developer figures out how to build and integrate the feature into the product by breaking it down into components, classes, methods, and small chunks of complicated stuff. But because it's agile, everything might change along the way.

    So even if the original plan was saved as a detailed story (unlikely in a true agile environment), it is probably out of date and incorrect as the planned capabilities and the original nuances are moulded to fit the concrete implementations that are realistic. And each of the individual tasks becomes a separate technical-oriented work item that bears almost no direct relationship to the actual story. Yet, each has to be documented by gathering them all up and trying to reassemble them like some giant jigsaw puzzle where you lost the box lid with the picture on.

    And the development of each piece can be easily marked complete, and tested, because they are designed to fit into this process. But when, and how, do you test the documentation and mark it as complete? An issue I've come up against time and time again. If the three paragraphs that describe an individual new feature pass review and test, does that mean I'm done with it? How do I know that some nuance of the change won't affect the docs elsewhere? Or that some other feature described ten pages later no longer works like it used to? When you "break the build", you get sirens and flashing icons. But how do you know when you break the docs?

    Is It A Bug, Doug?

    So what about bugs? As they wax and wane during the development cycle, you can be pretty sure that the project management repository will gradually fill up with new ones, under investigation ones, fixed ones, and ones that are "by design". I love that one - the software seems to be faulty, or the behavior is unpredictable, but it was actually designed to be like that! Though I accept that, sometimes, this is the only sensible answer.

    Trouble is that some of these bugs need documenting. Which ones? The ones that users need to know about that won't get fixed (either because it's too difficult, too late, or they are "by design")? The ones that did get fixed, but change the behavior from a previous release? Those that are impossible to replicate by the developers, but may arise in some esoteric scenario? What about the ones that were fixed, yet don't actually change anything? Does the user need to be told that some bug they may never have come across has been fixed?

    And what if the bug was only there in pre-release or beta versions? Do you document it as fixed in the final release? Surely people will expect the beta to have bugs, and that these would be fixed for release. The ones you need to document are those that don't get fixed. But do you document them all and then remove them from the docs as they get fixed, or wait till the day before release and then document the ones they didn’t get time to fix? I know which seems to be the most efficient approach, but it's not usually very practical.

    Can I Reduce the Hurt, Bert?

    It's OK listing "issues", but a post like this is no help to anyone if it doesn't make some suggestions about reducing the hurt, in an attempt to get better docs for your projects. And, interestingly, experience shows that not only writers benefit from this, but also the test team and other non-core-dev members who need to understand the software. So, having worked with several agile teams, here's my take (with input from colleagues) on how you can help your writers to create documentation and guidance for your software:

    • Write it down and take photos. Make notes of the planned features, identifying the large ones that have a big impact on the docs (even if, from a dev perspective, they seem relatively simple). And while you're snapping those pictures of the team for your blog, take photos of the whiteboards as well.

       

    • Plan to do the large or high-impact tasks first. A task that has a huge impact on the docs, particularly for an update to an existing product, needs to be firmed up at least enough for the rewriting and reorganization that will take place to take into account the upcoming changes (and save doing all this twice or three times).

       

    • Create a simple high-level description for the major or high-impact tasks that at least answers the questions "What does it do?", "When, how, and why would customers use this feature?", and "What do I want customers to know about it (such as the benefits and constraints)?" A mass of technical detail, test code, change logs, or references to dozens of dev tasks, is generally less than useful.

       

    • Organize your task repository to simplify work for the test and doc people. Consider a single backlog item with links to all of the individual related dev tasks, and a link to a single doc/test task that contains the relevant details about the feature (see previous bullet).

       

    • Annotate bugs based on the doc requirements. Those that won’t be fixed, and those from a previous public release that were fixed and change the behavior of the product, are "known issues" that must be documented. Others that were fixed from a previous public release and don’t affect behavior go into the "changes and fixes" section hidden away somewhere in the docs. Those that are only applicable during the dev cycle for this release (ones that are found in dev and fixed before release) do not need documenting. Help your writer differentiate between them.

       

    • Think about the user when you review. Docs and sample code need to be technically accurate, and at a level that suits the intended audience, but having them reviewed only by the really clever people who wrote the code in the first place may not ensure they are appropriate for newcomers to your product, or meet the needs of the "average" developers who just want to use the features. Especially if there is no "story" that helps the writer correctly position and describe the feature.

       

    • Make time at the end of the dev cycle. You can tweak code and add stuff right up to the time you need to build the installer and ship, but docs have a built-in delay that means they will always trail behind your latest code changes. They need reviewing, editing, testing, proof reading, indexing, and various production tasks. And stuff like videos and walk-throughs can often only be created once the software stops changing.

    While the core tenets of agile may work well in terms of getting better code out of the door, an inflexible "process over product" mentality doesn’t work well for user documentation. Relying on individuals and interactions over processes and tools, on working software over comprehensive documentation, and on responding to change over following a plan can all combine to make the task of documentation more difficult.

    Common effects are loss of fluidity and structure, and randomization. One colleague mentioned how, at a recent Scrum workshop for content producers, she came across the description: "We're in a small boat with fogging goggles riding in the wake of an ocean liner that is the engineering team".

    Was It A Success, Tess?

    I was discussing all these issues with a colleague only the other day, and he made an interesting comment. Some teams actually seem to measure their success, he said, by the way that they embraced agile methods during development. The question he asked was "Is the project only a success if you did agile development really well?" If analysis of all the gunk in your project management repository tells you that the agile development process went very well, but the software and the documentation suck, is it still a triumph of software development?

    Or, more important, if you maybe only achieved 10% success in terms of agile development methods but everyone loves the resulting software, was the project actually a failure...?

  • Writing ... or Just Practicing?

    Suffering Suffixes, Batman

    • 0 Comments

    One of the features of working from home is that, if you aren't careful, you can suddenly find that you haven't been outside for several days. In fact, if you disregard a trip to the end of the drive to fetch the wheely bin, or across the garden to feed the goldfish, I probably haven't been outside for a month. I suppose this is why my wife, when she gets home from work each day, feels she has to apprise me of the current weather conditions.

    Usually this is something along the lines of "Wow, it's been scorching today!" or "Gosh, it's parky enough to freeze your tabs off!" (tip for those not familiar with the Derbyshire dialect: "parky" = "cold", "tabs" = "ears"). However, last week she caught me by surprise by explaining that "It's rather Autumnal, even Wintery". Probably it's my rather weird fascination with words that set me off wondering why it wasn't "Autumny and Wintery", or even "Autumnal and Winteral". OK, so we also say "Summery", but "Springy" doesn't seem right. And, of course, I soon moved on to wondering if Americans say "Fally" or "Fallal". I'll take a guess that they end up with one of the poor relations in the world of words (ones that have to use a hyphen) with "Fall-like", just as we say "Spring-like".

    Note: I just had a reply from a reader in the US North West who says that the only words they use for "Fall" are "rainy", "misty", "stormy", and "cloudy". I guess that agrees with my own experience of all the trips I've made there, including a few in "Summer".

    The suffix "al" comes from Latin, and means "having the character of". So Autumnal is obvious. But what about "lateral". Assuming the rules for the "al" prefix, it means "having the character of being late or later" (or, possibly, "dead"), whereas I've always assumed it meant something to do with sideways movement or thinking outside the box. Maybe I need to splash out $125 to buy Gary Miller's book "Latin Suffixal Derivatives in English: and Their Indo-European Ancestry". It sounds like a fascinating bedtime read. Though a more compact (and less expensive) option is the useful list of suffixes at http://www.orangeusd.k12.ca.us/yorba/suffixes.htm. This even includes a page full of rules to apply when deciding how to add different suffixes to words, depending on the number of syllables and the final letter(s).

    So, coming back to my occasional trips outdoors, why do I have a "wheely" bin instead of a "wheelal" bin or even a "wheelive" bin? The suffix "y" is supposedly of Anglo-Saxon (and Greek) origin and means "having the quality of" or "somewhat like". Pretty much the equivalent of the "al" suffix, which also means "suitable for". The "ive" suffix means much the same, but - like "al" - derives from Latin. What I really have, however, is a "wheelable" or "wheelible" bin, using one of the Latin suffixes that means "capable of being (wheeled)". Is it any wonder that people find English a difficult language to learn? Though I suppose the definitive indication of this is "Ghoti" (see http://en.wikipedia.org/wiki/Ghoti).

    Perhaps I can make the code samples in the technical documentation I create more interesting by applying descriptive suffixes? The "ory" suffix means "place for" (as in "repository"), so my names for variables that hold references to things could easily be customerAccountory and someTemporaryStringory. And a class that extends the Product base class could be the Productive class. That should impress the performance testing guys.

    Or maybe I could even suggest to the .NET Framework team that they explore the possibility of suffixing variable types for upcoming new language extensions. An object that can easily be converted to a collection of characters should obviously be defined as of type Stringable, and something that's almost (but not quite) a whole number could be Integeral, Integeric, Integerous ("characterized by"), or just Integery.

  • Writing ... or Just Practicing?

    Unbound Generics: an Open and Closed Case

    • 0 Comments

    There's a well-known saying that goes something like "Please engage brain before shifting mouth into gear". And another that says "If you can't stand the heat, get out of the kitchen". Yet, after what's now more than a year as a full-time 'Softie, I've managed to avoid being flamed for any of my weekly diatribes; and neither has anybody pointed out (at least within earshot) how stupid I am. So I suppose it's time to remedy that situation. This week I'm going to throw caution to the winds and trample wildly across the green and pleasant pastures of generics, and all without a safety net.

    OK, I'll admit that it's probably not going to be a hugely technical venture, so if you are expecting to see lots of indecipherable code and complex derivations of generic functionality, you're likely to be disappointed. You might as well get your email client warmed up now ready to fire off some nicely toasted opinions. I mean, it's not like I actually know much about generic programming anyway. No, where I'm going with this is more slanted towards the requirements of my daily tasks - the terminology.

    You see, I'm of the opinion that when I write technical documents, they should be comprehensive enough to ensure that readers can follow the topic without having to dive off and consult an encyclopedia or dictionary every five minutes. While I don't want to reduce it to the "grandmother sucking eggs" level, which would just be boring to average developers and probably excruciatingly useless to the really clever people out there, I do need to make it reasonably comprehensive and self-sufficient. I mean, I really hate it when I'm reading something and have to keep stopping and re-reading bits to try and figure out exactly what the writer actually meant.

    You've probably been through this process, where you come across what seems like a simple word but the meaning in the current context is not familiar. The person who wrote it probably knows the topic inside out, and so the meaning is obvious to them. Yet not grasping what it really indicates in terms of the surrounding text can make the rest of the article pretty much meaningless. You need to know the topic before you can understand the bits that will help you to learn about it. Like: "Ensure you check the reverse osmosis defractor for discompulation before remounting the encapsulator". Does that mean you need to buy a new one if it's gone rusty? Who knows...

    Anyway, getting back to generics, I recently seem to have stirred up a discussion about the meaning of three fairly ordinary words without really trying. In fact, I suspect I've put our current project back by a week as all of the brains-the-size-of-a-planet people on the dev team try and figure out exactly which words mean what, and how we should use them. So what are these three amazingly complicated and unfathomable words? Would you believe "unbound", "open", and "closed"? In fact, you can throw in "constructed" as well if you like.

    These are the definitions according to Microsoft's Glossary of .NET Framework terminology (see http://msdn.microsoft.com/en-us/library/6c701b8w(VS.80).aspx):

    • Constructed generic type: A generic type whose generic type parameters have been specified. A constructed type or method can be an open generic type if some of its type arguments are type parameters of enclosing types or methods; or a closed generic type if all of its type arguments are real types.
    • Open generic type: A constructed generic type in which one or more of the generic type arguments substituted for its generic type parameters is a type parameter of an enclosing generic type or method. Open generic types cannot be instantiated.
    • Closed generic type: A constructed generic type that has no unspecified generic type parameters, either of its own or of any enclosing types or methods. Closed generic types can be instantiated.

    It seems that most folks agree that a generic type can be unbound or constructed, and a constructed type can be open or closed. Microsoft seem a little shy about defining what an unbound generic type is, though in the Generics FAQ they mention that "the typeof operator can operate on unbound generic types (generic types that do not yet have specific type arguments)". And there are plenty of definitions elsewhere that firmly identify it as being a generic type where the types of the type parameters are not defined.

    Meanwhile, it seems like most people also agree on what a "closed" type is, so the main issue is: what's the difference between an "unbound" and an "open" generic type? If an open type is also a "constructed" type, does that mean that it has the type arguments actually populated - even though the code that's executing to achieve this only contains the placeholders? Or does "constructed" in the realm of generics mean something other than the more usual "built" or "assembled"? Does a generic type start out as "unbound" and then, as it becomes "constructed", does it move through "open" to "closed"?
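    Just to try and pin the terminology down for myself, here's a little sketch of where reflection seems to draw the lines. The Wrapper class and the names in it are my own invention rather than anything from the glossary, but the typeof results are the three kinds under discussion:

        using System;
        using System.Collections.Generic;

        // Inside the enclosing generic type, List<TOuter> is a constructed type,
        // but its type argument is itself a type parameter - which makes it "open".
        public class Wrapper<TOuter>
        {
            public List<TOuter> Items = new List<TOuter>();
        }

        public static class GenericKinds
        {
            public static void Main()
            {
                Type unbound = typeof(Dictionary<,>);            // unbound: no type arguments at all
                Type closed = typeof(Dictionary<string, int>);   // closed: all arguments are real types
                Type open = typeof(Wrapper<>).GetField("Items").FieldType;  // open: argument is a type parameter

                Console.WriteLine(unbound.IsGenericTypeDefinition);   // True  - the "unbound" case
                Console.WriteLine(closed.ContainsGenericParameters);  // False - fully closed, can be instantiated
                Console.WriteLine(open.ContainsGenericParameters);    // True  - constructed, but still open
                Console.WriteLine(open.IsGenericTypeDefinition);      // False - it does have (placeholder) arguments
            }
        }

    At least by that reading, reflection happily tells "unbound" and "open" apart, even if most of the documentation doesn't bother to.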

    Perhaps there is a nanosecond or two, as the electrons fight their way through the layers of silicon in the CPU constructing and closing the type, that it's no longer unbound, but not quite closed yet? Smacks of Heisenberg's Uncertainty Principle and Schrödinger's cat if you ask me. Maybe it's all wrapped up in particle and wave theory. Or could it be that unbound types don't actually exist at all? Maybe, as they aren't actually instantiated "things", they only exist in an ethereal context; or they only exist when you aren't looking at them. I suppose that, as the only thing you can instantiate is a closed type, we don't actually need to worry about the other types anyway.

    Trouble is, when working with a dependency injection mechanism (like what I'm doing now, as Ernie Wise would have said) you need to discuss registering types and type mappings for generic types that do not have all of the type arguments fully defined as concrete types. So we're trying to decide if these registrations are for "unbound" types or "open" types. When you define a registration or mapping in a configuration file, you use the .NET Framework type name without specifying the type parameters or type arguments; for example MyApps.MyTypes.MyGenericType`2 (where the "back-tick" and number define the number of type parameters in the type). When you use code to perform the registration, you also omit the type parameters; for example RegisterType(typeof(MyTypes.IMyInterface<,>)). As you saw earlier, Microsoft says that "the typeof operator can operate on unbound generic types (generic types that do not yet have specific type arguments)", and these sure look like they don't yet have specific type arguments.
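    For the record, here's roughly what that kind of registration looks like in code. It's only a sketch - the IRepository<T>, SqlRepository<T>, and Customer names are invented for illustration - but the RegisterType and Resolve calls are the standard Unity container API:

        using Microsoft.Practices.Unity;

        public interface IRepository<T> { void Save(T item); }

        public class SqlRepository<T> : IRepository<T>
        {
            public void Save(T item) { /* persist the item somewhere */ }
        }

        public class Customer { public string Name { get; set; } }

        public static class Program
        {
            public static void Main()
            {
                var container = new UnityContainer();

                // The registration supplies no type arguments at all - which is
                // exactly why "unbound" feels like the right word for it.
                container.RegisterType(typeof(IRepository<>), typeof(SqlRepository<>));

                // Only at resolve time does the container close the type over Customer.
                IRepository<Customer> repository = container.Resolve<IRepository<Customer>>();
                repository.Save(new Customer { Name = "Fred" });
            }
        }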

    But could these registrations really be "open" rather than "unbound"? All the definitions of "open" talk about the types of the type arguments being provided by "an enclosing generic type or method". When I'm just creating a type registration, how can the type parameters be populated from an enclosing type? If I'm registering the generic child of a parent type, I can't specify what (who?) the parent will be. If I'm registering the parent, I can't prevent somebody adding a class that has a different child. And I can't register a method at all. So there is no way I can specify where the type arguments will come from.

    The problem is that almost everything else I read about using dependency injection mechanisms talks about only "open" and "closed" types. It's like "unbound" types are unclean, or unattractive, or maybe just not worthy of use in a shiny new container. Perhaps it's a conspiracy, and we'll discover that NASA actually buried them all in the Nevada desert so they can't be used to confuse clever programmers. And if I talk about unbound type registrations in my documentation, will people just make those "phhhhh" noises and toss it aside with a comment such as "...obviously doesn't have any idea what he's talking about...", "...waste of time reading that...", and "...who let him out again...?"

    Or, could it be that I'm the only one who knows the truth...?

  • Writing ... or Just Practicing?

    Come On In, The Water's Lovely...

    • 0 Comments

    A great many years ago, when I was fresh out of school and learning to be a salesman, we had a sales manager who proudly advertised that his office door was "always open". What he meant, obviously, was that we could drop in any time with questions, problems, and for advice on any sales-related issue that might arise. Forgotten what step five of "the seven steps to making a sale" is? Having problems framing your "double-positive questions"? Struggling to find "a response to overcome an objection"? Just sail in through that ever-open door and fire away. Except that the only response you ever got from him was "...you can always rely on one end of a swimming pool".

    Now, I was young and a bit damp behind the ears in those days - and probably not the sharpest knife in the box. So it was several months before I finally plucked up the courage to ask one of the older and more experienced members of our team what he actually meant. "Simple", said my wise colleague, "he means that you can always depend on a 'depend' (i.e. deep end). In other words, no matter what the question or situation, you can usually get the upper hand by prevaricating".

    I guess you only need to watch politicians on TV to understand that they all take advantage of this rule. But, strangely enough (and I don't want to give too many precious retailing secrets away here), it can be remarkably useful. Imagine the scene: your customer can't make up their mind about some particular type of plant food (OK, so I was working in the horticultural trade in those days). They ask "Will it be safe to use on my prize Dahlias?" You answer "Well I was talking to a chap last week who managed to grow double his usual crop of tomatoes!" Notice that I didn't say he was using any product he might have purchased from me, or that he actually used any fertilizer at all. Such is the power of vagueness...

    All I need do now is overcome the objection, and toss in a double-positive question: "I'm sure that the quality and the results you'll get will be far more important to you than the exceptionally low price we're offering it at this week", and "Shall I wrap it up for you to take away, or would you like us to deliver it tomorrow morning?" Amazing - seems I haven't lost the touch.

    So, having regaled you with rambling reminiscences for the last ten minutes, I probably need to try and drift back on-topic. Or, at least, to within striking distance. What prompted all this was reading the preface to our new Microsoft Application Architecture Guide 2nd Edition. In it, David Hill explains that you can always tell an architect because their answer to any question is invariably "it depends". He doesn't say if this behavior extends to things like "Can I get you a coffee?" or "Would you like a salary raise?", but I'll assume he is generally referring to architectural design topics.

    And, even though I'm not an architect (INAA?), I can see what he means. In the section on validation, for example, it quite clearly states that you should always validate input received from users. Makes a lot of sense. But what about if you have a layered application and you want to pass the data back from the presentation layer to the business layer or data layer? Should you validate it at the layer boundary? Well, it depends on whether you trust the presentation layer. You do? Well it also depends on whether any other applications will reuse the business layer. Ever. And it depends if you have a domain model, and if the entities contain their own validation code. And then it starts to talk about how you do the validation, depending on what you are validating and the rules you want to apply.
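    To make the "it depends" a little more concrete, here's the sort of thing the guide is weighing up. All of the names below are my own invention rather than anything from the guide: an entity that carries its own validation rule, and a business-layer method that re-checks its input at the boundary instead of trusting the presentation layer:

        using System;

        public class Customer
        {
            public string Name { get; set; }

            // Entity-level rule: a customer must at least have a name.
            public bool IsValid()
            {
                return !string.IsNullOrEmpty(Name);
            }
        }

        // Business layer
        public class CustomerService
        {
            public void Register(Customer customer)
            {
                // Boundary check: don't assume the presentation layer validated anything,
                // because some other application may reuse this layer later.
                if (customer == null || !customer.IsValid())
                {
                    throw new ArgumentException("Customer details failed validation.", "customer");
                }

                // ... save the customer ...
            }
        }

    Whether you actually need both checks is, of course, exactly the kind of thing that depends.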

    You see, I thought at the start of this project that I could just learn the contents of the guide and instantly become "a proper architect". It's not likely to happen. What it does show is that, yes, you need to know the process, the important factors, and techniques that will help you to perform analysis and make good decisions about structure and design. And you need to understand about conflicting requirements, quality attributes, and crosscutting concerns. You have to understand the types of applications, and their individual advantages and liabilities. In fact, there is an almost endless stream of things that you could learn to help you "do architecture".

    And that, it seems, is where the guide really comes into its own. It explains the process and the important elements of design. It lists the vital requirements, and provides guidelines to help you make decisions. It tells you what to look for, where to look, and where to find out more. It even contains detailed references to things like components, layers, tiers, technologies, and design patterns (see http://msdn.microsoft.com/en-gb/library/dd673617.aspx).

    And, at almost every step, it proudly flaunts its "it depends" approach. Architecture is not, it seems, about learning rules. It's learning about software and applications, environments and infrastructure, requirements and conflicts, people and processes, techniques and technologies, and - best of all - having your say in how a small part of our technological world behaves. So, come on in - the water's lovely...

  • Writing ... or Just Practicing?

    Some Consolation...

    • 0 Comments

    Suddenly, here at chez Derbyshire, it's 1996 all over again. Instead of spending my days creating electronic guidance and online documentation in its wealth of different formats and styles, I'm back to writing real books. Ones that will be printed on paper and may even have unflattering photos of me on the back. And there'll be professional people doing the layout and creating the schematics. It's almost like I've got a real job again. I'll be able to do that "move all your books to the front of the shelf" thing in all the book stores I visit, and look imploringly at people at conferences hoping they'll ask me to sign their copy.

    The reason is that we're producing a range (the actual number depends on how fast I can write) of books about the forthcoming version of Enterprise Library. I've been tasked with creating books that help people get to know EntLib and Unity, and get started using them in their architectural designs and application implementations. So the list of requirements includes making the books easy and enjoyable to read, simple enough to guide new users into understanding the basic features and their usage, deep enough to satisfy "average" developers who want to know more and learn best practices, and comprehensive enough to cover all of the major features that take over 1000 pages to describe in the online documentation.

    They say that a job's not worth doing if it doesn't offer a challenge, so I guess this job is definitely on the "worth doing" list. And getting it all into books of about 250 pages will certainly be interesting as well as challenging. Maybe I can write very small, or use "txt speak". Need help using the Message Queuing feature of the Logging block? How about: "2 cre8 a msg q u use mmc & put name in cfg file thn cre8 logentry & snd. 2 c if it wrkd look in q". I can see my editor and proofreader having no end of fun with that approach.

    But the hardest bit was actually deciding how (and whether) to provide sample code. It's likely there won't be room to print complete listings of all the code I use, which is probably an advantage in that readers can see and concentrate on the actual bits that are important. Of course, I need to build some examples to make sure what I'm writing actually does do what it's supposed to. So it seems sensible to offer the code to readers as well, like they do with most other programming books. But how should I present the code? As a pretty application with nice graphics and a delectable user interface? Maybe even be bang up to date and use WPF?

    I remember when I first started writing books about "Active Server Pages" how we had no end of problems creating sample code that users could easily install and run. There was no SQL Server Express with its auto-attach databases mechanism, no built-in Web Server in Visual InterDev (this was in the days before Visual Studio as we know it now), and you couldn't even assume that users had a permanent Internet connection (the vast majority were on a dial-up connection). So you had to create complicated sets of scripts and a setup routine, even for ASP samples, that registered the components you needed and populated the database, as well as providing a long list of prerequisites.

    But things are much easier now. You can give people a zip file that they can just unzip and run. Even Windows Forms and WPF samples just work "out of the box". So, even though I haven't done much with WPF before, I thought I'd give it a try. I was doing OK with creating the form and adding some text and buttons, but then it came to adding a DataGrid. I thought maybe they'd forgotten to include it in the version of Visual Studio I'm using, but it seems that I wasn't prepared for the Wonderfully Perverse Features of WPF where you need to use Grids and ListViews and tons of stuff inside each one, then fight with Data Contexts and Binding Paths.

    In ASP.NET I just grab a GridView and drop it onto the page. So should I do the samples as an ASP.NET application? That adds complexity with postbacks and maintaining state, and makes it hard to show some features such as asynchronous data access or cache scavenging. And it hides the real functionality that I want to show. Besides which, EntLib is really a library for helping you manage your crosscutting concerns, not a development environment for Web sites. How about Silverlight? Well, then I'm faced with a combination of the issues from WPF and ASP.NET. Maybe I should just take the easy way out and use Windows Forms. As a technology, I know it well and it provides all the features I need to create glorious user interfaces.

    And then I remembered how long I've been writing code. Back in the early 80's, an attractive and intuitive user interface was one where you had a text menu on an 80-character by 26-line screen, and you could press any key you liked as long as it was one of those in the list. No need to mess about with sissy things like mice or trackballs, and no confusion about which things to click or where to wave your mouse pointer to see what things did. You knew where you were with applications in those days. And there's plenty of interest now in the old stuff such as text-based adventures and simple chunky graphic games like Asteroids and Space Invaders.

    So why not follow the trend? Simple example applications that run in a Console window, have simple menus to select the specific sample you want to run, and use simple code to display progress and the results of the process. No need to run Visual Studio, the database will auto-attach to SQL Server Express, and readers can easily find the appropriate code in a single "Program" file if they want to read it all. In fact, I've seen this approach used by the guys from the data team here at Microsoft when they do presentations, and it really does seem to make it easier to see what's going on.
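    In case that sounds too retro to picture, something along these lines is what I have in mind. The menu entries and method bodies here are placeholders rather than the actual book samples, but the shape is the point:

        using System;

        internal static class Program
        {
            private static void Main()
            {
                while (true)
                {
                    Console.WriteLine();
                    Console.WriteLine("1: Log an entry using the Logging block");
                    Console.WriteLine("2: Read a connection string using the Data block");
                    Console.WriteLine("0: Exit");
                    Console.Write("Select a sample to run: ");

                    switch (Console.ReadLine())
                    {
                        case "1": LoggingSample(); break;
                        case "2": DataAccessSample(); break;
                        case "0": return;
                        default: Console.WriteLine("Not on the menu - try again."); break;
                    }
                }
            }

            private static void LoggingSample()
            {
                // The real sample would call into Enterprise Library here; this
                // placeholder just shows the progress-and-result pattern.
                Console.WriteLine("Writing a log entry... done.");
            }

            private static void DataAccessSample()
            {
                Console.WriteLine("Opening a connection and reading rows... done.");
            }
        }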

    So that's where I'm going at the moment, at least until somebody else makes an alternative "executive decision" further down the line. What do you reckon? Do you want fancy interfaces for your code samples? Or are simple Console applications that focus on the "working parts" more useful and easier to understand? How about this for a flash-back to the past:

    Or I suppose I could just publish the code as text files...

  • Writing ... or Just Practicing?

    I Do Like To Be Beside The Seaside

    • 0 Comments

    Oh well, back to work, holidays over for another year. At least I managed to morph from a sickly shade of pale to a faint shade of tan, and without catching airplane 'flu or any other weird tropical disease (at least not one that's shown up so far). In fact, it was one of the most hassle-free and relaxing holidays we've had. I even managed to forego the doubtful pleasure of email for a whole six days without caving in and searching for an Internet cafe.

    Mind you, arriving home and deciding I should try to catch up with the mountain of emails that arrived while I was away was an interesting experience. It seems that my mailbox was upgraded to Exchange Server 2010 while I was away. While it doesn't affect using Outlook over HTTP, I'm battling vainly against the new version of Outlook Web Access. It seems to rummage around and collect every email it can find that vaguely corresponds to the one I'm trying to read, and then hides them all away in a tree view that you have to double-click to see the actual messages.

    I suppose it's useful having all of the related messages collected together, even if it includes ones I sent, and even ones I've deleted (helpfully shown crossed out in the tree view). But I reckon it's going to take me a good while to get used to this approach. I spent the first ten minutes trying to figure out how to delete a collection of different emails - it seems like you can't just highlight them all and press delete any more.

    And I can't find any options to go back to the old way either - in fact it took ages to find the option where I could turn off Out of Office Replies (which are now called "Tell people you're on vacation"). Maybe I'll just leave them turned on for good so I don't have to fight with OWA very often. Or just do what I ended up doing the evening when we got home and I was too tired to try and understand the new approach - use the cut-down "OWA Light" version instead. It's much like Hotmail (sorry, Windows Live Mail) - it just works like you'd expect.

    Aha! (added later) I just found out you need to click the little arrow next to "Arrange by" and uncheck "Conversation".

    Anyway, getting back to last week's trip, I still haven't figured why it all went so smoothly compared to the usual hassle of traveling anywhere by plane. Yes, we did have to set off for the airport at 3:00 AM; but it's nearby, parking close to the terminal was easy, the check-in queue consisted of two people, there was time for a leisurely coffee, and then through security in less than ten minutes. Half an hour in departures chatting to people we know who happened to be on the same flight, then in the air fifteen minutes ahead (!) of schedule and arrival in Malta half an hour early.

    With our taxi waiting at the airport, we were at the hotel within half an hour and ready to hit the beach! And it was just as easy coming home. Meanwhile, Malta is easy because they have real 250 volt electricity with UK-style sockets, and they drive on the proper side of the road (the left). Plus, all of the roads have meaningful road signs, even if every road you go down seems to take you back to the capital Valletta. Although the concept of giving way to others, even when they have priority, is more an option than a rule. I liked that all the roundabouts have a sign saying "Please obey the roundabout rules". I guess everyone does, in a roundabout way.

    We did borrow a sat-nav with the rental car, and I'm really glad I didn't buy one like it. The helpful lady inside it had a habit of reading out the directions one turn early, then suddenly screaming "Turn left, turn left, turn left" just after you passed the junction (even when you should have turned right). I think the one word we heard most over the whole week was "Recalculating". Of course, it didn't help that the paper map of the island we were using to decide where to go contained English place names, while the sat-nav only had the Maltese equivalents. She even directed us up one street that started out about a foot wider than the car and then got even narrower, culminating in a right-angle turn. I got a lot of reversing practice during the week.

    But if you are looking for somewhere relatively peaceful, pleasant, and full of history, Malta is worth a visit. Go in Spring or Autumn unless you like being burned alive, and - if you don't fancy driving - use the hop-on hop-off tour buses to see the sights. There are some lovely beaches, fabulous views, wonderful cathedrals and churches, a Roman villa, and an amazing walled city (Mdina) to see. It is a bit barren in places, and not the tidiest or best-maintained place I've ever been, but - hey - this is the Mediterranean. And everyone, everywhere, speaks good English. Here's some photos:

    Mosta Dome

    Valletta Cathedral

    Paradise Bay

    Marsaxlokk Harbour

    The Blue Grotto

    Golden Bay

    And, yes, I did find the house where I lived more than 40 years ago and even got to talk to the daughter of the guy who owned it back then! Meanwhile, I suspect we'll never be able to go on holiday again because it will never be this easy in future.

  • Writing ... or Just Practicing?

    Another Bowl of Peeled Grapes Please, Waiter

    • 0 Comments

    If all goes according to plan, I should be spread-eagled in a sun lounger on a foreign beach as you read this, with a copy of some second-rate espionage novel in one hand and a large and very cold beer in the other. Maybe even nodding to the passing waiter to bring another plate of canapés and a bowl of ready-peeled grapes, or passing the time of day with famous celebrities as they stroll slowly past splashing their feet in the warm clear blue water of the Mediterranean. I mean, we did book a really nice hotel; though - looking now at some photos posted on the Web by previous visitors of the construction site next door to it and the dilapidated street of half-demolished houses round the back - I'm not so sure.

    But, still, it will be a break from the hectic document engineering thing I do all the rest of the time. Having finally got round to taking a holiday this year, we decided on a week in Malta - somewhere we've been planning to go for some years. When I was young, we lived there for four years (my father was in the Royal Air Force) and it will be interesting to see how it's changed. Some friends who went there a few years ago sent me a postcard of their hotel on the cliff at Golden Bay, pretty much where I remember there being a military firing range. We kids regularly used to dig up clips of shells on the beach below, to the horror of our parents. I wonder if the tin shack with its cool box full of ice-cream (the only facility on the beach at the time) is still there. I doubt it.

    To save the hassle of trying to coordinate flights and stuff, we just booked through a local travel agent. Let them earn their commission by doing all the hard work. But what's amazing is the volume of paperwork and the apparent complexity of organizing it all. When I book a trip to Redmond, I get a single PDF through email that contains all of the details of the flight, hotel, rental car, and other stuff. So far (and we hadn’t actually departed when I was writing this post) I've had over 40 pages of stuff from the travel agent for this trip. I've signed seven forms, and paid three different amounts on my credit card. There's so much bumph that they even send you a nice hard-backed folder to keep it all in. I wonder how much all that costs?

    And when I fly to Redmond, I just need to turn up at check-in and wave my passport. This trip, I've got at least three pieces of paper that list all the documents I need to have ready just to check in. They include a 24 page booklet that contains flight coupons, details of the hotel, accommodation terms, flight times and destination information, health warnings, travel advice, and - best of all - two vouchers for a free drink on the plane. I especially like that these have a picture of two intertwined champagne glasses on the front, and a stern warning on the back that they are "not valid for alcoholic drinks, including champagne".

    It also says I have to fill in a form with my name and home address, and the address of the place we'll be staying in Malta. Of course, they posted the form to me at my home address, and helpfully sent it along with a confirmation of the destination hotel address. You begin to wonder if it would have been less hassle just doing it all through Expedia from the start. Mind you, I was reading this week about a new ruling from the People's Republic of Europe that says if you are ill while on holiday, you can claim back your holiday and take it later. As you can generally rely on picking up some variation of airplane flu while travelling, maybe I'll be able to stay there until Christmas. They say the weather is nice there in the autumn.

  • Writing ... or Just Practicing?

    Generically Modified

    • 2 Comments

    Despite being a writer by profession, and regularly castigating my colleagues for being recalcitrant in reviewing stuff I write, I actually dislike doing reviews myself. When I was an independent author (before I signed my life away to Microsoft), I was often approached by companies offering to pay me to write reviews of their products for their Web sites and literature. Even taking into account the presumed integrity of the author, this type of review seems somehow to be tainted when compared to an independent review by someone who doesn't stand to gain from it.

    Yet I depend on reviews and reviewers, of both the technical and editorial kind, not only as part of my daily job creating guidance, but also when buying stuff generally. If I'm looking to buy a new ADSL modem or NAS drive, I'm likely to check out the reviews from real users to see if the one I fancy (the product not the reviewer) is actually any good. A typical example is when recently researching mobile Internet connection dongles and packages (which, from the majority of reviews, all seem to be equally useless). If I'm looking to buy a book, I'll read the independent reviews from readers. Only when I'm buying something as personal as music do I tend to avoid being swayed by the opinions of others. But that's mainly because, underneath this suave and intellectual exterior (?), I'm really still a heavy metal fanatic with a weird taste in classic rock music.

    So I always feel that I should do my bit by contributing reviews where I think I can add some useful feedback to the discussion. And there's no point in writing a review unless you tell the truth. OK, so my blog is not generally known for being exceptionally high in factual content, but I do try very hard to be fair and even handed. So, let me start by saying that the latest book I've been reading is not actually bad - in fact, in general, it's well-written, informative, useful, and I didn't find any glaring errors in it.

    And as you are obviously now waiting for the "but", here it comes. I bought "Accelerated VB 2008" (APress, ISBN 1590598741) based on reviews and the publisher's blurb in order to provide the equivalent training for VB as I undertook with Jon Skeet's "C# In Depth" book (see Syntactic Strain). According to the aforementioned blurb, it covers precisely what you need to know to use VB 2008 (a.k.a. VB 9.0) and .NET 3.5 effectively. This includes the newer and more advanced features such as generics, operator overloading, anonymous methods, and exception management techniques. And a whole paragraph of the description talks about the coverage of LINQ, extension methods, lambda expressions, and other VB 9.0 features.

    Yes, the book covers these. But the really new and exciting stuff only gets a very brief summary in the introduction (5 pages), and just the final chapter of 43 pages. And the new topics I'm interested in get about a page each, yet there are 14 pages on using LINQ with XML. It's not that the VB 9.0 stuff isn't covered at all, but it certainly feels like it was added as an afterthought. OK, so there is a whole chapter earlier in the book devoted to generics, which is really quite good, and there is certainly adequate coverage of other VB 8.0 features. But it feels like the book is actually "Accelerated VB 2005 updated to VB 2008". And having been an independent author in a previous life I know that this is what happens. As soon as a new version of a product is announced, the publisher is hounding you to update your previous book to the new version. In three months. And without them paying you much money.

    I guess this is the core difference between the two books I've been using. "C# In Depth" feels like it's telling you a story, and the features of the versions of the languages are partially intertwined throughout so you understand how each addition to the languages serves a specific purpose and simplifies or extends previous features. "Accelerated VB 2008" feels more like a tutorial that aims to cover advanced uses of Visual Basic without really explaining the evolution and purpose of the language. For example, there's a whole 45 page chapter devoted to threading, which seems to me to be a feature of the .NET Framework rather than a feature of Visual Basic.

    Perhaps I expected something different because I was looking for a book that covered the new language features in depth, whereas "Accelerated VB 2008" feels more like it is aimed at bringing VB programmers who basically still write like they are using VBScript into the real world. But it surfaces issues that I suppose I always recognized are part of the overarching view of programmers and programming languages (at least in the Microsoft world). It's like VB programmers have to be protected from reality; and must always be reminded how you define a class, use an interface, and handle exceptions - irrespective of the "advancedness" of the book.

    Again, I must repeat that this is not a bad book. It is really quite good, and will be a useful addition to the Visual Basic programmer's library - especially if (like me) you are still a bit vague about generics, delegates, lambdas, and similar topics. It also helped me more clearly see how the process of creating and updating documentation is a lot harder than it may at first seem. When I work on updates to the guidance for new versions of our deliverables here at p&p (such as Enterprise Library) I try really hard to interweave the new features with the existing content. In the previous two versions, for example, we've completely reordered the sections and topics, added new overviews and "how to" sections, and modified the structure to give the new features the appropriate precedence alongside the existing ones. And it really can be tough to do when there's already over 1,000 pages of it.

    And while Jon's "C# in Depth" book did wind me up with its repeated use of the term "syntactic sugar", "Accelerated VB 2008" also has one overarching feature that I found extremely annoying. Like so many other books, they insist on printing complete listings of the example code, even when it covers two or more pages, with the explanation only at the end of the listing. The result is much page flipping to understand what's going on. But worst of all, when there is a minor change to one line of code to illustrate a feature of the language, they print the entire code again with no highlighted line or indication of where the change is until you read the text after the listing. After a while I started just believing what the text said because it seemed too much effort to go back and try and find the changed line.

    So here's a challenge. Is there a book out there that covers the language features of Visual Basic 8.0 and 9.0 without describing how to declare variables, write a class, handle exceptions, and interact with the basic .NET Framework classes? One that explains in detail how features such as extensions, lambdas, LINQ, and generics work, and which makes it easy to understand their purpose and usage? Or am I just expecting publishers to commission books that focus only on stuff that I've been too idle to learn about so far? Maybe a market sector consisting of one person is not a viable business proposition...
