Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

  • Writing ... or Just Practicing?

    To R2 Or Not To R2?

    • 1 Comment

    ...that is the question. Whether 'tis nobler in the server cabinet to suffer the outrageous lack of valuable new functionality, or to take arms against the powerful improvements to the core Windows Server operating system. And by opposing, manage without them? To sleep (or hibernate): perchance to dream of an easy upgrade. I guess you can see why I don't write poetry very often - it always seems to end up sounding like somebody else's.

    So the disks for Server 2008 R2 dropped through my letter box the other week, and since then I've pondered on whether to upgrade. It's less than a year since I spent a whole week crawling around inside the server cabinet installing two sparkly new servers running Windows Server 2008, upgraded the networking, set up four virtual machines on Hyper-V, and generally dragged my infrastructure screaming and cursing into the twenty-first century. And now it seems it was all to no avail. I'm out of date and running legacy systems all over again.

    OK, so I assumed that there would be a Windows Server 201x at some point, and that I'd once again fall by the wayside, but I never expected it to be this soon. While the hardware might not last out the next decade, I kind of hoped that I'd just have to drop the VMs onto a couple of new boxes when the existing ones decided it was time for the bits of bent wire and plastic to give up the ghost. But now it seems the ones and zeros need to be replaced as well. Maybe they're nearly worn out too.

    So I printed off all the stuff about fixing upgrade problems (with the fair assumption that - if they exist - I'm going to find them), read the release notes, and then tossed the disk into the drive of the standby machine. At least if I break that one I can reinstall from a backup without interrupting day-to-day service. Of course, it would also be an interesting test of my backup strategy, especially as I've not yet had the misfortune to need to resurrect a Windows 2008 box using the built-in backup and restore feature.

    After a few minutes rummaging about inside the machine, the installer produced its verdict. OK, so I did forget about domain prep (it's also the backup domain controller), but it also said it needed 18+ GB of free space on Drive C. Not something I was expecting. I only have 17 GB free, so I'd probably need to move the swap file to another drive (there's over 100 GB available there) - but would that break the upgrade? And the VMs have a lot less free disk space. I'll need to grow the partition for them, and then try and shrink it afterwards - otherwise it will take even longer to export backups. Hmmm, not such a simple decision now, is it?

    One thing is clear: next time I order any machine I'm going to specify it with 4 x 1 terabyte drives. I seem to spend my life trying to find extra disk space, even though the current boxes have nearly 400 GB in them. And they spend 99.9% of their time with the performance counter showing 1% load. It's a good thing I'm not trying to do something enterprisy with them.

    So with it looking likely that I'll be confined to my legacy version of Windows 2008 for the foreseeable future, I decided to review what I'd be missing. Maybe it's only a facelift of the O/S, and there are just a few minor changes. Well, not if you look at the "What's New in Windows Server 2008 R2" page. There's tons of it. Pages and pages of wonderful new features that I can drool over. But do I need them? I guess the one area I'm most interested in is updates to Hyper-V, and that list seems - to say the least - a little sparse. I don't need live migration, and I'm definitely convinced that, with the minimal workload on my systems, I don't need enhanced processor support or bigger network frames. And dynamic virtual machine storage won't help unless I stuff the box with bigger disks.

    The one feature I would like is the ability to remove the redundant connections* that Hyper-V creates in the base O/S (see "Hyper-Ventilation, Act III"), but I guess I can live without that as well. So what happens if I don't upgrade? Will I become a pariah in the networking community? Will my servers fall apart in despair at not getting the latest and greatest version of the operating system? Will I be forever dreaming about the wonderful new applications that I can't run on my old fashioned O/S? Will I still be able to get updates to keep it struggling on until I get round to retiring?

    Or maybe a couple of bruisers from the Windows Server division will pop round with baseball bats to persuade me to upgrade...

    * Update: In Windows Server 2008 R2 you can untick the Allow management operating system to share this network adapter option in Virtual Network Manager to remove these duplicated connections from the base O/S so that updates and patches applied in the future do not re-enable them.

  • Writing ... or Just Practicing?

    Public Nopinion

    • 0 Comments

    I'm not quite sure how she did it, but this year my wife managed to convince me to follow the latest weekly pandering to public opinion that is "The X Factor" - our annual TV search here in Britain for the next major singing and recording star. I did manage to miss most of the early heats, except for those entrants so excruciatingly awful that my wife saved the recording so she could convince me that there's a faint possibility I don't actually have the worst singing voice in the world. Though I suspect it's a close-run thing.

    And I have to admit that some of the finalists do have solid performing capabilities. There's a couple of guys in the "over 25s" section who really look like they could make it as recording artists. Though, to really tell, you should probably watch (or rather, listen) with the screen turned off so you aren't distracted by the accompanying (and very talented) dancers, the audience madly waving banners, and the pyrotechnics and light shows laid on for every performance. I mean, it's supposed to be all about the voice.

    And here we come to the crux of the matter. After one particularly controversial decision where a young lady with probably the purest and most versatile singing voice was voted off, the show's owner Simon Cowell said that "he trusts public opinion" and that he "wouldn't organize a show like this if he didn't". Unlike what seems to be the majority of the baying public out there, I actually agreed with his decision. If the show is supposed to be about finding the best artist based on the opinions of the "Great British Public", then they should be allowed to make the decisions.

    What's worryingly clear, of course, is that the public don't actually vote based on the principles of the competition - they vote for their favorite. I suspect that the tall and attractive blonde lady gets a lot of votes from young men, and the teenage lad (who, to be honest, doesn't have the greatest voice) gets a lot of votes from teenage girls. Meanwhile my wife wanted the guy with the mad hairdo who looks a bit like Brian May from Queen (and is a very passable rock music singer) to win. Or maybe the one who wears funny hats.

    But more than anything, you have to assume that a great many people vote - mainly out of spite - for the act that Simon Cowell (currently Britain's most hated person) has been trying to get voted off for the past many weeks. Rather like the last TV-based ballroom dancing contest where the lumbering overweight TV reporter actually had to resign from the competition because it was clear that he was the worst every week, but the public kept voting him in.

    And here, once again, we come to the crux of the matter. Who is best placed to choose the optimum outcome for any activity that involves choice? The principles of democracy suggest that allowing everyone to have their say, and choosing the most popular outcome, is the way to achieve not only popularity, but success as well. It's based on the assumption that everyone will make a logical choice based on their situation, and the resulting policy will therefore satisfy the largest number of people and achieve the optimum outcome. Though that doesn't appear to be the way that the People's Republic of Europe works, where the public gets no choice, but that's a whole other ballgame.

    In our world of technology development generally, and particularly here in p&p, we rely on public opinion a great deal. We use advisory boards and public votes to figure out the future direction and features for our offerings, and to provide an overall guide for the optimum direction of our efforts. In theory, this gives us a solid view of the public opinion of our products, and ensures that we follow the path of improvement in line with needs and desires.

    But is this actually the case? If 6.5 million people watched "X Factor", but only a few thousand voted, is the result valid? Could it be that most people (like me) have an opinion on who should win, but either lack the professional ability to make a properly informed decision on their real talent, or just can't be bothered to pick up the phone? In a similar way, if only 35% of the population actually vote in a general election, is the result actually valid? Is it only the opinionated, or those with an axe to grind, who influence the final outcome?

    It's worrying if this trend also applies to software development. When we run a poll, send out questionnaires, or consult an advisory group, are we actually getting the appropriate result? If the aim is to widen the user base for a product, is asking people who already use it (and, in many cases, are experts) what they want to see the best way to broaden the reach and improve general user satisfaction? No doubt there's been plenty of study in this area by polling organizations and others associated with statistical modelling, but it's hard to see how you adjust the numbers to make up for the lack of input from a very large proportion of the population.

    In particular, if you are trying to make a product or service more open to newcomers, and widen the user base to include a broader spectrum of capabilities, how do you get to the people who have never heard of your product? Is asking the existing expert user base, and perhaps those already interested in learning about it, the best way? And, if not, how else can you garner a wider range of non-biased opinion?

    Mind you, I reckon the Geordie with the big teeth will win...

  • Writing ... or Just Practicing?

    Ready for Hibernation

    • 0 Comments

    Usually the only time I feel like digging a big hole and climbing in is when I make some inappropriate remark at an important social event, or tell a rather too risqué joke during a posh dinner party. However, since I never get invited to posh dinner parties, and extremely rarely have the opportunity to attend any "cream of society" gatherings, I've so far avoided the need to invest in a new shovel. And, not being a polar bear, I don't have a tendency to view large holes in the snow as suitable resting places for the winter either. In fact, even though I'm quite adept at sleeping, it turns out I'm a rather late convert to the notion of hibernation.

    As we've probably already reached the "what on earth is he rambling on about this week" moment, perhaps I need to mention that I'm rambling on about my recent epiphany in terms of turning off the computer at the end of a working day. Maybe it's because of the many years working with operating systems where you could quite safely just pull the plug or do the BRST (Big Red Switch Time) thing, yet be confident that the whole caboodle would happily start up again fully refreshed and ready to go the next day as soon as you applied some volts to it.

    None of my old home computers, Amstrad PCWs, or DOS-based boxes ever minded an abrupt termination of electricity (except you lost whatever you forgot to save), and the Wii and other more consumer-oriented stuff we have also seems to cope with being abruptly halted. But not Windows Vista (and, I assume, Windows 7). You get those nagging reminders that you've been naughty, and a threat that it will spend the next four hours rummaging round your system just to see if you did any damage. I suspect this is probably just a long "wait" loop that prints random stuff on the screen, designed to teach you a lesson.

    Of course, when XP was king, we were offered the chance to "Hibernate", and sometimes even "Sleep", rather than turning the thing off. I don't know if anyone actually made this work reliably - it never did on any of the laptops I've owned, and the XP-based Media Center box we had up till recently only managed it through some extra bolt-on hardware and software. Even then, I had to reboot it at least once a week to let it catch up again. So I've been extremely reluctant to do anything other than a proper "shut down" at the end of each session.

    But recently I've noticed that colleagues seem to be able to shut the lid and briefcase their laptop, yet have it spring almost instantly into life when they open it up again; and without burning a hole in the side of the bag, as my tiny Dell XPS tried to do last time I attempted this. Aha! It turns out they are running Vista. Maybe the time it takes to actually get started from cold (even if you hadn't been naughty last time you turned it off) was a contributing factor to their hibernation behavior...

    For me, matters came to a head with the machine we use to view the signal from the IP camera that my wife uses to watch night-time wildlife (foxes, badgers, etc.) in our garden. Like most software written by companies that actually specialize in hardware, the viewer application is quirky - and that's being kind. It won't remember connection details, has no automation facilities, and accepts no startup parameters. The only way to get it running is to mousily fiddle with the UI controls. There aren't even any shortcut keys or a recognizable tab order, so my usual kludge of using a program that generates key presses won't work either.

    This means that, even though I can enable auto login for Windows (it's not part of my domain), I can't get the **** viewer to connect automatically at startup. It was only after a lot of fiddling about that I decided to try hibernating the machine with the "login when waking from sleep" option disabled, so you only have to close the lid to turn it off, then hit the power button to get back to watching wildlife. And, amazingly (to me at least) it seems to work flawlessly. The only time it actually gets turned off is when it needs to reboot for an update patch.

    Suitably impressed, I enabled Sleep mode on the new Media Center box; which runs Vista Home Premium Edition. I managed to get the screensaver I adapted some while back (see The Screensaver Ate My Memory) to run on Vista. It shows assorted photos from our collection for a specified time and then terminates, allowing the machine to go to sleep. Yet it reliably wakes up in time to record scheduled TV programs, collect guide data, and do all the other complicated stuff that Media Center seems to require (to see how many things it does, just take a look in Task Scheduler).

    So, somewhat late to the party, I'm now a confirmed sleeper and hibernator. My laptop is happily slumbering away (though not in a large hole) as I write this - on another machine, obviously. And the incredible thing is that it comes back to life faster than my (somewhat dated) mobile phone does. In fact, it takes the Wii box, the consumer DVD player, and the TV longer to get going than my laptop. I've even got my wife's tiny Vista-based laptop set up to hibernate so she can get to her vital email inbox more quickly. Maybe we're at last reaching "consumer-ready" status for computers? Though I'd have to say that I haven't needed to reboot my phone three times in the last month to install updates.

    And while we're on the subject of screensavers (yes we are), I still can't figure why I had to completely rebuild the one that worked fine on XP to make it run on Vista. The problem was that it has a series of configuration settings, which include the path to the root folder containing the pictures to display. It saves these using the VB built-in persistence mechanism, which quite happily remembers the settings each time you open the configuration window. But when Vista fires up the screensaver after the requisite period of inactivity, it suddenly forgets them all again.

    At first I thought it was to do with the weird path mappings Vista uses for the Public Pictures folder, but no amount of twiddling would make it work (have you ever tried debugging a screensaver?). And I can't find any sample code that shows how you get to it using environment variables. However, after a lot of poking about in the code, it seems that Vista may actually run the screensaver under a different account or context from the current user context (though I haven't been able to confirm this), so the user-specific settings you make in the configuration window can't be located. Finally, after applying my usual large-hammer-based approach to writing code (I made it store and read the settings from a simple text file in the C:\Temp folder), it works again.
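    For anyone who hits the same problem, the workaround boils down to something like this - a rough C# sketch of what my VB code actually does, where the path, class name, and setting names are all just illustrative:

        using System.Collections.Generic;
        using System.IO;

        // Persist the settings as simple "name=value" lines in a fixed,
        // machine-wide location, so that whatever account or context Vista
        // runs the screensaver under can still find them.
        static class ScreensaverSettings
        {
            private const string SettingsPath = @"C:\Temp\ScreensaverSettings.txt";

            public static void Save(Dictionary<string, string> values)
            {
                List<string> lines = new List<string>();
                foreach (KeyValuePair<string, string> pair in values)
                {
                    lines.Add(pair.Key + "=" + pair.Value);
                }
                File.WriteAllLines(SettingsPath, lines.ToArray());
            }

            public static Dictionary<string, string> Load()
            {
                Dictionary<string, string> values = new Dictionary<string, string>();
                if (!File.Exists(SettingsPath)) return values;
                foreach (string line in File.ReadAllLines(SettingsPath))
                {
                    int split = line.IndexOf('=');
                    if (split > 0)
                    {
                        values[line.Substring(0, split)] = line.Substring(split + 1);
                    }
                }
                return values;
            }
        }

    The configuration dialog calls Save, and the screensaver itself calls Load - and because the file lives in a fixed location rather than in a user profile, it doesn't matter which account either of them happens to run under.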

    At last I can sleep (and hibernate) easy...

  • Writing ... or Just Practicing?

    Can Writers Dance The Agile?

    • 2 Comments

    It's probably safe to say that only a very limited number of the few people who stroll past my blog each week were fans of the Bonzo Dog Doo Dah Band. Or even, while they might recall their 1968 hit single "I'm the Urban Spaceman" (which, I read somewhere, was produced by Paul McCartney under the pseudonym of Apollo C. Vermouth), are aware of their more ground-breaking works such as "The Doughnut In Granny's Greenhouse". So this week's title, based on their memorable non-hit "Can Blue Men Sing The Whites" is pretty much guaranteed to be totally indecipherable to the majority of the population. Except for the fact that the BBC just decided to use it as the title of a new music documentary.

    But, as usual, I'm wandering aimlessly away from the supposed topic of this week's verbal perambulation. Which is, once again, about agileness (agility?) in terms of documenting software under development. No, really, I have actually put some solid thought into this topic over the past several months, and even had one or two of the conclusions confirmed by more learned people than me, so they are not (wholly) the kind of wild stabs in the dark that more commonly make up the contents of these pages.

    Where's The Plan, Dan?

    One of the major factors in agile development is "stories". Users supposedly provide a list of features they would like to see in the product; the development team evaluates these and draws up a final list of things that they think the software should include. They then sort the list by "nice-to-haveness" (taking into account feasibility and workload), and produce a development plan. But they don't know at that stage how far down the list they will actually get. Most products are driven by a planned (or even fixed) release date, so this approach means that the most important stuff will get done first, and the "nice to haves" will be included only if there is time.

    It would be interesting if they applied agile methods in other real world scenarios. Imagine taking your car to the dealer to get it serviced. Their worksheet says a service takes two hours, and there is a list of things they're supposed to look at. You'd like to think that if the mechanic doesn't get them all done in the allocated time they would actually finish them off (even if you had to pay a bit more) rather than leaving the wheel nuts loose or not getting round to putting oil in the engine. Or maybe not having time to check if you need new brake pads.

    Of course, every sprint during the dev cycle should produce a releasable product, so multiple revisions of the same section of code can often occur. So how do you plan documentation for such an approach? You can assume that some of the major features will get done, but you have no idea how far down the list they will get. Which means you can't plan the final structure or content of the docs until they get to the point where they are fed up fiddling with the code and decide to freeze for final testing. You end up continually reorganizing and reworking sections and topics as new features bubble to the surface.

    But whilst the code may just need some semi-mechanized refactoring and tidying up to accommodate new features, the effect on the docs may require updates to feature overviews, links, descriptions, technical details, tables of methods, schematics, code samples, and the actual text - often in multiple locations and multiple times. The burden increases when the doc set is complex, contains many links, or may need to support multiple output formats.

    What's the Story, Jackanory?

    Each feature in the list of requirements is a "story", so in theory you can easily document each one by simply reading what the developers and architects say it does. And you can look at the source code and unit tests to see the way it works and the outcomes of new features. Or, at least, you can if you can understand the code. Modern techniques such as dependency injection, patterns such as MVC, and language features such as extension methods and anonymous typing mean that - unless you know what you are looking for and where to find it - it can be really hard to figure what stuff actually does.

    In addition, the guys who write the unit tests don't have clarity and education as objectives - they write the most compact (and unrealistic in terms of "real world" application) code possible. OK, so you can often figure out what some feature does from the results it produces, but getting an answer to simple questions that are, in theory, part of the story is not always easy. I'm talking about things like "What does it actually do (in two sentences)?", "Why would I use this feature?", "How does it help users?", and "When and how would you recommend it be used?"

    Even a demo or walkthrough of the code (especially from a very clever developer who understands all of the nuances and edge cases) can sometimes be only of marginal help - theory and facts whooshing over the top of the head is a common feeling in these cases. Yes, it showcases the feature, but often only from a technical implementation point of view. I guess, in true agile style, you should actually sit down next to the developer as they build the feature and continually ask inane questions. They might even let you press a few keys or suggest names for the odd variables, but it seems a less than efficient way to create documentation.

    And when did you last see a project where there were the same number of writers as developers? While each developer can concentrate on specific features, and doesn't really need to understand the nuances of other features, the writer has no option but to try and grasp them all. Skipping between features produces randomization of effort and workload, especially as feedback usually arrives after the writer has moved on to another feature.

    Is It Complete, Pete?

    One of the integral problems with documenting agile processes is the incompatible granularity of the three parts of the development process. When designing a feature, the architect or designer thinks high level - the "story". A picture of what's required, the constraints, the objectives, the overall "black boxes on a whiteboard" thing. Then the developer figures out how to build and integrate the feature into the product by breaking it down into components, classes, methods, and small chunks of complicated stuff. But because it's agile, everything might change along the way.

    So even if the original plan was saved as a detailed story (unlikely in a true agile environment), it is probably out of date and incorrect, as the planned capabilities and the original nuances are moulded to fit what can realistically be implemented. And each of the individual tasks becomes a separate technically oriented work item that bears almost no direct relationship to the actual story. Yet each has to be documented by gathering them all up and trying to reassemble them like some giant jigsaw puzzle where you lost the box lid with the picture on.

    And the development of each piece can be easily marked complete, and tested, because they are designed to fit into this process. But when, and how, do you test the documentation and mark it as complete? An issue I've come up against time and time again. If the three paragraphs that describe an individual new feature pass review and test, does that mean I'm done with it? How do I know that some nuance of the change won't affect the docs elsewhere? Or that some other feature described ten pages later no longer works like it used to? When you "break the build", you get sirens and flashing icons. But how do you know when you break the docs?

    Is It A Bug, Doug?

    So what about bugs? As they wax and wane during the development cycle, you can be pretty sure that the project management repository will gradually fill up with new ones, under investigation ones, fixed ones, and ones that are "by design". I love that one - the software seems to be faulty, or the behavior is unpredictable, but it was actually designed to be like that! Though I accept that, sometimes, this is the only sensible answer.

    Trouble is that some of these bugs need documenting. Which ones? The ones that users need to know about that won't get fixed (either because it's too difficult, too late, or they are "by design")? The ones that did get fixed, but change the behavior from a previous release? Those that are impossible to replicate by the developers, but may arise in some esoteric scenario? What about the ones that were fixed, yet don't actually change anything? Does the user need to be told that some bug they may never have come across has been fixed?

    And what if the bug was only there in pre-release or beta versions? Do you document it as fixed in the final release? Surely people will expect the beta to have bugs, and that these would be fixed for release. The ones you need to document are those that don't get fixed. But do you document them all and then remove them from the docs as they get fixed, or wait till the day before release and then document the ones they didn’t get time to fix? I know which seems to be the most efficient approach, but it's not usually very practical.

    Can I Reduce the Hurt, Bert?

    It's OK listing "issues", but a post like this is no help to anyone if it doesn't make some suggestions about reducing the hurt, in an attempt to get better docs for your projects. And, interestingly, experience shows that not only writers benefit from this, but also the test team and other non-core-dev members who need to understand the software. So, having worked with several agile teams, here's my take (with input from colleagues) on how you can help your writers to create documentation and guidance for your software:

    • Write it down and take photos. Make notes of the planned features, identifying the large ones that have a big impact on the docs (even if, from a dev perspective, they seem relatively simple). And while you're snapping those pictures of the team for your blog, take photos of the whiteboards as well.


    • Plan to do the large or high-impact tasks first. A task that has a huge impact on the docs, particularly for an update to an existing product, needs to be firmed up early - at least enough that the rewriting and reorganization can take the upcoming changes into account (and save doing it all two or three times).


    • Create a simple high-level description for the major or high-impact tasks that at least answers the questions "What does it do?", "When, how, and why would customers use this feature?", and "What do I want customers to know about it (such as the benefits and constraints)?" A mass of technical detail, test code, change logs, or references to dozens of dev tasks, is generally less than useful.


    • Organize your task repository to simplify work for the test and doc people. Consider a single backlog item with links to all of the individual related dev tasks, and a link to a single doc/test task that contains the relevant details about the feature (see previous bullet).


    • Annotate bugs based on the doc requirements. Those that won’t be fixed, and those from a previous public release that were fixed and change the behavior of the product, are "known issues" that must be documented. Others that were fixed from a previous public release and don’t affect behavior go into the "changes and fixes" section hidden away somewhere in the docs. Those that are only applicable during the dev cycle for this release (ones that are found in dev and fixed before release) do not need documenting. Help your writer differentiate between them.


    • Think about the user when you review. Docs and sample code need to be technically accurate, and at a level that suits the intended audience, but having them reviewed only by the really clever people who wrote the code in the first place may not ensure they are appropriate for newcomers to your product, or meet the needs of the "average" developers who just want to use the features. Especially if there is no "story" that helps the writer correctly position and describe the feature.


    • Make time at the end of the dev cycle. You can tweak code and add stuff right up to the time you need to build the installer and ship, but docs have a built-in delay that means they will always trail behind your latest code changes. They need reviewing, editing, testing, proof reading, indexing, and various production tasks. And stuff like videos and walk-throughs can often only be created once the software stops changing.

    While the core tenets of agile may work well in terms of getting better code out of the door, an inflexible "process over product" mentality doesn’t work well for user documentation. Relying on individuals and interactions over processes and tools, working software over comprehensive documentation, and responding to change over following a plan, can combine to make the task of documentation more difficult.

    Common effects are loss of fluidity and structure, and randomization. One colleague mentioned how, at a recent Scrum workshop for content producers, she came across the description: "We're in a small boat with fogging goggles riding in the wake of an ocean liner that is the engineering team".

    Was It A Success, Tess?

    I was discussing all these issues with a colleague only the other day, and he made an interesting comment. Some teams actually seem to measure their success, he said, by the way that they embraced agile methods during development. The question he asked was "Is the project only a success if you did agile development really well?" If analysis of all the gunk in your project management repository tells you that the agile development process went very well, but the software and the documentation sucks, is it still a triumph of software development?

    Or, more important, if you maybe only achieved 10% success in terms of agile development methods but everyone loves the resulting software, was the project actually a failure...?

  • Writing ... or Just Practicing?

    Suffering Suffixes, Batman

    • 0 Comments

    One of the features of working from home is that, if you aren't careful, you can suddenly find that you haven't been outside for several days. In fact, if you disregard a trip to the end of the drive to fetch the wheely bin, or across the garden to feed the goldfish, I probably haven't been outside for a month. I suppose this is why my wife, when she gets home from work each day, feels she has to apprise me of the current weather conditions.

    Usually this is something along the lines of "Wow, it's been scorching today!" or "Gosh, it's parky enough to freeze your tabs off!" (tip for those not familiar with the Derbyshire dialect: "parky" = "cold", "tabs" = "ears"). However, last week she caught me by surprise by explaining that "It's rather Autumnal, even Wintery". Probably it's my rather weird fascination with words that set me off wondering why it wasn't "Autumny and Wintery", or even "Autumnal and Winteral". OK, so we also say "Summery", but "Springy" doesn't seem right. And, of course, I soon moved on to wondering if Americans say "Fally" or "Fallal". I'll take a guess that they end up with one of the poor relations in the world of words (ones that have to use a hyphen) with "Fall-like", just as we say "Spring-like".

    Note: I just had a reply from a reader in the US North West who says that the only words they use for "Fall" are "rainy", "misty", "stormy", and "cloudy". I guess that agrees with my own experience of all the trips I've made there, including a few in "Summer".

    The suffix "al" comes from Latin, and means "having the character of". So Autumnal is obvious. But what about "lateral". Assuming the rules for the "al" prefix, it means "having the character of being late or later" (or, possibly, "dead"), whereas I've always assumed it meant something to do with sideways movement or thinking outside the box. Maybe I need to splash out $125 to buy Gary Miller's book "Latin Suffixal Derivatives in English: and Their Indo-European Ancestry". It sounds like a fascinating bedtime read. Though a more compact (and less expensive) option is the useful list of suffixes at http://www.orangeusd.k12.ca.us/yorba/suffixes.htm. This even includes a page full of rules to apply when deciding how to add different suffixes to words, depending on the number of syllables and the final letter(s).

    So, coming back to my occasional trips outdoors, why do I have a "wheely" bin instead of a "wheelal" bin or even a "wheelive" bin? The suffix "y" is supposedly of Anglo Saxon (and Greek) origin and means "having the quality of" or "somewhat like". Pretty much the equivalent of the "al" suffix, which also means "suitable for". The "ive" suffix means much the same, but - like "al" - derives from Latin. What I really have, however, is a "wheelable" or "wheelible" bin, using one of the Latin suffixes that means "capable of being (wheeled)". Is it any wonder that people find English a difficult language to learn? Though I suppose the definitive indication of this is "Ghoti" (see http://en.wikipedia.org/wiki/Ghoti).

    Perhaps I can make the code samples in the technical documentation I create more interesting by applying descriptive suffixes? The "ory" suffix means "place for" (as in "repository"), so my names for variables that hold references to things could easily be customerAccountory and someTemporaryStringory. And a class that extends the Product base class could be the Productive class. That should impress the performance testing guys.

    Or maybe I could even suggest to the .NET Framework team that they explore the possibility of suffixing variable types for upcoming new language extensions. An object that can easily be converted to a collection of characters should obviously be defined as of type Stringable, and something that's almost (but not quite) a whole number could be Integeral, Integeric, Integerous ("characterized by"), or just Integery.

  • Writing ... or Just Practicing?

    Unbound Generics: an Open and Closed Case

    • 0 Comments

    There's a well-known saying that goes something like "Please engage brain before shifting mouth into gear". And another that says "If you can't stand the heat, get out of the kitchen". Yet, after what's now more than a year as a full-time 'Softie, I've managed to avoid being flamed for any of my weekly diatribes; and neither has anybody pointed out (at least within earshot) how stupid I am. So I suppose it's time to remedy that situation. This week I'm going to throw caution to the winds and trample wildly across the green and pleasant pastures of generics, and all without a safety net.

    OK, I'll admit that it's probably not going to be a hugely technical venture, so if you are expecting to see lots of indecipherable code and complex derivations of generic functionality, you're likely to be disappointed. You might as well get your email client warmed up now ready to fire off some nicely toasted opinions. I mean, it's not like I actually know much about generic programming anyway. No, where I'm going with this is more slanted towards the requirements of my daily tasks - the terminology.

    You see, I'm of the opinion that when I write technical documents, they should be comprehensive enough to ensure that readers can follow the topic without having to dive off and consult an encyclopedia or dictionary every five minutes. While I don't want to reduce it to the "grandmother sucking eggs" level, which would just be boring to average developers and probably excruciatingly useless to the really clever people out there, I do need to make it reasonably comprehensive and self-sufficient. I mean, I really hate it when I'm reading something and have to keep stopping and re-reading bits to try and figure out exactly what the writer actually meant.

    You've probably been through this process, where you come across what seems like a simple word but the meaning in the current context is not familiar. The person who wrote it probably knows the topic inside out, and so the meaning is obvious to them. Yet not grasping what it really indicates in terms of the surrounding text can make the rest of the article pretty much meaningless. You need to know the topic before you can understand the bits that will help you to learn about it. Like: "Ensure you check the reverse osmosis defractor for discompulation before remounting the encapsulator". Does that mean you need to buy a new one if it's gone rusty? Who knows...

    Anyway, getting back to generics, I recently seem to have stirred up a discussion about the meaning of three fairly ordinary words without really trying. In fact, I suspect I've put our current project back by a week as all of the brains-the-size-of-a-planet people on the dev team try and figure out exactly which words mean what, and how we should use them. So what are these three amazingly complicated and unfathomable words? Would you believe "unbound", "open", and "closed". In fact, you can throw in "constructed" as well if you like.

    These are the definitions according to Microsoft's Glossary of .NET Framework terminology (see http://msdn.microsoft.com/en-us/library/6c701b8w(VS.80).aspx):

    • Constructed generic type: A generic type whose generic type parameters have been specified. A constructed type or method can be an open generic type if some of its type arguments are type parameters of enclosing types or methods; or a closed generic type if all of its type arguments are real types.
    • Open generic type: A constructed generic type in which one or more of the generic type arguments substituted for its generic type parameters is a type parameter of an enclosing generic type or method. Open generic types cannot be instantiated.
    • Closed generic type: A constructed generic type that has no unspecified generic type parameters, either of its own or of any enclosing types or methods. Closed generic types can be instantiated.

    It seems that most folks agree that a generic type can be unbound or constructed, and a constructed type can be open or closed. Microsoft seem a little shy about defining what an unbound generic type is, though in the Generics FAQ they mention that "the typeof operator can operate on unbound generic types (generic types that do not yet have specific type arguments)". And there are plenty of definitions elsewhere that firmly identify it as being a generic type where the types of the type parameters are not defined.

    Meanwhile, it seems like most people also agree on what a "closed" type is, so the main issue is: what's the difference between an "unbound" and an "open" generic type? If an open type is also a "constructed" type, does that mean that it has the type arguments actually populated - even though the code that's executing to achieve this only contains the placeholders? Or does "constructed" in the realm of generics mean something other than the more usual "built" or "assembled"? Does a generic type start out as "unbound" and then, as it becomes "constructed", does it move through "open" to "closed"?

    Perhaps there is a nanosecond or two, as the electrons fight their way through the layers of silicon in the CPU constructing and closing the type, that it's no longer unbound, but not quite closed yet? Smacks of Heisenberg's uncertainty principle and Schrödinger's cat if you ask me. Maybe it's all wrapped up in particle and wave theory. Or could it be that unbound types don't actually exist at all? Maybe, as they aren't actually instantiated "things", they only exist in an ethereal context; or they only exist when you aren't looking at them. I suppose that, as the only thing you can instantiate is a closed type, we don't actually need to worry about the other types anyway.

    Trouble is, when working with a dependency injection mechanism (like what I'm doing now, as Ernie Wise would have said) you need to discuss registering types and type mappings for generic types that do not have all of the type arguments fully defined as concrete types. So we're trying to decide if these registrations are for "unbound" types or "open" types. When you define a registration or mapping in a configuration file, you use the .NET Framework type name without specifying the type parameters or type arguments; for example MyApps.MyTypes.MyGenericType`2 (where the "back-tick" and number define the number of type parameters in the type). When you use code to perform the registration, you also omit the type parameters; for example RegisterType(typeof(MyTypes.IMyInterface<,>)). As you saw earlier, Microsoft says that "the typeof operator can operate on unbound generic types (generic types that do not yet have specific type arguments)", and these sure look like they don't yet have specific type arguments.
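    Incidentally, you can watch the runtime make these distinctions for itself. Here's a minimal sketch - plain reflection, nothing container-specific, and the Derived class is just scaffolding I invented to conjure up an open type:

        using System;
        using System.Collections.Generic;

        // Deriving from List<T> gives us something open to inspect: the base
        // type of Derived<T> is List<T> constructed with Derived's own type
        // parameter rather than with a real type.
        class Derived<T> : List<T> { }

        class Program
        {
            static void Main()
            {
                Type unbound = typeof(Dictionary<,>);           // no type arguments at all
                Type closed = typeof(Dictionary<string, int>);  // all arguments are real types
                Type open = typeof(Derived<>).BaseType;         // List<T>, T borrowed from the enclosing type

                // Only the unbound type is a "generic type definition"...
                Console.WriteLine(unbound.IsGenericTypeDefinition);   // True
                Console.WriteLine(open.IsGenericTypeDefinition);      // False
                Console.WriteLine(closed.IsGenericTypeDefinition);    // False

                // ...but unbound and open types both still contain unassigned
                // type parameters, which is why neither can be instantiated.
                Console.WriteLine(unbound.ContainsGenericParameters); // True
                Console.WriteLine(open.ContainsGenericParameters);    // True
                Console.WriteLine(closed.ContainsGenericParameters);  // False
            }
        }

    And notice that typeof(Dictionary<,>) comes back as a generic type definition, not as something constructed from an enclosing type - which, to my eye at least, sounds a lot more like "unbound" than "open".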

    But could these registrations really be "open" rather than "unbound"? All the definitions of "open" talk about the types of the type arguments being provided by "an enclosing generic type or method". When I'm just creating a type registration, how can the type parameters be populated from an enclosing type? If I'm registering the generic child of a parent type, I can't specify what (who?) the parent will be. If I'm registering the parent, I can't prevent somebody adding a class that has a different child. And I can't register a method at all. So there is no way I can specify where the type arguments will come from.

    The problem is that almost everything else I read about using dependency injection mechanisms talks about only "open" and "closed" types. It's like "unbound" types are unclean, or unattractive, or maybe just not worthy of use in a shiny new container. Perhaps it's a conspiracy, and we'll discover that NASA actually buried them all in the Nevada desert so they can't be used to confuse clever programmers. And if I talk about unbound type registrations in my documentation, will people just make those "phhhhh" noises and toss it aside with comments such as "...obviously doesn't have any idea what he's talking about...", "...waste of time reading that...", and "...who let him out again...?"

    Or, could it be that I'm the only one who knows the truth...?

  • Writing ... or Just Practicing?

    Come On In, The Water's Lovely...

    • 0 Comments

    A great many years ago, when I was fresh out of school and learning to be a salesman, we had a sales manager who proudly advertised that his office door was "always open". What he meant, obviously, was that we could drop in any time with questions, problems, and for advice on any sales-related issue that might arise. Forgotten what step five of "the seven steps to making a sale" is? Having problems framing your "double-positive questions"? Struggling to find "a response to overcome an objection"? Just sail in through that ever-open door and fire away. Except that the only response you ever got from him was "...you can always rely on one end of a swimming pool".

    Now, I was young and a bit damp behind the ears in those days - and probably not the sharpest knife in the box. So it was several months before I finally plucked up the courage to ask one of the older and more experienced members of our team what he actually meant. "Simple", said my wise colleague, "he means that you can always depend on a 'depend' (i.e. deep end). In other words, no matter what the question or situation, you can usually get the upper hand by prevaricating".

    I guess you only need to watch politicians on TV to understand that they all take advantage of this rule. But, strangely enough (and I don't want to give too many precious retailing secrets away here), it can be remarkably useful. Imagine the scene: your customer can't make up their mind about some particular type of plant food (OK, so I was working in the horticultural trade in those days). They ask "Will it be safe to use on my prize Dahlias?" You answer "Well I was talking to a chap last week who managed to grow double his usual crop of tomatoes!" Notice that I didn't say he was using any product he might have purchased from me, or that he actually used any fertilizer at all. Such is the power of vagueness...

    All I need do now is overcome the objection, and toss in a double-positive question: "I'm sure that the quality and the results you'll get will be far more important to you than the exceptionally low price we're offering it at this week", and "Shall I wrap it up for you to take away, or would you like us to deliver it tomorrow morning?" Amazing - seems I haven't lost the touch.

    So, having regaled you with rambling reminiscences for the last ten minutes, I probably need to try and drift back on-topic. Or, at least, to within striking distance. What prompted all this was reading the preface to our new Microsoft Application Architecture Guide 2nd Edition. In it, David Hill explains that you can always tell an architect because their answer to any question is invariably "it depends". He doesn't say if this behavior extends to things like "Can I get you a coffee?" or "Would you like a salary raise?", but I'll assume he is generally referring to architectural design topics.

    And, even though I'm not an architect (INAA?), I can see what he means. In the section on validation, for example, it quite clearly states that you should always validate input received from users. Makes a lot of sense. But what about if you have a layered application and you want to pass the data back from the presentation layer to the business layer or data layer? Should you validate it at the layer boundary? Well, it depends on whether you trust the presentation layer. You do? Well it also depends on whether any other applications will reuse the business layer. Ever. And it depends if you have a domain model, and if the entities contain their own validation code. And then it starts to talk about how you do the validation, depending on what you are validating and the rules you want to apply.

    You see, I thought at the start of this project that I could just learn the contents of the guide and instantly become "a proper architect". It's not likely to happen. What it does show is that, yes, you need to know the process, the important factors, and techniques that will help you to perform analysis and make good decisions about structure and design. And you need to understand about conflicting requirements, quality attributes, and crosscutting concerns. You have to understand the types of applications, and their individual advantages and liabilities. In fact, there is an almost endless stream of things that you could learn to help you "do architecture".

    And that, it seems, is where the guide really comes into its own. It explains the process and the important elements of design. It lists the vital requirements, and provides guidelines to help you make decisions. It tells you what to look for, where to look, and where to find out more. It even contains detailed references to things like components, layers, tiers, technologies, and design patterns (see http://msdn.microsoft.com/en-gb/library/dd673617.aspx).

    And, at almost every step, it proudly flaunts its "it depends" approach. Architecture is not, it seems, about learning rules. It's learning about software and applications, environments and infrastructure, requirements and conflicts, people and processes, techniques and technologies, and - best of all - having your say in how a small part of our technological world behaves. So, come on in - the water's lovely...

  • Writing ... or Just Practicing?

    Some Consolation...

    • 0 Comments

    Suddenly, here at chez Derbyshire, it's 1996 all over again. Instead of spending my days creating electronic guidance and online documentation in its wealth of different formats and styles, I'm back to writing real books. Ones that will be printed on paper and may even have unflattering photos of me on the back. And there'll be professional people doing the layout and creating the schematics. It's almost like I've got a real job again. I'll be able to do that "move all your books to the front of the shelf" thing in all the book stores I visit, and look imploringly at people at conferences hoping they'll ask me to sign their copy.

    The reason is that we're producing a range (the actual number depends on how fast I can write) of books about the forthcoming version of Enterprise Library. I've been tasked with creating books that help people get to know EntLib and Unity, and get started using them in their architectural designs and application implementations. So the list of requirements includes making the books easy and enjoyable to read, simple enough to guide new users into understanding the basic features and their usage, deep enough to satisfy "average" developers who want to know more and learn best practices, and comprehensive enough to cover all of the major features that take over 1000 pages to describe in the online documentation.

    They say that a job's not worth doing if it doesn't offer a challenge, so I guess this job is definitely on the "worth doing" list. And getting it all into books of about 250 pages will certainly be interesting as well as challenging. Maybe I can write very small, or use "txt speak". Need help using the Message Queuing feature of the Logging block? How about: "2 cre8 a msg q u use mmc & put name in cfg file thn cre8 logentry & snd. 2 c if it wrkd look in q". I can see my editor and proofreader having no end of fun with that approach.

    But the hardest bit was actually deciding how (and whether) to provide sample code. It's likely there won't be room to print complete listings of all the code I use, which is probably an advantage in that readers can see and concentrate on the actual bits that are important. Of course, I need to build some examples to make sure what I'm writing actually does do what it's supposed to. So it seems sensible to offer the code to readers as well, like they do with most other programming books. But how should I present the code? As a pretty application with nice graphics and a delectable user interface? Maybe be all up to date by using WPF?

    I remember when I first started writing books about "Active Server Pages" how we had no end of problems creating sample code that users could easily install and run. There was no SQL Server Express with its auto-attach databases mechanism, no built-in Web Server in Visual InterDev (this was in the days before Visual Studio as we know it now), and you couldn't even assume that users had a permanent Internet connection (the vast majority were on a dial-up connection). So you had to create complicated sets of scripts and a setup routine, even for ASP samples, that registered the components you needed and populated the database, as well as providing a long list of prerequisites.

    But things are much easier now. You can give people a zip file that they can just unzip and run. Even Windows Forms and WPF samples just work "out of the box". So, even though I haven't done much with WPF before, I thought I'd give it a try. I was doing OK with creating the form and adding some text and buttons, but then it came to adding a DataGrid. I thought maybe they'd forgotten to include it in the version of Visual Studio I'm using, but it seems that I wasn't prepared for the Wonderfully Perverse Features of WPF where you need to use Grids and ListViews and tons of stuff inside each one, then fight with Data Contexts and Binding Paths.

    In ASP.NET I just grab a GridView and drop it onto the page. So should I do the samples as an ASP.NET application? That adds complexity with postbacks and maintaining state, and makes it hard to show some features such as asynchronous data access or cache scavenging. And it hides the real functionality that I want to show. Besides which, EntLib is really a library for helping you manage your crosscutting concerns, not a development environment for Web sites. How about Silverlight? Well, then I'm faced with a combination of the issues from WPF and ASP.NET. Maybe I should just take the easy way out and use Windows Forms. As a technology, I know it well and it provides all the features I need to create glorious user interfaces.

    And then I remembered how long I've been writing code. Back in the early '80s, an attractive and intuitive user interface was one where you had a text menu on an 80-character by 26-line screen, and you could press any key you liked as long as it was one of those in the list. No need to mess about with sissy things like mice or trackballs, and no confusion about which things to click or where to wave your mouse pointer to see what things did. You knew where you were with applications in those days. And there's plenty of interest now in the old stuff such as text-based adventures and simple chunky graphic games like Asteroids and Space Invaders.

    So why not follow the trend? Simple example applications that run in a Console window, have simple menus to select the specific sample you want to run, and use simple code to display progress and the results of the process. No need to run Visual Studio, the database will auto-attach to SQL Server Express, and readers can easily find the appropriate code in a single "Program" file if they want to read it all. In fact, I've seen this approach used by the guys from the data team here at Microsoft when they do presentations, and it really does seem to make it easier to see what's going on.

    So that's where I'm going at the moment, at least until somebody else makes an alternative "executive decision" further down the line. What do you reckon? Do you want fancy interfaces for your code samples? Or are simple Console applications that focus on the "working parts" more useful and easier to understand? How about this for a flash-back to the past:
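    Something along these lines, perhaps - a minimal sketch of the kind of menu-driven console harness I have in mind, where the menu entries and stub routines are made-up placeholders rather than the actual book samples:

        using System;

        class Program
        {
            static void Main()
            {
                while (true)
                {
                    Console.WriteLine();
                    Console.WriteLine("ENTERPRISE LIBRARY EXAMPLES");
                    Console.WriteLine("1: Write an entry using the Logging block");
                    Console.WriteLine("2: Read some rows using the Data Access block");
                    Console.WriteLine("0: Exit");
                    Console.Write("Press a key: ");

                    char choice = Console.ReadKey(true).KeyChar;
                    Console.WriteLine(choice);

                    switch (choice)
                    {
                        case '1': LoggingExample(); break;
                        case '2': DataAccessExample(); break;
                        case '0': return;
                    }
                }
            }

            // Stubs standing in for the real sample routines, which would
            // display progress and results as they run.
            static void LoggingExample()
            {
                Console.WriteLine("Writing a log entry... done.");
            }

            static void DataAccessExample()
            {
                Console.WriteLine("Fetching rows... 3 rows retrieved.");
            }
        }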

    Or I suppose I could just publish the code as text files...
