Random Disconnected Diatribes of a p&p Documentation Engineer
I can't honestly say that I've ever been much of a patron of the dark arts. Mind you, a few years ago I was fascinated to see a chapter for a book on ADO.NET that I'd written come back from review with fifteen paragraphs about devil worship in the middle of it. I was about halfway through editing this when I suddenly realized it sounded unfamiliar, and seemed to have little to do with asynchronous data access and stored procedures. I assume that the reviewer had got their Ctrl-somethings mixed up, and I still can't help wondering if there is a Web site out there somewhere that has a detailed description of the behavior of a DataReader in the middle of an article about witchcraft and sorcery.
Anyway, it seems that I have a friend and colleague who actually is a "dark arts" expert. At least he is when the dark art in question is Cascading Style Sheets (CSS). OK, so I long ago accepted that we needed a way of separating style from content in Web pages, and I don't know of any other technology that accomplishes this as well as CSS does. I mean, you can even do dynamic styling in response to UI events and all kinds of clever stuff with it. I'm still amazed at sites like Zen Garden where changing the style sheet actually makes you believe you navigated to a different page.
Yet all my attempts to use CSS to achieve a design that doesn't look like a 1985 Web site (with everything centered and in Times Roman font) seem to result in a page that only works on a 42" screen, or requires you to scroll a mile and a half downwards then read it with your head on one side and one eye closed. It's like they designed the language to be impenetrable to mere humans. I mean, I can fix DNS servers, edit the Active Directory, administer Group Policy, understand design patterns, and I even know a fair bit about enterprise application design and development. But I can't even get margins or padding to work most of the time in CSS (probably 'cos I don't know which I should be using), and end up with nbsp's and transparent GIFs all over the place. Or (horror), tables for layout...
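For the record (and mostly for my own benefit), here's the one bit I think I've finally grasped, sketched with invented class names: padding is the space inside the border, margin is the space outside it, and "clear" is how you stop content wrapping round a floated box.

```css
/* A minimal sketch, not production styling. Class names are made up. */
.callout {
  border: 1px solid #999;
  padding: 10px;     /* space INSIDE the border - pushes the text inwards */
  margin: 20px 0;    /* space OUTSIDE the border - pushes other boxes away */
}

/* The float/clear pair that causes most of the trouble: a floated box is
   taken out of the normal flow, and "clear" forces the next element to
   start below it instead of wrapping alongside. */
.sidebar { float: right; width: 200px; }
.footer  { clear: both; }
```

No guarantees it will stop your layout requiring a 42" screen, but at least you'll know which knob you're turning.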
So when I discovered that a site I manage for the local village residents group was broken in IE8 (and, obviously, had always been broken in Firefox), I put off trying to fix it for as long as possible. The site is based on the Microsoft ASP.NET Club Starter site, and a glance at the stylesheet with its myriad of clear thises and float thats meant I'd probably need to stock up with a month's worth of coffee and cold pizza. After a couple of hours randomly changing stuff (the usual geek's approach to fixing things you don't understand) I'd reached the point where the entire site was totally incomprehensible.
So I emailed my pal Dave Sussman, who has spent the last several years of his life doing clever Web stuff with CSS and other complicated technologies. I know he's good at this kind of thing because he hasn't phoned me for ages to complain about rounded corners and designers generally. And, you know what? Within ten minutes I got the answer. Just take out a clear something or other, or change a margin this to a float that, and it would "just work". And he was, of course, absolutely correct.
Mind you, he admitted he'd resorted to using one of his dark art tools - a wicked device called "Firebug", which does sound like something used by wizards or witches. I'm not sure if he dances around the fire naked at the same time, but I'm too polite to ask...
...that is the question. Whether 'tis nobler in the server cabinet to suffer the outrageous lack of valuable new functionality, or to take arms against the powerful improvements to the core Windows Server operating system. And by opposing, manage without them? To sleep (or hibernate): perchance to dream of an easy upgrade. I guess you can see why I don't write poetry very often - it always seems to end up sounding like somebody else's.
So the disks for Server 2008 R2 dropped through my letter box the other week, and since then I've pondered on whether to upgrade. It's less than a year since I spent a whole week crawling around inside the server cabinet installing two sparkly new servers running Windows Server 2008, upgraded the networking, set up four virtual machines on Hyper-V, and generally dragged my infrastructure screaming and cursing into the twenty-first century. And now it seems it was all to no avail. I'm out of date and running legacy systems all over again.
OK, so I assumed that there would be a Windows Server 201x at some point, and that I'd once again fall by the wayside, but I never expected it to be this soon. While the hardware might not last out the next decade, I kind of hoped that I'd just have to drop the VMs onto a couple of new boxes when the existing ones decided it was time for the bits of bent wire and plastic to give up the ghost. But now it seems the ones and zeros need to be replaced as well. Maybe they're nearly worn out too.
So I printed off all the stuff about fixing upgrade problems (with the fair assumption that - if they exist - I'm going to find them), read the release notes, and then tossed the disk into the drive of the standby machine. At least if I break that one I can reinstall from a backup without interrupting day-to-day service. Of course, it would also be an interesting test of my backup strategy, especially as I've not yet had the misfortune to need to resurrect a Windows 2008 box using the built-in backup and restore feature.
After a few minutes rummaging about inside the machine, the installer produced its verdict. OK, so I did forget about domain prep (it's also the backup domain controller), but it also said it needed 18+ GB of free space on Drive C. Not something I was expecting. I have only 17 GB free, so I could probably move the swap file to another drive (there's over 100 GB available there), but would that break the upgrade? And the VMs have a lot less free disk space. I'll need to grow the partition for them, and then try and shrink it afterwards - otherwise it will take even longer to export backups. Hmmm, not such a simple decision now, is it?
One thing is clear, next time I order any machine I'm going to specify it with 4 x 1 terabyte drives. I seem to spend my life trying to find extra disk space, even though the current boxes have nearly 400 GB in them. And they spend 99.9% of their time with the performance counter showing 1% load. It's a good thing I'm not trying to do something enterprisy with them.
So with it looking likely that I'll be confined to my legacy version of Windows 2008 for the foreseeable future, I decided to review what I'd be missing. Maybe it's only a facelift of the O/S, and there are just a few minor changes. Well, not if you look at the "What's New in Windows Server 2008 R2" page. There's tons of it. Pages and pages of wonderful new features that I can drool over. But do I need them? I guess the one area I'm most interested in is updates to Hyper-V, and that list seems - to say the least - a little sparse. I don't need live migration, and I'm definitely convinced that, with the minimal workload on my systems, I don't need enhanced processor support or bigger network frames. And dynamic virtual machine storage won't help unless I stuff the box with bigger disks.
The one feature I would like is the ability to remove the redundant connections* that Hyper-V creates in the base O/S (see "Hyper-Ventilation, Act III"), but I guess I can live without that as well. So what happens if I don't upgrade? Will I become a pariah in the networking community? Will my servers fall apart in despair at not getting the latest and greatest version of the operating system? Will I be forever dreaming about the wonderful new applications that I can't run on my old fashioned O/S? Will I still be able to get updates to keep it struggling on until I get round to retiring?
Or maybe a couple of bruisers from the Windows Server division will pop round with baseball bats to persuade me to upgrade...
* Update: In Windows Server 2008 R2 you can untick the Allow management operating system to share this network adapter option in Virtual Network Manager to remove these duplicated connections from the base O/S so that updates and patches applied in the future do not re-enable them.
I'm not quite sure how she did it, but this year my wife managed to convince me to follow the latest weekly pandering to public opinion that is "The X Factor" - our annual TV search here in Britain for the next major singing and recording star. I did manage to miss most of the early heats, except for those entrants so excruciatingly awful that my wife saved the recording so she could convince me that there's a faint possibility I don't actually have the worst singing voice in the world. Though I suspect it's a close-run thing.
And I have to admit that some of the finalists do have solid performing capabilities. There's a couple of guys in the "over 25s" section that really look like they could actually make it as recording artists. Though, to really tell, you probably should watch (or rather, listen) with the screen turned off so you aren't distracted by the accompanying (and very talented) dancers, the audience madly waving banners, and the pyrotechnics and light shows that accompany every performance. I mean, it's supposed to be all about the voice.
Which brings me to the big talking point of the series. After one particularly controversial decision where a young lady with probably the purest and most versatile singing voice was voted off, the show's owner Simon Cowell said that "he trusts public opinion" and that he "wouldn't organize a show like this if he didn't". Unlike what seems to be the majority of the baying public out there, I actually agreed with his decision. If the show is supposed to be about finding the best artist based on the opinions of the "Great British Public", then they should be allowed to make the decisions.
What's worryingly clear, of course, is that the public don't actually vote based on the principles of the competition - they vote for their favorite. I suspect that the tall and attractive blonde lady gets a lot of votes from young men, and the teenage lad (who, to be honest, doesn't have the greatest voice) gets a lot of votes from teenage girls. Meanwhile my wife wanted the guy with the mad hairdo who looks a bit like Brian May from Queen (and is a very passable rock music singer) to win. Or maybe the one who wears funny hats.
But more than anything, you have to assume that a great many people vote - mainly out of spite - for the act that Simon Cowell (currently Britain's most hated person) has been trying to get voted off for the past several weeks. Rather like the last TV-based ballroom dancing contest where the lumbering overweight TV reporter actually had to resign from the competition because it was clear that he was the worst every week, but the public kept voting him in.
And here we come to the crux of the matter. Who is best placed to choose the optimum outcome for any activity that involves choice? The principles of democracy suggest that allowing everyone to have their say, and choosing the most popular outcome, is the way to achieve not only popularity, but success as well. It's based on the assumption that everyone will make a logical choice based on their situation, and the resulting policy will therefore satisfy the largest number of people and achieve the optimum outcome. Though that doesn't appear to be the way that the People's Republic of Europe works, where the public gets no choice, but that's a whole other ballgame.
In our world of technology development generally, and particularly here in p&p, we rely on public opinion a great deal. We use advisory boards and public votes to figure out the future direction and features for our offerings, and to provide an overall guide for the optimum direction of our efforts. In theory, this gives us a solid view of the public opinion of our products, and ensures that we follow the path of improvement in line with needs and desires.
But is this actually the case? If 6.5 million people watched "X Factor", but only a few thousand voted, is the result valid? Could it be that most people (like me) have an opinion on who should win, but either lack the professional ability to make a properly informed decision on their real talent, or just can't be bothered picking up the phone? In a similar way, if only 35% of the population actually vote in a general election, is the result actually valid? Is it only the opinionated or those with an axe to grind who influence the final outcome?
It's worrying if this trend also applies to software development. When we run a poll, send out questionnaires, or consult an advisory group, are we actually getting the appropriate result? If the aim is to widen the user base for a product, is asking people who already use it (and, in many cases, are experts) what they want to see the best way to broaden the reach and improve general user satisfaction? No doubt there's been plenty of study in this area from polling organizations and others associated with statistical modelling, but it's hard to see how you adjust numbers to make up for the lack of input from a very large proportion of the population.
In particular, if you are trying to make a product or service more open to newcomers, and widen the user base to include a broader spectrum of capabilities, how do you get to the people who have never heard of your product? Is asking the existing expert user base, and perhaps those already interested in learning about it, the best way? And, if not, how else can you garner a wider range of non-biased opinion?
Mind you, I reckon the Geordie with the big teeth will win...
Usually the only time I feel like digging a big hole and climbing in is when I make some inappropriate remark at an important social event, or tell a rather too risqué joke during a posh dinner party. However, since I never get invited to posh dinner parties, and extremely rarely have the opportunity to attend any "cream of society" gatherings, I've so far avoided the need to invest in a new shovel. And, not being a polar bear, I don't have a tendency to view large holes in the snow as suitable resting places for the winter either. In fact, even though I'm quite adept at sleeping, it turns out I'm a rather late convert to the notion of hibernation.
As we've probably already reached the "what on earth is he rambling on about this week" moment, perhaps I need to mention that I'm rambling on about my recent epiphany in terms of turning off the computer at the end of a working day. Maybe it's because of the many years working with operating systems where you could quite safely just pull the plug or do the BRST (Big Red Switch Time) thing, yet be confident that the whole caboodle would happily start up again fully refreshed and ready to go the next day as soon as you applied some volts to it.
None of my old home computers, Amstrad PCWs, or DOS-based boxes ever minded an abrupt termination of electricity (except you lost whatever you forgot to save), and the Wii and other more consumer-oriented stuff we have also seems to cope with being abruptly halted. But not Windows Vista (and, I assume, Windows 7). You get those nagging reminders that you've been naughty, and a threat that it will spend the next four hours rummaging round your system just to see if you did any damage. I suspect this is probably just a long "wait" loop that prints random stuff on the screen, designed to teach you a lesson.
Of course, when XP was king, we were offered the chance to "Hibernate", and sometimes even "Sleep" rather than turning the thing off. I don't know if anyone actually made this work reliably - it never did on any of the laptops I've owned, and the XP-based Media Center box we had up till recently only managed it through some extra bolt-on hardware and software. Even then, I had to reboot it at least once a week to let it catch up again. So I've been extremely reluctant to do anything other than a proper "shut down" at the end of each session.
But recently I've noticed that colleagues seem to be able to shut the lid and briefcase their laptop, yet have it spring almost instantly into life when they open it up again; and without burning a hole in the side of the bag, as my tiny Dell XPS tried to do last time I attempted this. Aha! It turns out they are running Vista. Maybe the time it takes to actually get started from cold (even if you hadn't been naughty last time you turned it off) was a contributing factor to their hibernation behavior...
For me, matters came to a head with the machine we use to view the signal from the IP camera that my wife uses to watch night-time wildlife (foxes, badgers, etc.) in our garden. Like most software written by companies that actually specialize in hardware, the viewer application is quirky - and that's being kind. It won't remember connection details, has no automation facilities, and accepts no startup parameters. The only way to get it running is to mousily fiddle with the UI controls. There aren't even any shortcut keys or a recognizable tab order, so my usual kludge of using a program that generates key presses won't work either.
This means that, even though I can enable auto login for Windows (it's not part of my domain), I can't get the **** viewer to connect automatically at startup. It was only after a lot of fiddling about that I decided to try hibernating the machine with the "login when waking from sleep" option disabled, so you only have to close the lid to turn it off, then hit the power button to get back to watching wildlife. And, amazingly (to me at least) it seems to work flawlessly. The only time it actually gets turned off is when it needs to reboot for an update patch.
Suitably impressed, I enabled Sleep mode on the new Media Center box, which runs Vista Home Premium Edition. I managed to get the screensaver I adapted some while back (see The Screensaver Ate My Memory) to run on Vista. It shows assorted photos from our collection for a specified time and then terminates, allowing the machine to go to sleep. Yet it reliably wakes up in time to record scheduled TV programs, collect guide data, and do all the other complicated stuff that Media Center seems to require (to see how many things it does, just take a look in Task Scheduler).
So, somewhat late to the party, I'm now a confirmed sleeper and hibernater. My laptop is happily slumbering away (though not in a large hole) as I write this - on another machine obviously. And the incredible thing is that it comes back to life faster than my (somewhat dated) mobile phone does. In fact, it takes the Wii box, the consumer DVD player, and the TV longer to get going than my laptop. I've even got my wife's tiny Vista-based laptop set up to hibernate so she can get to her vital email inbox more quickly. Maybe we're at last reaching "consumer-ready" status for computers? Though I'd have to say that I haven't needed to reboot my phone three times in the last month to install updates.
And while we're on the subject of screensavers (yes we are), I still can't figure why I had to completely rebuild the one that worked fine on XP to make it run on Vista. The problem was that it has a series of configuration settings, which include the path to the root folder containing the pictures to display. It saves these using the VB built-in persistence mechanism, which quite happily remembers the settings each time you open the configuration window. But when Vista fires up the screensaver after the requisite period of inactivity, it suddenly forgets them all again.
At first I thought it was to do with the weird path mappings Vista uses for the Public Pictures folder, but no amount of twiddling would make it work (have you ever tried debugging a screensaver?). And I can't find any sample code that shows how you get to it using environment variables. However, after a lot of poking about in the code, it seems that Vista may actually run the screensaver under a different account or context from the current user context (though I haven't been able to confirm this), so the user-specific settings you make in the configuration window can't be located. Finally, after applying my usual large-hammer-based approach to writing code (I made it store and read the settings from a simple text file in the C:\Temp folder), it works again.
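For what it's worth, the workaround boils down to swapping per-user settings storage for a plain file at a fixed path that any account can read. Here's a minimal sketch of the idea, in Python rather than the original VB, and with the file path and key names invented for illustration:

```python
from pathlib import Path

# The real screensaver used C:\Temp; any fixed, account-independent
# location will do. The point is to avoid per-user settings that a
# different account (or context) can't see.

def save_settings(path, settings):
    """Write simple key=value pairs, one per line."""
    lines = [f"{key}={value}" for key, value in settings.items()]
    Path(path).write_text("\n".join(lines), encoding="utf-8")

def load_settings(path):
    """Read the key=value pairs back into a dict."""
    settings = {}
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings
```

Crude, certainly, but it sidesteps the whole question of which profile the screensaver is actually running under.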
At last I can sleep (and hibernate) easy...
It's probably safe to say that only a very limited number of the few people who stroll past my blog each week were fans of the Bonzo Dog Doo Dah Band. Or even, while they might recall their 1968 hit single "I'm the Urban Spaceman" (which, I read somewhere, was produced by Paul McCartney under the pseudonym of Apollo C. Vermouth), are aware of their more ground-breaking works such as "The Doughnut In Granny's Greenhouse". So this week's title, based on their memorable non-hit "Can Blue Men Sing The Whites" is pretty much guaranteed to be totally indecipherable to the majority of the population. Except for the fact that the BBC just decided to use it as the title of a new music documentary.
But, as usual, I'm wandering aimlessly away from the supposed topic of this week's verbal perambulation. Which is, once again, about agileness (agility?) in terms of documenting software under development. No, really, I have actually put some solid thought into this topic over the past several months, and even had one or two of the conclusions confirmed by more learned people than me, so they are not (wholly) the kind of wild stabs into the dark that more commonly make up the contents of these pages.
Where's The Plan, Dan?
One of the major factors in agile development is "stories". Users supposedly provide a list of features they would like to see in the product; the development team evaluates these and draws up a final list of things that they think the software should include. They then sort the list by "nice-to-haveness" (taking into account feasibility and workload), and produce a development plan. But they don't know at that stage how far down the list they will actually get. Most products are driven by a planned (or even fixed) release date, so this approach means that the most important stuff will get done first, and the "nice to haves" will be included only if there is time.
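In crude planning terms, the approach above looks something like this (story names, priorities, and estimates all invented for illustration):

```python
# A toy sketch of agile release planning: sort the backlog by
# "nice-to-haveness", then take stories in priority order until the
# time budget for the release runs out.

def plan_release(backlog, capacity_days):
    """Return the names of the stories that fit, in priority order."""
    planned, used = [], 0
    for story in sorted(backlog, key=lambda s: s["priority"]):
        if used + story["estimate_days"] <= capacity_days:
            planned.append(story["name"])
            used += story["estimate_days"]
    return planned

backlog = [
    {"name": "Core data access", "priority": 1, "estimate_days": 10},
    {"name": "Nice-to-have report designer", "priority": 3, "estimate_days": 8},
    {"name": "Validation", "priority": 2, "estimate_days": 5},
]

# With 16 days of capacity, the lowest-priority story doesn't make the cut.
print(plan_release(backlog, 16))
```

Which is fine for code - the trouble, as the rest of this post explains, is that nobody tells the documentation where the cut-off will land.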
It would be interesting if they applied agile methods in other real world scenarios. Imagine taking your car to the dealer to get it serviced. Their worksheet says a service takes two hours, and there is a list of things they're supposed to look at. You'd like to think that if the mechanic doesn't get them all done in the allocated time they would actually finish them off (even if you had to pay a bit more) rather than leaving the wheel nuts loose or not getting round to putting oil in the engine. Or maybe not having time to check if you need new brake pads.
Of course, every sprint during the dev cycle should produce a releasable product, so multiple revisions of the same section of code can often occur. So how do you plan documentation for such an approach? You can assume that some of the major features will get done, but you have no idea how far down the list they will get. Which means you can't plan the final structure or content of the docs until they get to the point where they are fed up fiddling with the code and decide to freeze for final testing. You end up continually reorganizing and reworking sections and topics as new features bubble to the surface.
But whilst the code may just need some semi-mechanized refactoring and tidying up to accommodate new features, the effect on the docs may require updates to feature overviews, links, descriptions, technical details, tables of methods, schematics, code samples, and the actual text - often in multiple locations and multiple times. The burden increases when the doc set is complex, contains many links, or may need to support multiple output formats.
What's the Story, Jackanory?
Each feature in the list of requirements is a "story", so in theory you can easily document each one by simply reading what the developers and architects say it does. And you can look at the source code and unit tests to see the way it works and the outcomes of new features. Or, at least, you can if you can understand the code. Modern techniques such as dependency injection, patterns such as MVC, and language features such as extension methods and anonymous typing mean that - unless you know what you are looking for and where to find it - it can be really hard to figure what stuff actually does.
In addition, the guys who write the unit tests don't have clarity and education as objectives - they write the most compact (and unrealistic in terms of "real world" application) code possible. OK, so you can often figure out what some feature does from the results it produces, but getting an answer to simple questions that are, in theory, part of the story is not always easy. I'm talking about things like "What does it actually do (in two sentences)?", "Why would I use this feature?", "How does it help users?", and "When and how would you recommend it be used?"
Even a demo or walkthrough of the code (especially from a very clever developer who understands all of the nuances and edge cases) can sometimes be only of marginal help - theory and facts whooshing over the top of the head is a common feeling in these cases. Yes, it showcases the feature, but often only from a technical implementation point of view. I guess, in true agile style, you should actually sit down next to the developer as they build the feature and continually ask inane questions. They might even let you press a few keys or suggest names for the odd variables, but it seems a less than efficient way to create documentation.
And when did you last see a project where there were the same number of writers as developers? While each developer can concentrate on specific features, and doesn't really need to understand the nuances of other features, the writer has no option but to try and grasp them all. Skipping between features produces randomization of effort and workload, especially as feedback usually comes in after they've moved on to working on another feature.
Is It Complete, Pete?
One of the integral problems with documenting agile processes is the incompatible granularity of the three parts of the development process. When designing a feature, the architect or designer thinks high level - the "story". A picture of what's required, the constraints, the objectives, the overall "black boxes on a whiteboard" thing. Then the developer figures out how to build and integrate the feature into the product by breaking it down into components, classes, methods, and small chunks of complicated stuff. But because it's agile, everything might change along the way.
So even if the original plan was saved as a detailed story (unlikely in a true agile environment), it is probably out of date and incorrect as the planned capabilities and the original nuances are moulded to fit the concrete implementations that are realistic. And each of the individual tasks becomes a separate technical-oriented work item that bears almost no direct relationship to the actual story. Yet, each has to be documented by gathering them all up and trying to reassemble them like some giant jigsaw puzzle where you lost the box lid with the picture on.
And the development of each piece can be easily marked complete, and tested, because they are designed to fit into this process. But when, and how, do you test the documentation and mark it as complete? An issue I've come up against time and time again. If the three paragraphs that describe an individual new feature pass review and test, does that mean I'm done with it? How do I know that some nuance of the change won't affect the docs elsewhere? Or that some other feature described ten pages later no longer works like it used to? When you "break the build", you get sirens and flashing icons. But how do you know when you break the docs?
Is It A Bug, Doug?
So what about bugs? As they wax and wane during the development cycle, you can be pretty sure that the project management repository will gradually fill up with new ones, under investigation ones, fixed ones, and ones that are "by design". I love that one - the software seems to be faulty, or the behavior is unpredictable, but it was actually designed to be like that! Though I accept that, sometimes, this is the only sensible answer.
Trouble is that some of these bugs need documenting. Which ones? The ones that users need to know about that won't get fixed (either because it's too difficult, too late, or they are "by design")? The ones that did get fixed, but change the behavior from a previous release? Those that are impossible to replicate by the developers, but may arise in some esoteric scenario? What about the ones that were fixed, yet don't actually change anything? Does the user need to be told that some bug they may never have come across has been fixed?
And what if the bug was only there in pre-release or beta versions? Do you document it as fixed in the final release? Surely people will expect the beta to have bugs, and that these would be fixed for release. The ones you need to document are those that don't get fixed. But do you document them all and then remove them from the docs as they get fixed, or wait till the day before release and then document the ones they didn’t get time to fix? I know which seems to be the most efficient approach, but it's not usually very practical.
Can I Reduce the Hurt, Bert?
It's OK listing "issues", but a post like this is no help to anyone if it doesn't make some suggestions about reducing the hurt, in an attempt to get better docs for your projects. And, interestingly, experience shows that not only writers benefit from this, but also the test team and other non-core-dev members who need to understand the software. So, having worked with several agile teams, here's my take (with input from colleagues) on how you can help your writers to create documentation and guidance for your software:
While the core tenets of agile may work well in terms of getting better code out of the door, an inflexible "process over product" mentality doesn’t work well for user documentation. Relying on individuals and interactions over processes and tools, working software over comprehensive documentation, and responding to change over following a plan, can combine to make the task of documentation more difficult.
Common effects are loss of fluidity and structure, and randomization. One colleague mentioned how, at a recent Scrum workshop for content producers, she came across the description: "We're in a small boat with fogging goggles riding in the wake of an ocean liner that is the engineering team".
Was It A Success, Tess?
I was discussing all these issues with a colleague only the other day, and he made an interesting comment. Some teams actually seem to measure their success, he said, by the way that they embraced agile methods during development. The question he asked was "Is the project only a success if you did agile development really well?" If analysis of all the gunk in your project management repository tells you that the agile development process went very well, but the software and the documentation sucks, is it still a triumph of software development?
Or, more important, if you maybe only achieved 10% success in terms of agile development methods but everyone loves the resulting software, was the project actually a failure...?
One of the features of working from home is that, if you aren't careful, you can suddenly find that you haven't been outside for several days. In fact, if you disregard a trip to the end of the drive to fetch the wheely bin, or across the garden to feed the goldfish, I probably haven't been outside for a month. I suppose this is why my wife, when she gets home from work each day, feels she has to apprise me of the current weather conditions.
Usually this is something along the lines of "Wow, it's been scorching today!" or "Gosh, it's parky enough to freeze your tabs off!" (tip for those not familiar with the Derbyshire dialect: "parky" = "cold", "tabs" = "ears"). However, last week she caught me by surprise by explaining that "It's rather Autumnal, even Wintery". Probably it's my rather weird fascination with words that set me off wondering why it wasn't "Autumny and Wintery", or even "Autumnal and Winteral". OK, so we also say "Summery", but "Springy" doesn't seem right. And, of course, I soon moved on to wondering if Americans say "Fally" or "Fallal". I'll take a guess that they end up with one of the poor relations in the world of words (ones that have to use a hyphen) with "Fall-like", just as we say "Spring-like".
Note: I just had a reply from a reader in the US North West who says that the only words they use for "Fall" are "rainy", "misty", "stormy", and "cloudy". I guess that agrees with my own experience of all the trips I've made there, including a few in "Summer".
The suffix "al" comes from Latin, and means "having the character of". So Autumnal is obvious. But what about "lateral"? Assuming the rules for the "al" suffix, it means "having the character of being late or later" (or, possibly, "dead"), whereas I've always assumed it meant something to do with sideways movement or thinking outside the box. Maybe I need to splash out $125 to buy Gary Miller's book "Latin Suffixal Derivatives in English: and Their Indo-European Ancestry". It sounds like a fascinating bedtime read. Though a more compact (and less expensive) option is the useful list of suffixes at http://www.orangeusd.k12.ca.us/yorba/suffixes.htm. This even includes a page full of rules to apply when deciding how to add different suffixes to words, depending on the number of syllables and the final letter(s).
So, coming back to my occasional trips outdoors, why do I have a "wheely" bin instead of a "wheelal" bin or even a "wheelive" bin? The suffix "y" is supposedly of Anglo-Saxon (and Greek) origin and means "having the quality of" or "somewhat like". Pretty much the equivalent of the "al" suffix, which also means "suitable for". The "ive" suffix means much the same, but - like "al" - derives from Latin. What I really have, however, is a "wheelable" or "wheelible" bin, using one of the Latin suffixes that means "capable of being (wheeled)". Is it any wonder that people find English a difficult language to learn? Though I suppose the definitive indication of this is "Ghoti" (see http://en.wikipedia.org/wiki/Ghoti).
Perhaps I can make the code samples in the technical documentation I create more interesting by applying descriptive suffixes? The "ory" suffix means "place for" (as in "repository"), so my names for variables that hold references to things could easily be customerAccountory and someTemporaryStringory. And a class that extends the Product base class could be the Productive class. That should impress the performance testing guys.
Or maybe I could even suggest to the .NET Framework team that they explore the possibility of suffixing variable types for upcoming new language extensions. An object that can easily be converted to a collection of characters should obviously be defined as of type Stringable, and something that's almost (but not quite) a whole number could be Integeral, Integeric, Integerous ("characterized by"), or just Integery.
There's a well-known saying that goes something like "Please engage brain before shifting mouth into gear". And another that says "If you can't stand the heat, get out of the kitchen". Yet, after what's now more than a year as a full-time 'Softie, I've managed to avoid being flamed for any of my weekly diatribes; and neither has anybody pointed out (at least within earshot) how stupid I am. So I suppose it's time to remedy that situation. This week I'm going to throw caution to the winds and trample wildly across the green and pleasant pastures of generics, and all without a safety net.
OK, I'll admit that it's probably not going to be a hugely technical venture, so if you are expecting to see lots of indecipherable code and complex derivations of generic functionality, you're likely to be disappointed. You might as well get your email client warmed up now ready to fire off some nicely toasted opinions. I mean, it's not like I actually know much about generic programming anyway. No, where I'm going with this is more slanted towards the requirements of my daily tasks - the terminology.
You see, I'm of the opinion that when I write technical documents, they should be comprehensive enough to ensure that readers can follow the topic without having to dive off and consult an encyclopedia or dictionary every five minutes. While I don't want to reduce it to the "grandmother sucking eggs" level, which would just be boring to average developers and probably excruciatingly useless to the really clever people out there, I do need to make it reasonably comprehensive and self-sufficient. I mean, I really hate it when I'm reading something and have to keep stopping and re-reading bits to try and figure out exactly what the writer actually meant.
You've probably been through this process, where you come across what seems like a simple word but the meaning in the current context is not familiar. The person who wrote it probably knows the topic inside out, and so the meaning is obvious to them. Yet not grasping what it really indicates in terms of the surrounding text can make the rest of the article pretty much meaningless. You need to know the topic before you can understand the bits that will help you to learn about it. Like: "Ensure you check the reverse osmosis defractor for discompulation before remounting the encapsulator". Does that mean you need to buy a new one if it's gone rusty? Who knows...
Anyway, getting back to generics, I recently seem to have stirred up a discussion about the meaning of three fairly ordinary words without really trying. In fact, I suspect I've put our current project back by a week as all of the brains-the-size-of-a-planet people on the dev team try and figure out exactly which words mean what, and how we should use them. So what are these three amazingly complicated and unfathomable words? Would you believe "unbound", "open", and "closed". In fact, you can throw in "constructed" as well if you like.
The definitions I'm working from are those in Microsoft's Glossary of .NET Framework terminology (see http://msdn.microsoft.com/en-us/library/6c701b8w(VS.80).aspx).
It seems that most folks agree that a generic type can be unbound or constructed, and a constructed type can be open or closed. Microsoft seem a little shy about defining what an unbound generic type is, though in the Generics FAQ they mention that "the typeof operator can operate on unbound generic types (generic types that do not yet have specific type arguments)". And there are plenty of definitions elsewhere that firmly identify it as being a generic type where the types of the type parameters are not defined.
Meanwhile, it seems like most people also agree on what a "closed" type is, so the main issue is: what's the difference between an "unbound" and an "open" generic type. If an open type is also a "constructed" type, does that mean that it has the type arguments actually populated - even though the code that's executing to achieve this only contains the placeholders? Or does "constructed" in the realm of generics mean something other than the more usual "built" or "assembled"? Does a generic type start out as "unbound" and then, as it becomes "constructed", does it move through "open" to "closed"?
Perhaps there is a nanosecond or two, as the electrons fight their way through the layers of silicon in the CPU constructing and closing the type, that it's no longer unbound, but not quite closed yet? Smacks of Heisenberg's uncertainty principle and Schrödinger's cat if you ask me. Maybe it's all wrapped up in particle and wave theory. Or could it be that unbound types don't actually exist at all? Maybe, as they aren't actually instantiated "things", they only exist in an ethereal context; or they only exist when you aren't looking at them. I suppose that, as the only thing you can instantiate is a closed type, we don't actually need to worry about the other types anyway.
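For what it's worth, the CLR's reflection API can at least tell the three states apart, even if the glossary can't. Here's a minimal sketch - the `Wrapper<T>` class is purely my own invention, contrived solely to conjure up an open constructed type:

```csharp
using System;
using System.Collections.Generic;

// A contrived class whose base type, List<T>, is an open constructed
// type: it has a type argument, but that argument is still Wrapper's
// own type parameter rather than a concrete type.
class Wrapper<T> : List<T> { }

class GenericStates
{
    static void Main()
    {
        // Unbound generic type (the generic type definition):
        // no type arguments supplied at all.
        Type unbound = typeof(Dictionary<,>);
        Console.WriteLine(unbound.IsGenericTypeDefinition);   // True
        Console.WriteLine(unbound.ContainsGenericParameters); // True

        // Open constructed type: arguments are supplied, but one or
        // more of them is still a type parameter of an enclosing type.
        Type open = typeof(Wrapper<>).BaseType;
        Console.WriteLine(open.IsGenericTypeDefinition);      // False
        Console.WriteLine(open.ContainsGenericParameters);    // True

        // Closed constructed type: every argument is concrete, so this
        // is the only one of the three you can actually instantiate.
        Type closed = typeof(Dictionary<string, int>);
        Console.WriteLine(closed.IsGenericTypeDefinition);    // False
        Console.WriteLine(closed.ContainsGenericParameters);  // False
    }
}
```

So "unbound" and "open" really are different animals as far as the runtime is concerned, whatever the blogosphere says.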
Trouble is, when working with a dependency injection mechanism (like what I'm doing now, as Ernie Wise would have said) you need to discuss registering types and type mappings for generic types that do not have all of the type arguments fully defined as concrete types. So we're trying to decide if these registrations are for "unbound" types or "open" types. When you define a registration or mapping in a configuration file, you use the .NET Framework type name without specifying the type parameters or type arguments; for example MyApps.MyTypes.MyGenericType`2 (where the "back-tick" and number define the number of type parameters in the type). When you use code to perform the registration, you also omit the type parameters; for example RegisterType(typeof(MyTypes.IMyInterface<,>)). As you saw earlier, Microsoft says that "the typeof operator can operate on unbound generic types (generic types that do not yet have specific type arguments)", and these sure look like they don't yet have specific type arguments.
But could these registrations really be "open" rather than "unbound"? All the definitions of "open" talk about the types of the type arguments being provided by "an enclosing generic type or method". When I'm just creating a type registration, how can the type parameters be populated from an enclosing type? If I'm registering the generic child of a parent type, I can't specify what (who?) the parent will be. If I'm registering the parent, I can't prevent somebody adding a class that has a different child. And I can't register a method at all. So there is no way I can specify where the type arguments will come from.
The problem is that almost everything else I read about using dependency injection mechanisms talks about only "open" and "closed" types. It's like "unbound" types are unclean, or unattractive, or maybe just not worthy of use in a shiny new container. Perhaps it's a conspiracy, and we'll discover that NASA actually buried them all in the Nevada desert so they can't be used to confuse clever programmers. And if I talk about unbound type registrations in my documentation, will people just make those "phhhhh" noises and toss it aside with a comment such as "...obviously doesn't have any idea what he's talking about...", "...waste of time reading that...", and "...who let him out again...?"
Or, could it be that I'm the only one who knows the truth...?
A great many years ago, when I was fresh out of school and learning to be a salesman, we had a sales manager who proudly advertised that his office door was "always open". What he meant, obviously, was that we could drop in any time with questions, problems, and for advice on any sales-related issue that might arise. Forgotten what step five of "the seven steps to making a sale" is? Having problems framing your "double-positive questions"? Struggling to find "a response to overcome an objection"? Just sail in through that ever-open door and fire away. Except that the only response you ever got from him was "...you can always rely on one end of a swimming pool".
Now, I was young and a bit damp behind the ears in those days - and probably not the sharpest knife in the box. So it was several months before I finally plucked up the courage to ask one of the older and more experienced members of our team what he actually meant. "Simple", said my wise colleague, "he means that you can always depend on a 'depend' (i.e. deep end). In other words, no matter what the question or situation, you can usually get the upper hand by prevaricating".
I guess you only need to watch politicians on TV to understand that they all take advantage of this rule. But, strangely enough (and I don't want to give too many precious retailing secrets away here), it can be remarkably useful. Imagine the scene: your customer can't make up their mind about some particular type of plant food (OK, so I was working in the horticultural trade in those days). They ask "Will it be safe to use on my prize Dahlias?" You answer "Well I was talking to a chap last week who managed to grow double his usual crop of tomatoes!" Notice that I didn't say he was using any product he might have purchased from me, or that he actually used any fertilizer at all. Such is the power of vagueness...
All I need do now is overcome the objection, and toss in a double-positive question: "I'm sure that the quality and the results you'll get will be far more important to you than the exceptionally low price we're offering it at this week", and "Shall I wrap it up for you to take away, or would you like us to deliver it tomorrow morning?" Amazing - seems I haven't lost the touch.
So, having regaled you with rambling reminiscences for the last ten minutes, I probably need to try and drift back on-topic. Or, at least, to within striking distance. What prompted all this was reading the preface to our new Microsoft Application Architecture Guide 2nd Edition. In it, David Hill explains that you can always tell an architect because their answer to any question is invariably "it depends". He doesn't say if this behavior extends to things like "Can I get you a coffee?" or "Would you like a salary raise?", but I'll assume he is generally referring to architectural design topics.
And, even though I'm not an architect (INAA?), I can see what he means. In the section on validation, for example, it quite clearly states that you should always validate input received from users. Makes a lot of sense. But what if you have a layered application and you want to pass the data back from the presentation layer to the business layer or data layer? Should you validate it at the layer boundary? Well, it depends on whether you trust the presentation layer. You do? Well it also depends on whether any other applications will reuse the business layer. Ever. And it depends if you have a domain model, and if the entities contain their own validation code. And then it starts to talk about how you do the validation, depending on what you are validating and the rules you want to apply.
You see, I thought at the start of this project that I could just learn the contents of the guide and instantly become "a proper architect". It's not likely to happen. What it does show is that, yes, you need to know the process, the important factors, and techniques that will help you to perform analysis and make good decisions about structure and design. And you need to understand about conflicting requirements, quality attributes, and crosscutting concerns. You have to understand the types of applications, and their individual advantages and liabilities. In fact, there is an almost endless stream of things that you could learn to help you "do architecture".
And that, it seems, is where the guide really comes into its own. It explains the process and the important elements of design. It lists the vital requirements, and provides guidelines to help you make decisions. It tells you what to look for, where to look, and where to find out more. It even contains detailed references to things like components, layers, tiers, technologies, and design patterns (see http://msdn.microsoft.com/en-gb/library/dd673617.aspx).
And, at almost every step, it proudly flaunts its "it depends" approach. Architecture is not, it seems, about learning rules. It's learning about software and applications, environments and infrastructure, requirements and conflicts, people and processes, techniques and technologies, and - best of all - having your say in how a small part of our technological world behaves. So, come on in - the water's lovely...