Random Disconnected Diatribes of a p&p Documentation Engineer
After the "hot stuff" article of a few weeks ago, I thought I might as well shift focus towards another similarly inane topic, like showers. You see, one thing they seem really good at in the U.S. is doing showers (the bathroom type, not the weather type - though Redmond does seem to have an equal share of both). Even when I stay in relatively down-market hotels, the rooms always seem to have a good shower. In fact, in one I used a while ago, I actually got a wet room; though my wife would probably suggest that any bathroom I use is a wet room after I'm finished.
So how difficult is it to provide a good shower? Let's face it, you only need three components and one remote service: a pipe in the wall for the water to come out of, a hole in the floor for it to run away, and a knob that controls the temperature. As for a remote service, someone has to provide hot and cold water, but I guess if you intend to install a shower that's a given anyway. Yet, here in Ye Olde England, we don't seem to have grasped the technology quite so well. Our house has what the builder referred to as "a top quality shower" installed. What this means is that the tray doesn't sag and crack when you stand on it (even if you do need to climb a 10 inch high step to get in), the cabinet panels are real toughened glass (instead of plastic), and the shower itself is a top of the range electric thing from one of the major manufacturers.
So why is it so useless? My wife always knows when I'm in the shower from the swearing and clattering noises because it's so small I keep banging my elbows against the side. And if you drop the soap, you have to turn off the shower and get out to pick it up because there isn't room to bend down. Not only that, the flow in wintertime is so poor you have to run around to get wet (or you would have to if there were room). But the worst part is that you spend the first ten minutes of your shower trying to get the temperature right, even if it was perfect yesterday, and then you get scalded when someone turns a tap on elsewhere in the house.
Now, I'm not a professional UI designer or an expert on domestic plumbing, but I reckon the only thing you are really interested in with a shower is the temperature of the water coming out of the pipe in the wall (I'm assuming here it's not that hard to provide a hole in the floor for the water to run away). Yet the "top quality" thing installed in our bathroom has two user input devices: a three-position switch marked "high", "medium", and "low", and a knob labelled "flow" that goes from "low" to "high". Notice no mention of the important requirement "temperature". The idea is that you randomly fiddle with these two controls until the temperature is about right, and then hope nobody turns on a tap. Of course, there is a built-in delay while the two controls interact with each other, so that any adjustment you make takes a minute or two to affect the water temperature. And the final temperature depends on the current water pressure and the incoming water temperature, so you can guarantee it will be different every time.
Just think if we built our software applications like this. Instead of Windows having a volume control in the taskbar, it would have a selector for choosing the particular integrated circuit on the sound card you want to use, and a slider for changing the voltage you send to it; and any changes you make would have no effect until the next song started playing. Or your enterprise application would have two controls where you entered the price of an order: a set of option buttons where you specify the number of zeros in the amount, and a button you click repeatedly until it randomly shows the actual value you want.
So, getting back to the wet stuff, we decided to have the shower replaced by something more usable. Of course, the main problem here is that this process involves a plumber - especially as we had to have the heating radiator moved to make room for a bigger shower cabinet. I don't know what it's like in other places, but here it tends to resemble the Flanders & Swann "The Gasman Cometh" affair: each tradesman's visit fixes one thing and breaks another, so the process basically never ends.
Mind you, what really amazed me is that the original builder and the plumber who is installing the new one seem to have come from different centuries. To save making holes in the wall, the plumber suggested just having a single pipe hanging down from the ceiling and a "remote management console" fixed to the wall. While I initially had visions of an MMC snap-in running on my domain controller, it seems that all it needs is some magic box hidden up in the attic and a neat little keypad thing with a built-in LCD display stuck on the wall inside the shower.
Wow, is this the 21st century or what? Turns out that it's all wireless and automatic. You just dial the temperature you want, press the green button, wait until it beeps, and jump into the cubicle. Of course, being a confirmed technophobe, I wasn't fully convinced. Will it have Ctrl-Alt-Del buttons? Will I need to edit the registry when it goes wrong? And what happens when my wireless router finds it - will I get scalded every time my wife sends an email? Somehow, you just know that the reality will be different from the advertised nirvana.
However, the plumber then mentioned to my wife that it comes with a second "slave" remote unit that she can put by the bed, so she can turn on the shower before she gets up in the morning. Or even keep it in the car so she can have the shower running and ready when she pulls up in the driveway after a hard day at work. At this point, any influence I may have had in the decision-making process was lost.
Wouldn't it be great if persuading clients to buy your latest and greatest software creation were that easy...?
A couple of years ago I (somewhat inadvertently) got involved in learning more about software design patterns than I really wanted to. It sounded like fun in the beginning, in a geeky kind of way, but soon - like so many of my "I wonder" ideas - spiralled out of control.
I was daft enough to propose a session about design patterns to several conference organizers, and to my surprise they actually went for it. In a big way. So I ended up having to keep doing it, even though I soon realized that I was digging myself into the proverbial hole. Best way to get flamed at a conference or when writing articles? Do security or design patterns. And, since I suffered first degree burns with the first topic some years ago, I can't imagine how I drifted so aimlessly into the second one.
Mind you, people seemed to like the story about when you are all sitting in a design meeting for a new product, figuring out the architecture and development plan, and some guy at the back with a pony tail, beard, and sandals suggests using the Atkinson-Miller Reverse Osmosis pattern for the business layer. Everyone smiles and nods knowingly; then they sneak back to their desk to search for it on the Web and see what this guy was talking about. And, of course, discover that there is no such pattern. Thing is, you never know. There are so many out there, and they even seem to keep creating new ones. Besides which, design patterns are scary things to many people (including me). Especially if UML seems to be just an Unfathomable Mixture of Lines.
Of course, design patterns are at the core of best practice software development, and part of most things we do at p&p. So I now find myself on a project that involves documenting architectural best practice. And, since somebody accidentally tripped over one of my online articles, they decided I should get the job of documenting the most common and useful software patterns. No problem, I know most of the names and there is plenty of material out there I can use for research. And we have a team of incredibly bright dev and architect guys to advise me, so it's just a matter of applying the usual documentation engineering process to the problem. Gather the material, cross reference it, analyze it, review it, and document the outcome.
Ha! Would that it were that easy. I mean, take a simple pattern like Singleton. GOF (the Gang of Four in their book Design Patterns: Elements of Reusable Object-Oriented Software) give the definition of the pattern and its intended outcome. To quote: "Ensure a class only has one instance, and provide a global point of access to it." So documenting the intentions of the pattern is easy. The problem comes when you try to document the implementation.
It starts with the simple approach of making the constructor private and exposing a method that creates an instance on the first call, then returns it every time afterwards. But what about thread safety when creating the instance? No problem; put a lock around the bit that checks for an instance and creates one if there is no existing instance. But then you lock the thread every time you access the method. So start with the test for existence, and only lock if you need to create the instance. But then you need to check for an existing instance again after you acquire the lock, in case you weren't quick enough and another thread snuck in while you weren't looking. OK so far, but some compilers (C# among them, though not Visual Basic) optimise the code and can effectively skip that second existence check, because nothing inside the routine appears to change the value of the instance variable between the two checks. So you need to mark the variable as volatile.
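The check-lock-check dance is easier to see in code than in prose. Here's a minimal sketch - in Python rather than the C# and Visual Basic the text discusses, and with an invented `AppConfig` class. The volatile wrinkle is a C# JIT detail with no direct Python equivalent, so this shows only the double-check structure:

```python
import threading

class AppConfig:
    """Classic double-checked locking shape for a Singleton."""

    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # First check: skip the lock entirely on the common fast path.
        if cls._instance is None:
            with cls._lock:
                # Second check: another thread may have created the
                # instance while we were waiting for the lock.
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance
```

The outer check keeps repeat accesses lock-free; the inner check covers the thread that lost the race to acquire the lock.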
Now that you've actually got the "best" implementation of the pattern, you discover that the recommended approach in .NET is to allow the framework to create the instance as a static (shared) variable automatically, which they say is guaranteed to be safe in a multithreaded environment. However, this means that you don't get lazy initialization - the framework creates the instance when the type is first loaded, not when you first need it. But you can declare the instance inside a nested class and let .NET create it only on demand. So which is best? What should I recommend? I guess the trick is to document both (as several Web sites already do). Problem solved? Not quite. Now you need to explain where and when use of the pattern is most appropriate.
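The eager-versus-lazy trade-off translates to any language. A sketch of both, again in Python with invented names (`Expensive`, `LazyHolder`) - the real .NET mechanisms are static field initialization and the nested-class trick, and thread safety is omitted here for brevity:

```python
class Expensive:
    """Stand-in for a costly-to-build object; records each construction."""
    created = []

    def __init__(self, label):
        Expensive.created.append(label)

# Eager: built the moment this module loads, whether anyone needs it
# or not - roughly what a plain static/shared initializer gives you.
EAGER = Expensive("eager")

class LazyHolder:
    """Nothing is built until get() is first called - the moral
    equivalent of the nested-class trick."""
    _instance = None

    @classmethod
    def get(cls):
        if cls._instance is None:
            cls._instance = Expensive("lazy")
        return cls._instance
```

Load the module and only "eager" exists; the "lazy" instance appears on the first `LazyHolder.get()` and is reused thereafter.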
At this point, I discovered I'd proverbially put my head into a hornet's nest. Out there in the real world, there seems to be a 50:50 split between people saying using Singleton is a worse sin than GOTO, and those who swear by it as a useful tool in the programmer's arsenal. In fact, I spent an hour reading more than 100 posts in one thread that (between flamings) never did provide any useful resolution. Instead of Singleton, they say, use a shared or global variable. As much of the online stuff seems to describe Singleton only as an ideal way to implement a global counter, I can see that reasoning. However, I've used it for things like exposing read-only data from an XML disk file, and it worked fine. The application only instantiates it on the few occasions that the data is required, but it's available quickly and repeatedly afterwards to all the code that needs it. I suppose that's one of the joys of the lazy initialization approach.
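That read-only-data scenario is roughly this shape - a sketch with an inline XML string standing in for the disk file, and with invented names throughout:

```python
import threading
import xml.etree.ElementTree as ET

# Inline XML standing in for the read-only file on disk.
_SETTINGS_XML = "<settings><timeout>30</timeout><retries>3</retries></settings>"

_settings = None
_lock = threading.Lock()

def get_settings():
    """Parse the XML once, on first request, then hand back the same
    dictionary to every caller afterwards (lazy initialization)."""
    global _settings
    if _settings is None:
        with _lock:
            if _settings is None:
                root = ET.fromstring(_SETTINGS_XML)
                _settings = {child.tag: child.text for child in root}
    return _settings
```

The parse cost is paid only if some code path actually asks for the data, and paid only once.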
And then, having fought my way through all this stuff, I remembered the last project I worked on. If you use a dependency injection framework or container (such as the Unity Application Block) you have a built-in mechanism for creating and managing singletons. It even allows you to manage singletons using weak references where another process originally generated the instance. And you automatically get lazy initialization for your own classes as well as singleton behavior - even if you didn't originally define the class to include singleton capabilities. So I guess I need to document that approach as well.
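What the container buys you can be shown in miniature. This toy registry is an invented illustration, not the Unity Application Block API (Unity does the equivalent job with `RegisterType` and lifetime managers): it caches an instance on first resolve, which gives lazy initialization and singleton behaviour to a class that contains no singleton plumbing at all:

```python
class Container:
    """Toy dependency-injection registry with singleton lifetimes."""

    def __init__(self):
        self._factories = {}   # key -> (factory, wants_singleton)
        self._singletons = {}  # key -> cached instance

    def register(self, key, factory, singleton=False):
        self._factories[key] = (factory, singleton)

    def resolve(self, key):
        factory, singleton = self._factories[key]
        if not singleton:
            return factory()  # fresh instance every time
        if key not in self._singletons:
            # Lazy: the singleton is built on first resolve, and the
            # registered class needs no singleton code of its own.
            self._singletons[key] = factory()
        return self._singletons[key]

class AuditLog:
    """An ordinary class, oblivious to how it is managed."""
    pass

container = Container()
container.register("log", AuditLog, singleton=True)
container.register("scratch", AuditLog)
```

Resolving "log" twice yields the same object; resolving "scratch" twice yields two.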
And then there are sixty or so other patterns to tackle. And some of them might actually be complicated...
I suppose most people have a "natural" language. I pride myself on the fact that I speak three languages: English, American, and Shouting (used in all other situations). However, while the majority of us geeks are probably mono-dialectic or bi-dialectic in terms of spoken languages, we do tend to be multi-syntactic in terms of computer languages. In fact there can’t be many older members of the geek fraternity who don't have a passing knowledge of some dialect of BASIC. It might be GW Basic, Commodore Basic, or some variety of Visual Basic. Of course, these days, many refrain from admitting this, especially if they spent time working with what they see as "proper" languages (and I'm thinking C++ here).
Over time, the languages we've all used have changed. Some that looked especially promising in an "I've just got to learn that one" sort of way, such as Objective Caml, have dropped by the wayside. Others flourish and improve with each release. In the Microsoft world, the language that seems to be growing faster than any other is, of course, C#. I guess the acceptance as an ISO standard and the availability of the platform-agnostic CLI has helped. And, in most circumstances, the syntax is sufficiently simple compared to C++ that it is relatively easy to learn...
I often wonder where I stand in this changing world of languages. I learnt BASIC, FORTH, and assembly language programming on several home computers. I learnt Pascal, Ada, a little LISP, some COBOL, and a smattering of FORTRAN during university courses. I learnt the rudiments of RPG2 at one of my employers (even though I didn't work in the IT department, and had to sneak in when the boss was away). Then I took up the Microsoft cause and learnt Visual Basic, Access Basic, WordBasic, and myriad other variations.
I remember reading somewhere a long while ago that any competent programmer can learn how to use a new language in a day, and master it in a week. There were many arguments on that bulletin board thread (see, I said it was a while ago), but I tend to believe that it's true. Of course, few languages are really "new". Knowledge of Pascal, Ada, or JScript makes learning C# and Java relatively easy. VBScript makes Visual Basic easier, and vice versa. The core programming concepts are mostly the same, and it’s just a matter of learning the syntax and keywords.
So where am I going with this rambling visit to computing language history? Well, it comes about because I increasingly fall out with the dev teams about how they write code. Yes, I know it's a wild assumption that a writer might know anything about real programming, but I don’t want to change how they write code - I just want to change how they decide on names and how they think about users of other languages.
A perfect example of the battles I seem to fight over and over again is the new Unity Application Block. The development team write in C#, and the original name of the method to retrieve object instances was "Get", which is - of course - a Visual Basic keyword. Thankfully, after I grumbled at them, it was changed to "Resolve". However, the real issues are things like variable names used in the code. I've struggled with this problem for many years, in articles, books, and documentation where the equivalent code is listed in more than one language; yet it seems I am getting no nearer to changing attitudes.
The technique I call "C++ naming" that many C# developers adopt is simply to lower-case or camel-case the class name to produce the variable name; and because Visual Basic is case-insensitive, the Visual Basic version has to prefix the variable name with something else for the code to remain remotely similar. And generally that "something" is the hated underscore (which doesn’t show up well in listings). In fact, the patterns & practices coding guidelines actually say "it is important to differentiate member variables from properties when the only difference in C# is capitalization". I hate to think how many hours I waste renaming variables in listings just to get the code to look the same in all the languages. And is it actually important? Do people ever read more than one language listing?
I don’t know about you, but I hate to see listings that differ between languages where this is not actually necessary. We tend to list only C# and Visual Basic, and in a few cases (such as anonymous delegates and methods), you do need to change the code between C# and Visual Basic. Plus, in cases where the developers use "C# best practice" or take advantage of features available only in C# syntax (which thankfully are few), I have to accept that I'll need to do some work to get to a Visual Basic translation - though automated tools do help a great deal.
Due to a combination of wild assumption and striking incompetence, I recently ended up repeating a long and pointless journey and overnight stay in the following week. I'm pleased to say that only the wild assumption was on my part - I assumed that an email containing details of a definite appointment meant that I was supposed to turn up at the specified time and place - whereas the striking incompetence became apparent when there was nobody else there. I knew that things were turning fruit dimensional (pear-shaped) when the receptionist searched in vain for my name in three folders and a ring binder, then started making random phone calls.
I guess you've been through this experience yourself at some time and will recognize the symptoms. However, besides the usual pondering on what shape pears are that haven’t gone wrong, what really got me thinking is how the costing policy of the (rather less than salubrious) hotel works. For the first trip, I booked a Sunday night stay about three weeks prior and it cost around 40 US dollars room-only for one night. For the second trip, I booked four days ahead and it cost something nearer to 100 US dollars. OK, so maybe I'm going to get a penthouse with wall-to-wall grand pianos, hot and cold running servants, and silk sheets. Or perhaps they included a banquet meal and a West End show.
Turns out that I got the same room, the same level of non-service, the same single and very small towel (though they had washed it), and the same view of the same brick wall out of the window. There was approximately the same number of guests and the same number of cars in the car park. It was the same day of the week, and even the weather was about the same. It just cost two and a half times more.
Now, I can understand that prices change based on factors such as demand, the time of year, the day of the week, the general occupancy level trends, the cost of maintenance and bank loans, the number of months left before the owner needs to change their Rolls Royce for a new one, and hundreds of other variable factors. But it still seems a bit steep, just because I booked less than a week ahead.
I suppose this is one of the problems with the Web, online booking systems, and technology in general. Instead of printing a price list that people can see (and so has to at least appear to be relatively reasonable in the way charges are calculated), you can hide it all in the business logic behind a flashy Web page and make semi-random (usually upwards) movements in the price on a whim. Ashtrays full in the Roller? Just change a configuration setting so everyone pays twice as much for the same thing for a couple of days.
So can I adopt this approach in my charging scheme? Maybe the documentation team here at p&p can figure out a way to base our chargebacks on some crafty business logic that combines essential factors such as the weather (we could be out in the garden), the day of the week (the pool hall charges half price on Wednesdays), or the time of year (we could be on a beach somewhere on vacation). Combined, of course, with the complexity of the task (need to find the appropriate reference book), the urgency (chance of keyboard friction burns), and the topic (might have to learn new stuff). In addition, to prevent unwelcome charge calculation transparency, we'll take the ANSI code of the first letter of the requester's name and add that on to the hourly rate.
Wow, sounds like a plan. So, do you need any documentation work doing this week...?