Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

  • Writing ... or Just Practicing?

    Hybrid Triangulation (with Cat Food and Bananas)

    • 2 Comments

    Once again I'm at one of those gloriously satisfying stages in my p&p working life when I'm trying to define the structure for a new guide. We know what technologies we want to cover, how we will present the guidance, and the kind of sample that we'll provide to demonstrate the all-encompassing wonderfulness of the technologies on offer. But after two weeks of watching videos, perusing technical documents, consulting experts, and nursing RSI from repeated spells of vicious Visioing, I'm still floundering in a cloud of Azure confusion.

    The target for the project is simple enough: explore the opportunities for building hybrid applications that run across the cloud/on-premises boundary, and provide good practice guidance on implementing such applications. It obviously centers on integration between the various remotely located bits, the customers and partners you interact with, and the stuff running in your own datacenter; and there is a veritable feast of technologies available in Azure targeted directly at this scenario.

    So why is it so difficult to get started? Surely we can toss a few components such as Web and Worker roles, databases, applications, and services into a virtual food mixer and pour out a nice architectural schematic that shows how all the bits fit together. I wish. Even with bendy arrows and very small text I still can't fit the result onto a single Visio page.

    Obviously you need a list of the technologies you want to use. In our case, the first things going into the plastic jug are ingredients such as Azure Service Bus (with its myriad and still growing set of capabilities), Azure Connect, Virtual Network Manager, Access Control Service, Data Sync, Business Intelligence, Data Market, and Azure Cache. Then add to that a pinch of frameworks such as Enterprise Library Extensions for Azure and Stream Insight.

    Yet every connection between the parts raises different questions. Where do I put the databases (cloud or on-premises) to resemble real-world scenarios but still show technologies such as Connect and Data Sync in action? Do I use Service Bus Queues or Topics and Rules to communicate between the cloud application and the suppliers? If I use ACS for authentication, when and where do I match the unique customer ID with their data in the Customers database? What's the most realistic location for the stock database, and do I replicate it to SQL Azure or just cache the minimum required content in the cloud instances? Does SQL Federation fit my scenarios, or is that a whole different kettle of fish that deserves a separate recipe book?

    And, most confusing of all, how do I cope with multiple geographical locations for the Azure datacenters and the warehouse partners who fulfill the orders? Do I allow customers to place orders that will be fulfilled from any warehouse (with the associated problem of delivery costs), or do I limit them to ordering only from their local warehouse? And if I take the second option (assuming I have a warehouse partner in both the East and West US), what happens if somebody in New York wants to place an order for delivery to California?

    And after you decide that, look what happens when you factor in Azure Traffic Manager. If you use it to minimize response times and protect against failures, the customer in New York might end up being routed to the California datacenter. That's fine if they want the goods delivered to California, but most likely they'll want them delivered to New York and so the order needs to go to that warehouse. Unless, of course, the New York warehouse is out of stock but they have some in the California warehouse.
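    Just to make the Topics-and-Rules option a little more concrete, here's the kind of thing I keep sketching on the whiteboard: a minimal, hypothetical snippet using the Service Bus brokered messaging API, with invented topic, subscription, and property names (the real sample will no doubt end up looking rather different).

    ```csharp
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    public static class OrderRouting
    {
        // Create a topic for new orders, with one subscription per warehouse partner.
        // Each subscription has a SqlFilter rule so that an order lands only in the
        // subscription for the warehouse that serves the customer's delivery region.
        public static void CreateTopology(string connectionString)
        {
            var manager = NamespaceManager.CreateFromConnectionString(connectionString);

            if (!manager.TopicExists("neworders"))
            {
                manager.CreateTopic("neworders");
            }

            if (!manager.SubscriptionExists("neworders", "EastUSWarehouse"))
            {
                manager.CreateSubscription("neworders", "EastUSWarehouse",
                    new SqlFilter("DeliveryRegion = 'EastUS'"));
            }

            if (!manager.SubscriptionExists("neworders", "WestUSWarehouse"))
            {
                manager.CreateSubscription("neworders", "WestUSWarehouse",
                    new SqlFilter("DeliveryRegion = 'WestUS'"));
            }
        }

        // The cloud application stamps each order with the delivery region so that
        // the topic rules can route it to the appropriate warehouse subscription.
        public static void SendOrder(string connectionString, string orderXml, string deliveryRegion)
        {
            var client = TopicClient.CreateFromConnectionString(connectionString, "neworders");
            var message = new BrokeredMessage(orderXml);
            message.Properties["DeliveryRegion"] = deliveryRegion;
            client.Send(message);
            client.Close();
        }

        // Each warehouse partner receives only the orders for its own region.
        public static BrokeredMessage ReceiveOrder(string connectionString, string subscriptionName)
        {
            var receiver = SubscriptionClient.CreateFromConnectionString(
                connectionString, "neworders", subscriptionName);
            return receiver.Receive();
        }
    }
    ```

    It doesn't answer the Traffic Manager question, of course; whichever datacenter the customer lands in, the order still has to carry enough information for the rules to route it to the right warehouse.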

    Of course, the whole concept of integrating applications and services is not new. Enterprise Application Integration (EAI) is a big money-spinner for many organizations, and everybody has their own favored theory accompanied by a different architectural layer model. And don't forget BPM (Business Process Management) and BPR (Business Process Reengineering). I read a dozen different reports and guides and none of them had the same layers, or recommended the same process.

    And, in reality, building a hybrid application (or adapting an existing on-premises application into a hybrid application) is not EAI, BPM, BPR, or any of the myriad other TLAs. It's a communication and workflow thing. Surely the core questions are how you get service calls, messages, and management functions to work securely across the boundaries, and how you manage processes that require those calls and messages to happen in the correct order, making decisions at each step along the way. Yes, you can match these questions to layers in many of the EAI models, but that doesn't really help with the overall architecture.

    What went wrong with the whole design process was that we started with a list of technologies rather than a business scenario that required a solution. We went down the route of trying to design an application that used all of the technologies in our list, but used each one only once (otherwise we'd be introducing unnecessary duplication). We'd effectively taken ingredients at random from the cupboard and expected the food mixer to turn them into a palatable, attractive, and satisfying beverage. It's obviously not going to work, especially if you keep the cat food in the same cupboard as the bananas.

    In the real world people start out with a problem that the technologies can help to solve, not a predefined list of technologies chosen because they have tempting names and capabilities. If you want to build a public-facing website with an on-premises management and reporting application, you wouldn't start by buying 100 copies of Microsoft Train Simulator and a refrigerator. You'd design the application based on requirements analysis and recommended architectural styles, then order the stuff in the resulting Visio schematic. Somewhere along the line the choice of technologies would be based on the application requirements, rather than the other way round.

    So at the moment we're tackling the issues from all three ends at once, and hoping for some central convergence. On our mental whiteboard there's a big circle containing the list of required technologies, another containing EAI and other TLA layer models, and a third containing the possible real-world scenarios. I'm just hoping that, like an Euler diagram, there will be a tiny triangle in the middle where they overlap.

    But that's enough rambling. The pains in my fingers are starting to recede, so I need to get Visioing again. I reckon I've still got some bendy arrows left that I can squeeze in somewhere...

  • Writing ... or Just Practicing?

    I Don't Believe It!

    • 0 Comments

    So it's been an interesting week in the world of amazingly unbelievable new technologies. I can't make up my mind which is the most implausible: test-tube sausages, invisible military vehicles, or Boolean values that are only 70% true. It reminded me of the story about the young boy who asks his Grandfather whether it's true that he still has his old army tin helmet from the war in France, to which the old man replies "Yes, it's in the attic behind the tank". The young lad's eyes widen in amazement as he exclaims "What, you've got a tank up there as well!"

    According to my newspaper, scientists at Maastricht University in Holland have perfected a technique for artificially growing muscle tissue from stem cells that could be used to make sausages and burgers. Maybe they don't think that the somewhat grey and rather mushy result will put people off, or that very few diners are likely to respond well to a menu item called "Stem Cell Muscleburger and Fries" in their local fast food joint. And they reckon it can be made to "look and feel (!) more like traditional meat", so that's OK; though the current cost of a quarter of a million dollars per burger may limit market demand.

    Mind you, the place where money is often no object is in building military equipment. My local news site The Register recently reported on the Swedish military's announcement of an invisible tank. They even published a rather nice picture of an empty field to show what it would look like. They don't say how they knew where it was parked at the time, but I guess the technology is safe from industrial spies taking surreptitious photos of it.

    And at least there is some scientific explanation of how it works. You point a video camera at the scenery behind the tank and display the image on sheets of pixels nailed to the outside of it. It's a bit like James Bond's invisible Aston Martin. I wonder if that's where they got the idea. And it's neat that a related search on Bing brings up a link "Images of James Bond's Invisible Car". Obviously it doesn't work very well because I can see the car in the pictures.

    So maybe both the sausages and the invisible tanks are semi-believable. But what about Matteo Mariantoni's quantum computer where the bits flying around the super-cooled interior are both one and zero at the same time, or maybe they're one only 70% of the time and zero 30% of the time, or something else. There's a video where Matteo explains how each qubit consists of trillions of trillions of particles, and a nice graphic of a pinball table that helps to make it clear exactly what's going on inside.

    They do admit that it will be a while before this becomes the home computer paradigm. And I reckon I'd be a bit nervous of having something on my desk that needs to be cooled to near absolute zero, and has trillions and trillions of particles rushing about inside trying to make up their mind whether they're ones or zeros. Though I'm not sure my current laptop is any more convinced of the values of the variables rattling round inside it when I review what I've written some days.

    Meanwhile, the video of the quantum computer even shows a view of the machine itself, which rather resembles a large mechanical jellyfish. Of course, it probably won't look quite so scary once they put it in a fancy exterior case with some USB sockets on the back. They could even talk to the nice people in Sweden and make it invisible to save the bother of designing a nice case, but whether I'd remember where I left it after I stop for my lunchtime Quarter-pound Muscleburger with Cheese might be a problem...

  • Writing ... or Just Practicing?

    Time to Stop Typing and Start Thinking

    • 0 Comments

    It's amazing how, sometimes, things get simpler the more you fiddle with them. Or, to be more precise, something that seems to be evolving into an increasingly complicated problem turns out to be easy to resolve when you step back and look at it from another direction. I guess it's what they call "lateral thinking"; the archetypal example being letting air out of the tyres of a truck that's just a bit too tall to pass under a low bridge.

    It actually happened to me this month (the lateral thinking bit, not the deflating truck tyres bit) as I've been updating some of my own server-monitoring-kludge software. I'm still skirting the decision on installing Microsoft System Center to manage my servers. It's made more complicated by the fact that they are on different networks, public and private, and I really don't fancy tackling the complexity of such a product just to get monitoring information.

    Over the years I've been using a selection of home-built Windows UI-based utilities to do things such as collect Event Log warnings and errors, monitor websites for connectivity, check for changes to firewall rules initiated by software updates (or by other more nasty causes), and monitor IIS logs for attacks or unusual activity. The trouble is that none of these work unless you are logged in, and collecting the information from each one is more complicated than it should be.

    So I finally decided to put together a Windows Service that can run at startup and collate all the required information in one place. It shouldn't be hard; I already have all the code in the other utilities, so it's just a matter of combining it into one lump of executable stuff. Except that's where it started to get complicated, because everything depends on a timer that coordinates the activities.

    In the separate utilities the "timer tick" routines are optimized for the specific activity, and trying to combine them all into one resulted in a huge and unmanageable routine that attempted to reset the timer interval in multiple places. It needs to adjust the interval for requirements such as testing for recovery of failed websites at varying intervals, and concurrently manage different intervals for all of the other functions. At first I wondered whether to just include multiple System.Threading.Timer instances, one for each function. But to minimize server load and avoid threading problems when writing to log files I'd need to synchronize them so that only one function would be running at any time.

    I tried several different ways of updating the timer's repeat interval in each monitoring routine, and then found myself debugging problems with the various bits of code that changed it depending on the current status. Then I tried having it fire every minute and getting the code to check for functions that should execute at that specific time. But it all seemed to be hugely over-complicated. There was always an edge case I'd missed, or some sequence of events that broke the cycle. Lack of proper design and planning up front (mainly because I was trying to reuse existing code) meant that the source file just kept growing, and each new bit raised another problem. Time to sit back, stop typing, and start thinking.

    And that's when the solution became obvious. I don't actually need the timer to fire at specific intervals; I only need it to fire once after the required delay, at which point I can reset it. The interval depends on the status of each monitoring function (for example, if a website has failed and is waiting for recovery, or when the next specific monitoring check is due). When the timer fires I can simply disable it, execute any pending functions, figure out how long it is until the next monitoring action is due, and start the timer again with the appropriate delay.

    In other words, in each "tick" event I can test whether it's time to execute each of the monitoring functions, carry out the ones that are due, and then - after all that's done - calculate the new required delay and set the timer to fire at that time. Each monitoring function knows how many minutes should elapse before it executes again. So a simple routine called at the end of the "tick" event handler code just iterates over all of the functions to return the lowest "number of minutes until due" value, sets the timer to this interval, and starts it running. And the interesting point is that, if I'd been designing the program from scratch, this is probably how I'd have decided to do it in the first place!

    If you are feeling brave you can try out the server monitor service yourself (it's free). Get it from here.

    Thankfully the IT world is reasonably well protected from the results of my vague program design approach, because my day job is writing about code rather than creating it. However, it struck me how similar this "evolution to simplicity" is to my own world of writing guidance and documentation. When I worked on the Enterprise Library 5 project some time ago we had a big guidance management problem trying to reuse nearly 1000 pages of documentation that had accumulated from the previous four versions. Attempting to massage it into shape and add new information turned out to be a nightmare job, in particular for the features that were new or had changed significantly.

    In the end, it was only by scrapping large chunks and writing more targeted guidance for these features that we managed to scramble out of the mire. We did reuse small blocks of the original documentation, though (particularly for the Unity DI mechanism) we ended up writing mostly new content. It would be nice to say that we'd planned this approach at the start of the project, but sadly that's not true. We knew we'd need to rework the content, but trying to do that without a proper up-front plan just made the whole thing over-complicated and less useful.

    Stopping typing and starting thinking allowed us to define what we felt was the ideal documentation structure, into which we could drop the appropriate blocks of existing content and then build around them following the plan. In development terms, we'd refactored the code and reused the existing functions, but written a new control loop that fired the specific functions at the appropriate times in the execution cycle.

    I wonder if I can persuade the Office team to add the Visual Studio refactoring functionality into Microsoft Word...

  • Writing ... or Just Practicing?

    Material Choices

    • 0 Comments

    According to a recent revelation from a colleague, freshly returned from the bi-annual International Sock Summit, you can now buy sock knitting needles made from carbon fibre; the same stuff they used to make the Stealth Bomber. Why? Maybe it's so you can knit socks that can't be detected by enemy radar? Or it's so they won't break from the rapid heat generation and extreme strain during a sock-knitting speed contest (and, yes, there are such things - see this site if you don't believe me).

    Probably it's really all about status symbols and consumer aspiration. I guess it follows along from similar seemingly inappropriate material choices such as using airplane-grade aluminium for the cases of laptops (perhaps they are designed to fly when you toss them out of a window in exasperation at some software glitch). Or the protective sleeve for my mobile phone that is made out of ultra-tough scratch-resistant polyvinylsomethingorother, and is so slippery that the phone slides off my desk at least twice a day. Good thing, I suppose, that it has an ultra-tough protective sleeve.

    But a cursory examination of modern consumer goods soon reveals that material choice is something manufacturers never really got the hang of. You only have to walk past our stainless steel microwave oven and the shiny front panel needs cleaning again. And the people who designed our electric kettle decided, for some reason, that the best material for the little lever that connects the switch to the internal gubbins is some kind of thin and very fragile plastic. We're on our third replaced-under-guarantee one already...

    So it's only right that I should strive to avoid poor material choice in my day job of writing guidance for software developers, designers, and administrators. Trouble is, it's a lot harder than choosing between stainless steel and vitreous enamel, or between metal and plastic; and it's far more difficult to do empirical testing of the finished product. They might have machines that turn a switch on and off ten million times to see if it breaks, but I've never been able to find a mechanized device for testing written guidance to destruction.

    Yes, I know that modern proofing tools such as Word do clever grammar, spelling, and sentence structure checks. In fact, Word just red-wiggleyd the word "gubbins" in an earlier paragraph. Yet it's a proper word (at least, here in England), as you can see from Free Dictionary (or, for a much more interesting definition, check out Uncyclopedia).

    The point is that getting the grammar, spelling, and sentence structure right is useful (it saves my editor tons of work), but it doesn't actually prove anything about the content in terms of material choice. It might be syntactically correct, wonderfully phrased, and rise from the page with all the grace and beauty of a Rachmaninov Piano Concerto. But is what's in there actually any use? Are there bits inside that will break as soon as you start to use it? Or will the shine go off it completely when you get to the end of the introductory paragraph (rather like this blog, I guess)? And will I suddenly get a ton of emails demanding a replacement under guarantee?

    For example, we recently decided we needed a chapter for some Azure guidance that describes the increasingly wide range of services and features available in Windows Azure and SQL Azure (and Windows Azure AppFabric and Windows Server AppFabric, which are different things). I reckon somebody told the Azure dev team because, as fast as I wrote stuff, they added more features. It was like trying to finish a bowl of soup while sitting outside a cafe in a rainstorm. And then, as soon as I finished one bit, they changed the feature again. Obviously I've done something in the past to annoy them and they're getting their own back.

    But that's just the point. How do I test my guidance other than re-reading it every day and re-checking all the sources? The original data came from dozens of different sources, and none of them will be helpful enough to drop me a note when they change something. They expect me to figure out what's happening by monitoring hundreds of different sources all the time. And, of course, once we press the big red "print" button and churn out ten thousand beautifully designed and bound hard copies it's too late to do anything about it.

    The answer, as I've suggested in previous posts, is to simply provide semi-vague descriptions of the features and links to the original resources - just as a developer links to some assembly they want to use in their code. As long as the interface (or, in our case, the URL) stays the same it will "just work". But all I'm doing is making the reader do the work of chasing round the myriad resources, and deciphering and distilling the knowledge they need from them. It's not really a solution.

    A typical example is with Azure Service Bus (a topic dutifully discoursed in a recent post). It used to be just a way of doing messaging, queuing, and eventing through firewalls and NAT routers. Now it's a fully-fledged communication and service access mechanism to which you can add topics, rules, discovery, and more. And they still haven't finished adding things. It almost seems as though soon you'll start building your application with Service Bus and then bolt things onto the edges afterwards.

    Ah, but maybe this is the answer to my guidance-material-appropriateness problem. I just need to build an application that runs in the cloud and uses Service Bus to connect directly to all of the guidance resources. It can use discovery to find new related material; topics, rules, and actions to select the appropriate parts; a worker role to assemble these into a finished guidance document; queues to store the updated material until I'm ready to use it; and secure messaging to send it directly to my laptop through Azure Connect.

    When the source material changes, it would use Service Bus event subscriptions to notify changes, and automatically rebuild the content with the new material. Of course, that doesn't help with the printed books, but if we could switch everyone to using tablets and electronic reader devices we could pipe the new content to them automatically once they subscribe to the Service Bus events.

    Unfortunately, however, even this level of technological capability and comprehensive interconnectedness can't resolve the core issue. For a start, there's no way that it can understand the actual contents, or comprehend the needs of the reader. I've fallen into the trap of being distracted by a great new technology, and tried to bend it to meet my requirements when it's obviously not going to solve the original problem. I lost track of the scenario whilst wallowing in the depths of the technical capabilities.

    And that's just it. Technical information about a specific product or feature may not help you to understand how you map it to your own infrastructure and requirements, or even if it's actually a suitable technology. Every system is different, and so the only real-world solution is to discover the scenario and then see if the product actually maps to that scenario. It may be that there are better ways to implement the solution; or different technologies that provide a better fit.

    So what we do here at p&p is create guidance that looks outside of the product documentation to map the vast range of products available to the requirements of ordinary users. Given scenarios discovered from wide-ranging user feedback, we attempt to show how you can apply the most appropriate technology, and do so in the most efficient way. We look for common patterns and discover the ways that you can follow good practice when applying the technology solutions.

    Maybe that's why we're called "patterns & practices"...
