Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

    I Can See Patterns In The Cloud

    Well, we finally did it. After many months of redesign, reconsideration, rewrites, and recombination we've let loose on the web our first release of the Cloud Design Patterns guide.

    The guide is a collection of design patterns that are especially applicable to cloud-hosted applications and services. It explores the patterns in detail, provides good practice advice on when to use each pattern (and the issues to be aware of), and many of the patterns include a working code example based on Windows Azure that you can download and play with.

    We also included a series of guidance topics that describe specific areas of concern around building applications for the cloud, particularly on how the distributed nature of these kinds of applications has an impact on design and implementation. These guidance topics include messaging, autoscaling, metering, and multiple data center deployment. There's also a series of topics related to distributed data management such as replication and synchronization, partitioning, consistency, and caching.
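
    To give a flavour of the kind of pattern the guide covers, here is a minimal cache-aside sketch of my own - it is not code from the guide, and the repository, in-memory dictionary, and loader delegate below are deliberately simple stand-ins rather than any particular Azure service: check the cache first, fall back to the data store on a miss, then populate the cache so the next caller gets the fast path.

        // A cache-aside sketch (illustrative only, not taken from the guide).
        // The in-memory dictionary stands in for a distributed cache and the
        // delegate stands in for a database or storage call.
        using System;
        using System.Collections.Concurrent;

        public class ProductRepository
        {
            private readonly ConcurrentDictionary<string, string> cache =
                new ConcurrentDictionary<string, string>();
            private readonly Func<string, string> loadFromStore;

            public ProductRepository(Func<string, string> loadFromStore)
            {
                this.loadFromStore = loadFromStore;
            }

            public string GetProduct(string id)
            {
                string value;
                if (cache.TryGetValue(id, out value))
                {
                    return value;              // cache hit: fast path
                }

                value = loadFromStore(id);     // cache miss: go to the store
                cache.TryAdd(id, value);       // populate for subsequent callers
                return value;
            }
        }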

    We had a huge amount of feedback from the product groups, advisors, and customers during the development of the guide. This not only helped us select the most useful and popular patterns and topics, but also ensured that the topics provide good practice advice and cover the edge cases that may not be immediately obvious in cloud applications.

    For example, implementing features such as load-levelling and autoscaling requires you to consider many aspects of how they can affect your operations and costs, while partitioning data can have a big impact on maintenance and the performance of queries if you don't plan ahead and choose the appropriate partitioning strategy before you start.
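
    As a concrete illustration of that last point, here's a small hash-based partitioning sketch of my own (the names and the FNV-1a hash are just assumptions for the example, not anything prescribed by the guide). The partition count is baked into every key it produces, which is exactly why the strategy has to be settled before any data is written - changing it later means re-homing existing records.

        // Illustrative only: derive a partition number from a key using a
        // stable FNV-1a hash, so every process maps the same key to the
        // same partition. String.GetHashCode is avoided because it can
        // differ between processes and runtime versions.
        using System;

        public static class Partitioner
        {
            public static int GetPartition(string key, int partitionCount)
            {
                if (string.IsNullOrEmpty(key))
                    throw new ArgumentException("A key is required.", "key");
                if (partitionCount < 1)
                    throw new ArgumentOutOfRangeException("partitionCount");

                uint hash = 2166136261;        // FNV-1a offset basis
                foreach (char c in key)
                {
                    hash ^= c;
                    hash *= 16777619;          // FNV-1a prime
                }
                return (int)(hash % (uint)partitionCount);
            }
        }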

    Over time we expect to extend the range of patterns and topics in the guide. Let us know what you think. Or if you have a favorite topic or pattern that you think should be included, send me a note.

    "Cloud Design Patterns" from Microsoft patterns & practices is at http://msdn.microsoft.com/en-us/library/dn568099.aspx.

    Re-balanced and Re-routed (a.k.a. Wanna Buy An Old Modem?)

    Maybe it's because I was a boy scout when I was young that I have this need to be prepared for every eventuality. Or maybe it's my default paranoia mode that assumes stuff will just break without warning, typically on a Friday evening. It's probably why you can't move in my office for piles of spare things.

    In fact it's so bad that, when a friend phoned and asked if I could lend him an ADSL modem for testing a line that was playing up, I was able to offer him a choice of Cisco, D-Link, or Netgear. Not to mention the old BT one that came with the original ADSL contract. I like to keep a spare in case the latest Netgear one I'm using dies. And, to get to them, I had to move two old 8-port switches, three 4-port ones, and a 24-port one. These are all 100 Mbps types that got replaced with gigabit ones, but I keep them as spares just in case.

    Thankfully the ADSL modems weren't buried underneath the three second-hand 15" LCD monitors I bought as spares for the server cabinet when it became clear that they were becoming as rare as chocolate teapots. Though I did have to shift the boxes containing two PCMCIA wireless cards that no longer fit in any modern laptop (but you never know if they'll come in handy one day) and the vast selection of old hard drives, all less than 150 GB, which may be useful if I ever need a very small disk for something.

    Of course, almost none of these essential spares will ever get used. They are all out of date, incompatible with the kit I have, or useless. Except maybe the 15" monitors, though they'll probably have stopped putting VGA sockets on servers by the time I get round to using one. And if something does break I know full well I'll be straight onto Amazon to order the newest, cleverest, fastest replacement, complete with more fancy flashing LEDs than the last one, and pay for next-morning delivery. Perhaps by miniature helicopter if they ever get that to work.

    Since I replaced the wireless access point on top of a cupboard in the dining room with the new Netgear one, our neighbours think we've started running a nightclub. It's got so many flashing green and blue LEDs that it lights up the room at night. There's even one that flashes alternately green and orange to indicate that it's configured in "access point" mode. Isn't modern technology wonderful?

    But the one vital piece of connectivity kit I don't have a spare for in my junk pile is the load-balancing router that joins me to my two ISPs. Since I got rid of my proxy server, the router acts as my network firewall - as well as sharing traffic between the ADSL and cable connections. And it's some years old now, so to soothe the oncoming attack of paranoia I invested in a super-duper new one to replace it (and, of course, to provide a spare). I chose the Cisco RV320 based on the extra speed and gigabit Ethernet ports. It actually cost about the same as I paid originally for the old Linksys RV042 it's replacing, and only took the best part of a month to get here (see this post).

    One of the problems with the RV042 is that the firmware is very old, and the updates won't install because I have a "Series 1" model with insufficient memory. So at least with the new one I can be sure I'm up to date in that department. Though I was a bit surprised to discover that the firmware in the new one was out of date already. Out of date out of the box! But I soon got it painlessly updated from Cisco's support site.

    Mind you, reading the release notes was a little worrying. One of the fixes in the update is, reportedly, to solve the problem of the router suffering a memory leak and locking up after it's "been running continuously for several days". It didn't say how long "several days" might be - a week? A month? I don't know about you, but I kind of expect to plug a router in, turn it on, and leave it running until I decide to buy a new one. Hopefully there are no more unresolved issues of similar gravity waiting for the next firmware update.

    Actually configuring and using the RV320 was, however, quite painless in most areas. The UI is very similar to the old RV042, though the RV320 supports IPv6 as well so I'm ready for when that comes knocking. Setting up the firewall and the custom rules was easy, and it certainly boots and runs faster than the RV042. So far it's been up for nearly a week with no problems to report.

    The one annoying thing is that the UI doesn't support the "&" symbol that I regularly use in my complex passwords. I guess it's because the login screen is web-page based, but every other router, modem, wireless access point, and device I own allows an ampersand symbol in passwords. Including the old RV042. Still, if that's the only problem I'll be well chuffed.

    I suppose the one thing left now that still needs to be replaced with a newer and more efficient (and definitely prettier) model is me...

    We're All Professional Publishing House Operatives Now

    OK, yes, I was tempted to find a title based on Slade's famous hit song Mama Weer All Crazee Now but finally caved in to the demands of the Word spelling checker. Not that this has anything at all to do with the topic of this week's ramble, which is about how home publishing has changed.

    I occasionally get requests to produce a booklet, poster, or other printed material to celebrate friends' weddings or christenings, for local events, or as publicity material for colleagues. Probably it's because they think I'm "good with computers", even though I have almost no artistic design capability. However, while producing a small booklet for some friends this week it struck me just how much easier it is now to produce professional results.

    When I first started doing things like this, very many years ago, the process involved more driving around in the car than actual computing. My dot matrix printer could never output good enough quality so all the text was created on my Mother's ancient typewriter. Text for hymns and psalms came from a hymn book borrowed from our local church. Photos came from prints done at the local chemist's shop, or by my Dad in his darkroom. Heading text was often Letraset rub-on transfers. All of these bits were cut out and mounted on card, then photocopied at the local library.

    Of course, it wasn't long before I purchased my first mono laser printer that, in combination with a DTP application, provided endless opportunities for different fonts, text sizes, and layouts. I could even incorporate the rather grainy and washed-out images that early home scanners could extract from photographs. It really felt like you were doing proper publishing. Except that, without some artistic ability, everything came out looking like a church newsletter. Which was OK when I was actually producing a church newsletter, but a bit boring when it was meant to promote the local gardening club show.

    At times, in the days when I actively wrote and sold my own software, I would create professional leaflets. The layout and design were usually copied from something I saw in a magazine. The big problem was getting the design from screen to paper with the kind of quality I needed, and in large volumes. Luckily the DTP program I was using then could generate three-color print files that my local print shop could feed into their huge print line. They cost a fortune, and I doubt they ever paid for themselves in increased sales, but they looked beautiful.

    So this week, as I was creating a small commemorative booklet, I realized just how easy it has all become. Microsoft Publisher contains a host of attractive design templates, and there are more online, so generating the outline was easy. Most of the text was provided by the friends, sent by email so I didn't even need to type it in. And they emailed the digital colour photos taken on their phone, which I could quickly tidy up, crop, and adjust in Paint Shop. The words of the hymns, psalms, and songs they wanted are available on the web, so I didn't need to type these in either.

    And then, before committing it to print, I generated a PDF and emailed it over so they could check it. Adjustments to content and layout are easy, and after three or four electronic interchanges I could simply dump the whole thing as a print job to my double-sided color laser printer, onto photo card for the cover and nice buff-colored pages for the inside. The result is startlingly professional, and all without ever needing to go out of the house.

    Except that I had to go out to a local print shop because I still haven't got round to buying a long-arm stapler. Maybe the next leap forward will be home printers that can fold and then ultrasonically bind the pages together...

    I Wished I Was ... in Iceland

    Birthdays are usually celebrated with a day off work, perhaps a trip to a garden centre, or just a lazy afternoon with a good book. However, as the milestone 60 loomed it was decided for me that I was going to have a week's vacation and go somewhere exciting. Somewhere that really had that "Wish you were here" effect. Trouble is, most of it turned out to be "I wish I was a..."

    For example, this day I wished I was a surfer:

    [Photo: Dyrholaey beach and cliffs]

    And this day I wished I was a mountain climber:

    [Photo: Solheimajokulsvegur glacier]

    On this day, I wished I was a rally driver:

    [Photo: Near Gullfoss on Road 30]

    And on this day I wished I was a geologist:

    [Photo: Dyrholaey caves]

    And, of course, on this day I wished I was an astronomer:

    [Photo: The Northern Lights seen from near Hella]

    But at least, on this day, I could just relax and soak up the steam and sulphur fumes:

    [Photo: The Blue Lagoon, Grindavik]

    Iceland is a really amazing place. Especially as, with me still suffering from the after-effects of a spine injury that makes long trips and walks difficult, it's only two and a half hours by plane from a small local airport, and I had a rental car waiting. And driving there is easy, although it's on the wrong side of the road - until you decide to explore a little. Driving on solid-packed ice, or gravel tracks with 1 in 3 inclines, is fun. Thankfully the car was 4-wheel drive with studded tyres!

    Iceland is also an ideal destination for those that have a fetish for waterfalls. This is the famous one:

    [Photo: Gullfoss]

    But we managed to find plenty more:

    [Photo: Seljalandsfoss and Merkifoss]

    [Photo: Seljalandsfoss]

    So, yes, we did the usual tourist things. Bathing in the Blue Lagoon hot springs (where they have to keep adding cold water to keep it below boiling point). Listening to the waves echoing round the caves at Dyrholaey and marvelling at the incredible rock formations. Driving miles over rocky unmade roads to see a glacier (the bits that break off do, in fact, look rather like a Fox's Glacier Mint that's been loose in your pocket for a few weeks). Driving the "Golden Circle" tour to see the lakes, Gullfoss falls, the geysers and hot springs at Geysir:

    [Photo: Some geezers looking at a geyser at Geysir]

    And looking across the Mid Atlantic Ridge where two tectonic plates are gradually tearing Iceland apart (but there's a pretty church to look at):

    [Photo: Tectonic plate boundary in Pingvellir National Park]

    [Photo: More Pingvellir National Park]

    Iceland also has the most amazing sunrises and sunsets in wintertime:

    [Photo: Sunset near Hella, South West Iceland]

    And it's nice to know that the people have a sense of humour as well:

    [Photo]

    Gimmie The Code!

    It seems like a question that has an obvious answer: How should you show code listings in guidance documents? I'm not talking about the C#/VB/other language debate, or whether you orient it in landscape mode to avoid breaking long lines. No, I'm talking about the really important topics such as what color the text is, how big the tabs are, and where you put the accompanying description.

    We're told that developers increasingly demand code rather than explanation when they search for help, though I'm guessing they also need some description of what the code does, the places it works best, and how to modify it for their own unique requirements. However, copy and paste does seem to be a staple programming technique for many, and is certainly valid as long as you understand what's going on and can verify its suitability. I've actually seen extracts of code I wrote as samples for ASP 2.0 when it first appeared (complete with my code comments) in applications that I was asked to review.

    But here in p&p a lot of what we create is architectural guidance and design documentation designed to help you think about the problem, be aware of the issues and considerations, and apply best practice principles. As well as suggesting how you can implement efficient and secure code to achieve your design aims with minimal waste of time and cost. "Proven practices for predictable results", as it says on the p&p tin.

    But even design guidance needs to demonstrate how stuff works, so we generally have some developers in the team who create code samples or entire reference implementation (RI) applications. These are, of course, incredibly clever people who don't take kindly to us lowly documentation engineers telling them how to set up their environment, or that the code comments really should be sentences that make sense and have as few spelling mistakes as possible.

    In addition, Visual Studio has a really amazing built-in capability that we've so far failed to replicate in printed books. It can scroll sideways. These esteemed developers often prefer tabs that are four or more spaces wide to make the code easy to read on screen (the Visual Studio default is four). By the time you are inside a couple of if statements, a try/catch, and a lambda statement, you're off the page in a book. Two spaces is plenty in a printed document (where we have to replace the tabs with spaces anyway), but I've never yet persuaded a developer to change the settings.
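
    Purely as an illustration of the width problem (a contrived snippet of my own, not from any p&p sample), here's what that sort of nesting looks like even with two-space indents; at the Visual Studio default of four spaces, every indent level would be twice as wide.

        // Contrived example, formatted with two-space indents. Even so, two
        // if statements, a try/catch, and a lambda push the line that does
        // the real work a long way towards the right margin.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class ListingWidthDemo
        {
          public static void PrintPositiveAverages(IDictionary<string, List<int>> samples)
          {
            foreach (var entry in samples)
            {
              if (entry.Value != null)
              {
                if (entry.Value.Count > 0)
                {
                  try
                  {
                    var average = entry.Value.Where(v => v > 0).DefaultIfEmpty(0).Average();
                    Console.WriteLine("{0}: {1}", entry.Key, average);
                  }
                  catch (InvalidOperationException ex)
                  {
                    Console.WriteLine("Could not average {0}: {1}", entry.Key, ex.Message);
                  }
                }
              }
            }
          }
        }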

    And now Microsoft mandates that we have to use the same colors for the text in listings as it appears in Visual Studio (I guess to make it look more realistic, or at least more familiar). The old trick of copying the code into Notepad, editing it, and then pasting it into the document no longer works. But copying it directly from the Visual Studio editor into a document is painful because it insists on setting the font, style, margins, and other stuff when I just want to copy the colors. Yet if I use Paste Special with Unformatted Text in Word, I lose the colors.

    And then, when I finally get the code into the document, I need to describe how it works. Do I dump the entire code for a class into a single code section and describe it at the start, or at the end? If the code is a hundred lines or more (not unusual), the reader will find it cumbersome to relate parts of what is likely to be a long descriptive section to the actual code listing. I can break the class down into separate methods and describe each one separately, but often these listings are so long that I have the same problem.

    And, of course, explaining how the methods relate to each other often means including an abridged version of some parts of the class or one of its methods, showing how it calls other methods of the class. But do I list these methods first and reference back to them, or explain the flow of execution first with the abridged listing and then show the methods that get called?

    Typically I end up splitting the code into chunks of 30 lines or less (about half a printed page) and insert text to introduce the code before the chunk and text to describe how it works after the chunk. Something like:

    The GoDoIt method shown in the code listing above calls the DoThisBit method to carry out the first operation in the workflow. The DoThisBit method takes a parameter named thisAction that specifies the Task instance to execute, as shown in the following code listing.

    [CODE LISTING HERE]

    The DoThisBit method first checks that the task is valid and then creates an instance of a ThisBitFactory, which it uses to obtain an instance of the BitHelper class... and so on.
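
    For what it's worth, a listing of the kind that example imagines might look something like the sketch below. It is entirely hypothetical - the classes and method bodies are invented purely to match the made-up names in the example text, and stand in for whatever real code would sit in the chunk being described.

        // Hypothetical code, invented only to match the made-up names used
        // in the example description above.
        using System;
        using System.Threading.Tasks;

        public class WorkflowRunner
        {
            private void DoThisBit(Task thisAction)
            {
                // Check that the task is valid before doing any work.
                if (thisAction == null)
                {
                    throw new ArgumentNullException("thisAction");
                }

                // Create the factory and use it to obtain the helper that
                // carries out the actual operation.
                var factory = new ThisBitFactory();
                BitHelper helper = factory.CreateHelper();
                helper.Execute(thisAction);
            }
        }

        public class ThisBitFactory
        {
            public BitHelper CreateHelper()
            {
                return new BitHelper();
            }
        }

        public class BitHelper
        {
            public void Execute(Task thisAction)
            {
                // Start the (cold) task that represents this step of the workflow.
                thisAction.Start();
            }
        }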

    After going backwards and forwards swapping code and text, breaking it up into ever smaller chunks, and trying to figure out what the code actually does, it's just a matter of editing the code comments so that they make sense, breaking the lines in the correct places because they are inevitably longer than the page width, and then persuading the developer to update the code project files to match (or doing that myself just to annoy them).

    Sometimes I think that putting code listings into a document takes longer than actually writing the code, though I've never yet managed to convince our developers of that. But I've been doing it for nigh on twenty years now, so I probably should be starting to get the hang of it...

    Doing the Right Thing

    So after I was castigated by Cisco's customer support people for buying from Amazon, who they class as a "grey importer", I decided to do the right thing this time. And look where it got me.

    I decided to upgrade my load-balancing router, and chose one that is relatively inexpensive yet has more power and features than the existing one. Yes, it's a Cisco product - the RV320. It looks like it's just the thing I need to provide high performance firewalling, port forwarding, and load balancing between two ISPs.

    On Amazon there are several comments from customers who bought one, indicating that the one supplied had a European (rather than UK) power supply. Probably because those suppliers are not UK-based. That does sound like it's a grey import, and - like last time - means I wouldn't get any technical support. However, there are UK-based suppliers on Amazon as well, so I could have ordered it from one of these, and easily returned it if it wasn't the UK version.

    But, instead, I used the Cisco UK site to locate an approved Cisco retailer located in the UK, and placed an order with them. Their website said they had 53 in stock and the price was much the same as those on Amazon, though I had to pay extra for delivery of the real thing. Still, the small extra charge would be worth it to get technical support, and just to know that I had an approved product.

    And two weeks later, with two promised delivery days passed, I'm still waiting. The first excuse was that their suppliers had not updated the stock figures over the holiday period. So in actual fact they didn't have any in stock, despite what the website said. Probably the 53 referred to what the UK Cisco main distributor had in its warehouse. And, of course, they sold all 53 over the holiday period. Maybe, like me, all network administrators choose the Christmas holiday to upgrade their network.

    A query after the second non-delivery simply prompted a "we'll investigate" response. So much for trying to spread my online acquisition pattern wider than just Amazon. I could have ordered one from Amazon.co.uk at the same price and had it installed and working a week ago. Or even paid for next-day delivery from Amazon, sent it back for replacement twice, and still be using it now. In a world that is increasingly driven by online purchasing and fast fulfilment, an arrangement of the words "act", "together", "your", and "get" seems particularly applicable if they want to remain competitive.

    But I suppose I should have remembered that you can't believe everything you read on the Internet...

    We Are Stardust

    I've been watching the BBC program Stargazing on TV this week, and I have to say that they did a much better job this year than last. As well as stunning live views of the Northern Lights over three nights, and interviews with some ex-astronauts as well as a lady from the Cassini imaging team, viewers discovered a previously unknown galaxy. I guess that's what you call an interactive experience.

    I tend to be something of an armchair stargazer. I watch all the astronomy documentaries, and never miss "The Sky at Night" - thankfully the BBC changed their mind about dropping it after Patrick Moore passed away. We do have a telescope, but it seems to rarely see the light of day (or, more accurately, the dark of night). It's rather like the guitar sitting forlornly in the corner of the study. Both are waiting for me to retire so I have endless hours of free time.

    Of course, astronomy is like most physical sciences. Some things you can easily accept, such as the description and facts about our own solar system. Though looking at photos of the surface of another planet is a little unnerving, and it requires a stretch of the imagination to accept that you're not looking at a film set in Hollywood or a piece of the Mojave desert. And the fact that they say they can tell what the weather is like on some distant Earth-like planet in a far-off galaxy seems to be stretching the bounds of possibility.

    They also had to mention the old "where's all the missing stuff" question again. Not only do we not know where 95% of the mass of every galaxy (including our own) is, but we have no idea what the dark matter that they use to describe this missing stuff actually is. Though there was an interesting discussion with the Gaia team, who reckon they can map it. We still won't know what it is, but at least we'll know where the largest lumps are.

    One exciting segment of the final program was where viewers who were taking part in an exercise to find lensing galaxies, which can help to locate far more distant galaxies, came up with a really interesting hit. So much so that they immediately retargeted the Jodrell Bank and several other telescopes around the world at it. We await the results with bated breath, including the name - which is currently open to suggestions.

    But it's when they start talking about how you are seeing distant galaxies as they were several million, or even several billion, years ago that it gets a bit uncomfortable. Especially how the limit to our discoveries is stuff that is 14 billion light years away or closer, because the light coming from anything further away hasn't had time to reach us yet. Even though it all started in the same place at the Big Bang. And galaxies that are near the limit are actually 40 billion light years away now because they kept going since they emitted the light that we are looking at now. So will we still be able to see them next year?

    Also interesting was the discussion of what happens when two galaxies collide. It seems that the Andromeda galaxy, our nearest large galactic neighbor, is heading towards us at a fairly brisk pace right now. Due to the vast distances between the stars in each galaxy, there's only a small chance of two stars (or the planets that orbit them) colliding, but they say it will produce some exciting opportunities for astronomical observation as it passes through. And there's a theory that the shape and content of our own Milky Way galaxy is actually the result of a previous encounter with Andromeda anyway.

    For me, however, one of the presenters managed to top all of these facts and theories almost by accident. When asked what the oldest visible thing in the Universe is, he simply pointed to himself and said that all the hydrogen atoms that make up parts of all of us (and everything around us) were made within two minutes of the Big Bang. So pretty much all of them are 14 billion years old.

    Of course, the other things that make up us, the larger and more complicated atoms, are a bit younger. Many of these types of atoms are still being manufactured in distant supernovae, but this stuff inside us has no doubt been around for a very long time. As any good astronomer will tell you, Joni Mitchell was right when she sang "We are stardust"...

    You Have To Trust Somebody...

    After spending part of the seasonal holiday break reorganizing my network and removing ISA Server, this week's task was reviewing the result to see if it fixed the problems, or if it just introduced more. And assessing what impact it has on the security and resilience of the network as a whole.

    I always liked the fact that ISA Server sat between my internal domain network and the different subnet that hosted the router and modems. It felt like a warm blanket that would protect the internal servers and clients from anything nasty that crept in through the modems, and prevent anything untoward from escaping out onto the 'Net.

    The new configuration should, however, do much the same. OK, so the load-balancing router is now on the internal subnet, but its firewall contains all the outbound rules that were in ISA Server so nothing untoward should be leaking out through some nefarious open port. And all incoming requests are blocked. Beyond the router are two different subnets connecting it to the ADSL and cable modems, and both of those have their firewalls set to block all incoming packets. So I effectively have a perimeter network (we're not allowed to call it a DMZ any more) as well.

    But there's no doubt that ISA Server does a lot more clever stuff than my router firewall. For example, it would occasionally tell me that a specific client had more than the safe number of concurrent connections open when I went on a mad spree of opening lots of new tabs in IE.

    ISA Server also contained a custom deny rule for a set of domains that were identified as being doubtful or dangerous, using lists I downloaded from a malware domains service that I subscribe to. I can't easily replicate this in the router's firewall, so another solution was required. Which meant investigating some blocking solution that could be applied to the entire network.

    Here in Britain, our deeply untechnical Government has responded to media-generated panic around the evils of the Internet by mandating that all ISPs introduce filtering for all subscribers. What would be really useful would be a system that blocked both illegal and malicious sites and content. Something like this could go a long way towards reducing the impact of viruses and Trojan propagation, and make the Web safer for everyone. But, of course, that doesn't get votes.

    Instead, we have a half-baked scheme that is supposed to block "inappropriate content" to "protect children and vulnerable adults". That's a great idea, though some experts consider it to be totally unworkable. But it's better than nothing, I guess, even if nobody seems to know exactly what will be blocked. I asked my ISPs for more details of (a) how it worked – is it a safe DNS mechanism or URL filtering, or both; and (b) if it will block known phishing sites and sites containing malware.

    The answer to both questions was, as you'd probably expect, "no comment". They either don't know, can't tell me (or they'd have to kill me), or won't reveal details in order to maintain the integrity of the mechanism. I suspect that they know it won't really be effective, especially against malware, and they're just doing it because not doing so would look bad.

    So the next stage was to investigate the "safe DNS services" that are available on the 'Net. Some companies that focus on identifying malicious sites offer DNS lookup services that automatically redirect requests for dangerous sites to a default "blocked" URL by returning a replacement IP address. The idea is that you simply point your own DNS to their DNS servers and you get a layer of protection against client computers accessing dangerous sites.

    Previously I've used the DNS servers exposed by my ISPs, or public ones such as those exposed by Google and OpenNIC, which don't seem to do any of this clever stuff. But of the several safe DNS services I explored, some were less than ideal. At one of them the secondary DNS server was offline or failed. At another, every DNS lookup took five seconds. In the end the two candidates I identified were Norton ConnectSafe and OpenDNS. Both require sign-up, but as far as I can tell are free. In fact, you can see the DNS server addresses even without signing up.

    Playing with nslookup against these DNS servers revealed that they seem fast and efficient. OpenDNS says it blocks malware and phishing sites, whereas Norton ConnectSafe has separate DNS server pairs for different levels of filtering. However, ConnectSafe seems to be in some transitional state between v1 and v2 at the moment, with conflicting messages when you try to test your setup. And neither it nor the OpenDNS test page showed that filtering was enabled, though the OpenDNS site contains some example URLs you can use to test that their DNS filtering is working.
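
    Once the forwarders point at a filtering service, a quick sanity check is to resolve a test name and see whether it comes back as the provider's block page. The little sketch below is my own; the test hostname and block-page address in it are placeholders rather than real OpenDNS or ConnectSafe values, so substitute the test names and addresses your chosen service publishes.

        // Rough sanity check: resolve a test hostname through whatever DNS
        // the machine is configured to use and see whether the answer is the
        // filtering service's block page. The hostname and block-page address
        // below are placeholders, not real service values.
        using System;
        using System.Linq;
        using System.Net;

        class DnsFilterCheck
        {
            static void Main()
            {
                const string testHost = "malware-test.example.com";  // placeholder test name
                const string blockPageIp = "192.0.2.1";               // placeholder block-page address

                try
                {
                    IPAddress[] addresses = Dns.GetHostAddresses(testHost);
                    Console.WriteLine("{0} resolves to: {1}", testHost,
                        string.Join(", ", addresses.Select(a => a.ToString())));

                    bool blocked = addresses.Any(a => a.ToString() == blockPageIp);
                    Console.WriteLine(blocked
                        ? "Filtering appears to be working (block page returned)."
                        : "Got a normal answer - filtering may not be applied by this resolver.");
                }
                catch (System.Net.Sockets.SocketException)
                {
                    // Some services block by refusing to resolve the name at all,
                    // which also counts as a block.
                    Console.WriteLine("{0} did not resolve - the name may be blocked outright.", testHost);
                }
            }
        }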

    The other issue I found with ConnectSafe is that the DNS Forwarders tab in Windows Server DNS Manager can't resolve their name servers (though they seem to work OK afterwards), whereas the OpenDNS servers can be resolved. Not that this should make any difference to the way DNS lookups work, but it was annoying enough to make me choose OpenDNS. Though I guess I could include both sets as Forwarders. It's likely that both of them keep their malware lists more up to date than I ever did.

    So now I've removed all but the OpenDNS ones from my DNS Forwarders list for the time being while I see how well it works. Of course, what's actually going on is something equivalent to DNS poisoning, where the browser shows the URL you expect but you end up on a different site. But (hopefully) their redirection is done in a good way. I did read reports on the Web of these services hijacking Google searches and displaying annoying popups, but I'm not convinced that a reputable service would do that. Though I will be doubly vigilant for strange behaviour now.

    Though I guess, at some point, you just have to trust somebody...
