Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

    A Technological Rainbow Coalition

    One of the joys of being a documentation engineer is the variety of projects I tackle. At the moment I'm sharing my time between two projects at diametrically opposite ends of the complexity and target-audience spectrums, which seems to require my brain to work on different wavelengths at the same time. It's almost like a Rainbow Coalition, except that there's only me doing it.

    For the majority of my working week I'm fighting to understand the intricacies of claims-based applications that use ASP.NET, WCF, and WIF; and to create Hands-On-Lab documents that show how you can build applications that use tokens, federated identity, and claims-based authentication. In between I'm writing documents that describe what a developer does, the skills they require, and the tools and technologies they use for their everyday tasks.

    What's worrying is that I'm not sure I really know enough about either topic. Claims-based authentication is a simple enough concept, but the intricacies that come into play when you combine technologies such as ASP.NET sessions, browser cookies, WCF, WIF, ADFS, and Windows Azure Access Control Service can easily create a cloud (ouch!) of confusion. Add to that interfacing with Windows Phone and, just to make matters even more complicated, SharePoint Server, and it's easy to find yourself buried in extraneous detail.

    What became clear during my research on these topics was why some people complain that there is plenty of guidance, but it doesn't actually tell you anything useful. I must have watched a whole week's worth of videos and presentations, and got plenty of information on the concepts and the underlying processes. But converting this knowledge into an understanding of the code is not easy. One look at a SharePoint web.config file that's nearly 1,000 lines long is enough to scare anybody, I reckon. Simply understanding one area of the whole requires considerable effort and research.

    Contrast that with the other task of describing what a software developer does. If you ask real developers what they do, chances are the answer will be something like "write code" or "build applications". Yet when you read articles on how development methodologies such as agile work, you soon come to the conclusion that developers don't really have time to write code at all. Their working day is filled with stand-up meetings, code reviews, customer feedback consultations, progressive design evolution, writing unit tests, and consulting with other team members.

    And what's becoming even more evident is that everything now is a cross-cutting concern. At one time you could safely assume that someone using Visual Basic was writing a desktop application, and that the developer using ASP.NET was building a website. Or the guy with a beard and sandals was writing operating system software in C++. Now we seem to use every technology and every language for every type of application. Developers need to know about all of them, and weave all of the various implementation frameworks into every application. And find time to do all that while writing some code.

    For example, just figuring out how our claims-based examples work means understanding the ASP.NET and C# code that runs on the server, the WCF mechanism that exposes the services the client consumes, the protocols for the tokens and claims, how ACS and ADFS work to issue tokens, how they interface with identity providers, how WIF authentication and authorization work on the client, and how WIF interfaces with ASP.NET sessions to maintain the credentials. And don't get me started on the additional complexities involved in understanding how SharePoint 2010 implements multiple authentication methods, how it exposes its own token issuer for claims, and how that interacts (or, more significantly, doesn't) with SharePoint groups and profiles.
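    To give a flavour of just one piece of that stack, here's a minimal sketch of what reading the claims looks like once WIF has done its work. This is my own illustrative snippet, assuming WIF 1.0 (Microsoft.IdentityModel.dll), not code from the Hands-On-Labs:

    ```csharp
    // Hedged sketch: enumerating the claims WIF attaches to the current
    // principal after a federated sign-in. Assumes a reference to
    // Microsoft.IdentityModel.dll (WIF 1.0).
    using System;
    using System.Threading;
    using Microsoft.IdentityModel.Claims;

    public static class ClaimsInspector
    {
        public static void DumpClaims()
        {
            // After sign-in, WIF swaps in a claims-aware principal backed
            // by the session token (and its ASP.NET session cookie).
            var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            if (identity == null || !identity.IsAuthenticated)
            {
                Console.WriteLine("No claims-based identity present.");
                return;
            }

            foreach (Claim claim in identity.Claims)
            {
                // ClaimType is a URI, e.g. .../ws/2005/05/identity/claims/name
                Console.WriteLine("{0} = {1} (issuer: {2})",
                    claim.ClaimType, claim.Value, claim.Issuer);
            }
        }
    }
    ```

    Ten lines of code to consume a claim; several thousand words of reading to understand where the claim came from.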

    At one point in the documentation about developer tasks, I started creating a schematic that maps development tools, technologies, and languages onto the four main types of application: web, desktop, cloud, and phone. It starts out simple enough until you realize that you forgot about Silverlight and XNA, or still need to find a slot for FxCop and Expression Blend. And where do you put the Microsoft Baseline Security Analyzer? Meanwhile, Visual Studio seems to be all things to all people and to all tasks. Until, that is, you try to divide it up into the full versions, the Express editions, and the special editions such as the one for Windows Phone or the add-ons for WIF and Windows Azure. I don't think anybody has a screen large enough to display the final schematic, even if I could manage to create it.

    And just like Rainbow Coalitions in government, it sometimes seems like there are lots of different actors all trying to present a single face to the world but, underneath, all working to a different script. Should I use Razor in WebMatrix for my new website, or MVC, or just plain old ASP.NET Web Forms? Is WPF the best technology for desktop applications, or should I still be using Windows Forms? Or should I do it in Silverlight in the browser? And in which language? It's a good thing that, here at p&p, we provide plenty of guidance on all these kinds of topics.

    Still, at least I can create my technology matrix schematic using all of the colors of the rainbow for the myriad boxes and shaded sections, so it actually does look like a rainbow and provides a nice calming desktop background - even if it does nothing more useful.

    A Risky Business...

    Have you ever wondered what insurance companies do with all the money you pay them every month? It seems that one UK-based insurance company decided that a good way to use up some of the spare cash was to discover that, every day, people in the UK are carrying around over 2,000 tons of redundant keys. I'm surprised they didn't come up with some conclusion such as that carrying all this metal requires the unwarranted consumption of 10,000 gallons of fuel, which emits enough carbon to flood a small Pacific island.

    It seems they questioned several hundred people about the number of keys they carry and which locks they fit, and the results indicate that everyone carries around two keys without knowing what they are for. However, I did a quick survey amongst six family members and friends and discovered only a single unknown key amongst all of us. So, on average, we are each only carrying one sixth of an unknown key around. Extrapolating this percentage across the country means that there must be several hundred people carrying bunches of keys around when they don't know what any of them are for.

    Of course, statisticians will tell you that you can't just average out results in this way - it's not mathematically logical. It's like saying that, because some cats have no tail, the average cat has 0.9 tails. Yet insurance companies rely on averages every day to calculate their charge rates. They use the figures for the historical average number of accident claims based on your age and driving record, or the average number of claims for flooding for houses in your street. Or your previous record of claims for accidental damage to your house contents.

    What's worrying, however, is how these numbers affect your premiums. I just changed the address for my son's car insurance when he moved a quarter of a mile (to the next street in the same town) and the premium came down by nearly a quarter. I've suggested that he move house every month so, by next year, we won't be paying anything. Though it probably doesn't work like that...

    Anyway, if the way they calculate premiums is already this accurate, just think what it will be like in a few years' time as more powerful processors, parallel algorithms, and quantum computing continue to improve the prediction accuracy. The inevitable result is that, when you apply for an insurance quote, the company will actually be able to tell exactly how much they will need to pay out during the term, and will charge you that plus a handsome profit margin. So you'll actually be better off not having insurance at all, and just paying the costs of the damage directly!

    And this is the interesting point. Insurance is supposed to be about "shared risk". When insurance first started after the severe fires in London in the 17th Century, the premiums were based on the value of the property and the type of construction. Wood houses cost twice as much to insure as stone or brick houses. Other than that, everyone paid the same so they shared the costs equally. Of course, you can't really argue with the concept that people who have lower risk should pay less, or that people whose property is more valuable (and will therefore cost more to replace) should pay more. But I wonder if we are starting to take this to extremes.

    Ah, you say, but even with the pinpoint accuracy of the current predictions of risk, they are still averages. So if you are lucky (or careful) you can beat the odds and not need to claim, while if you are unlucky (or careless) you will get back more than you paid in premiums. True, but next year they'll just use an even more powerful computer to recalculate the risk averages. Like the ever-finer slices in a calculus approximation, the area of variability under the curve can only get smaller.
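    One way to make that intuition precise (my own back-of-an-envelope gloss, not anything the actuaries publish) is the law of total variance. If $Y$ is what the insurer will pay out for you and $X$ is everything they know about you, then:

    $$\operatorname{Var}(Y) = \underbrace{\mathbb{E}\bigl[\operatorname{Var}(Y \mid X)\bigr]}_{\text{risk they can't explain}} + \underbrace{\operatorname{Var}\bigl(\mathbb{E}[Y \mid X]\bigr)}_{\text{explained by rating factors}}$$

    Every extra rating factor added to $X$ can only shrink the first term. In the limit it reaches zero, the quote becomes $\mathbb{E}[Y \mid X] = Y$ plus the profit margin, and you're back to paying for your own crash in advance.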

    But I suppose it will be useful in one respect. When you get your motor insurance renewal telling you that this summer you will collide with a 2004 registered blue Volkswagen Beetle at a traffic signal in Little Worthing by the Sea, you can simply cancel your policy and use the money you save to go on holiday to Spain instead.

    Tales Of A Paranoid SysAdmin (Part 1)

    Oh dear. Here in this desolate and forgotten outpost of the p&p empire it's pretend-to-be-a-sysadmin time all over again. Daily event viewer errors about the servers running out of disk space and shadow copies failing (mainly because I had to disable them due to lack of disk space) are gradually driving me crazy. Will I finally have to abandon my prized collection of Shaun The Sheep videos, or risk my life by deleting my wife's beloved archive of Motown music? And, worse still, can I face losing all those TV recordings of wonderful classic rock and punk concerts? Or maybe (warning: bad pun approaching) I just need to find some extra GIGs to store the gigs.

    Yep, I finally decided it was time to bite the bullet and add some extra storage to the two main servers that run my network and, effectively, my life. Surprisingly, my two rather diminutive Dell T100 servers each had an empty drive bay and a spare SATA port available, though I'll admit I had to phone a friend and email him some photos of the innards to confirm this. And he managed to guide me into selecting a suitable model of drive and cable that had a reasonable chance of working. The drives even fitted into the spare bays with some cheap brackets I had the forethought to order. Of course, it was absolutely no surprise when Windows blithely took no notice of them after a reboot. I never really expect my computer upgrades to actually work. But at least the extra heat from them will help to stop the servers freezing up during next winter's ice-age.

    However, after poking around in the BIOS and discovering that I needed to enable the SATA port, everything suddenly sprang into life. For less than fifty of our increasingly worthless English pounds each server now has 320 brand new gigs available - doubling the previous disk space. Amazing. And after some reshuffling of data, and managing to persuade WSUS to still work on a different drive, I was up and running again.

    Mind you, setting the appropriate security permissions and creating the shares for drives and folders was an interesting experience. One tip: if you want to know how many user-configured shares there are on a drive, open the Shadow Copies tab of the drive's Properties dialog. It doesn't tell you where they are, but typing net share into a command window gets you a list that includes the path - though the list includes all the system shares as well. And if you intend to change the drive letter, do it before you create the shares. If you don't, the shares disappear from Windows Explorer but live on as hidden shares pointing to the old drive letter, and you have to create new shares with the same name and the required path, accepting the warning message about overwriting the existing ones.
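    For the terminally curious, the same list that net share prints can also be had programmatically through WMI's Win32_Share class. A little sketch of my own (not anything from the server documentation), which at least lets you tell the user-created shares from the hidden administrative ones:

    ```csharp
    // Sketch: enumerate all shares with their paths via WMI (Win32_Share).
    // Requires a reference to System.Management.dll.
    using System;
    using System.Management;

    class ShareList
    {
        static void Main()
        {
            var searcher = new ManagementObjectSearcher(
                "SELECT Name, Path, Type FROM Win32_Share");

            foreach (ManagementObject share in searcher.Get())
            {
                // Type 0 is an ordinary disk share; values with the top bit
                // set (0x80000000 and up) are the hidden admin/system ones.
                uint type = (uint)share["Type"];
                string kind = (type & 0x80000000) == 0 ? "user" : "system";
                Console.WriteLine("{0,-12} -> {1} ({2})",
                    share["Name"], share["Path"], kind);
            }
        }
    }
    ```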

    And now I can move a couple of the Hyper-V VMs to a different drive as well, instead of having all four on one physical drive. Maybe then it won't take 20 minutes for each one to start up after the monthly patch update and reboot cycle. So, being paranoid, I check the security permissions on the existing VM drive and the new one before I start and discover that the drive root folder needs to have special permissions for the "Virtual Machines" account. So here's a challenge - try and add this account to the list in the Security tab of the Properties dialog for a drive. You'll find, as I did, that there is no such account. Not even the one named NT VIRTUAL MACHINES mentioned in a couple of blog posts. But as the MS blogs and TechNet pages say that you can just export a VM, move it, and then import it, there should be no problem. Maybe.

    Of course, they also say you can use the same name for more than one VM as long as you don't reuse the existing VM ID (un-tick this option in the Import dialog). Or you can use the same ID if you don't intend to keep the original VM. Obviously I can't run both at the same time anyway as they have the same DNS name and SIDs. So should I export the VM to the new drive, remove it from Hyper-V Manager, and then import it with the same ID? Or import it alongside the original one in Hyper-V Manager but allow it to create a new ID and then delete the old one when I find out if it works?

    As the VM in question is my main domain controller and schema master, I'm fairly keen not to destroy it. In the end I crossed all my fingers and toes and let it create a new ID. And, despite my fears, it just worked. The newly imported VM fired up and ran fine, even though there are two in Hyper-V Manager with the same name (to identify which is which, you can open the Settings dialog and check the path of the hard disk used by each VM). And the export/import process adds the required GUID-named account permission to the virtual disk file automatically (though not to the drive itself, but it seems to work fine without).
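    What I assume the import step is doing under the covers is granting that GUID-named account (which turns up in the ACL in the NT VIRTUAL MACHINE\<VM-GUID> form) full control over the .vhd file. A speculative sketch, with a made-up path and placeholder GUID, in case you ever need to fix one up by hand:

    ```csharp
    // Hedged sketch: grant a Hyper-V VM's GUID-named account full control
    // of a moved virtual disk file. Path and GUID below are placeholders.
    using System.IO;
    using System.Security.AccessControl;

    class GrantVmAccess
    {
        static void Main()
        {
            string vhd = @"E:\VMs\DomainController.vhd";   // hypothetical path
            string vmAccount =                             // hypothetical VM ID
                @"NT VIRTUAL MACHINE\5F4E9F32-8A1C-4C5B-9D2E-0123456789AB";

            FileSecurity acl = File.GetAccessControl(vhd);
            acl.AddAccessRule(new FileSystemAccessRule(
                vmAccount, FileSystemRights.FullControl, AccessControlType.Allow));
            File.SetAccessControl(vhd, acl);
        }
    }
    ```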

    What's worrying is how I never really expect things like this to just work. Maybe it's something to do with the aggravation suffered over the years fighting with NT4 and Windows 2000 Server, and the associated Active Directory and Exchange Server hassles I encountered then. I really must be paranoid about this stuff because I even insist on installing my Windows Updates each month manually rather than just letting the boxes get on with it themselves. So it was nice to see that Hyper-V continues to live up to its promise, and I'm feeling even more secure that my backup strategy of regularly exporting the machines and keeping multiple copies scattered round the network will work when something does blow up.

    So anyway, having gained all the new gigs I need, should I finally risk my sanity altogether and upgrade the servers and Hyper-V VMs from Windows Server 2008 to Windows Server 2008 R2? I abandoned that idea last time because I didn't have the required 15 or so gigs of spare disk space for each one. But it seemed like as good a time as any to have another go at testing my reasonably non-existent sysadmin capabilities. Maybe I would even get properly working mouse pointers in the VMs with R2 installed.

    So as they say in all the best TV shows (and some of the very dire ones), "Tune in next week to see if Alex managed to destroy his network by upgrading it to Windows Server 2008 R2..."

    Search, And You Probably Won't Find

    If I asked you what they manufacture in Seattle, I'd guess you'd say "software and aeroplanes". Obviously I'm biased, so Microsoft is the first name to spring to mind. And I discovered from a recent Boeing factory tour that they build a few 'planes there now and then. You might also, after some additional thought, throw in "coffee shops" (Starbucks) and "book stores" (Amazon). But I bet you didn't include "doorbells" in your list.

    I know about the doorbells because I just bought a SpOre push button from a UK distributor and it proudly says "Made in Seattle" on the side of the box. Unless there is another Seattle somewhere else in the world, I'll assume that somebody expert in working with aluminium got fed up nailing wings onto 747s and left to set up on their own. Though you have to wonder about the train of thought when creating their business plan. "Hmmm, I'm an expert in building massively complex, high-tech, hugely expensive pieces of equipment so I think I'll make doorbells..."

    But the point of this week's wandering ramble is not specifically doorbells (a subject in which, I'll admit, I'm not an expert). What started this was the time and effort required to actually find the item in the first place. We don't live anywhere near a city that contains one of those idiosyncratic designer showrooms, and I tend not to spend my weekends at building exhibitions. So, when my wife decides that she wants "something different" in terms of hardware, furniture, or other materialistic bauble that the average DIY store doesn't stock, I typically end up trailing through endless search engine results trying to track down products and suppliers.

    Inevitably, what seems like an obvious set of search terms fails to locate the desired items. For example, rather than the usual "black plastic box and white button" that typifies the height of doorbell-push style here in England, searching for "contemporary doorbell push" just finds tons of entries for shopping comparison sites, ugly Victorian-style ironmongery, a few rather nasty chrome things, and (of course) hundreds of entries on eBay. I finally found the link to the SpOre distributor on what felt like page 93.

    Much the same occurred when searching for somebody who could supply a decent secure aluminium front door to replace the wooden one we have now (which was already rotting away before the ice-age winter we just encountered here). It took many diligent hours of Binging and Googling to find a particularly well-disguised construction suppliers' contact site, which linked to a manufacturer in Germany, who finally forwarded my email to a garage door installation company here in England. When I looked at their site, it was obvious that they did exactly what we wanted, but there was pretty much zero chance of finding them directly through a web search.

    And, not satisfied with all this aggro, it seems that the door manufacturers in Germany won't put a letter box slot in the door. They can't believe that anyone buying a properly insulated secure entrance door would want to cut a hole in it just for people to shove letters through (they tell me that only people in the UK do stupid things like this), so I have to figure out another way to provide our post lady with the necessary aperture for our mail. The answer is a proper "through the wall post box", and I'll refrain from describing the web search hell resulting from locating a UK supplier for one of these.

    Of course, the reason for the web search hell is that I don't know the name of the company I want before I actually find it. If I search for "spore doorbells" or "hormann doors", the site I want is top of the list. Yet, despite entering a bewildering array of door-oriented search terms, all that comes up unless you include the manufacturer's name is a list of double-glazing companies advertising plastic panelled doors with flower patterns, or wooden doors that wouldn't look out of place on a medieval castle.

    The problem is: how do you resolve this? There are obviously lots of very clever people working on the issue, and for website owners the solution is, I suppose, experience in the black art of search engine optimization (SEO). But there are only a limited number of obvious generic search terms - none of which are unique - compared to the millions of sites out there that may contain marginally relevant content. It seems that only searches for a product name (a registered trade mark) can really get you near the top of the list. Even the sponsored links that most search sites now offer are little help unless you can afford to pay whatever it costs to get your site listed whenever someone searches for a non-unique word such as "door". Meanwhile, most product and shopping comparison sites are more about how cheaply you can buy stuff than about helping you find what you are looking for.

    One alternative is the tree-style organization of links. When done well, this can be a great way to help you find specific items. Most search engines have a Categories section that allows you to narrow the search by category, but the logic still depends on how the search engine analysed the page content; it's really just an intelligent filter over the millions of matching hits in the original list of results. It's easier, of course, if you only need to find something within a site that can manage the content and categorization directly. An example is the B&Q website at http://www.diy.com - and when you consider the vast number of lines they stock, it makes it really easy to navigate down through the categories to find, for example, 25mm x 6mm Pozidriv zinc-plated single-thread woodscrews.

    Mind you, tree navigation is not always ideal either. Some products will fit well in more than one category, while others may not logically fit into any category other than the useless "Miscellaneous" one. And once the tree gets to be very deep, it's easy to get lost - even when there is a breadcrumb indicator. It's like those automated telephone answering systems where you only find out that you should have pressed 3 at the main menu instead of 2 once you get two more levels down. And then you can't remember which option you chose last time when you start all over again. But at least with a phone system you can just select "speak to a customer advisor...".

    I remember reading years ago about the Resource Description Framework (RDF). Now part of the W3C Semantic Web project, RDF has blossomed to encompass all kinds of techniques for navigating data and providing descriptive links across topic areas. It allows you to accurately define the categories, topics, and meaning of the content and how it relates to other content. So a site could accurately specify that it contained information in the categories "Construction/Doors/Entrance/Residential/Aluminium/Contemporary" and "Building Products/Installers/Windows and Doors/Residential/". And, best of all, RDF supports the notion of graphs of information, so that an RDF-aware search engine can make sensible decisions about selecting relevant information.
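    Just to illustrate the idea (a toy example of my own with made-up URIs, not markup from any real manufacturer's site), those two category paths might be expressed as RDF triples in Turtle notation like this:

    ```turtle
    @prefix dc: <http://purl.org/dc/terms/> .
    @prefix ex: <http://example.org/categories/> .

    # A hypothetical product page, linked into two category graphs so an
    # RDF-aware search engine can relate it to both.
    <http://example.org/doors/alu-contemporary>
        dc:title   "Contemporary aluminium residential entrance doors" ;
        dc:subject ex:Construction-Doors-Entrance-Residential-Aluminium ,
                   ex:BuildingProducts-Installers-WindowsAndDoors-Residential .
    ```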

    Yet it's hard to see how, without an unbelievably monumental retrofit effort across all sites, this can resolve the issue. It does seem that, for the foreseeable future, we are all destined to spend many wasted hours paging and clicking in vain.
