Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer


    Deblogged and Lastposted


    When I was a kid, my Dad was in the Royal Air Force. When he came home and said he'd got a posting, it meant we were going off to live in a foreign country for a few years. Of course, today, a posting is just something on a blog - rather like this one. However, in this particular case, the double meaning is actually applicable.

    My nine years working with patterns & practices, including six as a full-time employee, have come to an end. As of last week I really have been posted to a foreign land - or at least to a different division where I'll be writing about a range of Visual Studio and Azure technologies, and maybe even Project Siena (if I can force myself to drop the "lication" from "application"). It's also a whole new opportunity to use all the clever content creation tools that I never figured out before, write in formats and styles I've never tried, and target scenarios and customers that I've never tackled in the past. I'll probably have to turn my brain back on.

    So, this is the last post in my patterns & practices blog. Maybe if I can find a new home for occasional diatribes and ruminations I'll drop back and put a link in here. Meantime, it seems amazing that I've written over 350 posts since I became a full-time 'Softie. I never realized there was so much stuff in our modern world to complain about. Not that all of the posts were actually diatribes, of course - some were genuinely useful and insightful (possibly).

    Amongst the melee were plenty of posts about agile documentation and being a technical writer, such as Can Writers Dance The Agile? And several that offered advice to computing newcomers, such as Top 10 Tips for New or Nervous Computer Users. There was also my famed ability to predict the future, for example Leaping To Conclusions - Predictions for Leap Year 2012. And, of course, a selection of just plain weird ones such as Does It Really Matter?.

    That flurry of activity contrasts with my previous efforts at being a blogger. In a previous life (before Microsoft) I usually managed just one post a month, with a corresponding lack of brevity. And, often, focus. Though there were a few, such as SATA = Space And Time Allowing, that explored the vagaries of modern software and hardware, and plenty that ruminated on the gradual shift to digital media and online services, such as I Hear Voices - From The Planet Rock. And, even then, a few with rather suspect titles such as Seen in Tenerife - A Nice Pair of TITSAs.

    And I suppose that this blog has now become "a previous life". Maybe I need to apply some kind of cascading present-tense update so that my old blog becomes an "even more previous life". But, instead, I'll leave you with a thought for the future:

    If the posts stop in a blog and no one is around to read them, does it make any difference?...


    Just Collecting Dust...


    It's Edinburgh Festival time again and, as usual, there's been a vote for the funniest joke. One of my favorite actors and stand-up comedians, Tim Vine, won it for the second time this year with a new one-liner: "I'm thinking of selling my vacuum cleaner. It's just collecting dust."

    Likewise, it seems I now have a large and redundant lump of technology just collecting dust. If you succumbed, through total boredom, to reading last week's post, you'll know that I suffered dead server syndrome and placed an order for a replacement to keep my over-complex and under-utilized domain network alive. And it looks, for the time being at least, as though it will be staying in the box while I figure out how to go forward. Why? Because, for some inexplicable reason, the deceased server has risen from the dead and is happily chuntering along as though nothing happened.

    I was passing the server cabinet the other day and, for no particular reason, decided to see if the old joke about the physicist, chemist, and computer programmer touring Switzerland in a car (near the end of this page if you haven't heard it) was true. Turn on the UPS and hit the power button on the server, listen for the whirring noise, watch a few blinks of the LEDs on the front, and the KVM light comes on - just like the other day, when it did the same every time but then switched off after a second or two. Except, this time, it kept going, booted into Windows, and sat there looking at me as if to say "So what did you expect?"

    And it kept going, all that day and all the next day. Even though it was disconnected from the network, because it's a domain controller that I'd removed from the domain, so connecting it would have screwed up my Active Directory. Despite half a day spent trying to get it to run last week, it seems to have forgotten it's supposed to be broken. And there is absolutely nothing in the Event Logs, or in the BIOS, to indicate what went wrong. The last entries just show a graceful shutdown from the last time it was running two weeks ago. Maybe I just dreamed it all...?

    So I need to remove Active Directory before I can connect it back to the network and the domain. Running dcpromo just reminds me that I have to uninstall Certificate Services first. Then it asks if this is the last domain controller in the domain, and do I want to delete the domain. I don't, but as it's not connected I assume it can't actually delete the domain, so I say yes. Then it furkles about for ages and tells me that it can't find the other domain controllers. Not surprising really, as there is a three-inch air gap in the network as far as it's concerned.

    So I go into AD Sites and Services on the was-dead-but-now-isn't box and try to remove the other domain controllers from it, but it won't let me because it can't see them. No matter what combination of options and other bodges I try, it refuses to do anything useful. In the end I resort to asking Bing, and discover that you have to run dcpromo /forceremoval, say yes to everything, and watch as it joyfully scavenges all the DC-related parts from your server. Then you just need to uninstall the Active Directory, DNS, and DHCP roles in Server Manager. And, of course, remember the admin password you set so you can log in again afterwards.

    Next I join the machine back to the domain as an ordinary domain member, and everything comes back just like it was before. A quick backup to save the new machine configuration and then a copy of the latest exported Hyper-V VMs to it, and I have a cold-swap backup server all ready to go again. One of the VMs is a domain controller, so I'm now covered if the main server fails. As long, of course, as the was-dead-but-now-isn't box actually starts up when I need it...

    The interesting question now is what went wrong last week. I'm switching my suspicions to the UPS, because the one this server uses has been problematic in the past. Even in the days before I installed solar panels - and was forced to fit a new mains fuse-box just for the server cabinet, because the total combination of hi-tech gadgetry in our house now has enough earth leakage to trip the RCD - this UPS was suspect. It had an occasional habit of tripping the RCD, even when turned off but still connected.

    In fact, I'm only still using it because it replaced one I managed to destroy during a recent spring-clean and dust removal exercise in the cabinet. I remembered to disconnect the mains input and the battery, but forgot to unplug the safety connector on the back, and managed to create a shower of sparks when I accidentally touched something inside with the metal ferrule of a paintbrush. When I plugged it back in there was the most amazing display of flashing arc-lights and a deep "whoomph" that would make the owner of a Ford Escort with a 500 watt sound system proud.

    So I've replaced the suspect UPS with another one that I know is OK, and bought a new one for the main server. Which means that I've spent more on UPSs this year than on servers, even including the new one. Which is now pretending to be a vacuum cleaner by just collecting dust...


    Do I R2, or Just 12?


    We all know that hardware failures are not unheard-of events, and - like most people - I try to cover for such eventualities. Many features of my network depend on a single server running a few Hyper-V machines, and so I have a cold-swap backup server all set up and configured to take over the tasks with only minimal work required. Except, when I turned it on to do the weekly directory sync and install this month's patches, it quietly keeled over and died.

    Mind you, it's one of the pair I bought some seven or eight years ago. The other, which was the main server, died four years ago in much the same sad way, so I guess this one hasn't done too badly. Unfortunately, like the first one, it seems to be a motherboard failure that will cost nearly as much to fix (even if I can get the bits) as a new server. So I've bitten the bullet, splashed the plastic, and am waiting with bated breath, hoping that the remaining server hangs on until the new one arrives.

    What made the situation worse was that, in my usual paranoid way, the backup server was also a domain controller - even though I have a domain controller as the other Hyper-V host and one running as a VM on Hyper-V. Last time I had a domain controller fail (a Windows 2003 one) I spent many unhappy hours forcibly removing it, and the crud it leaves behind, from Active Directory before the remaining DCs could agree that the domain was valid. So I was dreading the task this time.

    Amazingly, however, it was easy. I just followed the advice on MSDN for doing it with the GUI. In Active Directory Sites and Services on another domain controller, you delete the NTDS Settings object for the failed server and then delete the server itself. It took only a few seconds and, after they had done a bit of sniffing round the network, the remaining two domain controllers seemed to be happy. So far, everything is working as it should. Probably because many settings, such as the DHCP options and DNS settings, purposely omitted the backup server because it was offline most of the time. If it had been the main DC that failed, I'd have needed to update these.

    However, now I face an interesting decision. The still-surviving box runs Server 2008 R2. Do I install Server 2008 R2, Server 2012, or Server 2012 R2 on the new box? In theory it should be the latest and greatest, Server 2012 R2. However, somewhat unusually for me, I planned ahead by checking whether Server 2008 R2 VMs could be painlessly imported onto Server 2012 R2 Hyper-V. It seems they can't. The latest version of Hyper-V uses a different schema for the info files, so you either have to copy the VM itself and set up all the options afterwards (such as network connections and other stuff), or use a conversion script.

    The problem is that I suspect this is a one-way transaction. At the moment I stop and export each VM, and then copy the exported folder to the same directory structure on the back-up server so that - in case of emergency - I can simply import the exported image and run it. The servers had identical physical and virtual network setups, and this worked fine when I tested it (yes, I did test my backup strategy!). But it gets more complicated if I have Server 2008 R2 on one box and Server 2012 R2 on the other. And I probably won't be able to use the new box as the main host with the existing one as the backup because I can't export/copy the VMs that way.

    So the choice seems to be to install either Server 2008 R2, to ensure compatibility (with Active Directory as well as Hyper-V), or Server 2012 (not R2), which uses the same Hyper-V schema. With 2008 R2 I've maybe only got four or five years until support ends, though probably the old server will be dead before then. It seems like the second option is the best, but I wonder if I'll get continually nagged to upgrade to 2012 R2. In reality, I should probably bite the bullet, burn my bridges, cast fate to the winds, and upgrade the old box to 2012 R2 and just use that on both servers. Maybe it's a Star Wars spin-off: R2 2 DO ...


    More Wandering OOF


    I'm not much of a gardener. Instead of green fingers, I have black fingers where the numbers rub off my laptop keyboard. What gardening I do mainly consists of chopping stuff down to a manageable height. I seem to spend all my garden-allocated time cutting grass, and attacking trees and bushes. My wife thinks I've got a pruner fetish.

    So it's a nice change to see some real gardens where stuff other than weeds and trees grow. I watched an interesting program about the history of Biddulph Grange gardens a while ago, and so we took a day of our vacation to pay a visit. The gardens were laid out by James Bateman in the mid-1800s, based mainly on photos of foreign gardens. He supposedly never left England, and used to send his head gardener around the world collecting plants and seeds instead. It's a beautifully scenic place, as you can see here. And, yes, it has ducks (see last week).

    A lot of the garden consists of narrow paths and steep climbs that weave between its different sections, and the landscaping is extremely unusual. There is a dinosaur path edged with old bits of fallen trees, caves cut into the rocks, bridges to cross, streams with stepping stones, and odd buildings that lead you between vistas.

    One of the famous features is the Dahlia Walk. At this time of year there's not much to see in terms of dahlias, as they haven't flowered yet, but it's a wonderful piece of engineering that you can view from above and then walk through. During the Great War they ploughed the whole garden flat when the hall itself was a hospital, but the National Trust has done an incredible job of restoring it all, as you can see. Other oddities include tiny buildings and recesses containing a seat where you can relax and admire unusual views of the garden.

    Another famous part is the Chinese pavilion and lake. An old photo shows James Bateman standing next to the lake holding a Chinese blue willow pattern plate, on which he supposedly modelled this section of the garden. It is truly beautiful and stunning - the photo doesn't come near to doing it justice.

    And finally, something a bit different. I used to work for a company based in Kingston-Upon-Hull many years ago, and my experience of the city has not tempted me back there since. However, it's changed a great deal since then by gaining a marina, new shopping centres, and a general facelift of the old industrial eyesore areas. Even the docks area has been spruced up. But the real reason for our visit was The Deep - a large aquarium and sea-life centre built alongside (and under) the Humber estuary. So you won't be surprised to see a photo of fish.

    It's quite an amazing place, even if you have been to some of the US sea-life centres (as we have). The main tank is huge and contains the most amazing collection of fish, rays, sharks (including the chainsaw-adapted version below) and more. There's the usual tunnel where you can walk through the bottom of the tank and watch the occupants swim by. Of course, taking photos of a few million gallons of water isn't generally a hugely successful operation, but you get the gist. 

    There's also lots of smaller displays of aquatic animals. Some even seem quite interested in the passing hordes.

    And, of course, there's penguins. How can you not enjoy watching them, waddling about so ungainly on land and yet so amazingly lithe in the water?

    It's not a cheap place to visit, and I never figured out how they stop the sharks from eating everything else, but it's worth a visit. Especially if you can time it, as we did, for the one day in your vacation week when it decides to pour with rain. I must be starting to get the hang of this holiday thing...



    Wandering OOF


    How can I not Wallaby in England while the weather is so fair? Though, looking at these three, dinner is obviously more important than worrying about the chances of rain. 

    Yep, it's been "week vacation" time again and we've been wandering off to see some more of the sights and attractions here in our corner of Merry Olde England. Starting with Yorkshire Wildlife Park. Though, as you can see here, some of the residents were so concerned about the weather they had no time to take an interest in visitors.  

    Or perhaps they just couldn't be bothered to acknowledge passers-by. We watched this chap for ten minutes and he never moved so much as an eyelid. I suspected he was made of plastic, but decided against prodding him to confirm this.

    Thankfully, there are plenty of wildfowl and water birds there as well. As will be obvious from previous travel posts, I'm not allowed to plan visits to wildlife parks unless there will be ducks. But I thought these flamingos made for a much nicer picture.

    The other good news is that some of the residents actually are pleased to see the occasional visitor. In fact, some even pose for pictures, even if sitting to attention is a bit too much for everyone. I think they were expecting us to have been organized enough to purchase a bag of food for them at the ticket booth.  

    But, as it was an incredibly warm day by English standards (i.e. above 80 degrees F), you can't blame anything that resembles a cat for being asleep for 90% of the day. Unfortunately we missed the 10% when he was awake. At least, unlike our two cats, you couldn't hear him snoring.

    Even the King of the Beasts was feeling the strain of staying awake until mid morning.

    But it was a lovely day out. The park is huge, with dozens of different types of animals, birds, snakes, and other creatures. Think Giraffes, Monkeys and Apes, Zebra, Owls, Mongooses (Mongeese?), Meerkats, and many other small furry, feathery, crawly, and scaly things. Well worth a visit.

    Wandering around eating ice-cream and waving at animals is OK, but you also need to take in some historical information to make it a worthwhile holiday. There's been lots on TV recently about the Black Death since they found a cemetery full of victims in London. So we decided on a trip to the famous "Plague Village" of Eyam, not far from where we live. It's interesting to wander through the village and visit the church. There's signs everywhere telling you who lived (and died) where, and what they did. The village museum is superb, with tons of information about the plague, as well as details of the population and exhibits showing how they lived and worked in the area. Of course, the main story is how they isolated themselves from the surrounding community to prevent the plague from spreading.

    Eyam village also boasts the local hall, now fully open to the public since the remaining members of the family moved out a few years ago. It's an interesting place, with whole rooms left just as they were in Edwardian and Victorian times. There's even one room where the walls are lined with tapestries that are more than 100 years older than the house. Just a shame they cut them into pieces and nailed them to the walls. What's also a little disconcerting is that many of the historical artefacts on display are things that I can remember using or seeing in our house when I was young.      

    After we balked at the cost of a National Trust coffee and bun in the hall's restaurant, the nice lady at the museum suggested a ride to Grindleford station, where the old station building is now a cafe that serves rather wonderful sausage and bacon sandwiches. So that was the next stop. Of course, being a railway buff, I also had to get in some train-spotting time. So, just for fellow railway fans, here's a photo of a local Sheffield MU service that's just left Grindleford station and is entering the famous Totley Tunnel.

    And the bad news is that these are only two of the "days out". There are two more to follow next week...



    Kitchen Music


    I guess most people know what Garage Music is, but I reckon I just invented a new category: Kitchen Music. Though the definition is somewhat woolly and vague. Basically, it's music that my wife wants to listen to when she's in the kitchen. You could say that it's a user-defined category.

    Some time ago I replaced our failed Soundbridge Internet Radio with a Roberts 83i box. It's a neat bit of kit that is proving reliable (touch wood) and works really well with many Internet radio stations. Though I have to say that there are several stations we'd like to listen to that it can't seem to receive - Planet Rock being a typical example. Unlike the Soundbridge, you can't just enter the URI of a stream. Instead, it uses a pre-defined station list maintained and accessed over the web.

    However, it's neat that, after you tune to a station, it carries on receiving that station when you turn it off and back on - just like you'd expect from an ordinary trannie radio. Or you can simply turn it on and hit one of the five preset buttons to tune to another station.

    I should probably explain for younger readers that "trannie" means "transistor radio", a left-over from my younger days when we were amazed that you could have a portable radio instead of one of those big mains-powered wooden boxes full of valves.

    The only drawback is that we're struggling to find a station that we can live with for long periods. Increasingly, they all seem to have limited playlists - so that you hear the same music over and over again. Or they are full of adverts and chat, when we just want music. I found one US station that plays great classic rock music, but every afternoon has an hour-long chat section and news/weather from somewhere we don't live. Another that plays good music turns out to be in Albania, and the music is interspersed with adverts and chat in Albanian.

    So I decided that the answer is to simply stream music from the multiple GBs of ripped CDs stored on the file server in my garage. I looked at buying a fancy soundbar to go on top of the kitchen units, and a wireless receiver to stream the music to it, but the cost and the apparent complexity put me off. It seems to involve a phone app, several remote control handsets, and - from reading reviews on the web - plenty of fiddling with Wi-Fi and other settings.

    Ah, but the Roberts radio can supposedly do media streaming from any UPnP source. So I set up Media Player on the Windows 7 Hyper-V VM in the server cabinet to read music from the file server, turned on media streaming, and created a few playlists of our favourite music. Then I tried to connect from the Roberts radio - but no luck. It found the media server but timed out reading the playlists. However, after a day or so I discovered that it had read them. It seems it does network discovery, and it just takes a while to get comfortable with what it finds.

    So now we can get Kitchen Music with no chat, no adverts, and even choose the songs we want to hear. I used the Auto-playlist function in Media Player to set up a few "all rock" and other playlists, some including hundreds of songs, and the Roberts box seems to play them fine. The sound quality is, if not Hi-Fi, quite good as well. You can even set up auto-repeat and auto-shuffle. So it seems like a perfect solution.

    However, here's the rub. It forgets what it was doing when you turn it off and back on again. Unless you leave it turned on all the time with the volume at nothing, you have to go through about eight menu options just to start the music playing again. And if you can't be bothered, pressing the Internet Radio presets to get back to a radio station doesn't work either unless you first go through three menu options to get back to Internet Radio mode.

    So it looks very much like we'll be back to listening to the same limited set of songs, interspersed with adverts and chat in an increasing range of foreign languages, because the effort of restarting the local music stream is just too annoyingly fiddly. Another example of half-hearted user requirements research at design time? Probably - just like all software, the features you really want are always implemented in the new version that you haven't got...


    Are Answers On The Menu?


    Reading in the newspaper this week about the technological advances in political campaigning set my mind wondering about whether there is an ethics/success trade-off in most areas of work, as well as in life generally.

    I don't mean cheating in order to win; it's more about how you balance what you do, with what you think people want you to do. The article I was reading focused on the area of national politics. Technologies that we in the IT world are familiar with are increasingly being used to determine the "mood of the people" and to target susceptible voters. In the U.S. they already use Big Data techniques to profile the population and to analyze sectors for specific actions. The same is happening here in Britain.

    What I can't help wondering is whether this spells the end to true political conviction. If, as a party, you firmly believe that policy A is an absolutely necessary requirement for the country, and will provide the best future for the people, what happens when your data analysis reveals that it's not likely to be as popular as policy B? Do you try to adapt policy A to match the results from the data and sound like policy B, abandon it altogether in favour of policy C that is even more popular, or carry on regardless and hope that people will finally realize policy A is the best way to go?

    Some of the greatest politicians of the past worked from a basis of pure conviction, and many achieved changes for the better. Some pushed on regardless and failed. Does the ability to get accurate feedback on the perceived desires of the population, or of specific and increasingly narrowly defined sectors, reduce the conviction that has always been at the heart of real politicians? Perhaps now, instead of relying on the experts that govern us to make a real difference to our lives, we just get the policies we deserve because we all just want what's best for each of us today - and politicians can discover what that is.

    There's an ongoing discussion that the same is true of many large companies and organizations. They call it "short-termism" because public companies have to focus on what will look good in the next quarter's results in order to keep shareholders happy, rather than being able to take the long view and maximize success through long term changes. Even though governments generally get a longer term, such as five years, the same applies because it's pretty much impossible to make real changes in politics in such a short space of time.

    Of course, there are some organizations where you don't need to worry about public opinion. In private companies you can, in theory, do all the long term planning you need because you have no shareholders to please. You just need to be able to stay in business as you plan and change for the future. In extreme cases, such as here in the European Union, you don't even need to worry what the public thinks. The central masters of the project can just do whatever they feel is right for the Union, and nobody gets to influence the decisions. Maybe the EU, and other non-democratic regions of the world, are the only place where the politics of conviction still apply.

    So how does all this relate to our world of technology? As I read the article it seemed to be a similar situation to the one we have in creating guidance and documentation for our products and services. Traditionally, the process of creating documentation for a software product revolved around explaining the features of the product. In many cases, this simply meant explaining what each of the menu options does, and how you use that feature.

    I've recently installed a 4-channel DVR to monitor four bird nest boxes, and the instructions for the DVR follow just this pattern. There are over 100 pages that patiently explain every option in the multiple menus for setting up and using it, yet nowhere do they answer some obvious questions such as "do I need to enable alarms to make motion detection work?", "why is the hard disk light flashing when it's not recording anything?", and "why are there four video inputs but only two audio inputs?" And those are just the first three of the unanswered questions.

    Over the years, we've learned to write documentation that is more focused on the customer's point of view instead. We start with scenarios for using the product, and develop these into procedures for achieving the most common tasks. Along the way we use examples and background information to try to help users understand the product. But, in many cases, the scenarios themselves come from our best guesses at what the user needs to know, and how they will use the product. It's still very much built from our opinions and a conviction that we know what the customer needs to know, rather than being based on what they tell us they actually want to know.

    However, more recently, even this has started to change. The current thinking is that we should answer the questions users are asking now, rather than telling them what we think they need to know. It's become a data gathering exercise, and we use the data to maximize the impact we have by targeting effort at the most popular requirements. In most IT sectors and organizations, fast and flexible responsiveness is replacing principles and conviction.

    Is it a good thing? I have to say that I'm not entirely persuaded so far. Perhaps, with the rate of change in modern service-based software and marketplace-delivered apps, this is the only way to go forward. Yet I can't help wondering if it just introduces randomness, which can dilute the structured approach to guidance that helps users get the most from the product.

    Maybe if I could get a manual for my new DVR that answers my questions, I would be more convinced...


    Semantically Speaking...


    So I've temporarily escaped from Azure to lend a hand with, as Monty Python would say, something completely different. And it's a bit like coming home again, because I'm back with the Enterprise Library team. Some of them even remembered me from last time (though I'm not sure that's a huge advantage).

    Enterprise Library has changed almost beyond recognition while I've been away. Several of the application blocks have gone, and there are some new ones. One even appeared and then disappeared again during my absence (the Autoscaling block). And the blocks are generally drifting apart so that they can be used stand-alone more easily, especially in environments such as Azure.

    It's interesting that, when I first started work with the EntLib team, we were building the Composite Application Block (CAB) - parts of which sort of morphed into the Unity Dependency Injection mechanism. And the other separate application blocks were slowly becoming more tightly integrated into a cohesive whole. Through versions 3, 4, and 5 they became a one-stop solution for a range of cross-cutting concerns. But now one or two of the blocks are starting to reach adolescence, and break free to seek their fortune in the big wide world outside.

    One of these fledglings is the block I'm working on now. The Semantic Logging Application Block is an interesting combination of bits that makes it easier to work with structured events. It allows you to capture events from classes based on the EventSource class in .NET 4.5 and above, play about with the events, and store them in a range of different logging destinations. As well as text files and databases, there's an event sink that writes events to Azure table storage (so I still haven't quite managed to escape from the cloud).

    The latest version of the block itself is available from NuGet, and we should have the comprehensive documentation available any time now. It started out as a quick update of the existing docs to match the new features in the block, but has expanded into a refactoring of the content into a more logical form that provides a better user experience. Something I seem to spend my life doing - I keep hoping that the next version of Word will have an "Auto-Refactor" button on the ribbon.

    More than anything, though, it's providing useful experience in learning more about structured (or semantic) logging. I played with Event Tracing for Windows (ETW) a few times in the past when trying to consolidate event logs from my own servers, and gave up when the level of complexity surpassed my capabilities (it didn't take long). But EventSource seems easy to work with, and I've promised myself that every kludgy utility and tool I write in future will expose proper modern events with a structured and typed payload.
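    Just to give a flavour of what I mean by a structured and typed payload, here's a minimal sketch of the kind of event source I have in mind. It uses only the standard .NET 4.5 System.Diagnostics.Tracing types, and the source name, event IDs, and method names are all invented for illustration rather than taken from the block or its samples.

        using System.Diagnostics.Tracing;

        // A minimal, hypothetical event source for one of my kludgy utilities.
        // The source name, event IDs, and messages are invented for illustration.
        [EventSource(Name = "KludgyUtility")]
        public sealed class UtilityEventSource : EventSource
        {
            public static readonly UtilityEventSource Log = new UtilityEventSource();

            [Event(1, Level = EventLevel.Warning, Message = "Hovercraft {0} is full of eels")]
            public void HovercraftWarning(string hovercraftName)
            {
                // The payload travels with the event as typed data, not as a pre-formatted
                // string, so any listener or sink can filter and query on it later.
                if (IsEnabled()) WriteEvent(1, hovercraftName);
            }

            [Event(2, Level = EventLevel.Error, Message = "Operation {0} failed: {1}")]
            public void OperationFailed(string operation, string reason)
            {
                if (IsEnabled()) WriteEvent(2, operation, reason);
            }
        }

    Raising an event from the utility is then just a call such as UtilityEventSource.Log.OperationFailed("EmptyHovercraft", "eel pump jammed"), and whichever listeners have been enabled decide where the structured details end up.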

    This means that I can use the clever and easy-to-configure Out-of-Process Host listener that comes with the Semantic Logging Application Block to write them all to a central database where I can play with them. And the neat thing is that, by doing this, I can record the full details of the event but just show the user a nice, useful error message that reflects modern application practice. Such as "Warning! Your hovercraft is full of eels...", or maybe just "Oh dear, it's broken again..."
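    In application code, that separation might look something like the sketch below. It reuses the hypothetical UtilityEventSource from the earlier sketch, and the EelRemovalService class is equally invented; the point is simply that the structured detail goes to the event source while the user only sees the friendly version.

        using System;

        public static class EelRemovalService
        {
            public static void EmptyHovercraft(string hovercraftName)
            {
                try
                {
                    // ... the actual work would go here; fail deliberately for the example ...
                    throw new InvalidOperationException("Eel pump jammed");
                }
                catch (Exception ex)
                {
                    // The full, structured detail goes to the event source, where an enabled
                    // listener (such as the Out-of-Process Host) can store it centrally.
                    UtilityEventSource.Log.OperationFailed("EmptyHovercraft " + hovercraftName, ex.Message);

                    // The user just gets the modern, "useful" version.
                    Console.WriteLine("Oh dear, it's broken again...");
                }
            }
        }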
