Random Disconnected Diatribes of a p&p Documentation Engineer
It's Edinburgh Festival time again and, as usual, the public has voted for a winning joke. One of my favorite actors and stand-up comedians, Tim Vine, won it for the second time this year with a new one-liner: "I'm thinking of selling my vacuum cleaner. It's just collecting dust."
Likewise, it seems I now have a large and redundant lump of technology just collecting dust. If, through total boredom, you succumbed to reading last week's post, you'll know that I suffered dead server syndrome and placed an order for a replacement to keep my over-complex and under-utilized domain network alive. And it looks, for the time being at least, as though it will be staying in the box while I figure out how to go forward. Why? Because, for some inexplicable reason, the deceased server has risen from the dead and is happily chuntering along as though nothing happened.
I was passing the server cabinet the other day and, for no particular reason, decided to see if the old joke about the physicist, chemist, and computer programmer touring Switzerland in a car (near the end of this page if you haven't heard it) was true. Turn on the UPS and hit the power button on the server, listen for whirring noise, a few blinks of the LEDs on the front, and the KVM light comes on - just like the other day when it did the same every time but switched off after a second or two. Except, this time, it kept going, booted into Windows, and sat there looking at me as if to say "So what did you expect?"
And it kept going, all that day and all the next day. Even though it was disconnected from the network, because it's a domain controller that I removed from the domain, so connecting it would have screwed up my Active Directory. Despite half a day spent trying to get it to run last week, it seems to have forgotten it's supposed to be broken. And there is absolutely nothing in the Event Logs, or in the BIOS, to indicate what went wrong. The last entries just show a graceful shut-down from the last time it was running two weeks ago. Maybe I just dreamed it all...?
So I need to remove Active Directory before I can connect back to the network and the domain. Running dcpromo just reminds me that I have to uninstall Certificate Services first. Then it asks if this is the last domain controller in the domain, and do I want to delete the domain. I don't, but as it's not connected I assume it can't actually delete the domain, so I say yes. Then it furkles about for ages and tells me that it can't find the other domain controllers. Not surprising really, as there is a three-inch air gap in the network as far as it's concerned.
So I go into AD Sites and Services on the was-dead-but-now-isn't box and try to remove the other domain controllers from it, but it won't let me because it can't see them. No matter what combination of options and other bodges I try, it refuses to do anything useful. In the end I resort to asking Bing, and discover that you have to run dcpromo /forceremoval, say yes to everything, and watch as it joyfully scavenges all the DC-related parts from your server. Then you just need to uninstall the Active Directory, DNS, and DHCP roles in Server Manager. And, of course, remember the admin password you set so you can log in again afterwards.
Next I join the machine back to the domain as an ordinary domain member, and everything comes back just like it was before. A quick backup to save the new machine configuration and then copy the latest exported Hyper-V VMs to it, and I have a cold-swap backup server all ready to go again. One of the VMs is a domain controller, so I'm covered if the main server fails now. As long, of course, as the was-dead-but-now-isn't box actually starts up when I need it...
The interesting question now is what went wrong last week. I'm switching my suspicions to the UPS, because the one that server uses has been problematic in the past. Even before I installed solar panels - and was forced to fit a new mains fuse-box just for the server cabinet, because the total combination of hi-tech gadgetry in our house now has enough earth leakage to trip the RCD - this UPS was suspect. It had an occasional habit of tripping the RCD, even when turned off but still plugged in.
In fact, I'm only still using it because it replaced one I managed to destroy during a recent spring-clean and dust removal exercise in the cabinet. I remembered to disconnect the mains input and the battery, but forgot to unplug the safety connector on the back, and managed to create a shower of sparks when I accidentally touched something inside with the metal ferrule of a paint brush. When I plugged it back in there was the most amazing display of flashing arc-lights and a deep "whoomph" that would make the owner of a Ford Escort with a 500 watt sound system proud.
So I've replaced the suspect UPS with another one that I know is OK, and bought a new one for the main server. Which means that I've spent more on UPSs this year than on servers, even including the new one. Which is now pretending to be a vacuum cleaner by just collecting dust...
We all know that hardware failures are not unheard of events, and - like most people - I try to cover for such eventualities. Many features of my network depend on a single server running a few Hyper-V machines, and so I have a cold-swap backup server all set up and configured to take over the tasks with only minimal work required. Except, when I turned it on to do the weekly directory sync and install this month's patches, it quietly keeled over and died.
Mind you, it's one of the pair I bought some seven or eight years ago. The other, which was the main server, died four years ago in much the same sad way, so I guess this one hasn't done too badly. Unfortunately, like the first one, it seems to be a motherboard failure that will cost nearly as much to fix (even if I can get the bits) as a new server. So I've bitten the bullet, splashed the plastic, and am waiting with bated breath, hoping that the remaining server hangs on until the new one arrives.
What made the situation worse was that, in my usual paranoid way, the backup server was also a domain controller - even though I have a domain controller as the other Hyper-V host and one running as a VM on Hyper-V. Last time I had a domain controller fail (a Windows 2003 one) I spent many unhappy hours forcibly removing it, and the crud it leaves behind, from Active Directory before the remaining DCs could agree that the domain was valid. So I was dreading the task this time.
Amazingly, however, it was easy. I just followed the advice on MSDN for doing it with the GUI. In Active Directory Sites and Services on another domain controller, you delete the NTDS Settings object for the failed server and then delete the server itself. It took only a few seconds and, after they had done a bit of sniffing round the network, the remaining two domain controllers seemed to be happy. So far, everything is working as it should. Probably because many settings, such as the DHCP options and DNS settings, purposely omitted the backup server because it was offline most of the time. If it had been the main DC that failed, I'd have needed to update these.
However, now I face an interesting decision. The still-surviving box runs Server 2008 R2. Do I install Server 2008 R2, Server 2012, or Server 2012 R2 on the new box? In theory it should be the latest and greatest, Server 2012 R2. However, somewhat unusually for me, I planned ahead by checking whether Server 2008 R2 VMs could be painlessly imported onto Server 2012 R2 Hyper-V. It seems they can't. The latest version of Hyper-V uses a different schema for the VM configuration files, so you either have to copy the VM itself and set up all the options afterwards (such as network connections and other stuff), or use a conversion script.
The problem is that I suspect this is a one-way transaction. At the moment I stop and export each VM, and then copy the exported folder to the same directory structure on the back-up server so that - in case of emergency - I can simply import the exported image and run it. The servers had identical physical and virtual network setups, and this worked fine when I tested it (yes, I did test my backup strategy!). But it gets more complicated if I have Server 2008 R2 on one box and Server 2012 R2 on the other. And I probably won't be able to use the new box as the main host with the existing one as the backup because I can't export/copy the VMs that way.
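The stop, export, and copy routine I describe above is simple enough to script. Here's a minimal Python sketch of the copy step - the function name and the idea of mirroring each exported VM folder into an identical directory structure are mine, and the actual paths would be whatever your export folder and backup share happen to be:

```python
import shutil
from pathlib import Path

def copy_exported_vms(export_root: str, backup_root: str) -> list:
    """Mirror each exported VM folder to the same directory structure
    on the backup server, replacing any stale copy already there.
    (Illustrative only - paths and naming are assumptions.)"""
    copied = []
    for vm_dir in Path(export_root).iterdir():
        if vm_dir.is_dir():
            dest = Path(backup_root) / vm_dir.name
            if dest.exists():
                shutil.rmtree(dest)        # remove the previous backup copy
            shutil.copytree(vm_dir, dest)  # keep the identical folder layout
            copied.append(vm_dir.name)
    return sorted(copied)
```

Keeping the folder layout identical on both boxes is what makes the emergency import painless - the import just picks up the exported image where it expects to find it.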
So the choice seems to be to install either Server 2008 R2 to ensure compatibility (with Active Directory as well as Hyper-V), or Server 2012 not R2, which uses the same Hyper-V schema. With 2008 R2 I've maybe only got 4 or 5 years until support ends, though probably the old server will be dead before then. It seems like the second option is the best, but I wonder if I'll get continually nagged to upgrade to 2012 R2? In reality, I should probably bite the bullet, burn my bridges, cast fate to the winds, and upgrade the old box to 2012 R2 and just use that on both servers. Maybe it's a Star Wars spin-off: R2 2 DO ...
I'm not much of a gardener. Instead of green fingers, I have black fingers where the numbers rub off my laptop keyboard. What gardening I do mainly consists of chopping stuff down to a manageable height. I seem to spend all my garden-allocated time cutting grass, and attacking trees and bushes. My wife thinks I've got a pruner fetish.
So it's a nice change to see some real gardens where stuff other than weeds and trees grows. I watched an interesting program about the history of Biddulph Grange gardens a while ago, and so we took a day of our vacation to pay a visit. The gardens were laid out by James Bateman in the mid-1800s, based mainly on photos of foreign gardens. He supposedly never left England, and used to send his head gardener around the world collecting plants and seeds instead. It's a beautifully scenic place, as you can see here. And, yes, it has ducks (see last week).
A lot of the garden is narrow paths and steep climbs that weave between the sections of the garden, and the landscaping is extremely unusual. There is a dinosaur path edged with old bits of fallen trees, caves cut into the rocks, bridges to cross and streams with stepping stones, and odd buildings that lead you between vistas.
One of the famous features is the Dahlia Walk. At this time of year there's not much to see in terms of dahlias as they haven't flowered yet, but it's a wonderful piece of engineering that you can view from above and then walk through. During the Great War they ploughed the whole garden flat when the hall itself was a hospital, but the National Trust has done an incredible job of restoring it all, as you can see. Other oddities include tiny buildings and recesses containing a seat where you can relax and admire unusual views of the garden.
Another famous part is the Chinese pavilion and lake. An old photo shows James Bateman standing next to the lake holding a Chinese blue willow pattern plate, on which he supposedly modelled this section of the garden. It is truly beautiful and stunning - the photo doesn't come near to doing it justice.
And finally, something a bit different. I used to work for a company based in Kingston-Upon-Hull many years ago, and my experience of the city has not tempted me back there since. However, it's changed a great deal since then by gaining a marina, new shopping centres, and a general facelift of the old industrial eyesore areas. Even the docks area has been spruced up. But the reason for our visit was The Deep - a large aquarium and sea-life centre built alongside (and under) the Humber estuary. So you won't be surprised to see a photo of fish.
It's quite an amazing place, even if you have been to some of the US sea-life centres (as we have). The main tank is huge and contains the most amazing collection of fish, rays, sharks (including the chainsaw-adapted version below) and more. There's the usual tunnel where you can walk through the bottom of the tank and watch the occupants swim by. Of course, taking photos of a few million gallons of water isn't generally a hugely successful operation, but you get the gist.
There's also lots of smaller displays of aquatic animals. Some even seem quite interested in the passing hordes.
And, of course, there are penguins. How can you not enjoy watching them waddle about, so ungainly on land and yet so amazingly lithe in the water?
It's not a cheap place to visit, and I never figured out how they stop the sharks from eating everything else, but it's worth a visit. Especially if you can time it, as we did, for the one day in your vacation week when it decides to pour with rain. I must be starting to get the hang of this holiday thing...
How can I not Wallaby in England while the weather is so fair? Though, looking at these three, dinner is obviously more important than worrying about the chances of rain.
Yep, it's been "week vacation" time again and we've been wandering off to see some more of the sights and attractions here in our corner of Merry Olde England. Starting with Yorkshire Wildlife Park. Though, as you can see here, some of the residents were so concerned about the weather they had no time to take an interest in visitors.
Or perhaps they just couldn't be bothered to acknowledge passers-by. We watched this chap for ten minutes and he never moved so much as an eyelid. I suspected he was made of plastic, but decided against prodding him to confirm this.
Thankfully, there are also plenty of wildfowl and water birds there as well. As will be obvious from previous travel posts, I'm not allowed to plan visits to wildlife parks unless there will be ducks. But I thought these Flamingos posed a much nicer picture.
The other good news is that some of the residents actually are pleased to see the occasional visitor. In fact, some even pose for pictures, even if sitting to attention is a bit too much for everyone. I think they were expecting us to have been organized enough to purchase a bag of food for them at the ticket booth.
But, as it was an incredibly warm day by English standards (i.e. above 80°F) you can't blame anything that resembles a cat for being asleep for 90% of the day. Unfortunately we missed the 10% when he was awake. At least, unlike our two cats, you couldn't hear him snoring.
Even the King of the Beasts was feeling the strain of staying awake until mid morning.
But it was a lovely day out. The park is huge, with dozens of different types of animals, birds, snakes, and other creatures. Think Giraffes, Monkeys and Apes, Zebra, Owls, Mongooses (Mongeese?), Meerkats, and many other small furry, feathery, crawly, and scaly things. Well worth a visit.
Wandering around eating ice-cream and waving at animals is OK, but you also need to take in some historical information to make it a worthwhile holiday. There's been lots on TV recently about the Black Death since they found a cemetery full of victims in London. So we decided on a trip to the famous "Plague Village" of Eyam, not far from where we live. It's interesting to wander through the village and visit the church. There's signs everywhere telling you who lived (and died) where, and what they did. The village museum is superb, with tons of information about the plague, as well as details of the population and exhibits showing how they lived and worked in the area. Of course, the main story is how they isolated themselves from the surrounding community to prevent the plague from spreading.
Eyam village also boasts the local hall, now fully open to the public since the remaining members of the family moved out a few years ago. It's an interesting place, with whole rooms left just as they were in Edwardian and Victorian times. There's even one room where the walls are lined with tapestries that are more than 100 years older than the house. Just a shame they cut them into pieces and nailed them to the walls. What's also a little disconcerting is that many of the historical artefacts on display are things that I can remember using or seeing in our house when I was young.
After we balked at the cost of a National Trust coffee and bun in the hall's restaurant, the nice lady at the museum suggested a ride to Grindleford station, where the old station building is now a cafe that serves rather wonderful sausage and bacon sandwiches. So that was the next stop. Of course, being a railway buff, it also meant getting in some train-spotting time. So, just for fellow railway fans, here's a photo of a local Sheffield MU service that's just left Grindleford station and is entering the famous Totley Tunnel.
And the bad news is that these are only two of the "days out". There are two more to follow next week...
I guess most people know what Garage Music is, but I reckon I just invented a new category: Kitchen Music. Though the definition is somewhat woolly and vague. Basically, it's music that my wife wants to listen to when she's in the kitchen. You could say that it's a user-defined category.
Some time ago I replaced our failed Soundbridge Internet Radio with a Roberts 83i box. It's a neat bit of kit, and is proving reliable (touch wood) and works really well with many Internet radio stations. Though I have to say that there are several stations we'd like to listen to that it can't seem to receive - Planet Rock being a typical example. Unlike the Soundbridge, you can't just enter the URI of a stream. Instead, it uses a pre-defined station list maintained and accessed over the web.
However, it's neat that, after you tune to a station, it carries on receiving that station when you turn it off and back on - just like you'd expect from an ordinary trannie radio. Or you can simply turn it on and hit one of the five preset buttons to tune to another station.
I should probably explain for younger readers that "trannie" means "transistor radio", a left-over from my younger days when we were amazed that you could have a portable radio instead of one of those big mains-powered wooden boxes full of valves.
The only drawback is that we're struggling to find a station that we can live with for long periods. Increasingly, they all seem to have limited playlists - so that you hear the same music over and over again. Or they are full of adverts and chat, when we just want music. I found one US station that plays great classic rock music, but every afternoon has an hour-long chat section and news/weather from somewhere we don't live. Another that plays good music turns out to be in Albania, and the music is interspersed with adverts and chat in Albanian.
So I decided that the answer is to simply stream music from the multiple GBs of ripped CDs stored on the file server in my garage. I looked at buying a fancy soundbar to go on top of the kitchen units, and a wireless receiver to stream the music to it, but the cost and the apparent complexity put me off. It seems to involve a phone app, several remote control handsets, and - from reading reviews on the web - plenty of fiddling with Wi-Fi and other settings.
Ah, but the Roberts radio can supposedly do media streaming from any UPnP source. So I set up Media Player on the Windows 7 Hyper-V VM in the server cabinet to read music from the file server, turned on media streaming, and created a few playlists of our favourite music. Then I tried to connect from the Roberts radio - but no luck. It found the media server, but timed out reading the playlists. However, after a day or so I discovered that it had read them. It seems it does network discovery, and it just takes a while to get comfortable with what it finds.
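For the curious: UPnP devices find media servers by multicasting an SSDP M-SEARCH request and listening for replies. Whether the Roberts box does exactly this, and how often, is my assumption, but the request itself is standard and easy to sketch in Python:

```python
# Standard SSDP discovery request for UPnP media servers.
# Whether the Roberts radio sends exactly this is an assumption.
SSDP_HOST = "239.255.255.250"
SSDP_PORT = 1900

def build_msearch(search_target="urn:schemas-upnp-org:device:MediaServer:1",
                  mx=3):
    """Build the SSDP M-SEARCH datagram as bytes."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: {0}:{1}".format(SSDP_HOST, SSDP_PORT),
        'MAN: "ssdp:discover"',
        "MX: {0}".format(mx),   # seconds a device may wait before replying
        "ST: {0}".format(search_target),
        "", "",                 # blank line terminates the request
    ]
    return "\r\n".join(lines).encode("ascii")
```

A real client sends this over UDP to 239.255.255.250:1900 and collects the unicast responses; the slow, eventual discovery I saw suggests the radio only refreshes its device list every so often rather than on demand.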
So now we can get Kitchen Music with no chat, no adverts, and even choose the songs we want to hear. I used the Auto-playlist function in Media Player to set up a few "all rock" and other playlists, some including hundreds of songs, and the Roberts box seems to play them fine. The sound quality is, if not Hi-Fi, quite good as well. You can even set up auto-repeat and auto-shuffle. So it seems like a perfect solution.
However, here's the rub. It forgets what it was doing when you turn it off and back on again. Unless you leave it turned on all the time with the volume at nothing, you have to go through about eight menu options just to start the music playing again. And if you can't be bothered, pressing the Internet Radio presets to get back to a radio station doesn't work either unless you first go through three menu options to get back to Internet Radio mode.
So it looks much like we'll be back to listening to the same limited set of songs, interspersed with adverts and chat in an increasing range of foreign languages, because the effort of restarting the local music stream is just too annoyingly fiddly. Another example of half-hearted user requirements research at design time? Probably, just like all software, the features you really want are always implemented in the new version that you haven't got...
Reading in the newspaper this week about the technological advances in political campaigning set me wondering whether there is an ethics/success trade-off in most areas of work, as well as in life generally.
I don't mean cheating in order to win; it's more about how you balance what you do, with what you think people want you to do. The article I was reading focused on the area of national politics. Technologies that we in the IT world are familiar with are increasingly being used to determine the "mood of the people" and to target susceptible voters. In the U.S. they already use Big Data techniques to profile the population and to analyze sectors for specific actions. The same is happening here in Britain.
What I can't help wondering is whether this spells the end to true political conviction. If, as a party, you firmly believe that policy A is an absolutely necessary requirement for the country, and will provide the best future for the people, what happens when your data analysis reveals that it's not likely to be as popular as policy B? Do you try to adapt policy A to match the results from the data and sound like policy B, abandon it altogether in favour of policy C that is even more popular, or carry on regardless and hope that people will finally realize policy A is the best way to go?
Some of the greatest politicians of the past worked from a basis of pure conviction, and many achieved changes for the better. Some pushed on regardless and failed. Does the ability to get accurate feedback on the perceived desires of the population, or of specific and increasingly narrowly defined sectors, reduce the conviction that has always been at the heart of real politicians? Perhaps now, instead of relying on the experts that govern us to make a real difference to our lives, we just get the policies we deserve because we all just want what's best for each of us today - and politicians can discover what that is.
There's an ongoing discussion that the same is true of many large companies and organizations. They call it "short-termism" because public companies have to focus on what will look good in the next quarter's results in order to keep shareholders happy, rather than being able to take the long view and maximize success through long term changes. Even though governments generally get a longer term, such as five years, the same applies because it's pretty much impossible to make real changes in politics in such a short space of time.
Of course, there are some organizations where you don't need to worry about public opinion. In private companies you can, in theory, do all the long term planning you need because you have no shareholders to please. You just need to be able to stay in business as you plan and change for the future. In extreme cases, such as here in the European Union, you don't even need to worry what the public thinks. The central masters of the project can just do whatever they feel is right for the Union, and nobody gets to influence the decisions. Maybe the EU, and other non-democratic regions of the world, are the only place where the politics of conviction still apply.
So how does all this relate to our world of technology? As I read the article, it seemed a similar situation to the one we have in creating guidance and documentation for our products and services. Traditionally, the process of creating documentation for a software product revolved around explaining the features of the product. In many cases, this simply meant explaining what each of the menu options does, and how you use that feature.
I've recently installed a 4-channel DVR to monitor four bird nest boxes, and the instructions for the DVR follow just this pattern. There are over 100 pages that patiently explain every option in the multiple menus for setting up and using it, yet nowhere does it answer some obvious questions such as "do I need to enable alarms to make motion detection work?", "why is the hard disk light flashing when it's not recording anything?", and "why are there four video inputs but only two audio inputs?" And that's just the first three of the unanswered questions.
Over the years, we've learned to write documentation that is more focused on the customer's point of view instead. We start with scenarios for using the product, and develop these into procedures for achieving the most common tasks. Along the way we use examples and background information to try to help users understand the product. But, in many cases, the scenarios themselves come from our best guesses at what the user needs to know, and how they will use the product. It's still very much built from our opinions and a conviction that we know what the customer needs to know, rather than being based on what they tell us they actually want to know.
However, more recently, even this has started to change. The current thinking is that we should answer the questions users are asking now, rather than telling them what we think they need to know. It's become a data gathering exercise, and we use the data to maximize the impact we have by targeting effort at the most popular requirements. In most IT sectors and organizations, fast and flexible responsiveness is replacing principles and conviction.
Is it a good thing? I have to say that I'm not entirely persuaded so far. Perhaps, with the rate of change in modern service-based software and marketplace-delivered apps, this is the only way to go forward. Yet I can't help wondering if it just introduces randomness, which can dilute the structured approach to guidance that helps users get the most from the product.
Maybe if I could get a manual for my new DVR that answers my questions, I would be more convinced...
So I've temporarily escaped from Azure to lend a hand with, as Monty Python would say, something completely different. And it's a bit like coming home again, because I'm back with the Enterprise Library team. Some of them even remembered me from last time (though I'm not sure that's a huge advantage).
Enterprise Library has changed almost beyond recognition while I've been away. Several of the application blocks have gone, and there are some new ones. One even appeared and then disappeared again during my absence (the Autoscaling block). And the blocks are generally drifting apart so that they can be used stand-alone more easily, especially in environments such as Azure.
It's interesting that, when I first started work with the EntLib team, we were building the Composite Application Block (CAB) - parts of which sort of morphed into the Unity Dependency Injection mechanism. And the other separate application blocks were slowly becoming more tightly integrated into a cohesive whole. Through versions 3, 4, and 5 they became a one-stop solution for a range of cross-cutting concerns. But now one or two of the blocks are starting to reach adolescence, and break free to seek their fortune in the big wide world outside.
One of these fledglings is the block I'm working on now. The Semantic Logging Application Block is an interesting combination of bits that makes it easier to work with structured events. It allows you to capture events from classes based on the EventSource class in .NET 4.5 and above, play about with the events, and store them in a range of different logging destinations. As well as text files and databases, there's an event sink that writes events to Azure table storage (so I still haven't quite managed to escape from the cloud).
The latest version of the block itself is available from NuGet, and we should have the comprehensive documentation available any time now. It started out as a quick update of the existing docs to match the new features in the block, but has expanded into a refactoring of the content into a more logical form, and to provide a better user experience. Something I seem to spend my life doing - I keep hoping that the next version of Word will have an "Auto-Refactor" button on the ribbon.
More than anything, though, is the useful experience it's providing in learning more about structured (or semantic) logging. I played with Event Tracing for Windows (ETW) a few times in the past when trying to consolidate event logs from my own servers, and gave up when the level of complexity surpassed my capabilities (it didn't take long). But EventSource seems easy to work with, and I've promised myself that every kludgy utility and tool I write in future will expose proper modern events with a structured and typed payload.
This means that I can use the clever and easy to configure Out-of-Process Host listener that comes with the Semantic Logging Application Block to write them all to a central database where I can play with them. And the neat thing is that, by doing this, I can record the details of the event but just have a nice useful error message for the user that reflects modern application practice. Such as "Warning! Your hovercraft is full of eels...", or maybe just "Oh dear, it's broken again..."
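The block itself is .NET-only, of course, but the core idea - keep a typed, structured payload for the log store while the user only ever sees a friendly message - can be sketched with nothing more than the Python stdlib logging and json modules. The event ID and fields here are entirely made up:

```python
import json
import logging

logger = logging.getLogger("semantic-demo")

def log_event(event_id, level, message, **payload):
    """Record a structured event (the EventSource idea in miniature):
    the full typed payload goes to the log as JSON, while only the
    human-readable message is returned for display to the user."""
    record = {"event_id": event_id, "message": message, "payload": payload}
    logger.log(level, json.dumps(record, sort_keys=True))
    return message   # what the user actually gets shown

# The structured record carries the details; the user just sees this text.
user_text = log_event(42, logging.WARNING,
                      "Warning! Your hovercraft is full of eels...",
                      eel_count=17, hovercraft_id="HC-01")
```

A real listener (like the block's Out-of-Process Host) would then ship those JSON records off to a central database, where the typed fields make them easy to query.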
Probably there's not many people who can remember when TVs had just six buttons and a volume knob. You simply tuned each of the buttons to one of the five available channels (which were helpfully numbered 1 to 5), hopefully in the correct order so you knew which channel you were watching, and tuned the sixth button to the output from your Betamax videocassette recorder.
As long as the aerial tied to your chimney wasn't blown down by the wind, or struck by lightning, that was it. You were partaking in the peak of technical media broadcasting advancement. Years, if not decades, could pass and you never had to change anything. It all just worked.
And then we went digital. Now I can get 594 channels on terrestrial FreeView and satellite-delivered FreeSat. Even more if I choose to pay for a Sky or Virgin Media TV package. Yet all I seem to have gained is more hassle. And, looking back at our viewing habits over the previous few weeks, pretty much all of the programs we watch are on the original five channels!
Of course, the list of channels includes many duplicates, with the current fascination for "+1" channels where it's the same schedule but an hour later (which is fun when you watch a live program like "News At Ten" that's then on at eleven o'clock). Channel 5 even has a "+24" channel now, so you can watch yesterday's programs today. A breakthrough in entertainment provision, which may even be useful for the 1% of the population that doesn't have a video recorder. How long will it be before we get "+168" channels so you can watch last week's episode that you missed?
What's really annoying, however, is that I've chosen to fully partake in the modern technological "now" by using Media Center. Our new Mamba box (see Snakin' All Over) is amazing in that it happily tunes all the available FreeView and FreeSat channels and, if what it says it did last night is actually true, it can record three channels at the same time while you are watching a recorded program. I was convinced that it's not supposed to do more than two.
However, it also seems to have issues with starting recordings, and with losing channels or suddenly gaining extra copies of existing channels. For some reason this week we had three BBC1 channels in the guide, but ITV1 was blank. Another wasted half an hour fiddling with the channel list put that right, but why does it keep happening? I can only assume that the channel and schedule lists Media Center downloads every day contain something that fires off a channel update process. And helpfully sets all the new ones (or ones where the name changed slightly) to "selected" so that they appear in the guide's channel list. I suppose if it didn't pre-select them, you wouldn't know they had changed.
Talking with the ever-helpful Glen at QuitePC.com, who supplied the machine, was also very illuminating. Media Center is clever in that it combines the multiple digital signals for the same channel into one (you can see them in the Edit Sources list when you edit a channel) and he suggested editing the list to make sure the first ones were those with the best signal so that Media Center would not need to scan through them all when changing channels to start a recording.
Glen also suggested using the website King Of Sat to check or modify the frequencies when channels move.
This makes sense because Media Center does seem to take a few seconds to change channels. Probably it times out too quickly when it doesn't detect a signal, pops up the warning box on screen, and then tries the other tuner on the same card. Which works, maybe because the card is now responding, and the program gets recorded. But when I checked yesterday for a channel where this happens, there is only one source in the Edit Sources list and it's showing "100%" signal strength.
And a channel that had worked fine all last week just came up as "No signal" yesterday. Yet looking in the Edit Sources list, the single source was shown as "100%". Today it's working again. Is this what we expected from the promise of a brave new digital future in broadcasting? I'm already limited to using Internet Radio because the DAB and FM signals are so poor here. How long will it be before I can get TV only over the Internet?
Mind you, Media Center itself can be really annoying sometimes. Yes it's a great system that generally works very well overall, and has some very clever features. But, during the "lost channel" episode this week, I tried to modify a manual recording by changing the channel number to a different one. It was set to use channel 913 (satellite ITV1) but I wanted to change it to use channel 1 (terrestrial ITV1). Yet all I got every time was the error message "You must choose a valid channel number." As channel 1 is in the guide and works fine, I can't see why it's invalid. Maybe because it uses a different tuner card, and the system checks only the channel list for the current tuner card?
It does seem that software in general doesn't always get completely tested in a real working environment. For example, I use Word all the time and - for an incredibly complex piece of software - it does what I expect and works fine. Yet, when I come to save a document for the first time onto the network server, I'm faced with an unresponsive Save dialog for up to 20 seconds. It seems that it's looking for existing Word docs so it can show me a list, which would be OK if they were on the local machine or there were only a few folders and docs to scan. But there are many hundreds on the network server, so it takes ages.
Perhaps, because I use software like this all day, I just expect too much. Maybe there is no such thing as perfect software...