Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

  • Writing ... or Just Practicing?

    Another Bad Where? Day


    Sometimes I stop and wonder if I'm having one of those "more-senile-than-usual" moments. Did I click the wrong button, or have I forgotten to set some weird option before I started the process that looks like it will still be running when I get up tomorrow morning? What on earth am I trying to do that is so complicated that a 2.27 GHz Quad Core Xeon E5520 running 64-bit Windows 7 can't achieve it while I'm still awake?

    So why not try it for yourself? Open Windows Explorer, select drive C: in the folder tree, and type the full name and extension of some file you know exists into the Search box at the top right of the window. A good one to try, if you have Microsoft Office installed, is Default.dotx. Then wait. Or go to bed. On my machine it was still searching after 10 minutes and hadn't found it yet. Now open your C:\Program Files (x86)\Microsoft Office\Office14\1033\QuickStyles folder (or some similar path depending on the version of Office you have installed). Gosh - there's a file named Default.dotx. But would you know to look there?

    Where all this started was looking for a specific template file we use here at p&p. I know the file name - but after watching the pulsating green crawly thing in the address bar on my machine for more than 20 minutes, with the search still running and no sign it found the file, I opened another instance of Windows Explorer and started to look in the folders where I thought it would be. Needless to say, it wasn't. Not even in the elusive QuickStyles folder. Maybe I should just reinstall the tools that the template is part of with logging enabled on the MSI, and then read the log file to see where it gets put? Not exactly an interactive approach to searching...

    In fact, the interactive approach I took was to write a simple search utility in VS 2010 to search all files and subfolders starting at a specified folder for a specific file based on the full name or a standard partial-match search string (such as "*.dotx" or "myfile?.*"). I actually got it finished and working before the Windows search had completed. It found the file I wanted in the C:\Users\[me]\AppData\Roaming\Microsoft\Templates\ folder in 12 seconds. And the time taken to search all of 40 GB of files on drive C: was less than 30 seconds.
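
    If you're curious, the core of such a utility really is trivial. Here's a minimal sketch of the approach (not the actual code of my utility, and the start folder and pattern are just example arguments): recurse down the tree with DirectoryInfo, list the files that match the standard wildcard pattern, and skip any folders that refuse access.

        using System;
        using System.IO;

        class FileFinder
        {
            // Usage, for example: FileFinder.exe "C:\" "*.dotx"
            static void Main(string[] args)
            {
                if (args.Length < 2)
                {
                    Console.WriteLine("Usage: FileFinder <start-folder> <pattern>");
                    return;
                }
                Search(new DirectoryInfo(args[0]), args[1]);
            }

            static void Search(DirectoryInfo folder, string pattern)
            {
                try
                {
                    // GetFiles accepts the usual * and ? wildcards.
                    foreach (FileInfo file in folder.GetFiles(pattern))
                    {
                        Console.WriteLine(file.FullName);
                    }
                    // Then recurse into each subfolder in turn.
                    foreach (DirectoryInfo subfolder in folder.GetDirectories())
                    {
                        Search(subfolder, pattern);
                    }
                }
                catch (UnauthorizedAccessException)
                {
                    // System and other protected folders deny access; skip them and carry on.
                }
            }
        }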

    Yes, I know the Windows 7 search is doing clever stuff like looking inside files and looking at metadata. And I know you can change the options from the Organize menu in Windows Explorer (though it seemed to make little difference). And I know I should better organize where I put files that I want to be able to find again. OK, so Windows 7 does seem to be more predictable when searching than Vista was (see Having A Bad Where? Day). But, sometimes, all you need is to find a file that you know is in there somewhere...

  • Writing ... or Just Practicing?

    The Latest Love of My (Programming) Life?


    Is it really possible to love jQuery? It certainly seems like it is from the numerous blog and forum posts I've read while trying to figure out how to make it do some fairly simple things. Many of the posts end with rather disturbing terms of endearment: "...this is why I just love jQuery" being a typical example. Yet I'm still not sure that our first blind date will result in a lasting relationship.

    Perhaps if you spend your life building websites that incorporate the now mandatory level of flashy UI, animations, and interactivity, jQuery is pretty much a given. At least it means that paranoid people like me who have Java, ActiveX, and Flash disabled in their browser actually get to see something. I got fed up a while ago with the sites you used to see (or, in my case, not see) that were basically just a large Flash animation - invariably with the focus on appearance rather than containing any useful content. But disabling script is generally not an option these days.

    Mind you, I'm now finding the same problem with sites that are just a single large Silverlight control; though - being a 'Softie - I guess I do tend to trust Silverlight rather more than other animation technologies. Well, marginally more - I'm still a paranoid neurotic. You know what they say: "Just because you're paranoid doesn't mean they aren't watching you."

    So, getting back to the main thrust of this diatribe, can I get to the point where jQuery is my newest live-in lover? I have to say that initial impressions were less than favorable; though this was probably a combination of the fact that I've almost forgotten how to use JavaScript, and that much of the documentation I found on the Web seems to assume you are either an idiot or already a jQuery expert.

    There's plenty of API information, and plenty of blogs that provide just enough to not quite get things working. As an example, I'm using the load method to reload a partial section of a page into a div element, and I want to change the mouse pointer to a "wait" cursor and display an indeterminate "Please wait..." image while it loads. The docs say I can flip the cursor for the page using the css method of the element that holds the partial page content (though they don't mention that it doesn't work on hyperlinks within it). And that I can display my hidden div containing the loading image using the element's show method.

    But, of course, figuring out this is the easy part. Simply calling the methods one after the other to change the mouse pointer, show the image, load the new content, hide the image, and change the mouse pointer back again doesn't work. Instead, you have to chain the method calls so that they only execute after the previous one has completed - mainly, of course, because the load method is asynchronous. The "getting started" docs hint at all this without actually using the dreaded "a" word; while the "real programmer" docs are full of barely comprehensible tips such as "CSS key changes are not executed within the queue."

    The trick is to realize that all of the methods (at least all the ones I've found so far) take a callback function as the final parameter. That's when the aging gray cells slowly spluttered into life and I remembered how we used to use the setInterval method of the window object to execute a delay during some hand-crafted animation in JavaScript. You gave it the name of another function to execute after the delay, and your code ended up as a mass of functions calling other functions after they finish what they're doing (we called it "Dynamic HTML" in those days). It usually required only a couple of hundred lines of JavaScript, and generally no more than a week to debug using alert dialogs, and get it all working properly.

    Of course, these days, asynchronous programming is a common scenario, so I'm a bit surprised that the docs don't just bite the bullet and use the "a" word from the start. But I guess there's another issue as well: no programmer with any remaining shred of pride would use separate callback functions. You wouldn't dare let anyone see your code if it didn't use lambda expressions for callbacks - even if you are still a bit frightened by them. Let's face it, finding syntax errors and debugging statements that cover twenty lines and end with a dozen closing curly and round brackets is not a procedure designed to aid mental stability or promote a restful programming experience. Especially when the typical error message is just the amazingly useless phrase "Object expected". So maybe the documentation people want to avoid using the "l" word as well...

    But the great thing is that, once you grasp the facts about the unmentionable "a" and "l" factors, it all starts to make sense and even - dare I say it - seems easy. Compared to the effort of doing the same in pure JavaScript, jQuery is starting to look like a distinctly attractive lifetime partner; even if it's really just a library that hides the complex stuff underneath a layer of not quite so complex stuff. And it may even help you get to be less frightened of lambda expressions.

    Though what I still can't figure is why, when not so long ago everyone was decrying the eye candy proliferation of scrolling text, sliding sections, and animated content in web pages, everywhere I look now has fancy jQuery effects that often seem designed to be as annoying as possible. And, of course, why we're still using JavaScript more than fifteen years after most people realized it was a rather nasty technology that could never last...


  • Writing ... or Just Practicing?

    Blocking Malware Domains in ISA 2006


    As in many households, several regular and occasional computer users take advantage of my connection to the outside world. I use ISA Server 2006 running as a virtual Hyper-V instance for firewalling and connection management (I'm not brave enough to upgrade to Forefront yet), and all incoming ports are firmly closed. But these days the risk of picking up some nasty infection is just as great from mistaken actions by users inside the network as from the proliferation of malware distributors outside.

    Even when running under limited-permission Windows accounts and with reasonably strict security settings on the browsers, there is a chance that less informed users may allow a gremlin in. So I decided some while ago to implement additional security by blocking all access to known malware sites. I know that the web browser does this to some extent, but I figured that some additional layer of protection - plus logging and alerts in ISA - would be a useful precaution. So far it seems to have worked well, with thankfully only occasional warnings that a request was blocked (and mostly to an ad server site).

    The problem is: where do you get a list of malware sites? After some searching, I found the Malware Domain Blocklist site. They publish malware site lists in a range of formats aimed at DNS server management and use in your hosts file. However, they also provide a simple text list of domains called JustDomains.txt that is easy to use in a proxy server or ISA. Blocking all web requests for the listed domains provides some additional protection against ingress, and against the effects of any malware that does inadvertently find its way onto a machine; and you will see the blocked requests in your log files.

    They don't charge for the malware domain lists, but if you decide to use them please do as I did and make a reasonable donation. Also be aware that malware that connects using an IP address instead of a domain name will not be blocked when you use just domain name lists.

    To set it up in ISA 2006, you need the domain list file to be in the appropriate ISA-specific format. It's not available in this format, but a simple utility will convert it. You can download a free command line utility I threw together (the Visual Studio 2010 source project is included so you can check the code and recompile it yourself if you wish). It takes the text file containing the list of malware domain names and generates the required XML import file for ISA 2006 using a template. There's a sample supplied but you'll need to export your own configuration from the target node and edit that to create a suitable template for your system. You can also use a template to generate a file in any other format you might need.
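
    In outline, the conversion is simple. Here's a rough sketch of the idea (this isn't the downloadable utility itself, and the file names and the {DOMAIN} placeholder are invented for illustration): read the list of domains and expand a per-domain fragment of your exported template into the output file, topped and tailed by the rest of the exported XML.

        using System;
        using System.IO;
        using System.Text;

        class MakeImportFile
        {
            static void Main()
            {
                // Illustrative file names; the item template contains a {DOMAIN} placeholder.
                string[] domains = File.ReadAllLines("JustDomains.txt");
                string header = File.ReadAllText("template-header.xml");
                string item = File.ReadAllText("template-item.xml");
                string footer = File.ReadAllText("template-footer.xml");

                int count = 0;
                StringBuilder output = new StringBuilder(header);
                foreach (string line in domains)
                {
                    string domain = line.Trim();
                    if (domain.Length == 0) continue;                   // skip blank lines
                    output.Append(item.Replace("{DOMAIN}", domain));    // one entry per domain
                    count++;
                }
                output.Append(footer);

                File.WriteAllText("MalwareDomains.xml", output.ToString());
                Console.WriteLine("Wrote an import file containing {0} entries.", count);
            }
        }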

    ISA 2006 Toolbox

    To configure ISA, open the Toolbox list, find the Domain Name Sets node, right-click, and select New Domain Name Set. Call it something like "Malware Domains". Click Add and add a dummy temporary domain name to it, then click OK. The dummy domain will be removed when you import the list of actual domain names. Then right-click on your new Malware Domains node, click Export Selected, and save the file as your template for this node. Edit it to insert the placeholders the utility requires for injecting the domain names, as described in the readme file and the sample template provided.

    Malware Domains List

    After you generate your import file, right-click on your Malware Domains node, click Import to Selected, and locate the import file you just created from the list of domain names. Click Next, specify not to import server-specific information, and then click Finish. Open your Malware Domains set from the Toolbox and you should see the list of several thousand domain names.

     Now you can configure a firewall rule for the new domain name set. Right-click the Firewall Policy node in the main ISA tree view and click New Rule. Call it something recognizable such as "Malware Domains". In the Action tab select Deny and turn on the Log requests matching this rule option. In the Protocols tab, select All outbound traffic. In the From tab, click Add and add all of your local and internal networks. In the To tab click Add and add your Malware Domains domain name set. In the Content Types tab, select All content types. In the Users tab select All users, and in the Schedule tab select Always. Then click OK, click Apply in the main ISA window, and move the rule to the top of the list of rules.

    ISA Block Rule

    You can test your new rule by temporarily adding a dummy domain to the Domain Name Set list and trying to navigate to it. You should see the ISA server page indicating that the domain is blocked.

    If you wish, you can create a list of IP addresses of malware domains and add this set to your blocking rule as well so that malware requests that use an IP address instead of a domain name are also blocked. The utility can resolve each of the domain names in the input list and create a file suitable for importing into a Computer Set in ISA 2006. The process for creating the Computer Set and the template is the same as for the Domain Name Set, except you need to inject the domain name and IP address of each item into your import file. Again, a sample template that demonstrates this is included, but you must create your own version as described above.

    Be aware that some domains may resolve to internal or loopback addresses, which may affect operation of your network if blocked. The utility attempts to recognize these and remove them from the resolved IP address list, but use this feature with care and check the resolved IP addresses before applying a blocking rule.
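
    Resolving the names is equally unexciting in principle. In case you want to roll your own version of that step, here's a rough sketch of the approach (again, not the actual utility, and the file names are illustrative): look up each domain with the Dns class and keep only the public IPv4 addresses, discarding loopback and the common private ranges.

        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;

        class ResolveDomains
        {
            static void Main()
            {
                List<string> addresses = new List<string>();
                foreach (string line in File.ReadAllLines("JustDomains.txt"))
                {
                    string domain = line.Trim();
                    if (domain.Length == 0) continue;
                    try
                    {
                        foreach (IPAddress ip in Dns.GetHostAddresses(domain))
                        {
                            // Blocking loopback or private ranges could break your own
                            // network, so keep only public IPv4 addresses.
                            if (ip.AddressFamily == AddressFamily.InterNetwork && !IsPrivate(ip))
                            {
                                addresses.Add(ip.ToString());
                            }
                        }
                    }
                    catch (SocketException)
                    {
                        // Many of the listed domains no longer resolve; just move on.
                    }
                }
                File.WriteAllLines("MalwareAddresses.txt", addresses.ToArray());
            }

            static bool IsPrivate(IPAddress ip)
            {
                byte[] b = ip.GetAddressBytes();
                return IPAddress.IsLoopback(ip)
                    || b[0] == 10
                    || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)
                    || (b[0] == 192 && b[1] == 168);
            }
        }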

    Another issue is the time it takes to perform resolution of every domain name, and investigations undertaken here suggest that only about one third of them actually have a mapped IP address. You'll need to decide if it's worth the effort, but you can choose to have the utility cache resolved IP addresses to save time and bandwidth resolving them all again (though this can result in stale entries). If you do create a Computer Set, you simply add it to the list in the To tab of your blocking rule along with your Domain Name Set. Of course, you need to regularly update the lists in ISA, but this just involves downloading the new list, creating the import file(s), and importing them into your existing Domain Name Set and Computer Set nodes in ISA.

  • Writing ... or Just Practicing?

    Get Your Hands On Our Labs


    I suppose I could start with a bad pun by saying that you've had your hols, and now it's time for some HOLs instead! Of course, this assumes you understand that "hols" is English shorthand for "holidays" and that "HOLs" is IT guidance shorthand for "Hands-On-Labs" (I wonder if people in the US have a shorthand word for "vacations", or if - seeing they are typed rather than written by hand these days - that should be a shorttype word, or even just a shortword).

    Anyway, enough rambling; where I'm going with all this is that p&p just released the Hands-On-Labs for Windows Phone 7. This set of practical exercises and associated guidance is part of the patterns & practices Windows Phone 7 Developer Guide project, which resulted in a book (also available online) and associated guidance aimed at helping developers to create business applications for Windows Phone 7.

    As part of this guidance, the team released a sample application that runs in Windows Azure and is consumed by Windows Phone 7. The application is based on a fictional corporation named Tailspin that carries out surveys for other companies by distributing them to consumers and then collating the results. It's a fairly complex application, but demonstrates a great many features of both Azure and Windows Phone 7. The Azure part came from a previous associated project (patterns & practices Windows Azure Guidance) with some added support for networking with the phone. The bits that run on the phone are (obviously) all new, and demonstrate a whole raft of capabilities including using the camera and microphone to collect picture and voice answers to questions.

    What's been interesting to discover while working on the HOLs is how the design and structure of the application really do make it much easier to adapt and maintain over time. When some of the reviewers first saw the mobile client application for the Windows Phone 7 Developer Guide project, they commented that it seemed over-engineered and over-complicated. It uses patterns such as Dependency Injection and Service Locator, as well as a strict implementation of the Model-View-ViewModel (MVVM) pattern that Silverlight developers love so much.

    So, yes, it is complex. There is no (or only a tiny bit of) code-behind, and instead all interaction is handled through the view models. It uses components from the Prism Library for Windows Phone (see patterns & practices: Prism) to implement some of this interaction, such as wiring up application bar buttons and displaying non-interruptive notifications. View models are instantiated through a view model locator class, allowing them to be bound to a view but created through the dependency injection container. Likewise, services are instantiated and their lifetime managed by the container, and references to these services are injected into the constructor parameters of the view models and other types that require them. Changes to the way that the application works simply require a change to the container registrations.
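
    To give a flavor of what that means in code (this is a much-simplified illustration rather than the actual Tailspin source, and all of the type names are invented), a view model declares the services it needs as constructor parameters, and the view binds to a property of a locator class instead of creating its view model directly:

        public interface ISurveyService
        {
            string[] GetSurveyTitles();
        }

        public class SurveyService : ISurveyService
        {
            public string[] GetSurveyTitles()
            {
                return new string[] { "Sample survey" };
            }
        }

        public class SurveyListViewModel
        {
            private readonly ISurveyService surveys;

            // The service arrives through constructor injection rather than being
            // created here, so a different implementation can be substituted by
            // changing only the registration in the container or locator.
            public SurveyListViewModel(ISurveyService surveys)
            {
                this.surveys = surveys;
            }

            public string[] Titles
            {
                get { return surveys.GetSurveyTitles(); }
            }
        }

        // The view binds its DataContext to a property of the locator, so view models
        // are never new-ed up in code-behind. A real container would also manage the
        // lifetime of the services; a single static instance stands in for that here.
        public class ViewModelLocator
        {
            private static readonly ISurveyService surveyService = new SurveyService();

            public SurveyListViewModel SurveyList
            {
                get { return new SurveyListViewModel(surveyService); }
            }
        }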

    In addition, there are custom types that wrap and abstract basic tasks such as storing data and application settings, managing activation and deactivation, and synchronizing data with the remote services. There is a lot in there to understand, and one of the tasks for the Hands-On-Labs was to walk developers through the important parts of the process, and even try to justify the complexity.

    But where it really became clear that the architecture and design of the application was "right" was when we came to plan the practical step-by-step tasks for the Hands-On-Labs. We wanted to do something more than just "Hello Mobile World", and planned to use a modified version of the example Tailspin Surveys application as the basis for the user's tasks. So I asked the wizard dev guy (hi Jose) who was tasked with making it all work if we could do some really complicated things to extend and modify the application (bearing in mind that each task should take the user less than 30 minutes to complete):

    Me: Can we add a new UI page to the application and integrate it with the DI and service locator mechanism?
    Jose: Easy. We can do the whole MVVM thing including integration with the DI mechanism and application bar buttons in a few steps.

    Me: OK, so how about adding a completely new question type to the application, which uses a new and different UI control? And getting the answer posted to the service?
    Jose: No problem. The design of the application allows you to extend it with new question types, new question views, and new question view models. They'll all seamlessly integrate with the existing processes and services. Twenty minutes max.

    Me: Wow. So how about changing the wire format of the submitted data, and perhaps encrypting it as well?
    Jose: Yep, do that simply by adding a new implementation of the proxy. Everything else will just work with it.

    As you can see, good design pays dividends when you get some awkward customer like me poking about with the application and wanting it to do something different. It's a good thing real customers aren't like that...

  • Writing ... or Just Practicing?

    Can An Arial Attack Produce High Calibri Guidance?


    If an article I read in the paper this week is correct, you need to immediately uninstall Arial, Verdana, Calibri, and Tahoma fonts from your computer; and instead use Comic Sans, Boldini, Old English, Magneto, Rage Italic, or one of those semi-indecipherable handwriting-style script fonts for all of your documents. According to experts, it will also be advantageous to turn off the spelling checker; and endeavour to include plenty of unfamiliar words and a sprinkling of tortuous grammatical constructs.

    It seems researchers funded by Princeton University have discovered that people are 14% less likely to remember things they read when they are written in a clean and easy-to-read font and use a simple grammatical style. By making material "harder to read and understand" they say you can "improve long term learning and retention." In particular, they suggest, reading anything on screen - especially on a device such as a Kindle or Sony Reader that provides a relaxing and easy to read display - makes that content instantly forgettable. In contrast, reading hand-written text, text in a non-standard font, and text that is difficult to parse and comprehend provides a more challenging experience that forces the brain to remember the content.

    There's a lot more in the article about frontal lobes and dorsal fins (or something like that) to explain the mechanics of the process. As they say in the trade, "here comes the science bit". Unfortunately it was printed in a nice clear Times Roman font using unchallenging sentence structure and grammar, so I've forgotten most of what it said. Obviously the writer didn't practice what they preached.

    But this is an interesting finding. I can't argue with the bit about stuff you read on screen being instantly forgettable. After all, I write a blog that definitely proves it - nobody I speak to can remember what I wrote about last week (though there's probably plenty of other explanations for that). However, there have been several major studies that show readers skip around and don't concentrate when reading text on the Web, often jumping from one page to another without taking in the majority of the content. It's something to do with the format of the page, the instant availability, and the fundamental nature of hyperlinked text that encourages exploration; whereas printed text on paper is a controlled, straight line, consecutive reading process.

    From my own experience with the user manual for our new all-singing, all-dancing mobile phones, I can only concur. I was getting nowhere trying to figure out how to configure all of the huge range of options and settings for mail, messaging, synchronization, contacts, and more despite having the laptop next to me with the online user manual open. Instead, I ended up printing out all 200 pages in booklet form and binding them with old bits of string into something that is nothing like a proper manual - but is ten times more useful.

    And I always find that proof-reading my own documents on screen is never as successful as when I print them out and sit down in a comfy chair, red pen in hand, to read them. Here at p&p we are actively increasing the amount of guidance content that we publish as real books so that developers and software architects can do the same (red pen optional). The additional requirements and processes required for hard-copy printed materials (such as graphic artists, indexers, additional proof readers, layout, and the nagging realization that you only have one chance to get it right) also seem to hone the material to an even finer degree.

    So what about the findings of those University boffins? Is all this effort to get the content polished to perfection and printed in beautifully laid out text actually reducing its usefulness or memorability? We go to great lengths to make our content easy to assimilate, using language and phrasing defined in our own style manuals and passing it through multiple rigorous editing processes. Would it be better if we just tossed it together, didn't bother with any editing, and then photo-copied it using a machine that's nearly run out of toner? It should, in theory, produce a more useful product that you'd remember reading - though perhaps not for the desired reasons.

    Taking an excerpt from some recent guidance I've created, let's see if it works. I wrote "You can apply styles directly within the HTML of each page, either in a style section in the head section of the page or by applying the class attribute to individual elements within the page content." However, before I went back and read through and edited it, it could well have said something like "The style section in the head section, or by decorating individual elements with the class attribute in the page, can be used to apply styles within the HTML or head of each page within the page content."

    Is the second version likely to be more memorable? I know that my editor would suspect I'd acquired a drug habit or finally gone (even more) senile if I submitted the second version. She'd soon have it polished up and looking more like the first one. And, no doubt, apply one of the standard "specially chosen to be easily readable" fonts and styles to it - making readers less likely to recall the factual content it contains five minutes after they've read it.

    But perhaps a typical example of the way that convoluted grammar and structure can make content more memorable is the well-known phrase taken from a mythical motor insurance claim form: "I was driving past the hole that the men were digging at over fifty miles per hour." So that could be the answer. Sentences that look right, but then cause one of those "Hmmm .... did I actually read that right?" moments.

    At the end of the article, the writer mentioned that he asked Amazon for their thoughts on the research in terms of its impact on Kindle usage, but they were "unable to comment". Perhaps he sent the request in a nice big Arial font, and the press guy at Amazon immediately forgot he'd read it...

  • Writing ... or Just Practicing?

    Does Web Matrix Have the Razor's Edge?


    Perhaps I can blame the Christmas spirit (both ethereal and in liquid form) for the fact that I seem to have unwarily drifted out of the warm and fuzzy confines of p&p, and into the stark and unfamiliar world of our EPX parent. A bit like a rabbit caught in the headlights, I guess. I keep looking round to see what's different from the more familiar world I'm used to, but - rather disconcertingly - it all seems to be much the same. I'm even anticipating somebody telling me what "EPX" actually stands for...

    It turns out that the reason I've virtually and temporarily wandered north from Building 5 to some other (as yet unidentified) location on campus is to work on introductory guidance for new-to-the-web developers, based around the new version of Web Matrix. Of course, as I actually work from home here in Ye Olde England I'm not, physically, going anywhere - so the lack of building number precision is not a problem. And some would even say that, at my stage of life, I'm probably not "going anywhere" anyway.

    Still, if you are a fan of classic rock music, you'll no doubt be familiar with the Australian band AC/DC. I reckon their 1990 comeback album "The Razor's Edge" is their best work, and their greatest public appearance must be the Toronto Rocks concert in 2003. So what about Web Matrix? Coincidentally, in the same year as the Toronto Rocks concert, I co-wrote a book called "Beginning Dynamic Websites with ASP.NET Web Matrix". Unfortunately, I can't find any corresponding event from 1990 that I can stretch into an allegory with Web Matrix. But I can confirm that the greatest public appearance for a brand new web development technology, currently code-named "Razor", is its on-stage performance within the latest version of Web Matrix.

    OK, so perhaps I need to temper the enthusiasm a little, especially the use of contentious statements such as "brand new". In fact, at first, I rather tended to agree with a comment from someone on a blog discussing Razor when they said "You must be joking - are we going back to 1997?" So what is Razor? Quoting from the Web Matrix site, it is "the new inline syntax for ASP.NET Web pages". Just when I thought that we'd seen the back of inline code in web pages, and that everyone and their dog would be using MVC, MVVM, and other similar patterns - where even code-behind is considered a heinous crime against technology (if not humanity).

    And there's more. What about the new Text Template Transformation Toolkit (T4) stuff in Visual Studio 2010? I began my web development career with that wonderful templating technology named IDC (the Internet Database Connector), which was the forerunner to Denali/Active Server Pages 1.0. In those days, we thought that templates and inline code were the height of sophistication, and even went to conferences in T-shirts bearing the slogan <%=life%> to prove that we knew how to get a life.

    Of course, the world (and inline coding with VBScript and JScript) changed as Active Server Pages gradually morphed into COM+2 (I still have the set of engraved poker chips), then into ASP+ (I still have the baseball cap), and then - at PDC in Orlando in 2000 - into ASP.NET. And web developers gradually morphed from script kiddies into real developers. We even started to learn proper languages such as Visual Basic and C#.

    So it was with some initial trepidation that I started to delve into the wonders of Razor and Web Matrix again. Yes, the version I've been using so far is the beta, but some of the limitations of the IDE and quirks of the syntax are annoying. Yet, within a couple of weeks the team (or, to be more accurate, Christof and I) have completed a draft of the first stage of the guidance, and the beginnings of what will (when complete) be a fairly comprehensive sample application. It's based on one of the Web Matrix templates, and uses jQuery within the UI to provide some snazzy sliding/expanding sections.

    Web Matrix Onboarding Guidance Example

    What's clear is that, even though I still tend to gravitate towards the pattern-based approach drilled into me after some years at p&p, Web Matrix and Razor really do make web development easy and quick for beginners. Yes, we were doing sliding and expanding UI stuff with reams of JScript and VBScript in 1997 in the book "Instant Dynamic HTML" for IE4 and Netscape Communicator (still available on Amazon!), and inline code with proper programming languages in 2002 ("Professional ASP.NET 1.0"). And then, of course, with tools such as the original Web Matrix and - later - Visual Web Developer (VWD); all the time moving further and further away from inline code and towards using a wider range of increasingly complex server-side controls.

    Maybe it's just a reflection of the fact that, as the complexity of our programming technologies increases, the bar to entry and the steep learning curve may tend to inhibit newcomers to the field. Perhaps this is why PHP (also supported in Web Matrix) continues to be a significant web development technology. In the 90's we worried that inline code (especially script code) was inefficient, and searched for new ways to improve performance. Yet even the fully compiled approach of Visual Studio Web Classes (remember them?) failed to solve the issues. I can remember a web site built using these that required the web server to load a 750KB assembly to generate every page. You were lucky to get a response within 2 minutes some days as the server struggled to shuffle memory around and generate some output for IIS to deliver.

    But I suppose that the power and available resources in modern servers means the clever technologies that lie behind Razor can generate pages without even blinking. In fact, I've long been of the opinion that the computer should be able to do all of the really difficult stuff and let us write the minimum code to specify what we want, without having to define every single aspect of how it should do it. Maybe, in terms of web development, Web Matrix is another step down that long and winding road...

  • Writing ... or Just Practicing?



    A few weeks ago I was trying to justify why software architects and developers, instead of politicians, should govern the world. Coincidentally, I watched one of those programs about huge construction projects on TV this week, and it brought home even more the astonishing way that everything these days depends on computers and software. Even huge mechanical machines that seem to defy the realms of possibility.

    In the program, P&H Mining Equipment was building a gigantic mechanical excavator. Much of the program focused on the huge tracks, the 100 ton main chassis, machining the massive gears for the transmission system, and erecting the jib that was as tall as a 20-storey building. Every component was incredibly solid and heavy, and it took almost superhuman effort to assemble (though you can't help wondering how much of the drama was down to judicious editing of the video).

    However, according to the program the new excavator contains brand new computing technology that allows it to dig faster and avoid stalling in heavy conditions, and frees the operator from a raft of tasks and responsibilities (though they did manage to avoid using the phrase "as easy to drive as a family car"). Every part of the process of driving and digging is controlled by computers through miles of cable and hundreds of junction boxes and power distribution systems. It even automatically finds the truck it's supposed to be tipping into. I don't know about you, but I wouldn't want to be sitting in the truck when it automatically detects where I am and tips several hundred tons of rock. You never know if the operator has chosen that moment to switch it to auto-pilot (or auto-dig) and wandered off for lunch.

    And then later on, when it came time to test it and nothing seemed to work, there was no sign of a gang of oil-spattered brute force workmen - just a guy in shirt sleeves with a laptop computer, and a couple of electricians. Getting it to finally work just required a geek to edit a single line of code in the central operating system. I guess it's a lot more satisfying when a successful debugging session results in some mammoth lump of machinery suddenly rumbling into action, compared to just getting a "Pass" message in your test runner.

    Yet Ransomes & Rapier here in England built an equally huge excavator named Sundew way back in 1957, which you have to assume contained nothing even remotely resembling a computer. And it worked until 1984, including walking across country from one open cast mine to another 18 miles away (a journey that took nearly three months). I wonder if, in 40 years' time, there will still be somebody around who knows how to debug programs in whatever language they used for the new excavator operating system. Or if, in the meantime, someone will figure out a way to hack into it through Wi-Fi and make it run amok digging huge holes all over the place.

    And do they have to connect it to the 'Net once a month to download the latest patches...?

  • Writing ... or Just Practicing?

    The New Charlatans


    It's customary to imagine that the most unpopular professions in modern society are solicitors and estate agents (in the US, I guess the equivalent is lawyers and realtors; though I can't testify to their level of popularity from here). However, I reckon that the growing use and capabilities of mobile phones have paved the way for a whole new group of industrial charlatans. Aided, no doubt, by the possibilities offered by computerization and automation - something for which we, as developers, are partly responsible.

    If you've tried to buy one of those fancy new smartphones recently, you've no doubt encountered some of the ingenious ways that the mobile phone service companies (the people who connect your amazing new device to the outside world) strive to confuse, obscure, bewilder, complicate, and generally befuddle us. It should be really easy: choose a phone, and then decide how much you want to pay each month for the specific matrix of services and allowances that best suit your requirements and usage patterns.

    But, of course, it's not. You can choose to get the phone for free and pay a higher rate per month, or pay a bit towards the phone and pay a slightly lower monthly fee, or buy the phone outright and choose a package at a much lower price. In theory, if you do the "free phone" thing with a fixed-term contract, you'd think that the phone would be their responsibility throughout the life of the contract, just like when you rent a car or some other item. But that's not the case - if the phone goes wrong, it's your problem unless you pay extra for insurance. Yet you still have to pay the rest of the rental for the contract term.

    It wouldn't be so bad, but all of the comparison web sites show that the free phone deals actually cost more over the contract term than buying the phone yourself. And, of course, the terms and conditions clearly state that the monthly fee and inclusive allowances for the contract can change during the term. And you can't replace or upgrade your phone until the end of the contract period either. That is, of course, if they can actually provide the phone and contract that you agreed to buy. Having spent three weeks wandering between delivery depots during the bad weather, my long-awaited Windows Phone finally arrived. Or rather, a package that I didn't order (and is, as you'd expect, a lot more expensive) arrived. Can they just "change it on the computer" to the one I ordered? No, it has to go back to them. Oh well, I suppose I can't actually go anywhere in this weather, so I don't really need a phone...

    However, my wife (an intrepid traveller) just decided to equip herself with a new phone. She couldn't find anything like the old Motorola V8 that she loved so much, and instead went for a rather nice HTC Desire that does all the modern bells and whistles stuff. And rather than mess about trying to figure out which service package she needed, and switch her number from our current provider, she just bought the phone outright to use with the SIM-only package we already have. Quick, easy, and pain free, you'd think? No chance. First off, when you buy a "phone-only" package you have to, as the sales guy so eloquently explained, "have it on pay and go". That's what we want - pay for the phone and go home. Oh no, what it means is that you have to have a "pay and go" SIM card with it, and you have to buy at least ten pounds of top-up to get the SIM card for free. Despite the fact that we don't want or need a SIM card (and didn't actually get one), we still had to pay the ten-pound charge for a "pay and go top-up" that we can't actually use.

    The service package we have has a "Web Daily" inclusive feature, which - according to the web site blurb - "allows occasional access to data services for email and web browsing without paying for a fixed data allowance". Yes it does, but at 3 pounds ($ 4.50) per MB. OK, so the maximum charge per day is capped at one pound, but as you'll obviously access it every day when the phone automatically synchronizes your email, that's starting to look like an expensive deal. No problem: for five pounds per month you can add a data service to your package that has a 500 MB allowance. And you can add it over the web without having to listen to Greensleeves (or the latest hit from some unknown boy band) on the phone for half an hour.

    I can understand that, when you add the data package, it removes the free "Web Daily" facility. However, it also removes the "Free Calls to Landlines" facility (where calls to non-mobile numbers are not subtracted from your "free minutes" allowance). Nobody I spoke to at the supplier can explain why paying more for the service means that you get less from it. Perhaps they imagine that everyone will send emails through the data package to people who they previously used to call, so they won't need to make non-mobile calls any more. Or maybe it's just another surreptitious way for them to make a bit more profit.

    However, having got it all working, I'd have to admit that I'm amazed at what the phone can do. It took only a marginal effort to get it to work with our Wi-Fi, and talk to the hosted Exchange Server we use for email. It even synchronizes contacts with Outlook, and lets you upload and download music and photos from the phone as though it was a disk drive. As it contains an SD card, I suppose I shouldn't be surprised, but it's a refreshing change after fighting with the awful synchronization application that Motorola provided for the V3 (and which doesn't run on Windows Vista or Windows 7).

    But, best of all, it actually sets the date and time automatically. My old Motorola V3 has never once managed that, so every time I turn it on it shows the date and time I last used it (which is sometimes a month or more ago). I wonder if I can justify buying a new phone just so I don't have to set the date and time manually...?
