As I'm perusing various blogs, I'm reminded of two things: the BBS, and “Snow Crash”.

The first reminds me of my Commodore PET and my 300 baud modem.  Of course, FidoNet was also operating at the time.  But it was basically a “community” built around various computing subjects.  I even ran my own BBS for a couple of years.  “Snow Crash” is communities gone wild, as Snoop Dogg might say.  Or it's “The Matrix”: the ultimate virtual reality “community”.  Someday I expect .Text to turn into .Avatar or some such thing.  Of course, when Longhorn comes out, 3D graphics will be the name of the game, and avatars will make for cool demos.

People a generation or so older than me probably look at all this XML, Xen, and C# stuff and think: yah, back in the day we had COBOL, Ada, and whatnot; these kids are just doing it all over again.

This leads me to a nagging thought I've been having over the past couple of years, and it is this: in the computing industry, we are largely repeating ourselves.  We're doing it faster, at larger scale, and probably with more bugs, but we're essentially repeating ourselves.  What haven't we done yet?

We haven't broken out of our John von Neumann-enhanced Turing stupor.  We've been striving mightily to make computers and programs that largely operate on the assumption that things will go the way you expect them to go (please don't tell me about error handling; it's the code that's most ignored).  We also assume that we can maintain complete models of operation in our heads while we program.

I think the world has changed.  When I was programming my Commodore PET, I don't think I had much more than 8K of RAM to work with.  Later I had a Commodore 64, with 64K of RAM.  On the 8K machine, I could almost memorize the entirety of a typical program.  I knew exactly what was going on, from every chip in the machine to every individual instruction in each clock cycle.  Barring cosmic rays and the occasional spilled beverage (or cat) on the keyboard, things worked exactly as expected.  No internet connection, no multi-tasking or threading, no wayward memory protection boundaries to be violated.

Enter the computer renaissance: an explosion of exponentially growing power, and with that power, a similar growth in complexity.

A typical developer machine now has 1 or 2 GB of RAM, a 3 GHz main CPU, a GPU, Ethernet, disk drives and controllers, a PCI bus, and a ton of other stuff.  Each of those little subsystems has a complete industry of “standards” to go with it, and they're all supposed to work together.  And that's just the hardware.  Enter the poor programmer.  The environment is anything but certain.  In fact, it's way more uncertain than it was 20 years ago.

At Microsoft, we have what we call the “Trustworthy Computing” initiative.  Noble goals indeed: make computing more secure.  We try our best, but I fear our best will not be nearly enough.  We are fighting against the laws of chaos, and we're not using the right weapons.  In fact, I don't think the weapons have been invented yet.

I've tried to imagine what Trustworthy means to me as a programmer, and I find myself dissecting the words looking for deeper meaning.  “Trust”, at least between humans, implies some mutual understanding, implied or explicit contracts, and shared goals.  It is also borne out by a history of repeated delivery on promises.  I trust you will deliver the paper tomorrow because I pay you to deliver the paper.  You have delivered it faithfully for the past four years, and even when you were ill you found someone to take your place, and I got my paper.  I trust you will do the right thing because we have a contract between us, and we both agree on a level of performance that is implied in that explicit contract.

That's what's missing.  I have no real contract with my computer.  I turn it on, I tell it what to do, I turn it off.  There is no penalty for it not doing what I want it to do.  There is no reward for it doing what I want.  I turn it off, I reformat, I replace hard disks and memory chips, and so what?  It doesn't care.  It isn't even trying to do its best.  It's just doing what I told it to do.  There's no desire, passion, care, nothing.

How can I trust such a thing?  I can assume, and reassure myself, that statistically it is likely to do what I want a certain percentage of the time.  But is that trustworthy?  Perhaps.  I'd say it's more about “reliable computing”.

In order to fight the chaos of an ever expanding computing universe, I want a new way of “programming”.  I don't want programming at all.  I want a conversation and a relationship with a piece of machinery.  I want to impart my goals, objectives, and desires, and have the machine “want” to help me achieve my goals.  I want it to be able to respond on its own to bad situations.  I want it to be able to call on other resources when it's sick with a virus, and not leave me hanging.  I don't want to tell it how to do all these things; I want it to be equipped with enough reasoning to come up with solutions on its own.  I want it to be resourceful, and call on others when necessary.

Is this an old idea?  Yah, of course it is.  This one gets filed under AI all the time.  Will it come to pass eventually?  Yes, of course it will.  Another thing that is old, and new again, is the human ability to innovate.  Just when you think everything has been invented, our collective consciousness comes up with a breakthrough, and another golden age emerges.

Between the BBS, “Snow Crash”, “The Diamond Age”, “Assemblers of Infinity”, and weblogs, we're sure to find some new perspectives on old ideas, and the world of computing will change.