For several decades after the invention of the computer, the dominant constraint was processor cost. While mainframes had, in relative terms, impressive storage and local I/O capacity, they were primarily designed to do useful work on every clock cycle. Indeed, the humans running around mounting and unmounting tapes according to instructions in a job control stream were trying to keep the processor from ever executing a single, very expensive NOP.


It was very Tron.


With the advent of minicomputers, computers became much more useful to people. The economics of cheaper processors and storage meant that a department-level manager could buy a computer rather than waiting for access to corporate iron. Multi-tasking operating systems, in conjunction with terminals (the precursors to networks), began the shift from technology designed to keep the machine busy to technology that enabled interactivity in the human computing experience. The economics still required people to share a processor and storage (and experts).


Workstations were an early version of what would eventually become the personal computer. The workstation was designed exclusively to make the individual using it as personally productive as possible, by supplying processing, graphics, local storage and then network storage. The magic of the workstation was that the era of optimizing for the machine was coming to an end. I believe it was the productivity of the workstation that let developers experiment with and refine networking and its associated metaphors and practices (virtualized peripherals, remote shells, RPC, collaborative / distributed development, etc.), and that this paved the way for the Internet itself.


The personal computer again fundamentally changed the economics of computing. While the first PCs were a step backwards in functionality relative to the workstation, the price was right and getting more right with each passing year. The first 10 years of the personal computer were driven by the novelty of rapidly cheaper computing and the search for uses for it. Later, the PC would combine workstation-class operating system sophistication with a TCP/IP stack and affordability, and give birth to the modern Internet.


It’s interesting to consider that it was roughly 15 years ago that we were debating the merits of adding, or not adding, TCP/IP and Winsock to Windows 95.


The next opportunity that affordable personal computing created was access to information. The industry got a few years out of enabling people to type their recipes in WYSIWYG and then print them on newly affordable “graphics” printers. But we wanted more. We wanted to know things. We didn’t know it at the time, but there was a pent-up demand to publish. Encyclopedias, universities, newspapers and television were still in control of knowledge and its distribution.


The next economic constraint was the cost of storage. I remember when we were generally happy if a new operating system install took less than a quarter of the hard drive, and data loss due to drive failure was an ordinary part of the personal computing experience. Thankfully, the term “head crash” has disappeared from our jargon completely.


CD-ROMs started to show the promise of cheap storage. Usenet was showing us that people really wanted to be heard. And email was showing us the power of communication and collaboration within an intimate group controlled by individuals.


For the 10 years between 1990 and 2000, the capacity and reliability of everything associated with the PC increased at an amazing, accelerating rate, even as prices fell at an accelerating rate. By 1998 I had so much affordable, fast local storage and so many dirt-cheap processor cycles that I could render digital video in real time. At the time it wasn’t obvious, exactly, why this was better than an analog tape, but damn it was way cool. PVRs came later and showed us one reason it was cool.


“Overnight”, the default directions of everything in the industry started to change.


By 2002, we were beginning to take high-capacity, small, mechanically robust laptop hard drives for granted. Affordable desktop PCs had storage capacity and reliability undreamed of by workstations in their prime. Video games could assume large amounts of space. Solid-state storage was enabling portable devices to store music, photos and video. Indeed, the storage wave just kept going even as the clock speed wave crested and began to roll back. This was not conventional wisdom 10 years ago.


Of course, the biggest trend of the last 10 years has been the browser. The web’s march has been stunning: from simple access to a huge, barely “administered”, human-friendly namespace; to forms, catalogs and ecommerce; to simple publishing via hosted services and rich client authoring; to even simpler (if less rich) publishing via blogs and wikis; to highly interactive (but still not very rich) sites that are the anchor points of sharing, collaboration, publishing and discussion.


The web has received a lot of well-deserved credit for the ease of use enabled by its simple and clever networking and presentation layers. Indeed, the browser sets a new standard for “it just works” experiences, and technology that doesn’t meet that bar will not be adopted. As is often the case, the reality is that considerable effort on the part of developers goes into making the experience work and feel great on top of a platform that has, as we say, evolved over time.


The evolution of (cheap) storage has had an equally important impact on the web. The combination of the natural partitioning flexibility of the URL space, racks of (cheap) PCs and simple software redundancy has enabled the creation of immense, reliable-enough stores. Because these stores are “centralized”, it is feasible to do data mining, indexing and transformations on them. This is the engine behind search engines. As this technology becomes more approachable to more developers, it is powering more experiences on the web.
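
To make the partitioning idea concrete, here is a minimal sketch, in Python, of how hashing a URL can spread items across racks of cheap machines with a few redundant copies. The node names, the replica count and the toy hash scheme are all my own illustration, not the design of any particular search or storage system:

```python
import hashlib

# Toy illustration: hash a URL to pick an owner node, then keep copies on the
# next few nodes so the store stays "reliable enough" when individual cheap
# machines fail. All names here are hypothetical.

NODES = ["pc-%02d" % i for i in range(16)]  # imaginary rack of cheap PCs
REPLICAS = 3                                # copies kept per item

def nodes_for(url):
    """Map a URL to its owner node plus (REPLICAS - 1) successor nodes."""
    h = int(hashlib.md5(url.encode("utf-8")).hexdigest(), 16)
    start = h % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

# Any one of the three copies can serve a read; writes go to all three.
print(nodes_for("http://example.com/photos/2008/beach.jpg"))
```

The point is only that the URL itself carries enough structure to decide where data lives, which is part of what makes such stores straightforward to scale out and to mine.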


It’s tempting to assume that the web client-server model wins everything. But I don’t think so.


I love the web. I’m using it now. But I still spend more time using Windows applications. At the moment I’m listening to very high-quality, glitch-free music being streamed from my hard drive, through an iTunes client and into my headphones. I have a few folders open where I did some photo editing with Photoshop earlier today. I’m using Office to author this blog and enjoying best-in-class spelling and grammar checking. Messenger is open. I routinely use Premiere, Lightroom, Dreamweaver, Nero, and various scanning programs.


The industry’s newest, coolest, ease-of-use-standard-setting, cleverest-mobile-browser-ever-powered device, the iPhone, has local applications for text messaging, calendar, photo album, camera, YouTube, stocks, maps, weather, clock, calculator, notes, mail, music, the phone itself and, of course, the shell. The reason for this is that local applications can offer experiences that are highly tuned to the device’s capabilities and the preferences of the customers using them.


Of course, what is really happening is that the industry is moving toward the best of both worlds. Many of the local applications I describe above are invisibly connected to the web. This is the nature of what we’ve been calling Software + Services.


What doesn’t seem to have happened yet is much coherency when I want an experience that spans mobile, web and PC. Mobile devices and (typically one) PC work together over USB sync, and Facebook has done a decent job with their iPhone-targeted (browser) client, but for the most part I have to “pick” an app or a site or an operator, use it and accept another silo of (my) information. But since lots of them are truly unique and useful, I use lots of them. And I learn to context switch. A lot. I am forced to maintain a complex mental map of where my data is, how I get to it and how to move it around.


Even the web, “the alternative to complexity”, is becoming complex. Sites are competing to store your information, and as a side effect they aren’t motivated to let you move it across sites. I don’t think I could find all of the photos I have uploaded to the web if I wanted to.


I think that the Live Desktop and Windows Mesh client experiences (and more, coming …) are a visceral demonstration of the power of having a coherent (synchronized) view of all of your information from every endpoint. Of course, these are just the initial experiences we have built on the Live Mesh platform. I can imagine many more.
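
As a thought experiment on what a coherent (synchronized) view means mechanically, here is a toy Python sketch in which every endpoint converges on the same set of items via a newest-write-wins merge. This is purely my illustration; real synchronization, Live Mesh’s included, handles conflicts, partial replicas and feeds of changes, none of which appear here:

```python
# Each endpoint keeps a local map of {item_id: (timestamp, value)}. Merging
# the maps with "newest write wins" yields the same view everywhere. All
# names and the timestamp scheme are made up for illustration.

def merge(*replicas):
    """Combine per-endpoint item maps; the newest timestamp wins per item."""
    merged = {}
    for replica in replicas:
        for item_id, (ts, value) in replica.items():
            if item_id not in merged or ts > merged[item_id][0]:
                merged[item_id] = (ts, value)
    return merged

pc    = {"photo1": (10, "edited on PC")}
phone = {"photo1": (12, "edited on phone"), "note1": (5, "draft")}

# Every endpoint that runs the merge converges on the same view:
print(merge(pc, phone))
# {'photo1': (12, 'edited on phone'), 'note1': (5, 'draft')}
```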


The success of Live Mesh depends in large part on the core platform and on developers being successful, in the real world, at building “it just works” experiences.


Live Mesh was my first deep exposure to REST. I have to admit that the power of simple operations against a URL space has impressed me. We have tackled very complex problems using the model, and it has held up very well. I’m happy to tell you that this elegance will extend to the developer model (soon).
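
For readers who haven’t met REST yet, here is a hedged sketch of what “simple operations against a URL space” looks like in practice: plain HTTP verbs applied uniformly at every level of a hierarchical namespace. The service root and resource paths below are hypothetical placeholders, not the actual Live Mesh developer API (which hasn’t shipped yet):

```python
import urllib.request

BASE = "https://api.example.com"  # hypothetical service root, not a real API

def get(path):
    """GET a resource representation by its URL."""
    with urllib.request.urlopen(BASE + path) as resp:
        return resp.read()

def put(path, body):
    """PUT a new representation at a URL; returns the HTTP status code."""
    req = urllib.request.Request(BASE + path, data=body, method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# The same two verbs work at every level of the (imaginary) hierarchy:
#   get("/users/alice/devices")                   -> list of devices
#   get("/users/alice/folders/photos")            -> folder metadata
#   put("/users/alice/folders/photos/1", b"...")  -> store an item
```

The appeal is that the couple of verbs above (plus a handful more) are the whole surface area; the richness lives in the URL space rather than in a sprawl of special-purpose calls.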


I’m looking forward to it.


Mike Zintel

Product Unit Manager, Mesh and Storage Platform


I want to thank a few people: Dave and Amit for creating the opportunity to work on this. Ray, Jack, George and Colleen for their deep insight. Abhay and his team for making it cool and showing me what is possible.


Abolade, Alex, Dave, David, Mike, Tom, Tony, Vlad and many other people on my team for designing and building the right stuff long before I knew what it was. Todd and his team for making it work on the Internet.


We’re not done yet and we’re hiring. mikezin@microsoft.com

