I was thinking I'd write about how software is made. First, I'll describe three methods of developing software. I'm sure there are more, and someone has probably written books on this, but hey, this is an informal medium. Here they are:
  1. The "wait until it's perfect" school. This approach believes that software needs to be "done" before it is released to the public. Done is defined as having all the conceivable useful features, polished to a gleam, and of course "bug-free" (see my previous posts on that fallacy). These products tend to take a really long time to come to market, and they generally miss their opportunity since someone with a slightly lower standard than "done" has beaten them to the punch. Worst of all, their makers spent so much effort on getting things perfect that they forgot to actually try the product out on real people and make sure they weren't missing the boat or adding things no one cared about. Several now-distant competitors to Office products followed this school. (One of my favorites was a Japanese word processor whose release touted that its new version was based entirely on a component architecture! Talk about ivory tower and completely disconnected from customers, who couldn't care less about the architecture. Not only that, it was slow and a memory hog, like most componentized software.)
  2. The "we'll just release a new build" school. This school develops software that sort of works, then sends it out for feedback as version 0.7.1.0145 or whatever. They get some feedback, make changes, and produce a new build with a slightly higher build number, like 0.7.1.0146. In fact, they make a change and produce an update whenever they hear about a problem. This software usually never actually ships - it just gets slowly better, and often only in small increments. It may never make a great leap in innovation, since it is constantly in a state of trying to get closer to a specific goal. The response to any problem is simply "we'll just release a new build". This approach is fine for hobbyists or low-usage scenarios, where the customer or user doesn't mind updating their software all the time, or redeploying to their few machines (e.g. servers). It doesn't work so well for mass-market software, since the cost of deploying a new build is prohibitive for many customers, and the channels for getting updates to "normal" people are poor. Also, end users expect a product to at least have the appearance of being "done" - no one buys a TV that is still under development.
  3. The "ship early, ship often" school. This is the one that most client software Microsoft makes has followed. The theory goes that if you try to plan too much before you ship your first product (wait until it's "perfect"), you will not be able to build a truly useful product, since you don't really understand yet who your future users are and what they will find appealing in the product. So the best thing is to get something out there, understand what is appealing and what isn't from the "early adopter" feedback, then ship another version that responds to that feedback as soon as you can. Typically, version 2 starts before the feedback from version 1 comes in, so version 2 is usually a polish of the partially misguided version 1, and version 3 is the real re-work to make the product what its prospective customer base really wants. This is where the "Microsoft doesn't get it right until version 3" axiom comes from. One strength of this school is that it generally beats out the "wait until it's perfect" school in the Darwinian world of the free market, since it gets access to real-world feedback and goes through more generations than the "perfection" approach does, resulting in a product optimized for real customers sooner.

For OneNote, I wanted to do things differently. It seems obvious, but I wondered why we couldn't ship "version 3" the first time around. Why ship the product before getting substantial feedback on whether it had the right set of features - in fact, whether it was even a worthwhile product at all? The standard answer is that to do so would be following the "wait until it's perfect" approach: you couldn't get substantial feedback until version 1 was more or less design complete. If the user feedback showed massive changes were needed, you'd essentially be skipping version 1 and going straight to version 2, taking way too long to get something out for the real-world usage that you really need. I agreed, but I felt that the problem that forced the skip to version 2 was that the standard Office way of developing software did not bring in user feedback soon enough. In fact, broad user feedback is usually accepted only late in the process, to determine whether the software is stable and works well in the huge variety of real-world configurations; the design has been locked down early on. Of course, usability tests are done on the designs, and they are adjusted to make them work in the lab, but getting deep feedback from real people in real situations is hard for Office - even sending out a beta to hundreds of thousands of people only generates feedback from a few thousand. There are reasons for that - a lot of it is just that people assume we know what we are doing and don't question the design of features even when the features don't work for them. I could go on about the difficulty of getting beta feedback another time. But for OneNote, we decided to try to do things differently, which I'll talk about in my next post.