So, how to make sure we didn't have a typical "v.1" product with OneNote? As I mentioned last post, it isn't as simple as working on the product until it is "perfect" in design - that approach doesn't work. It takes too long, and you don't really understand what "perfect" is until after you ship (if you ever do).

I also knew that shipping the equivalent of "version 3" as the first version wasn't really possible. If we quantify the amount of development put into a typical release as "X", then a version 3 has something like 3X worth of work put into it. But that first X might only count for 1/2 an X of useful work, and the second X for only 3/4, so in the end, due to the inefficiencies and rework caused by an inaccurate initial design, you end up with something more like 2X-2.5X worth of "features" in version 3, depending on how far off you were with the first idea and how fast you were at correcting. So shipping a version 1 that was really a version 3 wouldn't be possible - we simply couldn't put 2X of dev effort into the first version. Instead my goal was to be as "on track" as a version 3 in terms of design with our 1X dev effort. That is, instead of half of X, we'd get as close as we could to a full X worth of useful stuff the first go-round. It would feel like version 3, but be much leaner.
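To make that arithmetic concrete, here is a minimal sketch of the model. The 1/2 and 3/4 efficiency figures are just the rough guesses from above, not measured values:

```python
# Rough model of effort vs. useful features across releases.
# The 0.5 / 0.75 / 1.0 efficiencies are the illustrative guesses
# from the text above, not measured data.
efficiencies = [0.5, 0.75, 1.0]   # v1 half wasted, v2 better, v3 on track
effort_per_release = 1.0          # "X" units of dev work per release

useful = sum(e * effort_per_release for e in efficiencies)
print(f"Raw effort through v3: {len(efficiencies) * effort_per_release}X")  # 3.0X
print(f"Useful features by v3: {useful}X")  # 2.25X, i.e. the ~2X-2.5X above
```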

Most new software projects seem to be developed by technologists who get excited about a technical solution, then go searching for someone who hopefully has a problem this solution solves (the hammer looking for a nail). The problem with this approach is that very often you are solving a problem hardly anyone cares about, or at least one they don't consider serious compared to the problems they really care about and need to spend their money on. In fact, this detail is the bane of most startups. Customers have a fixed pot of cash. If you ask them what they are going to spend money on, they'll spend it on their top priorities, in order, until they run out. Technologists get confused by their "solution" looking for a problem - they think that solving a problem means people will buy their product. But even if you are solving a common problem, if that problem is not high enough on a customer's priority list, there won't be any money left to spend on it.
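That budget logic is easy to see in a toy model like the one below - every priority and number here is invented purely for illustration:

```python
# Toy model: a customer with a fixed budget funds problems in
# priority order until the money runs out. All numbers invented.
budget = 100
priorities = [            # (problem, cost), highest priority first
    ("security compliance", 50),
    ("CRM upgrade", 40),
    ("our clever new product", 30),   # hypothetical spot on the list
]

for problem, cost in priorities:
    if cost <= budget:
        budget -= cost
        print(f"funded:     {problem} (budget left: {budget})")
    else:
        print(f"not funded: {problem}")  # real problem, but the money is gone
```

The third item is a perfectly real problem, but it never gets a dime.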

With OneNote, as I described back in OneNote Genesis, rather than being a hammer looking for a nail, I just looked to see what problems people were having without thinking about what technology we might develop. So right there was a better start than most products get.

The next thing that destroys new products is lack of focus on solving a (possibly small) set of problems completely, rather than taking a whack at many more than that (see OneNote and version 1). For OneNote, we defined 5 user scenarios that matched the problems we had seen, and determined that the product would meet those scenarios, and do them well; everything else would wait until next time.

The next tar pit is in designing the features to solve the problems in those user scenarios. A typical disaster product solves them the way the designers would personally solve them. But the designers are computer nerds, not normal people. There is a *lot* of software out there that was clearly designed to be useful to the creator of the software, and if anyone else liked it too, bonus! To get around this, a good designer collects data and meets with lots of real people, ideally in their natural habitat (cf. Contextual Inquiry and other approaches). You can also do surveys, focus groups, interviews, etc. - there are lots of ways to get quantitative and qualitative data.

Ok, now you're swimming in data - but be careful not to drown. The next pitfall is misusing data, or believing the data over your own common sense. A skilled design team maintains a vision for the product that is "informed" by data, not formed by the data. I have seen numerous projects led astray by some factoid or other that the design team took as gospel. Not to be too hard on them, but Bob was like that (usability data can be disastrous without a strong team to interpret it). Office itself has many features that appeared because data took over. For example, in Word you can find Text Effects (Format/Font/Text Effects tab). This was added back when the web seemed to be the future (1995), and studies showed Word "had to remain relevant" by adding dynamic content to its documents for online consumption. Wow…

So if you have made it this far, you can write up some specs that represent features designed to support real scenarios that solve problems that matter to potential customers - and you have data to show it. Now you turn the dev team loose, and they build something. What did they build? If you assume they built what you designed, you're in trouble - no one writes specs that well. And if you assume that your designs will actually work as you hoped even if implemented exactly as you desired, you're really in trouble. So the key is to work closely with dev and test to make sure that what is taking shape is what you intended, then to try out what is there, see where it fails, and adjust the design before you go too far in the wrong direction. Too often a product team simply ships what they designed originally, or whatever got built, without managing it to turn out right.

OneNote did something different from Office at this point. As soon as we could get the product to stay up and running, we put it in front of a set of real people and asked them to use it (see Field Trial for a description of this). Then, since we still had flexibility because we had done this trial so early, we made significant design changes to OneNote in time for its first beta - something a typical Office product can't do, since it doesn't have the feedback. Then we took a gamble and made significant changes after the beta - something Office never does, since it needs to ship at known high quality. OneNote could do this because we decided it was better to ship a correctly designed feature set in our v.1 than to lock down for stability the way Office does. Office has decades of valuable features that people depend on and that can't be risked; we didn't have that risk, since we had only a few features and they were all near the surface where they get used a lot. That meant we could rely on Watson to tell us if we had added any hidden nasty crashes and hangs, so we weren't totally in the dark. There was a "Technical Refresh" after beta, which allowed us to collect Watson data before shipping, so we did our stabilization by reading off and fixing the top problems reported from "the wild".
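The spirit of that triage was simple: bucket the incoming crash reports, rank the buckets, and fix from the top down. A minimal sketch, with invented crash signatures - the real Watson pipeline is far more elaborate than this:

```python
# Illustrative only: rank crash "buckets" by hit count, the way we
# prioritized Watson reports. Signatures here are invented, and the
# real Watson pipeline is far more elaborate than this.
from collections import Counter

crash_reports = [                        # hypothetical crash signatures
    ("onenote.exe", "InkPanel::Render"),
    ("onenote.exe", "PageSync::Merge"),
    ("onenote.exe", "InkPanel::Render"),
    ("onenote.exe", "Outline::Paste"),
    ("onenote.exe", "InkPanel::Render"),
]

buckets = Counter(crash_reports)
for (module, function), hits in buckets.most_common():
    print(f"{hits:>3}  {module}  {function}")   # fix from the top down
```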

For the next release, since we actually have a functioning product now, we expect to use this technique even earlier during the development of version 2 (or 12, in Office versioning). We expect to have internal users outside the Office team, and possibly a limited set of external people, using (a.k.a. "Dogfooding") our first "milestone" build - when we are less than a third done with coding. And we'll do that throughout the project, since we don't have to wait until "alpha" now. And since we'll be a little more mature as a product with real customers who rely on us, we'll have to be a little less reckless near the end.

BTW, thanks everyone for the encouragement to continue blogging.