[All the other Parts: History of Visual Studio]

I was going to move forward again in this installment, but I got some requests to talk about some older things first.  You might be getting bored of the distant past, so I’ll try to keep that part short.

There were some other key pieces of technology floating around back in the early 90s.  One of them was this thing called CodeView for Windows (CVW).  It was substantially written by my racquetball partner (no name dropping, but I haven’t forgotten) and it was the “hard mode” debugger for Windows.  Remember how I said you couldn’t debug Win16 the normal way because you can’t stop it?  Well, you can if you have a debugger with a whole alternate UI based on “COW” (Character Oriented Windows) that runs on its own monochrome, character-mode-only monitor (usually a Hercules Graphics Card, remember those?).

You thought multi-mon was a new feature, but no, in this flavor it’s very, very old.  :)

This baby could give you true symbolic debugging for all kinds of Windows programs for which you had source code and debug info, but if you wanted even lower-level stuff you could always hook up a serial cable to a dumb terminal and use wdeb386.  Wdeb386 is your buddy!

CVW was the main debugger that we used to debug “Sequoia”, which I alluded to earlier but didn’t go into the details of.  “Sequoia” (that’s a lot of vowels, huh) was a before-its-time IDE that we worked on around the time of PWB 2.0.  Ultimately it was cancelled because it was too far ahead of its time, I think, but, as I’m fond of telling my wife, being ahead of your time is just one of the more creative ways of being fundamentally wrong.  That quip didn’t make me feel any better then, either.

Anywho, Sequoia was interesting for lots of reasons, but one of the most interesting things about it was that its commanding was entirely based on the same BASIC engine that drives VB.  Everything, everywhere, was a BASIC command in disguise, so you could record anything.  It had a flexible editor that used a piece-table structure for space efficiency and had cool font support, coloring, and outlining.  It had a source code debugger with the beginnings of a soft-mode solution, a graphical build environment, and graphical program visualizations.  It was pretty slick.  Of course it was seriously incomplete, and it had a tight dependence on some technologies (especially that BASIC engine) that were not going to be available on the 32-bit OS anytime soon, and we needed those to ship.  We also needed all hands on deck for VC++ 1.0 “Caviar”, and so Sequoia got scrapped.
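
If you haven’t run into a piece table before, here’s a minimal sketch of the idea (my own illustration, not Sequoia’s actual code): the document is a list of “pieces”, each pointing into either the immutable original file or an append-only buffer of insertions, so edits never copy the bulk of the text.

```cpp
// Minimal piece-table sketch (illustrative only, not Sequoia's code).
// The document is a sequence of "pieces", each referencing a span of
// either the immutable original buffer or the append-only add buffer.
#include <iostream>
#include <iterator>
#include <list>
#include <string>

struct Piece {
    bool fromAdd;       // which buffer the span lives in
    size_t start, len;  // span within that buffer
};

class PieceTable {
    std::string original_;      // loaded file, never modified
    std::string add_;           // all insertions accumulate here
    std::list<Piece> pieces_;
public:
    explicit PieceTable(std::string text) : original_(std::move(text)) {
        pieces_.push_back({false, 0, original_.size()});
    }

    // Insert text at a document offset by splitting the covering piece.
    void insert(size_t pos, const std::string& text) {
        size_t addStart = add_.size();
        add_ += text;
        for (auto it = pieces_.begin(); it != pieces_.end(); ++it) {
            if (pos <= it->len) {
                Piece tail{it->fromAdd, it->start + pos, it->len - pos};
                it->len = pos;
                it = pieces_.insert(std::next(it), {true, addStart, text.size()});
                if (tail.len) pieces_.insert(std::next(it), tail);
                return;
            }
            pos -= it->len;
        }
    }

    std::string str() const {
        std::string out;
        for (const auto& p : pieces_)
            out += (p.fromAdd ? add_ : original_).substr(p.start, p.len);
        return out;
    }
};

int main() {
    PieceTable doc("hello world");
    doc.insert(5, ", brave");        // splice without copying the original
    std::cout << doc.str() << "\n";  // hello, brave world
}
```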

A lot of the Sequoia ideas came back later in different versions of the IDE; some of them are only now appearing in VS2010, but I guess that’s par for the course with cancelled projects.  Perhaps, when viewed as a research project, it wasn’t so dumb.  It wasn’t that many people, it wasn’t for that long, and it was quite an education.  Unlike QCW, it was written in C++, with its own framework, so it provided a great test case for the C++ compiler (C7), and it influenced both COM and MFC.

I planted a Giant Sequoia in my backyard in its honor (I called it Sequoia 0.1) and, true to its name, it’s quite big after 15 years. 

Another funny story about Sequoia is the series of codename changes as we cut features before finally cancelling it altogether: it went from “Sequoia” to “Bonsai” to “Bonfire”.  One of the best things about that project was how nice the specifications were; I still have some.  Thanks for those – you know who you are.  :)

Speaking of codenames, did you know that the various milestones of C7 had interesting codenames?  There were far too many of them, of course, because that was a hard release to get under control: John, Paul, George, and Ringo were 4 of them; we were still in need of Beatles after that, so we added Stu, and then “The Long and Winding Road” (though I bet the memos all said LWR), and finally shipped with “Let It Be.”

Ok let’s get back to the more recent past.

After “Dolphin” we had achieved a pretty significant miracle, I think.  The shell actually worked as intended and we shipped other languages that lit up the splash screen (and more) – multi-language project creation, browsing, and debugging all in the same core bits.  Basically all of the compiled languages could participate, but we still had a lot of ambition.

The next release began as “Olympus”, Visual C++ 3.0 – but we soon renumbered it to 4.0 to align it with MFC, so that we didn’t have to say VC++ 1.0 with MFC 2.0, then VC++ 2.0 with MFC 3.0… This time it was going to be VC++ 4.0 with MFC 4.0.  In retrospect this was highly stupid, especially because MFC ended up locking down on version 4.2 when backwards compatibility became a more valuable feature than anything else we could add – and so it stayed for a very long time.

I handed off responsibility for the debugging components in Olympus, and with a few close friends we put together some very cool improvements for the tool chain: Incremental Linking, Incremental Compilation, and Minimal Rebuild (I was mostly involved with the last of those).  We had done something like this before in the original 1988 C# – it had all of those features – so we had some idea how we would want to do them in the mainstream tools, and actually ship them this time.

In case you don’t know what those things are, let me quickly describe them:  Incremental Linking is where you make a modest change to some small fraction of your source files, creating a few new .obj files, and instead of totally relinking the resulting executable you “simply” patch it in place.  Sounds not so hard – the thing is already logically organized and the linker knows where it put stuff, you just leave a little room for additions in the .exe and away you go, right?  Err, not so much, no: what are you going to do about all the debug information?  That has to be patched too – new offsets, new line numbers, and so on.  What about the browser info?  You could easily spend as much time figuring out what to patch in the other collateral pieces as you would have spent just redoing the whole thing.  Some cleverness is required.
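
To make the patch-in-place idea concrete, here’s a toy sketch (entirely my own invention – the real linker is far more involved, and this ignores exactly the debug and browser info that makes the problem hard): each function gets a padded slot in the image, a changed body that still fits is overwritten in place, and one that doesn’t is appended, leaving dead space behind.

```cpp
// Toy sketch of patch-in-place linking (illustrative, not the VC++ linker).
// Each symbol owns a padded slot in the image; a changed body that still
// fits is overwritten in place, otherwise it is appended and the old slot
// abandoned -- which is roughly why incrementally linked images grow.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Slot { size_t offset, capacity; };

class Image {
    std::vector<uint8_t> bytes_;
    std::map<std::string, Slot> slots_;
public:
    void place(const std::string& sym, const std::vector<uint8_t>& code) {
        size_t pad = code.size() / 2;              // reserve slack for growth
        slots_[sym] = {bytes_.size(), code.size() + pad};
        bytes_.insert(bytes_.end(), code.begin(), code.end());
        bytes_.resize(bytes_.size() + pad, 0x90);  // 0x90 = x86 NOP filler
    }
    bool patch(const std::string& sym, const std::vector<uint8_t>& code) {
        auto& s = slots_.at(sym);
        if (code.size() > s.capacity) {  // doesn't fit: relocate to the end,
            place(sym, code);            // old slot becomes dead space
            return false;
        }
        std::copy(code.begin(), code.end(), bytes_.begin() + s.offset);
        return true;                     // patched in place
    }
    size_t size() const { return bytes_.size(); }
};

int main() {
    Image img;
    img.place("foo", std::vector<uint8_t>(100, 0xCC));
    size_t before = img.size();
    img.patch("foo", std::vector<uint8_t>(120, 0xCC));  // fits in the slack
    std::cout << (img.size() == before ? "patched in place\n" : "relocated\n");
}
```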

Incremental Compilation is where you make a minor change in one or more source files, and rather than regenerating the entire .obj file you “simply” recompile only those portions (usually methods) that have changed and leave the other object code in place, thereby giving the back end much less work and saving you parsing time.  This is especially tricky because it has the same “what about the auxiliary files?” problem as incremental linking, and you already have PCH helping you to do this fast, so the .obj files need to be largish to come out ahead.
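
Here’s a hedged sketch of just the bookkeeping half of that (my own illustration with a made-up helper, `changedFunctions` – real compilers work on parse trees, not strings): hash each function body and only “recompile” the ones whose hash changed since the previous build.

```cpp
// Toy sketch of incremental compilation bookkeeping (illustrative only):
// hash each function body and recompile only the ones whose hash changed
// since the previous build, keeping the rest of the .obj intact.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Functions = std::map<std::string, std::string>;  // name -> body text

// Returns the names whose bodies differ from the recorded hashes and
// updates the records.  The real feature also has to fix up the debug and
// browser info for everything it skips, which is the hard part.
std::vector<std::string> changedFunctions(const Functions& parsed,
                                          std::map<std::string, size_t>& prev) {
    std::vector<std::string> dirty;
    std::hash<std::string> h;
    for (const auto& [name, body] : parsed) {
        size_t hv = h(body);
        auto it = prev.find(name);
        if (it == prev.end() || it->second != hv) {
            dirty.push_back(name);
            prev[name] = hv;
        }
    }
    return dirty;
}

int main() {
    std::map<std::string, size_t> record;
    changedFunctions({{"foo", "return 1;"}, {"bar", "return 2;"}}, record);

    // Edit only foo(); bar() keeps its old object code.
    for (const auto& n : changedFunctions(
             {{"foo", "return 42;"}, {"bar", "return 2;"}}, record))
        std::cout << "recompile " << n << "\n";  // prints: recompile foo
}
```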

And last, but not least, Minimal Rebuild.  This one is motivated by the kinds of edits that Class Wizard made.  It notices that you’ve changed a popular .h file and a single .cpp file (usually adding a method in both the code and header file), and that although many files depend on the .h file, only one file actually depends on the specific change you made – usually the one you modified.  The others can be skipped.  This has great potential to save compilation time and paid off big in larger projects, but it too has complications with the auxiliary files – the debug information has to be patched in files you didn’t touch, and so forth.  We actually started with a version of this feature where Class Wizard told us which files had been changed by its edits, and then generalized it so that we could determine what was different by comparing the debug information from the previous build with that of the new build, and from there conclude what we could skip.
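
Here’s a toy illustration of that decision (the file names and the table are invented for the example – as described above, the real feature derived the usage information from debug info rather than keeping a table like this): everyone includes the header, but only files that actually reference the changed declaration get rebuilt.

```cpp
// Toy sketch of the minimal-rebuild decision (illustrative only): many
// files include the header, but only the ones that actually reference a
// changed declaration need to be recompiled.
#include <iostream>
#include <map>
#include <set>
#include <string>

int main() {
    // Which declarations each .cpp actually uses (derived from the previous
    // build's debug info in the real feature, hand-written here).
    std::map<std::string, std::set<std::string>> uses = {
        {"dialog.cpp", {"CMyDialog::OnInit", "CMyDialog::OnOK"}},
        {"view.cpp",   {"CMyView::OnDraw"}},
        {"app.cpp",    {"CMyApp::InitInstance"}},
    };

    // Class Wizard added one method to the header; every file includes that
    // header, but only files referencing the changed declaration are stale.
    std::set<std::string> changed = {"CMyDialog::OnOK"};

    for (const auto& [file, symbols] : uses)
        for (const auto& sym : symbols)
            if (changed.count(sym)) {
                std::cout << "rebuild " << file << "\n";  // only dialog.cpp
                break;
            }
}
```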

The slick thing about Minimal Rebuild is that keeping the full dependency tree for a large project around turns out to be just too expensive.  We kept a set of approximately correct dependencies in a Bloom filter data structure, and by tolerating a small false-positive rate we were able to realize the bulk of the savings.  Using the existing debug information to do the change analysis was probably the most clever bit of all.
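
For the curious, a minimal Bloom filter sketch (a generic textbook version, not the shipped data structure): queries can return false positives, which here would just mean an occasional unnecessary rebuild, but never false negatives, so nothing that needs rebuilding gets skipped.

```cpp
// Minimal Bloom filter sketch (generic textbook version, illustrative):
// approximate set membership in a fixed, small amount of space.
#include <bitset>
#include <functional>
#include <iostream>
#include <string>

class BloomFilter {
    static constexpr size_t kBits = 8192;
    std::bitset<kBits> bits_;
    // Derive k bit positions from two hashes -- the standard trick.
    static size_t hashN(const std::string& s, size_t n) {
        size_t h1 = std::hash<std::string>{}(s);
        size_t h2 = h1 * 0x9E3779B97F4A7C15ull + 1;
        return (h1 + n * h2) % kBits;
    }
public:
    void add(const std::string& key) {
        for (size_t n = 0; n < 4; ++n) bits_.set(hashN(key, n));
    }
    // False positives possible (a spurious rebuild); false negatives are not.
    bool mightContain(const std::string& key) const {
        for (size_t n = 0; n < 4; ++n)
            if (!bits_.test(hashN(key, n))) return false;
        return true;
    }
};

int main() {
    BloomFilter deps;
    deps.add("dialog.cpp depends on CMyDialog::OnOK");
    std::cout << deps.mightContain("dialog.cpp depends on CMyDialog::OnOK")  // 1
              << deps.mightContain("app.cpp depends on CMyDialog::OnOK")     // almost surely 0
              << "\n";
}
```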

While all of this was going on, an equal effort was going into the compiler back end, which wanted to unify its system on a tuple representation so that it could use more of the same optimizations in more places and generally get better code quality.  Such is the way of compilers: huge efforts to squeeze out maybe another 5%.  10% improvements are the stuff of dreams.
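
If “tuple representation” sounds abstract, here’s roughly what such an intermediate form looks like (a generic three-address-code sketch of my own, not the actual back end’s format): every operation becomes one uniform record, so the same optimization passes can walk code from any source.

```cpp
// Generic sketch of a "tuple" (three-address) intermediate form.
// One uniform record per operation: op, destination, operands.
#include <iostream>
#include <string>
#include <vector>

enum class Op { Mul, Add, Store };

struct Tuple {
    Op op;
    std::string dst, src1, src2;
};

static const char* opName(Op op) {
    switch (op) {
        case Op::Mul:   return "mul";
        case Op::Add:   return "add";
        case Op::Store: return "store";
    }
    return "?";
}

int main() {
    // x = a * b + c lowered into uniform tuples; a pass that walks this
    // list works the same way regardless of which front end produced it.
    std::vector<Tuple> code = {
        {Op::Mul,   "t1", "a",  "b"},
        {Op::Add,   "t2", "t1", "c"},
        {Op::Store, "x",  "t2", ""},
    };
    for (const auto& t : code)
        std::cout << t.dst << " = " << opName(t.op) << " " << t.src1
                  << (t.src2.empty() ? "" : " " + t.src2) << "\n";
}
```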

Olympus was getting quite large, and it now used enough memory that it was affecting the compiler’s ability to get its job done.  If there isn’t enough RAM (we’re talking about, say, 4M here) to hold the IDE and the PCH in the disk cache, then performance suffers.  I went on a working set jihad and got the IDE’s working set (while building) under 64k (it was 13 pages).  Back then you did really insane stuff to squeeze out every page – I even turned off the caret flashing in the output window when I found it caused extra code to run during the build!

Other things were going on in the industry, but one thing was on our minds often and came up at many offsite retreats: C++ programming is too hard.  We could really use a different language/system that would make this a lot easier.  Delphi was fairly successful.  Visual Basic was still strong.  But now it’s 1995… The web is coming, and with it, Java.

[See The Documentary on Channel 9!]