The C# IDE team talks a lot about the issues that arise when a team doesn't dogfood its own product. It's very easy for us to get a false sense of confidence in the quality of the product and not realize the problems we've caused until they start affecting the teams that use what we create. At that point it's very difficult to fix things (quickly) because the corrections you make take time to filter down to those who can give you the most valuable feedback. In some cases we might even alienate our users to the point that they stick with what they know is reliable, i.e. emacs/vi/notepad/whatever.
When I started at MS a year ago I was very actively involved in coding projects in C# (including writing a unit-testing framework). However, once I was more established I started getting a full workload, and I found that those opportunities were few and far between. I needed to spend my own time on them, and they ended up interfering with the other things I wanted to do in my personal life.
However, we've started focusing on these issues once more. We recognize a lot of problems with our current model of software development, and we want to know if it's possible to do better. At the same time, we think that shifting blindly over to some new model would cause a lot of pain due to our lack of experience with it. To alleviate that pain we want to start experimenting with different models right now. That way, when we do move over to architecting and implementing the next version, we'll have a well-oiled infrastructure that helps us rather than hinders us.
One of our most serious issues is how tightly coupled the C# IDE is to Visual Studio; in its current incarnation it's practically impossible to test our code without depending on the rest of the IDE functioning. Not only is this a problem because it makes testing our code dependent on the rest of VS being in a testable state, it also makes testing orders of magnitude slower, because many of our tests are based on how we update the editor. For example, to test that we are showing the correct items in a completion list, we actually have to open the full editor, paste in the appropriate code, invoke the completion list, and then read out the contents of the list and compare them against what we expect to be in there. One could imagine far easier ways to do that: for example, just taking a source file and asking "what would the contents of the completion list be at this line/column?" However, as I said before, too much of our code depends on actual components of VS being available (such as the source code window and the completion list window). Now, if we conservatively figure 10 seconds to launch VS, run a single test, and verify the result, then we can see that rich testing ends up being very costly. Say I want to test 1000 different scenarios; I end up spending about three hours on those tests. So we end up creating very monolithic tests, i.e. a single test that actually ends up testing 50-100 different things at once. Of course, when such a test fails, narrowing it down to the actual problem is quite difficult.
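To make the idea concrete, here's a sketch of what such a decoupled test could look like. The `LanguageService` class and its `GetCompletionsAt` method are names I'm making up for illustration, not a real VS API, and the toy implementation here just collects identifiers from the source text; a real one would run the compiler's parser and binder. The point is the shape of the test: a string goes in, a list comes out, and no editor window is ever involved.

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;
using NUnit.Framework;

// Hypothetical service: these names are mine, not a real VS API.
public static class LanguageService
{
    // Toy stand-in: the "completions" are just the identifiers that
    // appear in the source text. A real implementation would parse
    // and bind the file to compute what's in scope at (line, column).
    public static ICollection<string> GetCompletionsAt(string source, int line, int column)
    {
        List<string> identifiers = new List<string>();
        foreach (Match m in Regex.Matches(source, @"[A-Za-z_]\w*"))
        {
            if (!identifiers.Contains(m.Value))
                identifiers.Add(m.Value);
        }
        return identifiers;
    }
}

[TestFixture]
public class CompletionTests
{
    [Test]
    public void CompletionListContainsLocalVariable()
    {
        // No editor window, no VS process: just a string in, a list out.
        string source = "class C { void M() { int count = 0; } }";

        ICollection<string> items = LanguageService.GetCompletionsAt(source, 1, 38);

        Assert.IsTrue(items.Contains("count"));
    }
}
```

A test like this runs in milliseconds, so a thousand of them would take seconds rather than hours.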
If our code were loosely coupled, though, we could see ourselves running tens to hundreds of tests per second, each one checking for a very simple thing. We could then test a wide range of our code automatically, confident in the knowledge that regressions would be caught. Every time one wasn't caught, we could add a small test for that case and feel even more confident as our corpus grew.
However, in order to accomplish this sort of pervasive unit testing, we know that we'd need to be willing to invest fully in such a model. That means getting very comfortable with writing tests and considering them as important as the actual code we write. And, IMO, the only way to do that is to start now. In my case that's going to involve dogfooding the product and trying to write a library that is pervasively tested. I'm starting right now by downloading NUnit, and I'm going to work on either Purely Functional Data Structures or something similar.
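As a taste of why purely functional data structures pair so well with this kind of testing, here's a minimal sketch of a persistent stack of my own devising (not from any shipped library): every operation returns a new stack and leaves the old one untouched, which makes each behavior trivially testable in isolation.

```csharp
using System;
using NUnit.Framework;

// An immutable (persistent) stack: Push and Pop return new stacks,
// leaving the original untouched.
public sealed class ImmutableStack<T>
{
    public static readonly ImmutableStack<T> Empty = new ImmutableStack<T>();

    private readonly T head;
    private readonly ImmutableStack<T> tail;

    private ImmutableStack() { }  // the empty stack: tail stays null

    private ImmutableStack(T head, ImmutableStack<T> tail)
    {
        this.head = head;
        this.tail = tail;
    }

    public bool IsEmpty { get { return tail == null; } }

    public ImmutableStack<T> Push(T value)
    {
        return new ImmutableStack<T>(value, this);
    }

    public T Peek()
    {
        if (IsEmpty) throw new InvalidOperationException("empty stack");
        return head;
    }

    public ImmutableStack<T> Pop()
    {
        if (IsEmpty) throw new InvalidOperationException("empty stack");
        return tail;
    }
}

[TestFixture]
public class ImmutableStackTests
{
    [Test]
    public void PushDoesNotMutateOriginal()
    {
        ImmutableStack<int> empty = ImmutableStack<int>.Empty;
        ImmutableStack<int> one = empty.Push(1);

        Assert.IsTrue(empty.IsEmpty);      // the original is unchanged
        Assert.AreEqual(1, one.Peek());
        Assert.IsTrue(one.Pop().IsEmpty);  // Pop hands back the prior stack
    }
}
```

Because nothing is ever mutated, there's no setup or teardown to get wrong; each tiny test stands completely on its own.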
I'll let you know how it goes!