Overall I think it’s a great thing, though there are some issues that need to be addressed over time for it to be truly successful. I may be a little close to the project since my co-workers are responsible for the Feedback Center, but I think I’m still allowed to have opinions and air them in this space. :-) Recently I was asked some specific questions that do a good job of framing my opinions on this project.
Do you like this kind of transparency? I love this transparency and think it’s only the first step towards a more transparent future for developer products at Microsoft. Why do I like it so much?
In case it’s not clear: when you enter a bug through the Feedback Center, it is also opened directly in our internal bug tracking system, alongside the bugs found by our own testers. Internally we can’t escape these bugs, since they live in the database we scrutinize every day on our way to shipping the product. You get to see when, how, and by whom your reported issues are fixed, and you are notified of their progress along the way. We also require that team members provide quality responses to every reported issue, which further encourages accountability and closes the feedback loop.
Historically we’ve had crash reports and statistics that helped us decide which crashes to focus on based on the number of reports. Triaging non-crashing issues, though, has always led to debates over how we think customers might feel about an issue versus how severe it actually is and how many customers are actually affected. This will eventually help us triage bugs much more efficiently and give us an even better understanding of the customer impact of bugs we find internally.
We will only get more transparent. Soon you may be able to see our actual internal bug counts, and even see internal bugs alongside customer-reported issues. When we reach a world where you might have access to any build of Visual Studio, this will be even more important.
How does this compare with the closed, semi-private bug reporting on beta.microsoft.com? I’ve used both, and the MSDN Feedback Center is much better, simply because you can search, see, and vote on bugs reported by other users. You can also add your own comments and workarounds to issues. With betaplace (beta.microsoft.com) you only get to see issues that you report yourself.
Microsoft products will still have private betas for a long time, so there will always be a need for a non-public site, but some teams are talking about creating closed-group instances of the Ladybug application going forward, since it is seen as a step up from existing solutions. So you can imagine that beta.microsoft.com could eventually be built on the Ladybug solution.
What do you think about the quality of submitted bugs? It could be improved. I’ve spent a lot of my career here triaging bugs as a Test Lead, and I’ve also been involved in triaging bugs from the Feedback Center. In general, Feedback Center bugs require more “translation” time to understand what customers are trying to report and what they may have been trying to do when they encountered the bug. Some teams have suggested that the triage time per bug is up to 10 times that of bugs reported internally. I’m not sure I buy that number (given the statistics you’ll see below), but it does take more time. The extra time is probably due to the following reasons:
What do the results look like so far? So we’re listening and guaranteeing a response. IMO this is only part of the solution; in the end we’ll also be judged by what we do with what we hear. It’s unfortunate that this feedback mechanism was opened so late in the Visual Studio 2005 cycle, because a lot of this great feedback will only start to be leveraged in the next version. This is especially true for suggestions that would require more than a trivial amount of new development work. In the meantime, I’ll share some raw statistics (snapshotted today) covering the period since late June, when the Feedback Center became public.
For comparison, I thought it might be interesting to look at how these percentages compare to internal bugs and suggestions opened during the same time period. For these, unfortunately, I don’t have a good measure of time-to-response, so the raw percentages will have to do. Also, just assume the total numbers for internally reported issues are larger. :-)
MSDN Feedback Center Bug Stats
Internal Bug %’s for the same time period
MSDN Feedback Center Suggestion Stats
Internal Suggestion %’s for the same time period
*If an MSDN Feedback bug is considered a duplicate, it is associated with the primary internal bug, so customer bugs can only be marked as duplicates of other customer bugs. That number has been very small, since customers can “+1” an existing bug rather than opening new entries, so the duplicate percentage for MSDN Feedback bugs was not worth reporting.
What’s important to note is that the jury is still out on roughly half of these issues, which remain marked as active. Because of that, the resolution percentages are very likely to change over time.
What I learned through this is that, so far, we’ve actually fixed a higher percentage of customer-reported issues than internally reported ones! It will be interesting to see whether that keeps up over time.
Now that you’ve read this far I’d like to know some things: