When I work with folks who are either new to debugging, or want to sharpen their chops (perhaps they've gotten rusty from not having had to debug for a while), one of the things I tell them to do is to watch out for things that don't do what they expect, and then figure out why. I mean, does everything on your computer do exactly what you would want it to do, all of the time?

For example, I was working with a friend who doesn't get as many chances to debug things as he would like. As it happened, he was struggling with MS07-052, which (for the first couple of days) would cycle in an infinite loop for anybody who hadn't installed Crystal Reports as part of Visual Studio. The update would install, report success, and then detect that it needed to install again. He was about to ping a discussion alias about it. So I looked at him and said, "Why don't you debug it?"

It's that kind of mindset that gets you practice debugging. Now, in this case, the cause was known, so he wasn't going to be helping anybody out. But in many cases, the cause may not be understood (you may be holding a repro that is really hard to get), so take advantage of that opportunity. And even if you end up doing some redundant debugging, if you learn something new and keep your skills sharp, would you begrudge yourself the time? If not, then perhaps it's worth debugging anyway.

One of the blogs I love to follow is Mark Russinovich's. He takes this exact approach over and over, outlining exactly which tools he uses and how he follows each investigation through. And, obviously, learning to use those tools better is one great takeaway.

But I think there's more to it than sharpening your skills with tools. Rather, it's a view into the mindset of somebody who is natively curious. He sees a problem. It annoys him. So he figures out why it's happening, so he can make it stop. That habit alone can improve your skills dramatically, and Mark is kind enough to share the techniques he uses for each investigation.

What caught my attention even more than the great example of when to start debugging (and how to do it) was his example of when to stop. In Mark's latest entry, The Case of the Frozen Clock Gadget, he works through the investigation and, in the end, determines which API is causing a memory leak, after which he ... (drum roll) ... stops debugging and files the bug. He doesn't dig into the API's implementation to see where the memory is being leaked. He never gets to the exact cause. But what he has done is find somebody to assign the bug to, and then he moves on. There will be a single person who owns that API, and Mark can hand that person the symptoms and a repro. He can go on to bigger and better things, and that person can fix up the code.
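
As a side note, that "measure enough to file the bug, then stop" approach is easy to sketch. Here's a minimal, hypothetical example in C of confirming that a suspected API leaks before handing it off. SuspectApi() is a stand-in, not a real function, and the measurement uses the documented GetProcessMemoryInfo call:

    /* A minimal sketch, not Mark's actual code: confirm that a suspected API
     * leaks before filing the bug. SuspectApi() is a hypothetical stand-in
     * for whatever call the investigation pointed at.
     * Build with: cl leakcheck.c psapi.lib */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    static void SuspectApi(void)
    {
        /* Placeholder: leaks a small heap block so the measurement
         * below has something to observe. */
        HeapAlloc(GetProcessHeap(), 0, 4096);
    }

    static SIZE_T PrivateBytes(void)
    {
        PROCESS_MEMORY_COUNTERS_EX pmc = { sizeof(pmc) };
        GetProcessMemoryInfo(GetCurrentProcess(),
                             (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof(pmc));
        return pmc.PrivateUsage;   /* committed private bytes */
    }

    int main(void)
    {
        SIZE_T before, after;
        int i;

        before = PrivateBytes();
        for (i = 0; i < 10000; i++)
            SuspectApi();
        after = PrivateBytes();

        printf("Private bytes grew by %Iu KB over 10000 calls\n",
               (after - before) / 1024);
        /* Growth proportional to the call count is enough evidence to
         * file the bug against the API's owner - no need to step into
         * its implementation. */
        return 0;
    }

Whether the numbers come from a quick harness like this or from watching the process in Process Explorer, the point is the same: once you can show steady growth and name the API, you have an owner, and you're done.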

That, my friends, is a hard skill to learn. I mean, look at what I just said. You have to be curious enough to delve deeply into a problem at the first sign of trouble. You have to be creative enough to dig in and figure out what is going on. But then, before you see the actual culprit, you stop. You have to be able to turn off your curiosity, and know that your job is just to find an owner for the bug, and move on. (Don't worry, there are still plenty more bugs out there.) For those who are so natively curious as to get into the nitty-gritty of debugging, this can be a hard lesson to learn.

I see this all the time working with enterprise customers who are testing application compatibility for a migration to Windows Vista; one of the most important steps is knowing when to stop debugging. The answer, of course, varies:

  • If the application is one that you will never run without full vendor support, stop debugging before you start. If you're never going to fix it anyway, then knowing what's wrong won't help you, will it? This is a hard one to learn. Many times, I have said, "I can debug this, figure out if I can shim it, and then test it with the shims applied, but if I do, are you going to run an unsupported, shimmed application? If not, then we're just wasting our time."
  • If the application vendor isn't being responsive, or is playing the blame game ("It's the fault of xxx software that I depend on - if they'd just fix their bugs, my software would run great - go talk to them."), then some debugging time can help you end the circle of blame and keep you talking to the people who can really help you. Remember, you're their customer, and if you come in saying, "I know it's you, you're doing this, this is illegal as documented here, and I need you to fix your code, please," then you can circumvent the blame game and start getting results (see the sketch after this list for the kind of evidence that settles it). Note that once you determine the culprit and can prove it, your debugging is done. Remember, you aren't going to change the code - the vendor is, whichever vendor it turns out to be.
  • If you don't need support, and you are willing to shim the third-party application, then you need to debug until you can figure out which shim to use. Once you know that, and you apply the shim and the tests pass, you are done. If you prove that it can't be fixed with a shim and you need the code changed, you are also done.
  • If the application is developed in-house, then you need to debug it until you know which person owns the component that is breaking. Then, assign the bug to them and let them finish debugging - your work is done.
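
To make the "prove it and stop" idea concrete, here is a minimal sketch in C of the kind of evidence I mean. The key name is hypothetical, but the pattern - a standard user being denied a write under HKEY_LOCAL_MACHINE - is one of the classic violations you find during Vista compatibility testing, and either result tells you what to do next.

    /* A minimal sketch (hypothetical key name): reproducing the classic
     * "per-user settings written to HKEY_LOCAL_MACHINE" bug. Run as a
     * standard user on Windows Vista, with registry virtualization not
     * applying (for example, because the executable has a manifest), this
     * returns ERROR_ACCESS_DENIED - exactly the kind of proof you hand to
     * the vendor, or use to justify a shim, before you stop.
     * Build with: cl hklmwrite.c advapi32.lib */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY hKey;
        LONG rc = RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                                  L"SOFTWARE\\ContosoApp\\Settings", /* hypothetical */
                                  0, NULL, 0, KEY_WRITE, NULL, &hKey, NULL);

        if (rc == ERROR_ACCESS_DENIED)
            printf("HKLM write denied (%ld): document it, pick an owner "
                   "(vendor fix or shim), and stop debugging.\n", rc);
        else if (rc == ERROR_SUCCESS)
        {
            printf("HKLM write succeeded - either you're elevated or "
                   "virtualization redirected it.\n");
            RegCloseKey(hKey);
        }
        else
            printf("RegCreateKeyExW failed with error %ld\n", rc);

        return 0;
    }

A result like that in hand is usually all it takes to end the circle of blame in the second bullet, or to pick the shim in the third - either way, the debugging stops there.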

Believe it or not, knowing when to stop is one of the hardest parts of debugging applications. You need to encourage your innate curiosity enough to determine just what you need, but then be able to turn it off. It's simply the only way to get everything done. And, believe me, by pointing somebody in the right direction, your help can be invaluable. Happy debugging!