Ideally, an app executes the same way whether or not a debugger is attached. This ideal derives from very practical motivations:

  1. Bugs usually first occur outside the debugger (some test fails), and then you want to just rerun the test under the debugger to repro the problem. If the debugger changes the behavior, that impedes your ability to repro.
  2. On the flip side, developers may write their code in an IDE and verify that it works by examining it under a debugger. If the debugger introduces a behavioral change that hides a defect, the dev will miss it here.
  3. People blame the debugger for the app's behavior if it only happens under a debugger. For example, if an app throws an exception only when under a debugger, people will assume that the debugger is causing the exception, or that the debugger is broken and erroneously reporting the exception. (We get a lot of alleged bugs like this.) This misplaced blame can keep the real problem from getting fixed.
  4. Behavioral differences undermine people's faith in their debuggers.
  5. When debugging, it shouldn't matter whether the debugger was attached to a running process or whether the debuggee was launched from under the debugger. Behavioral changes can drive a wedge between these two scenarios. This is particularly important for debugging scenarios that are attach-only (e.g., debugging ASP.NET or SQL Server).

So why might apps execute differently under the managed debugger?

  1. User checks: The biggest culprit. A program can explicitly call System.Diagnostics.Debugger.IsAttached to ask whether a managed debugger is attached and then behave differently. (The Win32 API IsDebuggerPresent() similarly checks whether a native debugger is attached.) This is the easiest way to cause the most pain. For example, WinForms explicitly uses a different 'debuggable' WndProc if a managed debugger is attached. This debuggable WndProc has an extra try-catch around user callbacks (which the non-debuggable WndProc does not have) to notify users if their callbacks are throwing exceptions. Another favorite seems to be throwing exceptions iff a debugger is attached, as a way of notifying the user.
  2. CLR / JIT checks: ICorDebug allows the debugger to change the just-in-time compiler's codegen options (such as disabling optimizations). This can cause different native code to be generated from the IL, and that can definitely change behavior. The JITs in the v1.0 and v1.1 CLRs actually generated debuggable code by default if a debugger was attached. This was a very bad idea. In v2.0, the JIT is explicitly ignorant of whether a debugger is attached, although the debugger can explicitly request that the JIT generate debuggable code.
  3. OS checks: In rare cases, the OS may handle things differently. For example, on Windows, if a native (or interop) debugger is attached, exception filters are not executed for an unhandled exception. (Try it and see for yourself.)
  4. Exploiting non-determinism: Some parts of an app's execution are non-deterministic, such as timing and any dependence on machine-wide state (such as page sharing). For example, in a multithreaded app, the debugger will definitely affect timing, and that can expose (or hide) existing race conditions. I've personally found this to be much more common than I'd like. I've had to debug too many bugs that would only repro on an optimized build outside of the debugger for exactly this reason.
  5. Debugger checks: All the above reasons assume that the debugger is well-behaved. However, we as the producers of an API can't stop the consumers of our API from doing "stupid" things that change debuggee behavior. For example:
    • Launching under a hosting process: Apps are normally launched in their own process. Somebody could write a debugger that launches the debuggee inside of a host process. Now that's a significant difference between the debugging and non-debugging cases.
    • Func-eval: Every property evaluation (func-eval) is code that runs only under a debugger. If the func-evals have any side effects, those can change the debuggee's behavior. This is particularly important if the debugger does func-evals without prompting the user. Steve Steiner has a great example of this: http://weblogs.asp.net/stevejs/archive/2004/02/16/73936.aspx
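The "user check" anti-pattern in item 1 isn't unique to .NET. As a rough illustration in Python (where a non-None `sys.gettrace()` plays a role similar to Debugger.IsAttached, since pdb and most Python debuggers work by installing a trace function), here is a hypothetical sketch of code that misbehaves only under a debugger; the function names are made up for illustration:

```python
import sys

def debugger_attached() -> bool:
    # pdb (and most Python debuggers) install a trace function, so a
    # non-None sys.gettrace() is a rough analogue of
    # System.Diagnostics.Debugger.IsAttached.
    return sys.gettrace() is not None

def process(order):
    # Anti-pattern: behavior diverges based on the debugger check.
    # Under a debugger this raises to "notify" the developer; outside
    # one it silently substitutes a fallback -- so the very failure you
    # attach a debugger to investigate changes shape when you do.
    if order is None:
        if debugger_attached():
            raise ValueError("order must not be None")
        return {}
    return order
```

Run normally, `process(None)` quietly returns a fallback; stepped through under pdb, the same call raises. That is exactly the kind of divergence described above.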
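Item 4's timing sensitivity can be demonstrated in any language with threads. The sketch below (Python, with an artificial yield standing in for the scheduling skew a debugger introduces; names are invented for illustration) shows a lost-update race whose frequency changes with timing:

```python
import threading
import time

def racy_total(iterations: int, widen_window: bool) -> int:
    # Two threads do an unsynchronized read-modify-write on a shared
    # counter. 'widen_window' inserts a scheduling point between the
    # read and the write, standing in for the timing perturbation a
    # debugger adds (breakpoints, single-stepping, event processing).
    state = {"n": 0}

    def worker():
        for _ in range(iterations):
            n = state["n"]        # read
            if widen_window:
                time.sleep(0)     # yield: widen the race window
            state["n"] = n + 1    # write (may clobber the other thread's)

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["n"]
```

With the window widened, the total routinely falls short of 2 * iterations (lost updates); with it narrow, a short run may never hit the race at all. A debugger shifts timing the same way, which is why races repro (or vanish) only outside it.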

 

IMO, the CLR debugger team was too naive about this problem in v1.0 / v1.1. Fortunately, we've taken a lot of great steps in v2.0 to address it:

  1. We've recognized that this is important. Our VS counterparts (particularly Andy Pennell) helped make this clear.
  2. We've categorically eliminated any CLR behavior that depends on whether a debugger is attached. As mentioned earlier, the JIT will no longer automatically generate debuggable code. We also now always track the JIT's debug information (the IL-to-native maps that the JIT produces). See the definition of CorDebugJITCompilerFlags in the v2.0 CorDebug.idl file (which is available in the VS 2005 Beta 1 SDK).
  3. We've tried to identify common patterns for users calling Debugger.IsAttached, and then have ICorDebug provide alternative solutions that don't require the debuggee to behave differently. For example, WinForms has its debuggable callback because it wants to notify the user when an exception crosses the boundary between a callback and a handler. If ICorDebug just (a) recognized that this boundary was significant and (b) provided a notification that an exception was going to unwind across it, WinForms would never have needed to add the try-catch in the first place. In v2.0, we do both of these things: we allow such distinctions (via the Just-My-Code model) and provide new debugging events to describe the exception (check out ICorDebugManagedCallback2 in CorDebug.idl).
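The WinForms pattern described above, picking a different dispatch path iff a debugger is attached, can be sketched as follows. This is a Python stand-in, not the actual WinForms code: `sys.gettrace()` plays the role of Debugger.IsAttached, and the `dispatch` function is hypothetical:

```python
import sys

def dispatch(callback, message):
    # Analogue of the v1.x WinForms 'debuggable WndProc': iff a debugger
    # is detected, wrap the user callback in an extra try/except so the
    # user is told their callback threw at the dispatch boundary.
    if sys.gettrace() is not None:
        try:
            return callback(message)
        except Exception as exc:
            # A notification the non-debugger path never produces -- the
            # behavioral divergence that a debugger-side boundary
            # notification makes unnecessary.
            print(f"callback raised across dispatch boundary: {exc!r}")
            raise
    # Non-debugger path: no wrapper, so unwind semantics differ.
    return callback(message)
```

Once the debugger itself can observe an exception unwinding across the boundary, the framework can keep a single code path and let the debugger surface the crossing.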