I compared source-level stepping performance between Everett and Whidbey here. In both cases, the step operation is implemented by single-stepping through the disassembly until execution leaves the current source line. This can result in a lot of single-step operations, and thus can be slow.
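Roughly, the loop looks like the sketch below. The helper names (`single_step`, which executes one instruction and returns the new instruction pointer, and `line_of`, which maps an address to a source line via the symbol info) are hypothetical, not actual debugger APIs:

```python
# Sketch of the naive source-level step loop. single_step() and
# line_of() are invented helper names standing in for the real
# debugger primitives.

def step_source_line(single_step, line_of, ip):
    """Single-step until the instruction pointer leaves the current line."""
    start_line = line_of(ip)
    while line_of(ip) == start_line:
        ip = single_step()   # one hardware single-step; a debug event each time
    return ip
```

Each iteration is a full round trip through the debugger (trap, debug event, resume), which is why a long source line gets expensive.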

Gatsby asked:
     "Why does step-over need to be implemented as a bunch of little single steps? Why not figure out what instruction the end of the 'statement' corresponds to, and add a breakpoint there and run to it?"


There are some issues with trying to predict where a line will finish:
1) The line may have multiple end points. For example:
     if (a) goto A; else if (b) goto B; else if (c) goto C; else goto D;
This statement could end up at four different places (labels A, B, C, and D), so the debugger could need an arbitrary number of breakpoints to cover a single line.

2) Predicting where an instruction goes can require a full x86 interpreter. For example, consider this:
     mov eax, ...
     call [eax+8]
You first need to interpret the instructions to determine the value of eax, and then read the debuggee's memory at [eax+8] to find the call target.
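A toy illustration of just the last step: even once the debugger has emulated enough to know eax, finding where `call [eax+8]` goes still requires a read of the debuggee's memory. The register and memory models below are invented stand-ins for the real debugger interfaces:

```python
# Toy sketch of resolving the target of `call [eax+8]`. The regs dict
# stands in for the debuggee's register context, and the memory dict
# for reading the debuggee's address space; both are invented models.

def resolve_call_target(regs, memory):
    """Compute the destination of `call [eax+8]`."""
    eax = regs['eax']          # requires having emulated the `mov eax, ...`
    return memory[eax + 8]     # requires reading debuggee memory

regs = {'eax': 0x1000}
memory = {0x1008: 0x401000}    # vtable-style slot holding the function pointer
```

Doing this reliably for arbitrary instruction sequences amounts to writing an x86 interpreter.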


Now it’s certainly possible to have an intelligent hybrid between the two methods (single-step vs. predict). For example, the step operation could make a best-effort prediction, and when the interpreter gets lost, it could run up to the point where it got lost and then repeat from there. It would be pretty easy to make this method at least skip past basic blocks.
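The hybrid can be sketched as follows: scan forward through straight-line code, run to a single breakpoint at the end of the basic block, and fall back to a real single-step only at control transfers. All helper names and the toy instruction stream are invented for illustration; the toy `single_step` also assumes fall-through at branches:

```python
# Sketch of the hybrid step: predict through basic blocks (one
# breakpoint + continue), single-step only where prediction gets lost.
# instrs maps address -> opcode mnemonic; run_to and single_step are
# invented stand-ins for the real debugger primitives.

def hybrid_step(instrs, line_of, run_to, single_step, ip):
    start = line_of(ip)
    while line_of(ip) == start:
        # Scan forward while prediction is easy (no control transfer).
        scan = ip
        while (scan in instrs and line_of(scan) == start
               and instrs[scan] not in ('jcc', 'jmp', 'call', 'ret')):
            scan += 1
        if scan > ip:
            ip = run_to(scan)       # one breakpoint covers the whole block
        else:
            ip = single_step(ip)    # interpreter "lost": take one real step
    return ip
```

For a line that is mostly straight-line code, this replaces dozens of single-step round trips with a handful of breakpoint continues.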