One of the things that causes us grief is that source-level step-in (F11 in VS) is not a well-defined operation.

Some examples:
Consider the following call to static method foo:
     MyClass.foo(...);

Now offhand, you'd expect the step-in to land in the method MyClass.foo(). But what if that invokes a class constructor (cctor) for MyClass? Should the step-in land in the cctor instead?
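The post's snippets are C#-flavored, but the first-use class-initialization hazard can be sketched in Java, whose static initializer blocks are the analogue of a cctor. The class names and the log list below are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class CctorDemo {
    static final List<String> log = new ArrayList<>();

    static class MyClass {
        // Analogue of a CLR cctor: runs on the class's first active use.
        static { log.add("static initializer"); }

        static void foo() { log.add("foo"); }
    }

    public static void main(String[] args) {
        // A source-level step-in on this line may land in the static
        // initializer rather than in foo(), because class initialization
        // is triggered before the first call into the class.
        MyClass.foo();
        System.out.println(log); // [static initializer, foo]
    }
}
```

The initializer runs exactly once, so the same step-in on a second call would land directly in foo() — another reason the operation is hard to define consistently.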
What if some other interceptor gets in the way, such as a security stub that runs managed code?
What if foo() is a pinvoke that calls out to native code? Should we stop in the native code? Or should we stop only when that native code calls back into managed code?
What if foo() invokes some sort of managed marshaling stub for its parameters? Should we stop in the stub?
Or what if foo() is a cross-thread call, an RPC call, or a call to a web service? Should the step-in go to the real source-level target, or into the call infrastructure? This is an even bigger issue for languages where calls may get compiled into complex operations such as message passing.
Or what if somebody throws an exception (perhaps evaluating a parameter causes a null-reference exception), and that executes a filter (not a catch block) on the stack? Should we stop in the filter or not?

Another caveat is that the V2.0 CLR doesn't handle debugging inlined methods well at all, so if foo() is inlined, you won't step into it. However, this is only an issue for optimized code.

There are other examples where the behavior is technically well-defined, but difficult to predict. Consider:
    Foo(Alpha(Beta()), Gamma());
Although this case may be well-defined, unless you know the exact argument evaluation order (which may not be obvious from the source), you don't know which method gets stepped into first. And even when you do, if you want to step into Alpha(), you've got to do a lot of step-in and step-out operations. (Though the "step-into which method" feature, which pops up a context menu letting you specify which call you want to step into, helps here.)
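Here's the nested call sketched in Java, where the language does define the order (arguments evaluate left to right, innermost first). Foo/Alpha/Beta/Gamma and the order list are invented names; the point is that even with a fully defined order, the first step-in lands in Beta(), not Foo():

```java
import java.util.ArrayList;
import java.util.List;

public class EvalOrderDemo {
    static final List<String> order = new ArrayList<>();

    static int Beta()            { order.add("Beta");  return 1; }
    static int Alpha(int x)      { order.add("Alpha"); return x; }
    static int Gamma()           { order.add("Gamma"); return 2; }
    static int Foo(int a, int b) { order.add("Foo");   return a + b; }

    public static void main(String[] args) {
        // Arguments evaluate left to right, innermost first:
        // Beta, then Alpha, then Gamma, and only then the call to Foo.
        Foo(Alpha(Beta()), Gamma());
        System.out.println(order); // [Beta, Alpha, Gamma, Foo]
    }
}
```

So to step into Foo() itself, a user has to step in and back out of three other methods first.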
Similar problems arise when stepping into multicast delegates.
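Java has no multicast delegates, but the ambiguity can be approximated with a list of Runnables invoked behind one call site (all names below are invented): the debugger faces several equally plausible step-in targets plus the invocation machinery itself.

```java
import java.util.ArrayList;
import java.util.List;

public class MulticastDemo {
    static final List<String> calls = new ArrayList<>();

    public static void main(String[] args) {
        // Rough analogue of a multicast delegate: several targets
        // reached through a single invocation site.
        List<Runnable> handlers = new ArrayList<>();
        handlers.add(() -> calls.add("first"));
        handlers.add(() -> calls.add("second"));

        // Stepping in here: should the debugger stop in the invocation
        // machinery, in the "first" handler, or in the "second"?
        for (Runnable r : handlers) r.run();
        System.out.println(calls); // [first, second]
    }
}
```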

It's a no-win:
One approach is that step-in should stop you at the next bit of code executed, with no exceptions: stop in the cctor, stop in the marshaling stubs, etc. The problem is that this is not what most end users actually want. They most often want to stop in the source at the other end of the call; anything that happens in between is an implementation detail. Furthermore, if they do stop in some interceptor, there may not be a convenient way to resume the step-in operation and step into foo().
Another approach is to arbitrarily skip certain things. A middle approach is to add a bunch of policy knobs to control what is skipped.

What about ICorDebug?
The CLR implicitly chose to tackle this problem by including ICorDebugStepper in our API. We ended up adding a bunch of different flags (see SetUnmappedStopMask and SetInterceptMask) that you can set on ICorDebugStepper to toggle these sorts of policy decisions. This has proven painful because 1) it's more arbitrary functionality that we pull into the CLR platform, and 2) the list of policy flags will never be complete.