This morning, I was reading a post on the GDN C# Language board about exception handling, and a reply got away from me, so I thought I'd put it here instead.

I was going to link to the post, but I think that might be unfair to the poster, so I'll skip it.

The discussion is about additional exception handling features in C#.


The vast majority of exceptions should not be caught, except perhaps by a last-chance handler.

In rare and very specific situations, you may want to implement recovery logic. The canonical case is getting an exception when trying to open a file, though even then, you only catch it if there's something you can do about it. Can't open a user file? Ask the user for the filename again. Can't open a system file? Well, there's probably no way to recover from that, so catching the exception probably doesn't do any good.
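To make the "user file" case concrete, here's a minimal sketch of what that recovery might look like. The `promptUserForFilename` callback is a hypothetical stand-in for however you'd re-ask the user; everything else uses standard `System.IO` calls.

```csharp
using System;
using System.IO;

class FileOpener
{
    // Keep asking the user for a filename until one opens.
    // promptUserForFilename is a hypothetical helper; it returns null to give up.
    public static Stream OpenUserFile(string filename, Func<string> promptUserForFilename)
    {
        while (filename != null)
        {
            try
            {
                return File.OpenRead(filename);
            }
            catch (FileNotFoundException)
            {
                // The one case we can do something about: ask again.
                filename = promptUserForFilename();
            }
            // Anything else - access denied, bad path, disk error - keeps
            // propagating to the caller, because we can't recover from it here.
        }
        return null;
    }
}
```

Note that only `FileNotFoundException` is caught; the catch is as narrow as the recovery it implements.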

Without any user-written exception handling, the behavior of your code with respect to exceptions is fully specified. Every exception makes it up to the outermost layer, so you know about everything bad that happens.

When you start adding in exception handling, that's no longer true. A catch block that you write may be perfect for the exception you catch, which is a good thing - that's why exception handling is there. Or, it may appear to work but harbor a latent bug. I've used this example a number of times:

catch (Exception e)
{
   if (e.InnerException.GetType() == typeof(MyFileNotFoundException))
      Recover();   // use your imagination for how this works
}

This code appears to work fine - the recovery works great. But it swallows any other exception that happens in the try block.

This specific example is an illustration of why wrapping is something to be approached carefully and thoughtfully, but the general point is that exception handling needs to be targeted extremely tightly. Even then it's possible to get unexpected behavior.
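For contrast, here's a sketch of the tightly-targeted version of that same recovery. It reuses the hypothetical `MyFileNotFoundException` and `Recover` from the example above, with stub implementations so it runs on its own; the point is the shape of the catch clause, not the stubs.

```csharp
using System;

class MyFileNotFoundException : Exception { }

class TargetedCatch
{
    // Hypothetical operation that fails the way the example assumes.
    static void DoSomethingWithFile() => throw new MyFileNotFoundException();

    static void Recover() { /* use your imagination, as above */ }

    public static string Run()
    {
        try
        {
            DoSomethingWithFile();
        }
        catch (MyFileNotFoundException)
        {
            // Only the one exception we know how to handle is caught here;
            // anything else keeps propagating to the outermost layer.
            Recover();
            return "recovered";
        }
        return "no exception";
    }
}
```

Because the catch clause names the specific exception type, there's no way for it to silently swallow something it wasn't written for.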

Or, to put it another way, the less exception handling, the better. Even experienced programmers will make mistakes - I inserted 250K records into a database a while back because of a flaw in my recovery logic in a catch clause.

If you've spent time working with return codes in the unmanaged world, my experience suggests that you are likely to use too much exception-handling code in the managed world.

To go on to a couple of the specific proposals:

1) One proposal is to add try(count), which would automatically retry the block up to count times.

There are a few scenarios where this makes sense, but they're pretty rare.

My group came across a situation like this in unmanaged code recently. The low-level code needed to open a file, but sometimes the file hadn't quite been closed yet, so the developer wrote some retry code - sleep for a second, then retry, up to 10 times before failing. That code worked fine, but then one day the process hung. It turned out that the file was simply missing, and the code retried for 10 seconds before erroring out.

But the calling code also implemented a retry mechanism. As did the calling code of that.

So, the system would wait for about 15 minutes before it finally decided that it couldn't find the file.
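The arithmetic behind that delay is worth spelling out. Assuming the numbers from this story - three layers, ten attempts each, a one-second sleep at the bottom - each layer of retry multiplies the worst-case wait of the layer below it:

```csharp
using System;

class RetryMath
{
    // Worst-case wait when each layer retries up to `attempts` times
    // and only the lowest layer actually sleeps between attempts.
    public static double WorstCaseSeconds(int layers, int attempts, double sleepSeconds)
    {
        double total = attempts * sleepSeconds;   // lowest layer: 10 x 1s = 10s
        for (int i = 1; i < layers; i++)
            total *= attempts;                    // each caller repeats the whole thing below it
        return total;
    }

    static void Main()
    {
        // Three layers of 10 retries over a 1-second sleep:
        // 10s * 10 * 10 = 1000s, roughly a quarter of an hour
        // before the system admits the file simply isn't there.
        Console.WriteLine(WorstCaseSeconds(3, 10, 1.0));
    }
}
```

Each retry loop was individually reasonable; it's the composition that's pathological, and no single layer could see it.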

I think try(count) would quickly become a crutch for taming behavior that wasn't understood, and a source of lots of weird behavior of its own.

A second proposal was to expand the scope of a catch clause, either making it apply to a whole method or supplying a handler method to be called when an exception arises.

The essence of exception handling is to be able to respond to a specific exception in a specific situation.  The chance that such a handler can do the right thing in all situations seems remote to me, and the chance that it would do the wrong thing seems high.