Making an Exception


Diego Dagum

An interesting debate about C++ exceptions took place a few weeks ago on the C++ MVPs discussion list. The trigger was something as innocent and specific as "would you just throw std::runtime_error with some error message string, or would you define a new exception hierarchy? What's your opinion on AtlThrow/AtlThrowLastWin32?" But, as typically happens, the replies prompted new questions and justifications, shifting the focus and offering yet another example of how impossible it is in this sour software industry to find that "holy grail", the one-size-fits-all answer to every possible question (current or future); in this case, about generating exceptions and later handling them. Even so, the arguments were so enriching that I decided to compile some of them and share them here with the broader community.

 

A Brief Intro

Best practices for generating and handling exceptions are debated in every programming language that supports them, although in the C++ case the issue is arguably aggravated by the fact that exceptions were absent from the language's first versions, so alternative approaches emerged in the meantime (some may remember the old, macro-based MFC exception model). Furthermore, C++ is probably the most widely used language across developer generations: it's known and used by developers who predate Java and .NET, while still being among the most widely taught languages in academia, ahead of Java and C#. Add the fact that C++ remains among the languages with the most job openings, and you get large C++ projects built by developers with very different backgrounds and approaches, both to exception handling and to development as a whole.

 

Exception hierarchy

The original question mentioned std::runtime_error as a potential root for a custom hierarchy, but somebody preferred std::exception instead, arguing that the standard derivatives such as std::runtime_error have been too abused to form an effective hierarchy. If this practice is applied project-wide, it's also an argument against the catch-ellipsis handler, catch (...), since that one also intercepts non-C++ exceptions like Windows SEH (unless that was exactly what you wanted). [UPDATE: Windows SEH exceptions are not caught when compiling with the /EHs or /EHsc options instead of /EHa.]
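As a sketch of that project-wide practice (the class names here are illustrative, not taken from the discussion), a hierarchy rooted at std::exception lets handlers catch the root by reference instead of resorting to catch (...):

```cpp
#include <exception>
#include <sstream>
#include <string>

// Hypothetical project-wide root: everything we throw derives from
// std::exception, so handlers never need the catch-ellipsis form.
class AppError : public std::exception {
public:
    explicit AppError(const std::string& message) : m_message(message) {}
    const char* what() const noexcept override { return m_message.c_str(); }
private:
    std::string m_message;
};

// A derivative whose message carries context instead of a bare
// "record not found".
class RecordNotFound : public AppError {
public:
    RecordNotFound(const std::string& entity, long id)
        : AppError(format(entity, id)) {}
private:
    static std::string format(const std::string& entity, long id) {
        std::ostringstream oss;
        oss << entity << " " << id << " not found";
        return oss.str();
    }
};

// A handler written against the root sees every exception of the family.
std::string describe_failure() {
    try {
        throw RecordNotFound("Customer", 42);
    } catch (const std::exception& e) {
        return e.what();
    }
}
```

Catching `const std::exception&` here is enough precisely because the whole project agrees on the root; SEH and other non-C++ exceptions sail past it.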

Checking my copy of Bjarne Stroustrup's book on C++, he gives no explicit advice on root candidates for a custom exception hierarchy, even using plain objects in many examples; but no advice to the contrary, either. In his chapter 14 on exceptions, he introduces the standard exception hierarchy as the family of exceptions thrown by standard library components, while noting that classes you or other developers on your project build could extend this family. What Stroustrup avoided doing was actually done by the influential Boost group in a brief article on the matter: there, they consider std::exception a reasonable base class for your own hierarchy and encourage you to seriously consider using it.

Another MVP mentioned instead that he works mostly on MFC applications and that, in that specific context, basing your exceptions on MFC's CException is more convenient than the STL alternative, because the MFC message dispatcher has a try/catch that limits its scope to that exception hierarchy. In his own words, "there is one school of thought that says if you have managed to let the exception get back to the MFC dispatcher, your code is already erroneous." He affirmed that he has strong sympathy for that position.

Most MVPs agreed that, despite being legal C++, throwing primitive types like int or long, or similarly Windows-flavored ones like HRESULT, is a coding horror: failing to catch them in the proper place will crash the application and leave a hard post-mortem investigation to determine where they originated. One went further, stating that "for me it should be a reason for termination of employment (at least the second time it is done)."

Still, there was general agreement that while an exception hierarchy helps avoid the problem just mentioned, it can still occur when exception messages aren't explanatory enough (e.g. "Record not found" instead of something like "<Entity> <ID> not found").

 

Bulletproofing

As a side comment, someone criticized part of the MFC design because the CFileException::GetErrorMessage() method takes a pointer-based buffer as an argument instead of a std::string reference, as the current C++ school recommends (implicitly, the criticism also reaches the design of std::exception, whose what() method returns an old, C-style char*). But somebody else recalled some wisdom handed down since the exception-handling mechanism was added to C++: exception operations must not themselves throw exceptions. In that sense, a standard string implementation could do exactly that (i.e. it may fail to allocate memory), eventually leading to std::terminate().
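That "exception operations must not throw" wisdom can be honored with a message buffer that never touches the heap; a minimal sketch (the class name is illustrative):

```cpp
#include <cstring>
#include <exception>

// An exception that copies its message into a fixed-size member buffer at
// construction: no dynamic allocation, so neither constructing it nor
// calling what() can itself throw, even under memory pressure.
class FixedMessageError : public std::exception {
public:
    explicit FixedMessageError(const char* message) noexcept {
        std::strncpy(m_buffer, message, sizeof(m_buffer) - 1);
        m_buffer[sizeof(m_buffer) - 1] = '\0';   // guarantee termination
    }
    const char* what() const noexcept override { return m_buffer; }
private:
    char m_buffer[128];   // the heap is never involved
};
```

The price is a truncation limit on the message, which is usually acceptable for diagnostics that must survive low-memory conditions.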

That sounds reasonable, but the original poster's response was original as well: if your application is in such a state that there's no memory left even for a simple exception message, an unconditional termination wouldn't be that terrible after all.

Not that crazy: in Java there's a distinction between Errors and Exceptions. The latter are what we all know and are discussing here, but Errors belong to a category considered so fatal that attempting to catch them makes no sense. Thus, java.lang.OutOfMemoryError is an Error, not an Exception.

In any case, this MVP clarified that he expects such an abnormal condition (failure to create a std::string because there's not enough memory) to occur rarely, although there was no answer when a third participant asked, "even in mobile apps?"

 

Exception Declaration. Comparison with Java

As the debate went along, yet more hallways opened up every time somebody needed to justify an opinion or provide examples. Thus one participant dared to defend dynamic exception specifications (that is, a declaration of the exceptions that a function or method might throw), holding up, in my opinion erroneously, the Java exception declaration scheme as a model to imitate. I say erroneously for a couple of reasons:

  • In Java, declaring those so-called checked exceptions has a different purpose than in C++: it forces the caller of the declaring method to either catch those exception types or declare them itself as potentially thrown. Otherwise, the caller won't compile.
    In ISO C++, instead, the purpose of the dynamic specification is to verify at run time (not during compilation) that any thrown exception belongs to the listed types (or their descendants); otherwise std::unexpected() is called (although Visual C++ does not implement this behavior as the standard specifies). An exception specification in C++ imposes no conditions on the caller, unlike the Java case just mentioned.
  • Despite being different species, there's today a widespread belief in both languages that declaring exceptions is a practice to avoid rather than recommend. In the Java case, the reason is mostly the coupling that can result. In the C++ one, it can become a maintenance pain for the function whose exceptions are declared if new exceptions start being thrown by the functions or methods it calls.

I'm personally with the current stream of thought, but I still found interesting the justification offered by the defender of exception declarations: in his vision, by declaring you keep control of what could fail where, instead of letting anything fail anywhere. Otherwise, the absence of that control turns finding where an exception was thrown into an exponential problem. In that sense he prefers the penalty of keeping exception declarations up to date.

I don't think I buy the idea, although the approach could have a sweet spot when you have full control of your application's entire code base (which may well be the case for this MVP). If you depend on components delivered by other teams (e.g. third parties), keeping that degree of synchronization between callers and the methods or functions they call may soon become a heavy burden.

This, in fact, leads to the final aspect I'll cover in this post, related to exception handling.

 

Exception Handling

So far we've been talking about the moment an exception is thrown. But once it has happened, who should take care of it? The old notion that the immediate caller should be directly involved in catching it is fortunately gone, and the sounder idea of handling the exception where, and only where, it's possible to reestablish the application state (otherwise let it fly and keep unwinding) is now widespread.

There was also a shared awareness that in an n-layer (possibly n-tier) application, if nothing was done at a given layer, some action should be taken before leaving it, as an attempt to reestablish the application state. That action could be partial (like logging the original failure for diagnostic tools) while masking or transforming the exception into something more meaningful to throw back to the immediately outer layer. There are special cases where masking is not possible because the "outer world" is a tier where exceptions aren't supported; someone offered the example of crossing the boundary of a COM method, where a possible solution is to map the exception into some form of HRESULT.
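A sketch of that mapping at a COM boundary follows; to keep it self-contained, HRESULT and the error codes are declared locally here rather than pulled from the Windows headers, and the throwing work is simulated:

```cpp
#include <exception>
#include <new>

typedef long HRESULT;                    // stand-ins for the Windows definitions
const HRESULT S_OK          = 0x00000000L;
const HRESULT E_OUTOFMEMORY = 0x8007000EL;
const HRESULT E_FAIL        = 0x80004005L;

// The body of a COM-exposed method: no C++ exception may cross the
// boundary, so everything is translated into an HRESULT before returning.
HRESULT ComMethodBody(bool simulateOom) {
    try {
        if (simulateOom)
            throw std::bad_alloc();      // stands in for real work that throws
        return S_OK;
    } catch (const std::bad_alloc&) {
        return E_OUTOFMEMORY;
    } catch (const std::exception&) {
        return E_FAIL;
    } catch (...) {                      // truly nothing escapes the boundary
        return E_FAIL;
    }
}
```

The catch (...) clause is legitimate here: this is exactly the kind of last-chance barrier where intercepting everything is the intent.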

At the resource level, someone reminded us, compensating actions shouldn't depend on exception handling: by simply applying the RAII technique (Resource Acquisition Is Initialization), compensation happens in the resource's destructor as soon as the local resource goes out of scope.
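A minimal RAII sketch (the flag scenario is hypothetical): the compensating action lives in a destructor, so it runs on normal exit and during stack unwinding alike, with no catch block needed in the function itself:

```cpp
#include <stdexcept>

bool executing = false;

// Sets the flag on construction and clears it on destruction: the
// compensation runs whether the scope exits normally or via an exception.
struct FlagGuard {
    explicit FlagGuard(bool& flag) : m_flag(flag) { m_flag = true; }
    ~FlagGuard() { m_flag = false; }
    bool& m_flag;
};

void risky() {
    FlagGuard guard(executing);          // executing == true from here on
    throw std::runtime_error("boom");    // the destructor still resets it
}

bool flag_after_failure() {
    try { risky(); } catch (const std::exception&) {}
    return executing;                    // false: RAII did the compensation
}
```

The same shape covers file handles, locks, or GDI objects: acquire in the constructor, compensate in the destructor.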

 

Exception generation and handling are topics that fill whole chapters of books, so although this post (based on a spontaneous discussion initiated by a single question) offers some insights, you may have plenty of other recommendations on the matter. Would you like to share them with the rest of us?

 

 

Appendix: Exceptions in C++0x

As a side comment in the discussion, it was mentioned that dynamic exception specifications are to be deprecated in the upcoming C++0x standard. Furthermore, a noexcept declaration is to mean something somewhat similar to today's throw() specification, although not fully equivalent (its scope is still being discussed, so I'd rather not write anything here that I'll later have to change). An interesting article about it was published a few months ago in three parts (I, II and III).
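For readers with a compiler that already implements it, a minimal sketch of noexcept in both of its roles, as a specifier and as an operator:

```cpp
// noexcept as a specifier: the function promises not to throw (a violation
// would call std::terminate instead of unwinding to the caller).
int add(int a, int b) noexcept {
    return a + b;
}

// noexcept as an operator: a compile-time query on whether an expression
// can throw.
bool add_cannot_throw() {
    return noexcept(add(1, 2));
}
```

Unlike a dynamic exception specification, this is not a run-time filter on exception types; it is a single yes/no promise that optimizers and library code can rely on.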

 

Trivia: Why do try/catch blocks in C++ lack finally?

If you are familiar with Java or .NET, you may already have wondered why there's no finally block in C++. Can you guess why C++, unlike managed languages, doesn't need it?

  • The best reason I know not to allocate memory when throwing exceptions is to avoid a crash if the heap is corrupt, leading to a nasty debugging situation where you can't get information about what went wrong.  The same thing goes for freeing memory. It also applies to operations that you might undertake to document what went wrong - for example dumping a file of information. All should be done without involving the heap.

  • What about boost::asio && boost::system::error_code?

Does anybody remember its trick with exceptions vs error codes? I think it's a great example of how "modern C++ style" should be written: use error codes (result codes) for code with (often) variable results (i.e. boost::system::error_code) and write exception-based wrappers for those users who don't need error codes. It's very simple (but mostly reasonable for libraries).

And, of course, sometimes error codes can't be used (we all know these cases: constructors, operator overloading, etc.).
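A self-contained sketch of that dual-interface pattern (parse_port and its behavior are invented for illustration, not taken from boost::asio): the core function reports through an error code, and a thin wrapper turns a failure into an exception for callers who prefer that style:

```cpp
#include <stdexcept>
#include <string>

// Core interface: reports failure through an out-parameter error code.
int parse_port(const std::string& text, int& error) {
    error = 0;
    int value = 0;
    for (std::string::size_type i = 0; i < text.size(); ++i) {
        if (text[i] < '0' || text[i] > '9') { error = 1; return 0; }
        value = value * 10 + (text[i] - '0');
    }
    if (text.empty() || value > 65535) { error = 1; return 0; }
    return value;
}

// Exception-based wrapper: same work, but a nonzero code becomes a throw.
int parse_port(const std::string& text) {
    int error = 0;
    int value = parse_port(text, error);
    if (error) throw std::invalid_argument("bad port: " + text);
    return value;
}
```

Callers that expect frequent, routine failures stick to the error-code overload; callers for whom failure is genuinely exceptional get the throwing one, and the library author maintains a single implementation.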

  • The problem I've had with the STL exceptions is that they store the message as a byte (non-Unicode) string (or at least, std::exception::what returns a byte string). So I create my own base exception class:

    class Exception
    {
    public:
        explicit Exception(const std::wstring& message) : m_message(message)
        {
        }

        virtual ~Exception()
        {
        }

        const std::wstring& GetMessage() const
        {
            return m_message;
        }

    protected:
        std::wstring m_message;
    };

  • STL wrote:

    "...with /EHsc being preferable (it's faster because it assumes that extern "C" functions won't emit exceptions - while technically permitted by the Standard, sane code should never attempt to do such a thing..."

    While it would certainly be preferable to avoid this, it sometimes happens.  I'm maintaining a large (several million LOC), old (25+ years) application, written in C and partially converted to C++, where the call stack regularly passes back and forth between the two.  It just wouldn't be practical to try to block exceptions from crossing the language boundary.  We just spent a substantial amount of time tracking down a problem that turned out to be because we were using /EHsc.  I would recommend that all mixed C and C++ applications use /EHs by default, and only switch to /EHsc if they're really certain that their extern "C" functions could never let an exception escape (presumably with a catch(...) clause).  Otherwise the optimizer starts silently eliding catch clauses, resulting in Release-build only crashes that can be hard to track down.

  • I want to thank you all guys for all these comments. They certainly reconfirm once again that there's no one-size-fits-all clue for any potential problem that requires exception handling.

    But that's not a fact to get frustrated: it's just some wisdom to correctly tailor, whatever the approach we take, in its right size (helpful for these contexts, harmful for another ones).

    I learned a lot from you guys. Will keep posting these debates.

  • Finally is sorely lacking from C++. RAII is great but it only really applies to releasing resources wrapped in a class. What if e.g. I want to reliably reset a flag on exit from a function?

    void f()
    {
        executing = true;
        try
        {
            DoStuff();
        }
        finally
        {
            executing = false;
        }
    }

    Instead of the clean code above, in C++ I would have to create a silly class that resets the flag in the destructor.

    No idea why some people are so stubborn on that one. Just add finally, it's not redundant by any means.

  • Internationalisation in std::exception?

    Use UTF-8 (or if you're really pedantic about signed char UTF-7).

    A simple wrapper class can take care of the conversion if you insist on using Unicode strings or you better still use a resource string id (or equivalent) and a look-up/load operation etc. etc.

  • Who needs finally?

    That's what the lines after the catch clauses are for aren't they?

    What do you mean there are some exceptions you just let through?

    Then I put it to you that your supposed "finally" code is (usually) not going to make one whit of difference, you're already in a bad place.

    Who uses global variables directly rather than wrapping them properly?

    and yes, so far I've wrapped a fair percentage of the Win32 API, god help me...

  • >> Should we be forced to wrap every single thing we use

    > Yes. Yes, you should.

    Expecting every single development team to make their own bespoke wrappers seems backwards.

    How about the people who make Win32, the main non-RAII API we are all subjected to, do that C++ RAII wrapper for us, well and in one place, make it cover the entire API, and continue to maintain it as part of the main SDK, not as an afterthought that is updated slowly and all but abandoned when some other framework becomes the current fad?

    Aside from the consolidation of effort, it'd make code and programmers a lot more interchangeable between projects and teams.

    If MS cannot provide that wrapper is it reasonable to expect everyone else to provide their own?

    Smart pointers and other generics are simply not enough to deal with all the weird things Win32 throws at us.

    FWIW, I *do* make RAII wrappers for a lot of things (and not just resource clean-up; e.g. ensuring a threading event is set when a code block exits), and in my own code I almost never use exceptions, but I've still found myself in situations where I've sat wondering why C++ so stubbornly resists having finally blocks. It seems like a head-in-the-sand mentality to me, which is quite strange for a language that bends over backwards to support so many different programming styles (and as a result, supports Win32, which is not RAII).

  • I use scopeguard for simple code that might be handled by finally in Java. I prefer it to finally because (a) the release code only ever needs to be mentioned once and (b) the release code is written immediately after the acquisition code so it's easier to maintain.

    E.g.:

    CDC* pDC = GetDC();
    ON_BLOCK_EXIT(&CWnd::ReleaseDC, this, pDC);

    The original article is at www.drdobbs.com/184403758 but I use the one by Joshua Lehrer (www.zete.org/.../scopeguard.html).

    I've come across a really simple C++0x implementation using lambdas but I'm not sure where I got it from now. But it allows you to handle the example given by kaalus like this:

    void f()
    {
        executing = true;
        ON_BLOCK_EXIT([&]{
            executing = false;
        });
        DoStuff();
    }

    Much more elegant and easier to read than finally.
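[Editor's note] A self-contained sketch of such a lambda-based guard (ScopeGuard here is an illustrative name, not the implementation the commenter refers to):

```cpp
#include <functional>

// Runs the stored callable when the guard goes out of scope, whether the
// scope exits normally or during stack unwinding.
class ScopeGuard {
public:
    explicit ScopeGuard(std::function<void()> onExit) : m_onExit(onExit) {}
    ~ScopeGuard() { m_onExit(); }
private:
    std::function<void()> m_onExit;
    ScopeGuard(const ScopeGuard&);             // non-copyable
    ScopeGuard& operator=(const ScopeGuard&);
};

bool executing = false;

int run_guarded() {
    executing = true;
    ScopeGuard guard([&] { executing = false; });  // reset on any exit
    // DoStuff() would go here; report whether the flag was set meanwhile.
    return executing ? 1 : 0;
}
```

The guard's destructor fires after the return value is computed, so the flag is still set inside the scope and cleared once it ends.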
