Singletons are evil. I hate them. No, "hate" is a strong word. I dislike them. They're inherently easy to misuse. I guess the problem gets aggravated by the fact that the Singleton is the simplest of the Design Patterns to adopt and implement, so it tends to pop up everywhere. I know because I've been there. My first project after learning about patterns was peppered with singletons. When you get a new hammer, everything starts to look a lot like a nail. Today I avoid them as much as possible. I guess you should too. Not that you can't use them, but you should know the kind of trouble they can cause.

Software Design is a lot about the management of dependencies. If you're not thinking about dependencies when you're doing design, I'm sorry, but whatever you think you're doing, however fancy you think it is, it's not design.

When designing a piece of software you try to eliminate unnecessary dependencies as much as practically possible, to give the piece fewer reasons to change. This is not to prevent change; on the contrary, by giving fewer reasons for something to change you make change easier elsewhere. The idea is to control how change can propagate through the hierarchy, so changes can happen without triggering a devastating ripple of more and more changes.

Dependencies come not only from other software components, but also from policies, user requirements, decisions, etc. And hence the problem with singletons: they commit to a decision about their cardinality way too early in the design.

Here's an example.

    class CConfigManager
    {
    public:
        ...

        static CConfigManager* getInstance();

        ...
    };

Classic. I see this sort of thing a lot. Now, you may have been thinking back there that a single instance of the configuration manager is all your application is ever going to need, so you may as well bake the trait directly into the class itself, right? That's very convenient too, because now any code that needs the instance can go grab it directly, which means fewer references to pass around. Besides, isn't it better to have code that is smart and doesn't need to be told what to do all the time? There's a catch though: in most cases the decision that there can be only one of something is an application-level responsibility. Ultimately it should be up to the application to decide it'll need only one of whatever, ever. Yet you pulled that high-level decision all the way down to a lower-level layer and cemented it there, and that changed things. Soon pieces of code everywhere start to take a hard dependency on the fact that CConfigManager is a singleton. The code has to know that, because it has to call getInstance. This increases the coupling between the code and the singleton class unnecessarily. If on one hand the code doesn't need to be told what to use, on the other hand it can't be told to use anything else.
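To make the coupling concrete, here's a minimal sketch of how that dependency hardens. The CConfigManager is the article's class fleshed out with a Meyers-style static instance; ReportGenerator and its `get` key are hypothetical names of mine, standing in for any consumer code:

```cpp
#include <string>

// A minimal fleshing-out of the article's CConfigManager singleton.
class CConfigManager
{
public:
    static CConfigManager* getInstance()
    {
        static CConfigManager instance; // created on first use, only one ever
        return &instance;
    }

    std::string get(const std::string& key) const { return "value-for-" + key; }

private:
    CConfigManager() = default; // nobody else can construct one
};

// Hypothetical consumer: it isn't handed a configuration manager, it
// reaches out and grabs *the* instance. The singleton-ness of
// CConfigManager is now baked into this class too -- there is no way
// to point it at a different configuration.
class ReportGenerator
{
public:
    std::string title() const
    {
        return CConfigManager::getInstance()->get("report.title");
    }
};
```

Every class written this way repeats that `getInstance` call, so the "only one instance" decision is no longer recorded in one place; it's smeared across every caller.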

Then one day a requirement for a new feature comes in. The feature is to consolidate two reports of the thing your thing generates a report of, but in order to do that the new code will have to instantiate two sets of objects, each set pointing to a different configuration manager. Basically the application decided it needs two instances of the object now. It was the application's prerogative all along, anyway.
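What the post-refactoring code tends to look like is this: strip the singleton machinery out of CConfigManager and let the application construct as many as it wants, handing each consumer the one it should use. The constructor parameters and the ReportGenerator class below are my own illustrative assumptions, not anything from the example above:

```cpp
#include <string>

// CConfigManager with the singleton machinery removed. How many exist,
// and which code sees which one, is now the application's call.
class CConfigManager
{
public:
    explicit CConfigManager(std::string source) : m_source(std::move(source)) {}
    std::string source() const { return m_source; }

private:
    std::string m_source;
};

// Hypothetical consumer: it is *told* which configuration to use
// instead of grabbing a global one, so two of them can happily point
// at two different configuration managers.
class ReportGenerator
{
public:
    explicit ReportGenerator(const CConfigManager& config) : m_config(config) {}
    std::string describe() const { return "report from " + m_config.source(); }

private:
    const CConfigManager& m_config;
};
```

Now the consolidation feature is just the application instantiating two CConfigManager objects and wiring each set of report objects to its own one; no class below the application layer has to change.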

Congratulations, you just got yourself some extensive refactoring to do. And good luck doing that without unit tests, which you obviously don't have, because had you been writing unit tests you'd have realized singletons make the "units" a lot harder to test, in which case you'd have gotten rid of them early on.
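The testability point deserves a sketch too. One common remedy, assuming you've already broken the singleton habit, is to depend on an abstraction so a test can substitute its own configuration; the IConfig interface and FakeConfig double below are hypothetical names for illustration:

```cpp
#include <string>

// Hypothetical abstraction: the consumer depends on this interface,
// not on a concrete (let alone global) configuration class.
class IConfig
{
public:
    virtual ~IConfig() = default;
    virtual std::string get(const std::string& key) const = 0;
};

class ReportGenerator
{
public:
    explicit ReportGenerator(const IConfig& config) : m_config(config) {}
    std::string title() const { return m_config.get("report.title"); }

private:
    const IConfig& m_config;
};

// A test double: each test constructs its own, so there is no hidden
// global state to set up or tear down between tests.
class FakeConfig : public IConfig
{
public:
    std::string get(const std::string&) const override { return "test-title"; }
};
```

Compare that with the singleton version: there the test would have to mutate the one global instance before each case and remember to restore it after, and tests could no longer run in isolation or in parallel.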

More good stuff on the badness of singletons: