A conversation the other day got me thinking about interfaces.  As the author of an interface, it is easy to blame the users for its misuse.  I think that's the wrong place for the blame.  An interface that is used incorrectly is often a poorly written one.  A well-written interface will be intuitive for most people.  On the other hand, a poorly designed one will be difficult for many people to operate.  Sure, properly educated users could use even a poor interface, and even the best interfaces can be misused by incompetent users, but on average a good interface should intuitively guide its users to make the right decisions.  I'll focus mostly on programming interfaces, but the same principles apply to user interfaces.

One of the rules in our coding standard is that memory should be released in the same scope in which it was allocated.  In other words, if you allocate the memory, you are responsible for making sure it is released.  This makes functions like this violations of our standard:

Object * GetObject(void)
{
    return new Object(parameters);    // caller must remember to delete the returned Object
}

Why do we say this shouldn't be done?  It is not a bug.  Not necessarily, anyway.  If the person calling this function understands the contract of the function, he'll know that he needs to delete the Object when he's done with it.  If there is a memory leak, it's his fault.  He should have paid closer attention.  That's all true, but in practice it's also really hard to get right.  Which functions return something I need to delete?  Which allocation method did they use?  malloc?  new?  CoTaskMemAlloc?  It's better to avoid the situation by forcing the allocation into the same scope.  This way there is much less room for error.  The right behavior is intuitively derived from the interface itself rather than from some documentation.  Here's a better version of the same function:

void GetObject(Object * obj)
{
    obj->Configure(parameters);    // caller owns obj; this function only configures it
}

This second pattern forces the allocation onto the user.  The Object could be on the stack or allocated via any method.  The caller will always know, because he created the memory.  Not only that, but people are accustomed to freeing the memory they allocate.  The developer calling this function will know that he needs to free the memory because it is ingrained in his psyche to free it.
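
To make the contrast concrete, here's a minimal sketch of the calling side, reusing Object, GetObject, and Configure from above.  The Caller function and the assumption that Object has a default constructor are mine, not part of the original example.

void Caller(void)
{
    Object stackObj;                  // lives on the stack; nothing to delete
    GetObject(&stackObj);             // the function only configures it

    Object * heapObj = new Object();  // the caller did the new...
    GetObject(heapObj);
    delete heapObj;                   // ...so the caller naturally remembers the delete
}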

Here's another example I like.  I've seen this cause bugs in production code.  CComPtr is a smart pointer that wraps COM objects.  It manages the lifetime of the object so you don't have to.  In most cases, if the CComPtr already points at an object and you ask it to assign something new to itself, it will release the initial pointer.  Attach and operator= are examples.  Both will release the underlying pointer before assigning a new one.  To do otherwise is to leak the initial object.  However, there is an inconsistency in the interface.  operator&, which retrieves the address of the internal pointer, p, does not release the underlying pointer.  Instead, it merely asserts if p != NULL.  CComPtrBase::CoCreateInstance behaves similarly.  If p != NULL, it asserts, but happily overwrites the pointer anyway.  Why?  The fact that there is an assert means the author knows it is wrong.  Why not release before overwriting?  I'm sure the author had a good reason, but I can't come up with it.  Asserting is fine, but in retail code the assert compiles away and the call will just silently leak memory.  Oops.  Who is to blame when this happens?  I posit that it is the author of CComPtr.
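
To see how that plays out, here's a rough sketch of the trap.  IWidget and CLSID_Widget are hypothetical stand-ins for a real COM interface and class ID, and error handling is omitted for brevity.

#include <atlbase.h>

void Demo(void)
{
    CComPtr<IWidget> spWidget;
    HRESULT hr = spWidget.CoCreateInstance(CLSID_Widget);   // spWidget now owns a reference

    // Reusing the same smart pointer without clearing it first:
    // a debug build asserts, but a retail build silently overwrites the
    // internal pointer, and the first object is never Released.  That's the leak.
    hr = spWidget.CoCreateInstance(CLSID_Widget);

    // operator& behaves the same way: it asserts that p is NULL in debug
    // builds, but still hands out the address of the internal pointer.

    // The safe pattern is to release explicitly before reusing the pointer.
    spWidget.Release();
    hr = spWidget.CoCreateInstance(CLSID_Widget);
}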

When someone takes your carefully crafted interface and uses it wrong, say by forgetting to free the memory from the first GetObject call above, the natural inclination as the developer is to dismiss the user as an idiot and forget about the problem.  If they'd just read the documentation, they would have known.  Sometimes it's possible to get away with that.  If your interface is the only one which will accomplish something, or if it is included in some larger whole which is very compelling, people will learn to cope.  However, if the interface had to stand alone, it would quickly be passed over in favor of something more intuitive.  Let's face it, most people don't read all the documentation.  Even when they do, it's impossible to keep it all in their heads.

Very often the author of an API has the power to wave a magic wand and make whole classes of bugs disappear.  A better-written API--and by that I mean a more intuitive one--makes it obvious what is and is not expected.  If it is obvious, people won't forget.  As the author of an API, it is your responsibility to make your interface not only powerful but also intuitive.  Bugs encountered using your interface make it less appealing to use.  If you want the widest adoption, it is best to make the experience as pleasant as possible.  That means taking a little more time and making the interface not just powerful, but intuitive.

How does one do that?  It's not easy.  Some simple things can go a long way though. 

  • Use explanatory names for classes, methods, and parameters. 
  • Follow the patterns in the language you are writing for.  Don't do something in a novel way if there is already an accepted way of doing it. 
  • Finally, be consistent within your interface and your framework.  If your smart pointer class releases the pointer most of the time when something new is assigned to it, that's not enough.  It should be true every time.  Having an exception to the rule inevitably means people will forget the exceptional situation and get it wrong.  A sketch of what full consistency might look like follows this list.
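
Here's a rough sketch of that last point: a hypothetical smart pointer (not ATL's CComPtr) in which every path that installs a new pointer, including operator&, releases the old one first.

#include <cstddef>    // for NULL

// Hypothetical COM smart pointer illustrating the "release every time" rule.
template <class T>
class ConsistentPtr
{
public:
    ConsistentPtr() : p(NULL) {}
    ~ConsistentPtr() { if (p) p->Release(); }

    // Takes ownership of an already-AddRef'd pointer, releasing any old one.
    void Attach(T * pNew)
    {
        if (p) p->Release();
        p = pNew;
    }

    // Shares ownership of pNew; always releases the old pointer.
    ConsistentPtr & operator=(T * pNew)
    {
        if (pNew) pNew->AddRef();
        if (p) p->Release();
        p = pNew;
        return *this;
    }

    // Even here: release rather than assert, so retail builds can't leak.
    T ** operator&()
    {
        if (p) { p->Release(); p = NULL; }
        return &p;
    }

private:
    T * p;

    // Copying omitted from this sketch.
    ConsistentPtr(const ConsistentPtr &);
    ConsistentPtr & operator=(const ConsistentPtr &);
};

With no exceptional path, there is nothing for the caller to remember and nothing to forget.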

As I said at the beginning, this same rule applies to user interfaces.  If people have a hard time using it, blame the designer, not the user.  For those of you old enough to have used WordPerfect 5.x, recall the pain of using it.  There was no help on the screen.  Everything was done by function key.  F2 was search.  Shift-F6 was center, etc.  The interface was so unnatural that it shipped with a little overlay for your keyboard to help you remember.  Is it any wonder that GUIs like those in Microsoft Word, MacWrite, and Final Copy (on the Amiga) eventually became the dominant interface for word processing?  People could and did become very proficient with the WordPerfect interface, but it was a lot easier to make a mistake than it should have been.  The more intuitive GUI won out not because it was more powerful, but because it was easier.  Think of that when designing your next API or user interface.  Accept the blame when it is used incorrectly and make it so the misuse won't happen.  Keep it easy to use and you'll keep it used.