Continuing along the lines of the “say what you mean” style of programming, let's look at a naming convention for certain types of functions which more accurately reflects what they mean. Last time, we saw a function for adding two 128-bit numbers:

int128 sum_of_128bit_numbers(int128 x, int128 y)
{
    int128 z;
    z.low = x.low + y.low;    // wraps modulo 2^64
    z.high = x.high + y.high;
    if (z.low < x.low)        // wrap-around means a carry out of the low word
        ++z.high;
    return z;
}
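For reference, the examples here assume a simple two-word struct along these lines. The exact definition is hypothetical, but the halves must be unsigned so that the “z.low < x.low” carry check is well-defined:

```cpp
#include <cstdint>

// Hypothetical definition: a 128-bit integer built from two unsigned
// 64-bit halves. Unsigned arithmetic wraps modulo 2^64 in C++, which
// is what makes the "z.low < x.low" carry test meaningful.
struct int128
{
    uint64_t low;   // least significant 64 bits
    uint64_t high;  // most significant 64 bits
};
```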

and some code which calls it:

    z = sum_of_128bit_numbers(x,y);

One way we can improve this is to write the function as an overloaded operator:

int128 operator+(int128 x, int128 y)
{
    int128 z;
    z.low = x.low + y.low;    // wraps modulo 2^64
    z.high = x.high + y.high;
    if (z.low < x.low)        // wrap-around means a carry out of the low word
        ++z.high;
    return z;
}

Then the syntax to call it is just:

    z = x + y;

which I'm sure you'll agree is much simpler, especially when you have lots of overloaded operators and a complex expression. Which would you rather read/write/review/maintain, this:

    z = x*x + y*y;

or this:

    z = sum_of_128bit_numbers(product_of_128bit_numbers(x,x), product_of_128bit_numbers(y,y));

?
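As an aside, a truncating 128-bit multiply can be built out of 64-bit halves with schoolbook multiplication. The sketch below is one hypothetical way to do it, assuming a two-uint64_t struct and keeping only the low 128 bits of the product; the helper name mul_64x64 is my own invention:

```cpp
#include <cstdint>

struct int128
{
    uint64_t low;
    uint64_t high;
};

int128 operator+(int128 x, int128 y)
{
    int128 z;
    z.low = x.low + y.low;
    z.high = x.high + y.high;
    if (z.low < x.low)   // carry out of the low word
        ++z.high;
    return z;
}

// Full 128-bit product of two 64-bit values, via 32-bit partial products.
static int128 mul_64x64(uint64_t a, uint64_t b)
{
    uint64_t a_lo = a & 0xffffffff, a_hi = a >> 32;
    uint64_t b_lo = b & 0xffffffff, b_hi = b >> 32;
    uint64_t lo_lo = a_lo * b_lo;
    uint64_t hi_lo = a_hi * b_lo;
    uint64_t lo_hi = a_lo * b_hi;
    uint64_t hi_hi = a_hi * b_hi;
    // Middle column of the schoolbook sum; cannot overflow 64 bits.
    uint64_t cross = hi_lo + (lo_lo >> 32) + (lo_hi & 0xffffffff);
    int128 z;
    z.low = (cross << 32) | (lo_lo & 0xffffffff);
    z.high = hi_hi + (cross >> 32) + (lo_hi >> 32);
    return z;
}

int128 operator*(int128 x, int128 y)
{
    // Low 128 bits of the full product: x.low*y.low plus the two cross
    // terms shifted up 64 bits (x.high*y.high and any higher carries
    // fall off the top of the 128-bit result).
    int128 z = mul_64x64(x.low, y.low);
    z.high += x.low * y.high + x.high * y.low;  // wraps modulo 2^64
    return z;
}
```

With both operators defined, the expression “z = x*x + y*y;” compiles and reads exactly as written.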

Programmers who are used to a programming language which doesn't offer operator overloads tend to be rather horrified at this concept the first time they see it (I know I was). They say things like “but this means I can't tell what even the simplest pieces of code really do just by looking at them”. I'd argue that this ability is overrated. Modern programming is all about abstraction. We don't need to know every single detail about what's going on in a particular line of code. We don't need to know the mechanics of how addition works to understand a line of code like:

    z = x + y;

We also don't need to know which registers the compiler decides to assign to which variables (if we did we'd be writing in assembly language, not C++). Nor do we need to know how numbers are represented in binary by the computer's hardware, nor the details of the logic gates and transistors used in the CPU's addition circuitry, nor the voltage levels in the processor nor any one of a million little details which all have to work out in order that we can add two numbers. All we need to know is the highest level concept - we're adding these two numbers. And that fact is best expressed by the statement:

    z = x + y;

Now, I do agree that it is possible to abuse that power and write code which is almost impossible to follow, for example by writing an “operator+” function that does something entirely different from addition. But you can write bad code in any language. You can do similarly evil things even in C:

int128 sum_of_128bit_numbers(int128 x, int128 y)
{
    int128 z;
    // Haha! We lied and will actually XOR the two numbers!
    z.low = x.low ^ y.low;
    z.high = x.high ^ y.high;
    return z;
}

The problem here is that the function doesn't do what it says, not that operator overloading is intrinsically evil. The name “operator+” is just that - a name. It's a name which implies certain things about the semantics of the function in question, just like any good name should. It also allows callers of this code to employ certain syntactic sugar - they can write:

    z = x + y;

instead of:

    z = operator+(x,y);

(The latter is completely valid C++ code, by the way, though rarely more appropriate than the former).
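The explicit spelling really does compile. Here's a minimal sketch, with a hypothetical two-uint64_t int128 and invented wrapper names, showing that both spellings call the same user-defined function:

```cpp
#include <cstdint>

struct int128
{
    uint64_t low;
    uint64_t high;
};

int128 operator+(int128 x, int128 y)
{
    int128 z;
    z.low = x.low + y.low;
    z.high = x.high + y.high;
    if (z.low < x.low)   // carry out of the low word
        ++z.high;
    return z;
}

// The operator syntax and the explicit function-call syntax are
// two spellings of exactly the same call.
int128 add_sugar(int128 x, int128 y)    { return x + y; }
int128 add_explicit(int128 x, int128 y) { return operator+(x, y); }
```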

There is one other argument against operator overloading: that it makes it more difficult to see the performance bottlenecks in a program. Maybe you have an “operator+” which adds matrices together, and could be quite slow if the matrices are large. You wouldn't necessarily expect a simple “x+y” expression to be slow if you're used to programming in C, where such expressions typically cause the compiler to emit only one or two machine language instructions.

The counter-argument is that we should not expect the names of functions to reflect how long they take to run - otherwise we'd be writing functions with names like “add_two_128bit_numbers_takes_about_20_cycles_to_run”, which would be a maintenance nightmare (imagine if every time you changed the implementation of a function you had to change its name and every function that called it!). No, use the right tool for the job - visually inspecting a program isn't the right way to figure out where the performance bottlenecks are. Use a profiler instead.