Shawn Hargreaves Blog
These days most people represent angles using degrees or radians, but many old-skool game programmers preferred a binary integer format where a full circle is 65536. To convert:
short ToBinary(float degrees)
{
    return (short)(degrees * 65536 / 360);
}
If you store such a value in a 16-bit integer type, it will automatically overflow any time you increment it past a full circle, so you no longer need special-case wrapping code to handle the circular (modulo) nature of angle arithmetic.
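The conversion can be sketched in both directions. This is a hypothetical C++ rendering (the ToDegrees name is illustrative; the cast through a wider integer avoids out-of-range float-to-short conversion, which is otherwise undefined behavior in C++):

```cpp
#include <cstdint>

// Degrees to 16-bit binary angle (full circle = 65536).
// Truncate into a wide integer first, then wrap into 16 bits, so an
// out-of-range float-to-short conversion never occurs.
int16_t ToBinary(float degrees)
{
    return (int16_t)(uint16_t)(int32_t)(degrees * 65536.0f / 360.0f);
}

// Hypothetical inverse: binary angle back to degrees, in [-180, 180).
float ToDegrees(int16_t binary)
{
    return binary * 360.0f / 65536.0f;
}
```

Incrementing a binary angle past a full circle simply wraps: adding ToBinary(20) to ToBinary(350) lands at roughly ToBinary(10), with no explicit modulo anywhere.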
When manipulating degrees using floats, you might have to write:
angle += turnAmount;
if (angle >= 360)
angle -= 360;
else if (angle < 0)
angle += 360;
But with a binary angle represented as a short, you can get the same result with just:
angle += turnAmount;
This simplifies many common angle computations. For instance the TurnToFace method from the Aiming sample would no longer need to bother calling WrapAngle if it used this binary format.
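To illustrate (this is not the actual sample code, just a hypothetical sketch with illustrative names), a turn-to-face style update needs no wrapping at all with binary angles, because subtracting two of them automatically yields the shortest signed difference:

```cpp
#include <cstdint>

// Hypothetical sketch: steer 'current' toward 'desired', moving at
// most 'turnSpeed' per update. All values are 16-bit binary angles.
int16_t TurnToFace(int16_t current, int16_t desired, int16_t turnSpeed)
{
    // Two's-complement wraparound makes this the shortest signed
    // difference between the two angles -- no WrapAngle call needed.
    int16_t difference = (int16_t)(desired - current);

    // Clamp the step to the maximum turn speed.
    if (difference > turnSpeed)
        difference = turnSpeed;
    else if (difference < -turnSpeed)
        difference = -turnSpeed;

    return (int16_t)(current + difference);
}
```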
Back in the day most games used integer or fixed point math, so this was a major win. But today most people use floating point, and converting between integer and float formats is awkward and slow. So it is debatable whether this format is still useful.
Great post Shawn. I can think of a *very* popular engine that even today, still uses this method of angle representation for many of its rotations.
The other major win with this method of angle representation is the inherent compression, which really pays off when sending rotational data over the network.
The rotation a quaternion or 3x3 matrix represents can instead be represented this way and only use 6 bytes (yaw, pitch, roll) instead of 16 bytes for a quat (4 floats) or 36 bytes for a 3x3 matrix (9 floats).
Given a choice between a really smart one-liner code, and an explicit overflow check, I think it's best to code a few more lines.
It's more readable. Readable = easier to optimize, less room for bugs.
I actually think that in this case, the really smart one-liner may be more readable and less error-prone. The bounds checking due to wrapping around at 360 degrees is really a corner-case for most types of functions that accept an angle as an input, and this approach eliminates it.
Aw, you just gave me some Allegro nostalgia :)
It goes back to your optimization article. I tend to optimize for readability and maintainability rather than speed (especially since I'm writing a turn-based game where most of the time I'm waiting for user input).
Have been following your blog for a while. Thought I'd say hello and give my appreciation!
It was only the day before yesterday that I was reading through Doom's source code and came across the curious BAMS (Binary Angle Measurement System), so it's a great coincidence (or scary?) that you mention it :)
I had been using a similar method up until now for breaking a circle up into a custom number of "arcs" for some old school ray casting.
Thanks again! Great posts as always.
> Given a choice between a really smart one-liner code, and an explicit overflow check, I think it's best to code a few more lines.
Such decisions can be a subtle judgment call.
On the one hand, it's certainly not a good idea to rely on cryptic one-liners that depend on non-obvious side effects.
But on the other hand, it's also not a good idea to write more code than is strictly necessary to solve the problem at hand! The more code you have to write, the more risk you will make a mistake or forget to include something important.
If someone was doing computations with floating point numbers, and every time they stored a result, they were adding a floor() call, I would say that was silly: if they wanted the result to always be an integer, they could simplify their code and reduce the chance of error by just doing the computations directly with an integer data type.
So I guess the question here is whether the modulo nature of integer overflow is an obscure implementation detail that is liable to confuse the reader, or a fundamental characteristic of the data type which the reader can be expected to intuitively understand?
I suspect this mostly comes down to how familiar you (and of course anyone else expected to work on the code in question) are with binary number representations, bit twiddling, etc.
This may come down to a performance issue as well. If the angle remained within the normalised range most of the time, the cost of two failed if checks could add up if this code was called many times every frame. Using the implicit wrapping of an unsigned short would eliminate that cost.
Now if only there was a "short BinaryMath.Sin(short Angle)" that didn't involve converting it to a double then back again... But then, doesn't the FPU have a sin instruction that would be faster than trying to compute it on the CPU for an integer?
Looking back, I think dealing with angles and having to re-learn trigonometry was one of the areas that took most of my time when making relspace. The more examples/explanations the better! :)
@Erzengel: If you're that concerned about speed, precompute!
> If you're that concerned about speed, precompute!
Beware: in these days of fast CPUs with relatively slow memory, lookup tables are no longer necessarily a speed boost over just recomputing values whenever you need them.
Hmm, interesting point Shawn. I do development mostly for mobile devices, so that's not a tradeoff I've considered before!
My original statement is probably best replaced with something along these lines:
"If you're that concerned about speed, try writing a test app to compare the different approaches."
I agree with your analysis of Cardin's comments. I have done more than a little simulation software that dealt with calculations involving changing angles. Using the scaled binary angle approach (BAMs/BRADs) greatly simplified my code and eliminated boundary conditions that were always a plague.
Additionally, they forgave a multitude of sins.
Not only did I no longer have to worry about normalizing my angles with every computation, I no longer had to worry about HOW to normalize them. The two most common normalization ranges are [0,360) and [-180,+180). Either can be realized without any computation at all. The first is realized by interpreting the stored integer as an unsigned integer with no sign bit. The second is realized by interpreting the stored integer as a twos-complement signed integer with the traditional sign bit being the high order bit. Conversion to degrees/radians/grads is a simple matter of scaling the integer appropriately (as signed or unsigned depending on the normalization range). This was done in the user-interface subsystem.
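A sketch of that dual interpretation (hypothetical C++; the function names are illustrative): the same 16 stored bits yield either normalization range for free, depending only on whether they are read as unsigned or signed:

```cpp
#include <cstdint>

// Same bits, two normalization ranges, zero computation.
float ToDegreesUnsigned(uint16_t binary)   // interprets as [0, 360)
{
    return binary * 360.0f / 65536.0f;
}

float ToDegreesSigned(int16_t binary)      // interprets as [-180, +180)
{
    return binary * 360.0f / 65536.0f;
}
```

The stored value 49152, for instance, reads as 270 degrees unsigned, or equivalently -90 degrees signed: same bits, no normalization code.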
The biggest gain though was with trigonometric functions. Transcendental functions can eat your lunch in high-fidelity, real-time simulation systems. The trig functions of BAMS can be computed by dividing the integer into three zones: the high-order 2 or 3 bits tell you what quadrant or octant the angle is in (permitting the trig computation to make use of that function's symmetry); the middle set of bits can be used for a table lookup; while the low-order bits can be used for interpolation. I suppose these days a specialized ASIC or FPGA could be assembled that did the computation in silicon. The bottom line is that the trig functions were blazingly fast, the models built using them had no discontinuities in them, and the user interface was simple and effective.
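That three-zone split can be sketched like this (hypothetical C++; the 6-bit table index and 8-bit interpolation fraction are illustrative choices, not the original layout):

```cpp
#include <cstdint>
#include <cmath>

// Quarter-wave sine table: sin(0) .. sin(pi/2) in 64 steps, plus a
// guard entry so interpolation never reads past the end.
static float sineTable[66];

void InitSineTable()
{
    for (int i = 0; i < 66; i++)
        sineTable[i] = sinf(i * (3.14159265f / 2.0f) / 64.0f);
}

// Interpolated quarter-wave lookup; p is in [0, 16384].
static float QuarterSin(unsigned p)
{
    unsigned index    = p >> 8;    // middle bits: table index (0..64)
    unsigned fraction = p & 0xFF;  // low bits: interpolation (0..255)
    float a = sineTable[index];
    float b = sineTable[index + 1];
    return a + (b - a) * fraction / 256.0f;
}

float BinarySin(uint16_t angle)
{
    unsigned quadrant = angle >> 14;     // top 2 bits: which quadrant
    unsigned p        = angle & 0x3FFF;  // position within the quadrant

    if (quadrant & 1)                    // quadrants 1 and 3 run the
        p = 16384 - p;                   // quarter wave backwards

    float s = QuarterSin(p);
    return (quadrant & 2) ? -s : s;      // quadrants 2 and 3 negate
}
```

Because the table covers only a quarter wave and adjacent entries are interpolated, the result stays continuous across quadrant boundaries, which is what keeps simulation models free of discontinuities.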
The modulo arithmetic of binary integers is also rather easily grasped.
All in all I find this to be a useful technique for the roaming programmer (or indeed ANY programmer) to have in his bag of techniques.