Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team.
Before I talk about some of the things that can go terribly wrong with floating point arithmetic, it's helpful (and character building) to understand how exactly a floating point number is represented internally.
To distinguish between decimal and binary numbers, I'm going to write all binary numbers in fixed-width type.
Here's how floating point numbers work. A float is 64 bits. Of that, one bit represents the sign: 0 for positive, 1 for negative.
Eleven bits represent the exponent. To determine the exponent value, treat the exponent field as an eleven-bit unsigned integer, then subtract 1023. However, note that the exponent fields that are all zeros or all ones are special cases, which I'll get to below.
The remaining 52 bits represent the mantissa.
To compute the value of a float, here's what you do. You take the mantissa, and you stick a "1." onto the left of it, and then you multiply the result by two raised to the power of the exponent.
So for example, the number -5.5 is represented like this: (sign, exponent, mantissa)
(1, 10000000001, 0110000000000000000000000000000000000000000000000000)
The sign is 1 because the number is negative. In binary, 5.5 is 101.1, which is 1.011 x 2^2, so the exponent is 2; add the bias of 1023 to get 1025, which is 10000000001 in binary. The mantissa is the 011 that follows the implied leading "1.", padded out to 52 bits with zeros.
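To see those three fields concretely, here's a short sketch in Python (the post itself has no code, so this is just an illustration; the `fields` helper is mine, not part of the post) that pulls the sign, exponent, and mantissa out of a double:

```python
import struct

def fields(x):
    # Reinterpret the 64 bits of a double as an unsigned integer.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                  # the top bit
    exponent = (bits >> 52) & 0x7FF    # the next eleven bits
    mantissa = bits & ((1 << 52) - 1)  # the bottom 52 bits
    return sign, exponent, mantissa

sign, exponent, mantissa = fields(-5.5)
print(sign)                      # 1: negative
print(exponent - 1023)           # 2: the unbiased exponent
print(format(mantissa, "052b"))  # 011 followed by 49 zeros
```

Note that the exponent field comes back as 1025; subtracting the bias of 1023 recovers the true exponent of 2.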
This system is nice because it means that every number in the range of a float has a unique representation, and therefore doesn't waste bits on duplicates.
However, you might be wondering how zero is represented, since every bit pattern has that implied "1." stuck onto the mantissa; no combination of exponent and mantissa multiplies out to zero under the rule above.
If the exponent field is all zeros, then different rules apply: you stick "0." rather than "1." onto the left of the mantissa, and the exponent is fixed at -1022. Numbers encoded this way are called denormalized floats, or denormals. Zero is simply the denormal whose mantissa is all zeros, so zero is represented by the all-zero bit pattern. (The all-ones exponent field is the other special case; it encodes the infinities and NaNs, which are a subject for another day.)
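Those two decoding rules can be written out directly. Here's a sketch (again in Python for illustration; the `value` helper is my name, and the all-ones exponent case is deliberately ignored) that recomputes a double from its fields:

```python
import struct

def value(x):
    # Recompute a double from its sign, exponent, and mantissa fields.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = -1.0 if bits >> 63 else 1.0
    exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    if exponent == 0:
        # Denormal rule: "0." before the mantissa, exponent fixed at -1022.
        return sign * (mantissa / 2**52) * 2.0**-1022
    # Normalized rule: implied "1." before the mantissa, biased exponent.
    return sign * (1 + mantissa / 2**52) * 2.0 ** (exponent - 1023)

print(value(-5.5))    # -5.5 round-trips through the normalized rule
print(value(5e-324))  # the smallest denormal round-trips too
print(value(0.0))     # zero is just the all-zero-bits denormal
```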
So the biggest and smallest positive normalized floats are
(0, 11111111110, 1111111111111111111111111111111111111111111111111111)
(0, 00000000001, 0000000000000000000000000000000000000000000000000000)
The biggest and smallest positive denormalized floats are
(0, 00000000000, 1111111111111111111111111111111111111111111111111111)
(0, 00000000000, 0000000000000000000000000000000000000000000000000001)
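Working those four bit patterns out numerically, here's what they come to in Python (a sketch; `math.ldexp` scales by a power of two, and `sys.float_info` gives the normalized limits for comparison):

```python
import math
import sys

# Each value is (implied bit . mantissa) scaled by 2^exponent.
biggest_normalized = math.ldexp(2 - 2**-52, 1023)    # all-ones mantissa, exponent 1023
smallest_normalized = math.ldexp(1.0, -1022)         # all-zeros mantissa, exponent -1022
biggest_denormal = math.ldexp(1 - 2**-52, -1022)     # all-ones mantissa, denormal rule
smallest_denormal = math.ldexp(2**-52, -1022)        # only the lowest mantissa bit set

print(biggest_normalized == sys.float_info.max)   # True
print(smallest_normalized == sys.float_info.min)  # True
print(smallest_denormal)                          # 5e-324
```

The denormals fill in the gap between the smallest normalized float and zero, at the cost of precision: the biggest denormal sits just below 2^-1022.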
Next time: floating point math is nothing like real number math.