As Timeless As Infinity


User: Recently I found out about a peculiar behaviour concerning division by zero in floating point numbers in C#. It does not throw an exception, as with integer division, but rather returns an "infinity". Why is that?

Eric: As I've often said, "why" questions are difficult for me to answer. My first attempt at an answer to a "why" question is usually "because that's what the specification says to do"; this time is no different. The C# specification says to do that in section 4.1.6. But we're only doing that because that's what the IEEE standard for floating point arithmetic says to do. We wish to be compliant with the established industry standard. See IEEE standard 754-1985 for details. Most floating point arithmetic is done in hardware these days, and most hardware is compliant with this specification.
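
For instance, here is a minimal C# sketch of the difference (the exact text printed for an infinity varies by runtime and culture, but the behaviour is the same):

    using System;

    class DivisionDemo
    {
        static void Main()
        {
            int intZero = 0;
            double doubleZero = 0.0;

            // Integer division by zero throws at run time.
            try
            {
                Console.WriteLine(1 / intZero);
            }
            catch (DivideByZeroException)
            {
                Console.WriteLine("integer 1 / 0 threw DivideByZeroException");
            }

            // Floating point division by zero quietly produces an infinity.
            double d = 1.0 / doubleZero;
            Console.WriteLine(d);                            // Infinity
            Console.WriteLine(double.IsPositiveInfinity(d)); // True
        }
    }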

User: It seems to me that division by zero is a bug no matter how you look at it!

Eric: Well, since clearly that is not how the members of the IEEE standardization committee looked at it in 1985, your statement that it must be a bug "no matter how you look at it" must be incorrect. Some industry experts do not look at it that way.

User: Good point. What motivated this design decision?

Eric: I wasn't there; I was busy playing Jumpman on my Commodore 64 at the time. But my educated guess is that it is desirable for all possible operations on all floats to produce a well-defined float result. Mathematicians would call this a "closure" property; that is, the set of floating point numbers is "closed" over all operations.
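
To see that closure property in action, every one of these operations quietly produces some double -- an infinity or a NaN -- rather than failing. (A small fragment to paste into a Main method, assuming "using System;".)

    Console.WriteLine(double.MaxValue * 2.0); // Infinity  (overflow)
    Console.WriteLine(-1.0 / 0.0);            // -Infinity
    Console.WriteLine(0.0 / 0.0);             // NaN
    Console.WriteLine(Math.Sqrt(-1.0));       // NaN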

Positive infinity seems like a reasonable choice for dividing a positive number by zero. It seems plausible because of course the limit of 1 / x as x goes to zero (from above) is "positive infinity", so why shouldn't 1/0 be the number "positive infinity"?

Now, speaking as a mathematician, I find that argument specious. A thing and its limit need not have any particular property in common; it is fallacious to reason that just because, say, a sequence has a particular limit that a fact about the limit is also a fact about the sequence. Mathematically, "positive infinity" (in the sense of a limit of a real-valued function; let's leave transfinite ordinals, hyperbolic geometry, and all of that other stuff out of this discussion) is not a number at all and should not be treated as one; rather, it's a terse way of saying "the limit does not exist because the sequence diverges upwards".

When we divide by zero, essentially what we are saying is "solve the equation x * 0 = 1"; the solution to that equation is not "positive infinity", it is "I cannot because there is no solution to that equation". It's just the same as asking to solve the equation "x + 1 = x" -- saying "x is positive infinity" is not a solution; there is no solution.
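
And indeed the IEEE rules do not pretend otherwise: multiplying the "infinity" you got from 1/0 back by zero gives NaN, not one. A quick check in C#:

    Console.WriteLine(double.PositiveInfinity * 0.0); // NaN -- so infinity is not a "solution" to x * 0 = 1
    Console.WriteLine((1.0 / 0.0) * 0.0);             // NaN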

But speaking as a practical engineer who uses floating point numbers to do an imprecise approximation of ideal arithmetic, this seems like a perfectly reasonable choice.

User: But surely it is impossible for the hardware to represent "infinity".

Eric: It certainly is possible. You've got 32 bits in a single-precision float; that's over four billion possible floats. All bit patterns of the form

?11111111???????????????????????

are reserved for "not-a-number" values. That's over sixteen million possible NaN combinations. Two of those sixteen million NaN bit patterns are reserved to mean positive and negative infinity. Positive infinity is the bit pattern 01111111100000000000000000000000 and negative infinity is 11111111100000000000000000000000.
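
You can inspect those bit patterns from C# if you like; a small fragment (assuming "using System;" -- the hexadecimal values 7F800000 and FF800000 correspond to the two binary patterns above):

    float posInf = float.PositiveInfinity;
    float negInf = float.NegativeInfinity;
    int posBits = BitConverter.ToInt32(BitConverter.GetBytes(posInf), 0);
    int negBits = BitConverter.ToInt32(BitConverter.GetBytes(negInf), 0);
    Console.WriteLine(posBits.ToString("X8")); // 7F800000
    Console.WriteLine(negBits.ToString("X8")); // FF800000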

User: Do all languages and applications use this convention of division-by-zero-becomes-infinity?

Eric: No. For example, C# and JScript do but VBScript does not. VBScript gives an error if you do that.

User: Then how do language implementors get the desired behaviour for each language if these semantics are implemented by the hardware?

Eric: There are two basic techniques. First, many chips which implement this standard allow the programmer to make float division by zero an exception rather than an infinity. On the 80x87 chip, for example, you can use bit two of the floating point control register to determine whether division by zero returns an infinity or causes a hardware exception.

Second, if you don't want it to be a hardware exception but do want it to be a software exception, then you can check bit two of the status register after each division; it records whether there was a recent divide-by-zero event.

The latter strategy is used by VBScript; after we perform a division operation we check to see whether the status register recorded a divide-by-zero operation; if it did, then the VBScript runtime creates a divide-by-zero error and the usual VBScript error management process takes over, same as any other error.
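
For a managed language that does not want to poke at the FPU status word directly, the same "let it succeed, then check" idea can be approximated purely in terms of the result. This is only a sketch of the shape of the strategy -- it inspects the computed value rather than the hardware's divide-by-zero flag, which is what the VBScript runtime actually reads -- and it would live inside some class:

    static double DivideOrThrow(double numerator, double denominator)
    {
        double result = numerator / denominator;   // floating point division never traps here
        if (double.IsInfinity(result) || double.IsNaN(result))
            throw new DivideByZeroException("division produced a non-finite result");
        return result;
    }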

Similar bits exist for other operations that seem like they might be better treated as exceptions, like numeric overflow.

The existence of the "hardware exception" bits creates problems for the modern language implementor, because we are now often in a world where code written in multiple languages from multiple vendors is running in the same process. Control bits on hardware are the ultimate "global state", and we all know how irksome it is to have global, public state that random code can stomp on.

For example: I might be misremembering some details, but I seem to recall that Delphi-authored controls set the "overflows cause exceptions" bit. That is, the Delphi implementors did not use the VBScript strategy of "try it, allow it to succeed, and check to see whether the overflow bit was set in the status register". Rather, they used the "make the hardware throw an exception and then catch the exception" strategy. This is deeply unfortunate. When a VBScript script calls a Delphi-authored control, the control flips the bit to force exceptions but it never "unflips" it. If, later on in the script, the VBScript program does an overflow, then we get an unhandled hardware exception because the bit is still set, even though the Delphi control might be long gone! I fixed that by saving away the state of the control register before calling into a component and restoring it when control returns. That's not ideal, but there's not much else we can do.

User: Very enlightening! I will be sure to pass this information along to my coworkers. I would be delighted to see a blog post on this.

Eric: And here you go!

 

  • To give you another perspective on the double vs float argument.

    We develop an application where switching from float to double, and thus doubling the memory footprint, would have a large impact.

    First of all, our floats are used for 3D coordinates which are transformed into device coordinates. A float has more than enough precision for this. One problem you could face is a model that has two sub-components at very different scales (orders of magnitude apart): one on a microscopic scale, one on a cargo-ship scale. When fitting the whole model into the display you would of course not see the microscopic part, but when zooming in onto that part, adjusting the scale of the model-view matrix by incrementally multiplying, it could very well be that the model-view matrix components accumulate a large error relative to the microscopic part. This could result in strange display behaviour.

    But fortunately for us, none of our models is like that :)

    The memory we allocate for the model needs to be contiguous at one point or another - agreed, this doesn't scale well, but it scales well enough for our application - so doubling its size decreases the odds of finding such a contiguous section.

    So, we use float for this particular application, and are happy with it.

    For other parts, which involve signal representation and processing, we do use doubles for calculations, but once the values find their way to a serialized format, we convert to float.

  • @Pavel -

    System.Decimal is a wrapper over VT_DECIMAL, at least in the 32-bit "ROTOR" version of the CLR.  I can easily imagine that in the 64-bit CLR it has been re-implemented.  AFAIK, System.Decimal and VT_DECIMAL are always 100% identical.

  • @Eric: I'm definitely wrong here, but I'm not sure where I picked up the idea that Automation DECIMAL is somehow different from System.Decimal. Now that I look at the description of both in MSDN, it's clear that they are the exact same thing. I hadn't actually looked at Automation DECIMAL before, though, so apparently I picked up that bit of misinformation from some of the older (and worse) C# books that got me started.

    I'm not terribly surprised that I (and, apparently, someone else) got it wrong, as I recall Automation DECIMAL being a fairly obscure thing - most people knew that it was there, but I don't recall ever seeing a detailed description of what it's for and how it actually works outside MSDN reference articles. Probably because everyone just used VT_CURRENCY for money, and especially because VB6 had a Currency type but didn't have a Decimal type - even though it could handle VT_DECIMAL variants.

  • @bypasser, @Kristof

    I'm not arguing that decimal should always be used in favor of float/double. It also exhibits the same rounding problems that are inherent to any limited-precision floating type, it's larger, and it's significantly slower. I was only saying that its behavior is "more common sense" to the vast majority of people out there, and therefore it's a better candidate for a "default" real type, if there can even be such a thing. I'd rather use a slow application that works as specified (because its author understood how it _actually_ works) than a fast application that has subtle rounding-related bugs because its author has never read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (or a suitable replacement, like Jon's C#-specific article on the same thing).

    It's somewhat ironic that you have to repeatedly explain why (float)1.1 + (float)1.2 != (float)2.3 - see SO for plenty of examples - but the same people readily understand why 1.0m/3.0m = 0.33333...m, and why multiplying it back by 3m won't give you 1.0m. Probably because all of us tortured a calculator with such things at some point, and because, somehow, binary integers are much easier to comprehend than binary fractions...

    I definitely wouldn't use Decimal for vertex coordinates in a graphical application, or for measurements in an engineering application - perf is much more likely to be an issue there, and double is good enough (in fact, quite often, even float is plenty).

    All in all, it would probably be best if the choice always had to be explicit, with no default at all. If you specifically want float or double, say so ("d" and "f"). If you specifically want decimal, say so ("m"). If you do not know which one you want, then you should probably stop and think for a moment, because the choice may have some far-reaching implications.
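
    In C# that explicit choice is spelled with the literal suffixes (an unsuffixed real literal such as 1.1 is a double):

       double  d = 1.1d;  // or simply 1.1
       float   f = 1.1f;
       decimal m = 1.1m;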

  • As a curiosity of note, here's a quite real bug in .NET Framework that I've run into just now, that got there because someone, somewhere, forgot about INF and NAN values. This one is interesting because it is fairly unusual - it has nothing to do with IEEE floating-point arithmetic, or, indeed, with numbers at all...

    Compile the following code as a DLL:

       public class Foo
       {
           [System.ComponentModel.DefaultValue(double.NaN)]
           public double Bar;
       }

    Next, try running sgen.exe (XmlSerializer precompiler) on it, with /k option to keep the generated code (or, alternatively, just try to create an XmlSerializer instance for typeof(Foo)):

       sgen.exe /k foo.dll

    You'll get the following cryptic error message:

       Microsoft (R) Xml Serialization support utility
       [Microsoft (R) .NET Framework, Version 2.0.50727.3038]
       Copyright (C) Microsoft Corporation. All rights reserved.
       Error: Unable to generate a temporary class (result=1).
       error CS0103: The name 'NaN' does not exist in the current context

    If you look at generated code, sure enough, you see this:

       if (((global::System.Double)o.@Bar) != NaN)

    Which, of course, references an in-scope variable, field or property "NaN", which is undefined. It should clearly be double.NaN here, but apparently someone just did Double.ToString(), forgetting about the corner cases. A similar problem exists if you put +INF or -INF there.

    I was actually quite surprised to see that there, because I always thought that XmlSerializer uses CodeDOM to generate its output, and CSharpCodeGenerator handles NaNs and other special values properly (try it!). Apparently, I was wrong.
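
    A plausible way to reproduce the root cause -- assuming the generator really does just format the attribute's value with ToString() and paste the text into the generated source -- is this little fragment:

       object defaultValue = double.NaN;
       // Formatting the value yields the bare text "NaN", which is not a valid
       // C# expression on its own; a generator has to emit "double.NaN"
       // (or global::System.Double.NaN) instead. Same story for the infinities.
       Console.WriteLine(defaultValue);             // NaN
       Console.WriteLine(double.PositiveInfinity);  // Infinity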

  • In my opinion the only thing that's really wrong with the IEEE floating-point specification is the names. +INF, -INF, +0, -0, have little to do with infinities and zeros in practice. The very concept of positive or negative zero makes no sense mathematically. But the values themselves make perfect sense within the context of floating point arithmetic. The +0 and -0 values do not mean zero at all, but rather they mean "a value too small to be represented." Similarly, +INF and -INF do not mean infinity. They simply mean "a value too large to be represented." So division by zero is never an issue with floating point, because floating point doesn't have a zero. It just has two small values called (unfortunately) +0 and -0.

    Understood in this way, the way mathematical operations are defined to work on these values makes perfect sense.

    Anyway, that's just the impression I'm under. I'm not an expert on this subject.

  • Where do you find the VT_DECIMAL spec? All I could find was this: http://msdn.microsoft.com/en-us/library/ms221061.aspx which really doesn't say anything about rounding or how exceptions are handled.

    And I would like to add that I prefer my divisions by zero to not raise exceptions.

  • There's a bunch of API functions for DECIMAL arithmetic, but their documentation seems to be very laconic:

    http://msdn.microsoft.com/en-us/library/ms221612.aspx

  • Jeffery L. Whitledge -- Nice post, I didn't realize that there is no plain zero in floating-point, only positive or negative.  It does make sense when you think of it that way.

    Pavel -- Talking about defaults, maybe we should also have to explicitly specify the sign of zero.  0.0 would be illegal, it would have to be +0.0D or -0.0D

  • Isn't it sad to see how often Decimals are labelled "exact" while Doubles are labelled "approximate"? Of course, as some of you already pointed out, both are approximate and Doubles are more precise (per unit of storage) and more efficient than Decimals. The only "advantage" of Decimals is that they are highly biased (and as a result compromised) towards financial calculations. I think it is better to educate people than to fudge numbers towards the expectations of the not sufficiently educated.

    With regards to "zero" and "infinity" I agree there is probably a naming issue here that has a negative contribution to the discussion: perhaps we should be talking about plus and minus underflow and overflow, rather than zero and infinity.

  • Alex - Decimals are EXACT in that what you see is what you get.  As has already been discussed, if you set a Decimal to 1.1 it is EXACTLY 1.1.  With a Double this is NOT the case, 1.1 is only approximately 1.1 and this can lead to some unintuitive comparisons such as 1.1 + 1.2 not being equal to 2.3 (see the quick check at the end of this comment).

    I agree with you that Decimals are highly biased to financial calculations, but that is the whole point.  A large percentage of code is geared to finance.  I also like the terms you suggest: plus and minus underflow and overflow.
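
    To make the exactness point above concrete, here is the quick C# check mentioned earlier:

       Console.WriteLine(1.1 + 1.2 == 2.3);     // False: double cannot represent 1.1, 1.2 or 2.3 exactly
       Console.WriteLine(1.1m + 1.2m == 2.3m);  // True:  decimal represents all three exactly, so the sum is exact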

  • > I agree with you that Decimals are highly biased to financial calculations, but that is the whole point.

    I would actually disagree. I think that Decimal isn't specifically biased towards financial calculations. Rather, it is biased towards any calculation wherein input is supplied by the user in the form of a decimal number, output is also expected to be provided to the user in the form of a decimal number (hence the name "decimal", rather than VB's "currency" - the latter, being fixed point, was quite specifically biased towards financial work), and precision matters. It just so happens that financial calculations are a very typical scenario where this is the case, but by no means the only one.

  • I hope I don't have to explain why 1.1 + 1.2 != 2.3 :) Still though, Pavel is right: Decimal calculations just make more sense, and not just for financial calculations, but for all calculations. The fact that I know why float and double calculations produce unintuitive results does not make them any less unintuitive. And I am still likely to miss the nuances of those results in my applications.

    The fact is that the IEEE floating point standard was created specifically for situations where a small amount of correctness is worth sacrificing for a significant gain in performance. However, most applications written today do not fall into that category. Most applications are not CPU-bound on mathematical operations, and most cannot tolerate discrepancies such as 1.1 + 1.2 != 2.3. If your application is so bound and can tolerate such discrepancies, by all means use double. Otherwise, do yourself (and those who support your application in production) a favor and use Decimal instead.

  • @DRBlaise - No, both Decimal and Double are inexact, but both can represent certain numbers exactly.  It just happens that the numbers represented exactly by Decimal have a base-10 representation that terminates after fewer than 20 or so (don't recall the exact number) digits, while those represented exactly by Double have a base-2 representation that terminates after fewer than 56 or so (binary) digits.

    This makes Decimal a natural choice for financial work, since currency values from the real world are always chosen to have values that are represented exactly in base-10.

    @Pavel - aren't Decimal and Currency different types?  They're different things in OLE automation - I'd imagine that the VB currency type maps to the OLE automation currency type, not to decimal.

  • > @Pavel - aren't Decimal and Currency different types?

    They are. That's precisely my point: VB6 Currency was really mostly useful only for money - IIRC, it was a fixed-point decimal type with 4 decimal digits after the point - whereas Decimal is much more generic than that (though it also covers all the scenarios Currency did), and its name reflects that.
