As Timeless As Infinity
User: Recently I found out about a peculiar behaviour concerning division by zero in floating point numbers in C#. It does not throw an exception, as with integer division, but rather returns an "infinity". Why is that?
Eric: As I've often said, "why" questions are difficult for me to answer. My first attempt at an answer to a "why" question is usually "because that's what the specification says to do"; this time is no different. The C# specification says to do that in section 4.1.6. But we're only doing that because that's what the IEEE standard for floating point arithmetic says to do. We wish to be compliant with the established industry standard. See IEEE standard 754-1985 for details. Most floating point arithmetic is done in hardware these days, and most hardware is compliant with this specification.
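For concreteness, here is a small C# sketch of the two behaviors being contrasted; the variable names are mine, and the comments describe the results the spec calls for.

using System;

class DivisionByZeroDemo
{
    static void Main()
    {
        double one = 1.0;
        double zero = 0.0;

        Console.WriteLine(one / zero);   // double.PositiveInfinity -- no exception
        Console.WriteLine(-one / zero);  // double.NegativeInfinity
        Console.WriteLine(zero / zero);  // double.NaN

        int intZero = 0;                 // a variable; 1 / 0 with constants is a compile-time error
        Console.WriteLine(1 / intZero);  // throws DivideByZeroException at run time
    }
}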
User: It seems to me that division by zero is a bug no matter how you look at it!
Eric: Well, since clearly that is not how the members of the IEEE standardization committee looked at it in 1985, your statement that it must be a bug "no matter how you look at it" must be incorrect. Some industry experts do not look at it that way.
User: Good point. What motivated this design decision?
Eric: I wasn't there; I was busy playing Jumpman on my Commodore 64 at the time. But my educated guess is that it is desirable for all possible operations on all floats to produce a well-defined float result. Mathematicians would call this a "closure" property; that is, the set of floating point numbers is "closed" over all operations.
Positive infinity seems like a reasonable choice for dividing a positive number by zero. It seems plausible because of course the limit of 1 / x as x goes to zero (from above) is "positive infinity", so why shouldn't 1/0 be the number "positive infinity"?
Now, speaking as a mathematician, I find that argument specious. A thing and its limit need not have any particular property in common; it is fallacious to reason that, just because a sequence has a particular limit, a fact about the limit is also a fact about the sequence. Mathematically, "positive infinity" (in the sense of a limit of a real-valued function; let's leave transfinite ordinals, hyperbolic geometry, and all of that other stuff out of this discussion) is not a number at all and should not be treated as one; rather, it's a terse way of saying "the limit does not exist because the sequence diverges upwards".
When we divide by zero, essentially what we are saying is "solve the equation x * 0 = 1"; the solution to that equation is not "positive infinity", it is "I cannot because there is no solution to that equation". It's just the same as asking to solve the equation "x + 1 = x" -- saying "x is positive infinity" is not a solution; there is no solution.
But speaking as a practical engineer who uses floating point numbers to do an imprecise approximation of ideal arithmetic, this seems like a perfectly reasonable choice.
User: But surely it is impossible for the hardware to represent "infinity".
Eric: It certainly is possible. You've got 32 bits in a single-precision float; that's over four billion possible floats. All bit patterns of the form
?11111111???????????????????????
are reserved for "not-a-number" values. That's over sixteen million possible NaN combinations. Two of those sixteen million NaN bit patterns are reserved to mean positive and negative infinity. Positive infinity is the bit pattern 01111111100000000000000000000000 and negative infinity is 11111111100000000000000000000000.
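One way to look at those bit patterns from C#; this is only a sketch, and the Bits helper is something I made up for illustration.

using System;

class BitPatterns
{
    // Reinterpret a float's four bytes as an int and format it as 32 binary digits.
    static string Bits(float value)
    {
        int raw = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
        return Convert.ToString(raw, 2).PadLeft(32, '0');
    }

    static void Main()
    {
        Console.WriteLine(Bits(float.PositiveInfinity)); // 01111111100000000000000000000000
        Console.WriteLine(Bits(float.NegativeInfinity)); // 11111111100000000000000000000000
        Console.WriteLine(Bits(float.NaN));              // exponent bits all ones, non-zero fraction
    }
}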
User: Do all languages and applications use this convention of division-by-zero-becomes-infinity?
Eric: No. For example, C# and JScript do but VBScript does not. VBScript gives an error if you do that.
User: Then how do language implementors get the desired behaviour for each language if these semantics are implemented by the hardware?
Eric: There are two basic techniques. First, many chips which implement this standard allow the programmer to make float division by zero an exception rather than an infinity. On the 80x87 chip, for example, you can use bit two of the floating-point control register -- the zero-divide exception mask -- to determine whether division by zero returns an infinity or throws a hardware exception.
Second, if you don't want it to be a hardware exception but do want it to be a software exception, then you can check bit two of the status register after each division; it records whether there was a recent divide-by-zero event.
The latter strategy is used by VBScript; after we perform a division operation we check to see whether the status register recorded a divide-by-zero operation; if it did, then the VBScript runtime creates a divide-by-zero error and the usual VBScript error management process takes over, same as any other error.
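C# does not expose the hardware status word, but the shape of that "try it, then check" strategy can be sketched by inspecting the result of the division instead; this is only an illustration of the idea, not what the VBScript runtime actually does, and the helper name is mine.

using System;

static class FloatChecks
{
    // Do the division, then check whether it produced a non-finite result,
    // and surface that as a software exception -- the check-afterwards strategy.
    // Note that IsInfinity also catches overflow, not just division by zero.
    public static double CheckedDivide(double dividend, double divisor)
    {
        double result = dividend / divisor;
        if (double.IsInfinity(result) || double.IsNaN(result))
            throw new DivideByZeroException("Division produced a non-finite result.");
        return result;
    }
}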
Similar bits exist for other operations that seem like they might be better treated as exceptions, like numeric overflow.
The existence of the "hardware exception" bits creates problems for the modern language implementor, because we are now often in a world where code written in multiple languages from multiple vendors is running in the same process. Control bits on hardware are the ultimate "global state", and we all know how irksome it is to have global, public state that random code can stomp on.
For example: I might be misremembering some details, but I seem to recall that Delphi-authored controls set the "overflows cause exceptions" bit. That is, the Delphi implementors did not use the VBScript strategy of "try it, allow it to succeed, and check to see whether the overflow bit was set in the status register". Rather, they used the "make the hardware throw an exception and then catch the exception" strategy. This is deeply unfortunate. When a VBScript script calls a Delphi-authored control, the control flips the bit to force exceptions but it never "unflips" it. If, later on in the script, the VBScript program does an overflow, then we get an unhandled hardware exception because the bit is still set, even though the Delphi control might be long gone! I fixed that by saving away the state of the control register before calling into a component and restoring it when control returns. That's not ideal, but there's not much else we can do.
User: Very enlightening! I will be sure to pass this information along to my coworkers. I would be delighted to see a blog post on this.
Eric: And here you go!
Comments
Anonymous
October 15, 2009
That's one more reason why I strongly prefer Decimal over Double "by default" (i.e. when there are no other clear reasons to prefer one over another), and recommend the same to those new to C# - because Decimal has no INF or NaN values, and all arithmetic (including division by zero) is always checked. (The other reason is that there's no such Decimal value x for which (x+1)==x, while there are plenty of such Double values. Regardless of the rationale for such values, people often forget about this little peculiarity of float/double, and it can be extremely confusing for them when they actually run into it.)
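A small sketch of both points in C#; the particular literals are mine.

using System;

class DecimalChecks
{
    static void Main()
    {
        double bigDouble = 1e20;
        Console.WriteLine(bigDouble + 1 == bigDouble);    // True: the 1 is lost to rounding

        decimal bigDecimal = 10000000000000000000m;       // 1e19, exact as a decimal
        Console.WriteLine(bigDecimal + 1 == bigDecimal);  // False: the 1 is kept

        decimal one = 1m, zero = 0m;
        try
        {
            Console.WriteLine(one / zero);
        }
        catch (DivideByZeroException)
        {
            Console.WriteLine("decimal division by zero throws"); // this branch runs
        }
    }
}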
Anonymous
October 15, 2009
Say hello to Mr. User from me! As always, he's got some interesting questions...
Anonymous
October 15, 2009
"Positive infinity seems like a reasonable choice for dividing a positive number by zero. It seems plausible because of course the limit of 1 / x as x goes to zero is "positive infinity", so why shouldn't 1/0 be the number "positive infinity"?"
While your further statements about limits are all reasonable, this paragraph is simply untrue: the limit of 1/x as x decreases to 0 (a so-called right-hand limit) diverges toward 'positive infinity'; however the limit of 1/x as x increases to 0 (left-hand limit) is negative infinity. [Good point. I'll clarify the text. -- Eric] Based on this I'd reject 1/0 := positive infinity simply for the inconsistency, all arguments about what it means to have a limit, divergent or not, aside. My personal opinion is that it would have been better to have a specific NaN reserved to mean "undefined"; this NaN would logically be none of positive, negative, zero or infinite. [But then again, I understand that the infinity value almost always crops up in scientific calculations exactly at the point where something really is diverging to positive infinity, so I see why this was a reasonable, if not entirely formally justifiable, choice. -- Eric]
Anonymous
October 15, 2009
Actually INFs are not NaNs if you go by the _isnan function in C++ (MS VC8, that is).
Anonymous
October 15, 2009
@Deskin Miller: a good way to illustrate this is the equation x*y = 1, which is the formula for the standard hyperbola. http://demo.activemath.org/ActiveMath2/search/show.cmd?id=mbase://LeAM_calculus/functions/ex1_rat_function
Anonymous
October 15, 2009
It doesn't even have to be different languages to create problems. If you use Direct3D you need to specify a specific flag to the initialize function in order not to mess with the FPU state (although it's not the exception flags but another FPU flag).
Anonymous
October 15, 2009
@Pavel Minaev, I agree that using Decimal would be a better choice or, in any case, on more familiar ground, for most people who started at the time of C/C++, Pascal (not yet Delphi), and Assembler, because the "zero-divide error", being a hardware, or at least a very low-level, "gut response" of a computer, is almost as traditional as the notion of the Earth revolving around the Sun. But for those same people, the types (or, rather, the words) "float" and "double" are the first to come to mind when thinking of floating-point arithmetic (isn't the Decimal type new to .NET/C#? I'm not too sure). So my question is: why not use Decimal for all that "new age", 16-million-NaN-values stuff, and let the "good old" float and double behave "as before"? But then again: the answer is, as Eric points out, "because the specifications, and the standards, say so," I guess...
Anonymous
October 15, 2009
Speaking from a mathematical perspective, it actually is possible and sometimes useful to extend the real field R to include the limits of sequences like 1, 2, 3... and -1, -2, -3... This can be done in two different ways: by introducing two "infinities" (+∞ and -∞), giving R the topology of a closed interval, or a single infinity ∞ to which both sequences converge, giving R the topology of a circle (this construction is analogous to the Riemann sphere). In both these constructions, operations like 1/∞ can be defined to have a definite result, although this breaks the algebraic structure of R. http://en.wikipedia.org/wiki/Extended_real_number_line [Yes, I know. That's why I deliberately called out that I was not considering mathematical systems in which infinities are numbers, like Cantor's system of transfinite ordinals, or geometries that have "a point at infinity", and so on. -- Eric]
Anonymous
October 15, 2009
@Pavel, Ok, I'm convinced now; besides, I personally use Decimal quite often (compared to others whose code I have read so far). One little question that remains is: does the System.Decimal implementation utilize the 80x87 floating-point co-processor (or whatever part of the modern multi-core CPUs plays that role)? If it does not, then here you have the reason why System.Decimal is so badly undervalued and, as you say, does not get the praise it deserves... apart from the legacy code that has been "translated" from C++ or Java and has "float" and "double" all over the place, and no Decimal at all. [Decimal arithmetic is actually done in integers behind the scenes. -- Eric]
Anonymous
October 15, 2009
Just realized you may have wanted decimal literals rather than double by default! While I understand your love for the decimal type, it is a lot slower than double. And by a lot, I mean as much as 100x slower for addition (and "only" 10x for division). So since most software works well with double, it makes sense to keep it the default (again consider WPF as an example and imagine how it would perform if it used decimal all over the place).
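The exact ratio will vary by hardware and runtime, but it is easy to get a rough feel for it yourself; this is only a crude timing sketch, and the loop count and literals are mine.

using System;
using System.Diagnostics;

class AdditionTiming
{
    static void Main()
    {
        const int N = 10000000;

        var sw = Stopwatch.StartNew();
        double d = 0;
        for (int i = 0; i < N; i++) d += 1.000001;
        sw.Stop();
        Console.WriteLine("double:  {0} ms ({1})", sw.ElapsedMilliseconds, d);

        sw = Stopwatch.StartNew();
        decimal m = 0;
        for (int i = 0; i < N; i++) m += 1.000001m;
        sw.Stop();
        Console.WriteLine("decimal: {0} ms ({1})", sw.ElapsedMilliseconds, m);
    }
}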
Anonymous
October 16, 2009
@Pavel - System.Decimal is a wrapper over VT_DECIMAL, at least in the 32-bit "ROTOR" version of the CLR. I can easily imagine that in the 64-bit CLR it's been re-implemented. AFAIK, System.Decimal and VT_DECIMAL are always 100% identical.
Anonymous
October 16, 2009
@Eric: I'm definitely wrong here, but I'm not sure where I picked up the idea that Automation DECIMAL is somehow different from System.Decimal. Now that I look at the description of both in MSDN, it's clear that they are the exact same thing. I haven't actually looked at Automation DECIMAL before, though, so apparently I picked up that bit of misinformation from some of the older (and worse) C# books that got me started. I'm not terribly surprised that I (and, apparently, someone else) got it wrong, as I recall Automation DECIMAL being a fairly obscure thing - most people knew that it's there, but I never recall seeing a detailed description of what it's for and how it actually works outside MSDN reference articles. Probably because everyone just used VT_CURRENCY for money, and especially because VB6 had a Currency type but didn't have a Decimal type - even though it could handle VT_DECIMAL variants.
Anonymous
October 16, 2009
In my opinion the only thing that's really wrong with the IEEE floating-point specification is the names. +INF, -INF, +0, -0 have little to do with infinities and zeros in practice. The very concept of positive or negative zero makes no sense mathematically. But the values themselves make perfect sense within the context of floating point arithmetic. The +0 and -0 values do not mean zero at all, but rather they mean "a value too small to be represented." Similarly, +INF and -INF do not mean infinity. They simply mean "a value too large to be represented." So division by zero is never an issue with floating point, because floating point doesn't have a zero. It just has two small values called (unfortunately) +0 and -0. Understood in this way, the way mathematical operations are defined to work on these values makes perfect sense. Anyway, that's just the impression I'm under. I'm not an expert on this subject.
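Whether or not one buys that reading of the names, the behavior itself is easy to observe from C#; a small sketch, with literals chosen by me to force underflow and overflow.

using System;

class SignedZeroDemo
{
    static void Main()
    {
        double tiny = 1e-300 * 1e-300;      // underflows to +0
        double negTiny = -1e-300 * 1e-300;  // underflows to -0

        Console.WriteLine(tiny == 0.0);     // True
        Console.WriteLine(tiny == negTiny); // True: +0 and -0 compare equal
        Console.WriteLine(1.0 / tiny);      // positive infinity
        Console.WriteLine(1.0 / negTiny);   // negative infinity: the sign of the zero survives

        double huge = 1e300 * 1e300;        // overflows
        Console.WriteLine(double.IsPositiveInfinity(huge)); // True
    }
}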
Anonymous
October 16, 2009
Where do you find the VT_DECIMAL spec? All I could find was this: http://msdn.microsoft.com/en-us/library/ms221061.aspx which really doesn't say anything about rounding or how exceptions are handled. And I would like to add that I prefer my divisions by zero to not raise exceptions.
Anonymous
October 16, 2009
There's a bunch of API functions for DECIMAL arithmetic, but their documentation seems to be very laconic: http://msdn.microsoft.com/en-us/library/ms221612.aspx
Anonymous
October 16, 2009
Jeffery L. Whitledge -- Nice post, I didn't realize that there is no plain zero in floating-point, only positive or negative. It does make sense when you think of it that way.
Pavel -- Talking about defaults, maybe we should also have to explicitly specify the sign of zero. 0.0 would be illegal; it would have to be +0.0D or -0.0D
Anonymous
October 18, 2009
Isn't it sad to see how often Decimals are labelled "exact" while Doubles are labelled "approximate"? Of course, as some of you already pointed out, both are approximate and Doubles are more precise (per unit of storage) and more efficient than Decimals. The only "advantage" of Decimals is that they are highly biased (and as a result compromised) towards financial calculations. I think it is better to educate people than to fudge numbers towards the expectations of the not sufficiently educated. With regards to "zero" and "infinity" I agree there is probably a naming issue here that has a negative contribution to the discussion: perhaps we should be talking about plus and minus underflow and overflow, rather than zero and infinity.
Anonymous
October 18, 2009
Alex - Decimals are EXACT in that what you see is what you get. As has already been discussed, if you set a Decimal to 1.1 it is EXACTLY 1.1. With a Double this is NOT the case: 1.1 is only approximately 1.1, and this can lead to some unintuitive comparisons, such as 1.1 + 1.2 not being equal to 2.3. I agree with you that Decimals are highly biased to financial calculations, but that is the whole point. A large percentage of code is geared to finance. I also like the terms you suggest: plus and minus underflow and overflow.
Anonymous
October 19, 2009
> I agree with you that Decimals are highly biased to financial calculations, but that is the whole point.
I would actually disagree. I think that Decimal isn't specifically biased towards financial calculations. Rather, it is biased towards any calculation wherein input is supplied by the user in the form of a decimal number, output is also expected to be provided to the user in the form of a decimal number (hence the name "decimal", rather than "currency" of VB - the latter, being fixed point, was quite specifically biased towards financial), and precision matters. It just so happens that financial calculations are a very typical scenario where this is the case, but they are by no means the only one.
Anonymous
October 19, 2009
I hope I don't have to explain why 1.1 + 1.2 != 2.3 :) Still though, Pavel is right: Decimal calculations just make more sense, and not just for financial calculations, but for all calculations. The fact that I know why float and double calculations produce unintuitive results does not make them any less unintuitive. And I am still likely to miss the nuances of those results in my applications. The fact is that the IEEE floating point standard was created specifically for situations where a small amount of correctness is worth sacrificing for a significant gain in performance. However, most applications written today do not fall into that category. Most applications are not CPU-bound on mathematical operations, and most cannot tolerate discrepancies such as 1.1 + 1.2 != 2.3. If your application is so bound and can tolerate such discrepancies, by all means use double. Otherwise, do yourself (and those who support your application in production) a favor and use Decimal instead.
Anonymous
October 19, 2009
@DRBlaise - No, both Decimal and Double are inexact, but both can represent certain numbers exactly. It just happens that the numbers represented exactly by Decimal have a base-10 representation that terminates after fewer than 20 or so (don't recall the exact number) digits, while those represented exactly by Double have a base-2 representation that terminates after fewer than 56 or so (binary) digits. This makes Decimal a natural choice for financial work, since currency values from the real world are always chosen to have values that are represented exactly in base 10.
@Pavel - aren't Decimal and Currency different types? They're different things in OLE automation - I'd imagine that the VB currency type maps to the OLE automation currency type, not to decimal.
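To make that concrete; a small sketch, with literals chosen by me.

using System;

class RepresentabilityDemo
{
    static void Main()
    {
        // 1.1, 1.2 and 2.3 terminate in base 10, so decimal arithmetic on them is exact.
        Console.WriteLine(1.1m + 1.2m == 2.3m);        // True

        // The same values do not terminate in base 2, so each double is rounded.
        Console.WriteLine(1.1 + 1.2 == 2.3);           // False
        Console.WriteLine((1.1 + 1.2).ToString("R"));  // 2.3000000000000003

        // 0.5 terminates in base 2, so double handles it exactly.
        Console.WriteLine(0.25 + 0.25 == 0.5);         // True
    }
}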
Anonymous
October 20, 2009
> @Pavel - aren't Decimal and Currency different types?
They are. That's precisely my point: VB6 Currency was really mostly useful only for money - IIRC, it was a fixed-point decimal float with 4 decimal digits after the point - whereas Decimal is much more generic than that (though it also covers all the scenarios Currency did), and its name reflects that.
Anonymous
October 20, 2009
> a fixed-point decimal float
Oops. That's what happens when you start using terms without remembering what they're actually supposed to mean. Scratch the "float" there, please, and pretend that you never heard that bit from me, ever ;)
Anonymous
October 20, 2009
It's even possible to handle infinite ranges in C#: http://alicebobandmallory.com/articles/2009/10/20/infinite-ranges-in-c
Anonymous
May 12, 2010
I'm surprised to see so much animosity towards NaNs and INFs. The nice thing about the IEEE floating-point standard is that if you don't like them then you are supposed to be able to adjust the runtime environment to cause things like x/0.0 and sqrt(-1.0) to fire exceptions -- in C/C++ you use _controlfp. But they are definitely useful. You can use NaNs to mark a variable as uninitialized. And x/0.0 giving infinity is critical in some calculations to make them come out right without excessive special casing. If you are calculating the resistance of parallel circuits then it's something like 1/(1/R1+1/R2) and if either R1 or R2 is zero then the correct answer is zero. With x/0.0 giving infinity, and x/infinity giving zero, this all works magically. That, in a nutshell, is why IEEE math was carefully designed that way.
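A sketch of that parallel-resistance point in C#; the function and variable names are mine.

using System;

class ParallelResistanceDemo
{
    // 1/(1/R1 + 1/R2): with IEEE rules, a zero resistance gives 1/0 = infinity,
    // infinity plus any finite value is infinity, and 1/infinity is zero --
    // the physically correct answer, with no special casing.
    static double Parallel(double r1, double r2)
    {
        return 1.0 / (1.0 / r1 + 1.0 / r2);
    }

    static void Main()
    {
        Console.WriteLine(Parallel(100.0, 50.0)); // 33.333...
        Console.WriteLine(Parallel(0.0, 50.0));   // 0: a short in parallel is a short
    }
}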