Long division

A thing that makes a reader go hmmm is why, in C#, an int divided by a long has a result of type long, even though it is clear that when an int is divided by a (nonzero) long, the result always fits into an int.

I agree that this is a bit of a head scratcher. After scratching my head for a while, two reasons to not have the proposed behaviour came to mind.

First, why is it even desirable to have the result fit into an int? You'd be saving merely four bytes of memory and probably cheap stack memory at that. You're already doing the math in longs anyway; it would probably be more expensive in time to truncate the result down to int. Let's not have a false economy here. Bytes are cheap.

A second and more important reason is illustrated by this case:

long x = whatever;
x = 2000000000 + 2000000000 / x;

Suppose x is one. Should this be two integers, each equal to two billion, added together with unchecked integer arithmetic, resulting in a negative number which is then converted to a long? Or should this be the long 4000000000?

Once you start doing a calculation in longs, odds are good that it is because you want the range of a long in the result and in all the calculations that get you to the result.
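
For the record, C# picks the second interpretation: the division yields a long, so the addition is done in longs too. Here is a minimal sketch (the class and variable names are mine) contrasting that with what the proposed int-returning division would do:

using System;

class LongDivisionDemo
{
    static void Main()
    {
        long x = 1;

        // Today's rule: 2000000000 / x is a long, so the addition is
        // also done in longs and the result is 4000000000.
        long actual = 2000000000 + 2000000000 / x;
        Console.WriteLine(actual);    // 4000000000

        // Under the proposed rule, the division would yield an int,
        // the addition would be unchecked int arithmetic, and the sum
        // would wrap to a negative number before widening to long.
        long proposed = unchecked(2000000000 + (int)(2000000000 / x));
        Console.WriteLine(proposed);  // -294967296
    }
}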

Comments

  • Anonymous
    January 28, 2009
There is one case where it fails if the result must be an int, but not if the result is a long:

    long x = -1;
    x = int.MinValue / -1;

  • Anonymous
    January 28, 2009
    Makes sense. Also, what about -2147483648 / ((long)-1)? If the result type of the expression was an int, the answer would be -2147483648. Since the result is long, you get 2147483648. Igor
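
    A minimal sketch of that edge case (using variables rather than constants, because the compiler rejects the constant expression int.MinValue / -1 outright as an overflow):

    using System;

    class MinValueDemo
    {
        static void Main()
        {
            int n = int.MinValue;   // -2147483648
            long d = -1;
            int m = -1;

            // Done in longs, the quotient fits: 2147483648.
            Console.WriteLine(n / d);

            // Done in ints, the true quotient does not fit; on .NET
            // this division throws an OverflowException at run time.
            try { Console.WriteLine(n / m); }
            catch (OverflowException) { Console.WriteLine("overflow"); }
        }
    }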

  • Anonymous
    January 28, 2009
Aren't all divisions natively cast to whatever type is largest in the division itself? int / decimal = decimal? byte / float = float? long / int = long? Fiddling with this code seemed to confirm that very idea.

    --- to test, made a simple console app exe --- fiddling around with the number and the types of vars a and b

    using System;

    namespace NamespaceOrama
    {
        class Program
        {
            static void Main(string[] args)
            {
                byte a = 5;
                int b = 23;
                var c = (a / b);
                Console.WriteLine(c.GetType().ToString());
                Console.ReadLine();
            }
        }
    }

    --- EOF

    So, after all that, I say, "So what, and where's the C+C Music Factory reference?" Then I wait and say... "Oooh, better yet... let's try Marky Mark, just to stay fresh."

  • Anonymous
    January 28, 2009
C/C++ have integral promotions. The C# specification also says almost the same thing: http://msdn.microsoft.com/en-us/library/aa691330(VS.71).aspx However, which one is the chicken and which one is the egg here?

  • Anonymous
    January 28, 2009
@Christopher: The divisions are not cast. These are the defined operators:

    int / int
    long / long
    byte / byte
    decimal / decimal

    When you try to divide long / int (or int / long), the int is cast to a long because that's the best match the overload resolution can find, and there's an implicit cast defined from int to long.
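
    To see that resolution in action, here is an illustrative snippet (names of my own choosing) that asks for the static type the compiler picked:

    using System;

    class ResolutionDemo
    {
        static void Main()
        {
            int i = 7;
            long l = 2;

            // Both mixed divisions resolve to the long / long operator,
            // so both results are statically typed as long.
            var a = i / l;
            var b = l / i;

            Console.WriteLine(a.GetType());   // System.Int64
            Console.WriteLine(b.GetType());   // System.Int64
        }
    }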

  • Anonymous
    January 29, 2009
hey Eric, you ask "First, why is it even desirable to have the result fit into an int?" One thought that immediately came to mind is that if I'm working with ints that came from an SQL DB, divide them for some reason, and want to write the result back to the DB, then what I'm writing back darn well better be an int or the write/update will fail. Sometimes it's harder to make changes to old DB schemas (especially if they were made when memory was expensive) than it is to change data types in a program.
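
    In that scenario the narrowing has to be explicit anyway; a minimal sketch (the names are invented) of the cast you'd make before writing back:

    using System;

    class DbWriteDemo
    {
        static void Main()
        {
            // Values that were ints in the database, widened by the math.
            int numerator = 10;
            long denominator = 3;

            long quotient = numerator / denominator;   // typed long by the rule

            // Narrow explicitly before writing back to the int column;
            // checked makes an out-of-range value fail fast instead of
            // silently wrapping.
            int valueToWrite = checked((int)quotient);
            Console.WriteLine(valueToWrite);           // 3
        }
    }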

  • Anonymous
    January 30, 2009
private const char c = 'C';
    private const string abc = "AB" + c;

  • Anonymous
    January 30, 2009
Hm, my recent post got a bit too short. That my little code snippet is illegal makes me go hmmm. Here is the snippet again:

    private const char c = 'C';
    private const string abc = "AB" + c;
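
    For what it's worth, the compiler accepts either of these reworkings (a sketch, assuming the declaration can be changed):

    using System;

    class ConstConcatDemo
    {
        // "AB" + c is not a constant expression: only string + string
        // concatenation is constant, while string + char compiles to a
        // run-time String.Concat call. So either concatenate string
        // constants, or give up const for static readonly:
        private const string abc1 = "AB" + "C";
        private static readonly string abc2 = "AB" + 'C';

        static void Main()
        {
            Console.WriteLine(abc1);   // ABC
            Console.WriteLine(abc2);   // ABC
        }
    }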

  • Anonymous
    February 01, 2009
    There is another important reason: to minimize mental load of understanding the language. The rule "Any operator, when fed a mix of ints and longs, always returns a long" is much simpler and easier to remember than the rule "Any operator, when fed a mix of ints and longs, always returns a long except for division, where an int divided by a long returns an int because the result must necessarily fit into an int." -- Michael Chermside

  • Anonymous
    February 03, 2009
This is well covered in any computer science Compilers 101 course.

  • Anonymous
    March 19, 2009
Now, I'm no C++ or Visual Studio/Express coder; I prefer PureBasic myself, a nice procedural language that easily matches C in most cases. But I have looked at quite a bit of C and C++ code, and I have looked up the Windows SDK type definitions. An int and a long are both actually a signed __int32.

    From http://msdn.microsoft.com/en-us/library/aa383751(VS.85).aspx :

    INT is a 32-bit signed integer. The range is -2147483648 through 2147483647 decimal. This type is declared in WinDef.h as follows: typedef int INT;

    LONG is a 32-bit signed integer. The range is -2147483648 through 2147483647 decimal. This type is declared in WinNT.h as follows: typedef long LONG;

    And in http://msdn.microsoft.com/en-us/library/s3f49ktz.aspx :

    long is 4 bytes; other names are long int and signed long int; the range is -2,147,483,648 to 2,147,483,647.
    __int32 is 4 bytes; other names are signed, signed int, and int; the range is -2,147,483,648 to 2,147,483,647.
    int is 4 bytes; another name is signed int; the range is -2,147,483,648 to 2,147,483,647.

    So any compiler that treats INT and LONG differently (they are both 32-bit signed integers) is bugged and unpredictable per the definitions. "Why is it even desirable to have the result fit into an int? You'd be saving merely four bytes of memory" sounds to me like you are talking about int as if it were a LONG LONG or __int64, and don't forget LONG_PTR (32 bits on x86, 64 bits on x64). Of course, I could be wrong, and in C# a long is actually a signed __int64 rather than a signed __int32 as the Windows SDK states... but they can't both be right, can they? (I trust the SDK more than the C# compiler in this case.)

    Yes, you are wrong. Indeed, in the 32-bit Windows SDK for C/C++, both INT and LONG are aliases for a 32-bit signed integer. That has nothing whatsoever to do with C#, a completely different language that targets the .NET runtime, not the Win32 SDK. The C# compiler has nothing whatsoever to do with the Win32 SDK. There's no contradiction there; they are just completely different systems. In C#, an int is 32 bits and a long is 64 bits. -- Eric

    Maybe it's time for a Back to Basics adventure-in-coding article to highlight this typedef mess that has stayed with C through C++ to C# (and bled into some other languages as well), and the two MSDN links I gave, which everyone should have bookmarked at the very least.

    It hasn't bled through to C# at all. These definitions for C/C++ programmers have nothing whatsoever to do with C#, a completely different language that targets a different platform. -- Eric
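
    A two-line check (my own snippet) confirms the sizes Eric cites; in C#, sizeof on the built-in numeric types is a compile-time constant and needs no unsafe context:

    using System;

    class SizeDemo
    {
        static void Main()
        {
            Console.WriteLine(sizeof(int));    // 4  (System.Int32)
            Console.WriteLine(sizeof(long));   // 8  (System.Int64)
        }
    }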