
  • This sounds plausible. Is there any reason it needs to be the case, especially on a 64-bit system? I realize these are almost meaninglessly large numbers, but Mathematica handles other fringe cases.
    – Mr.Wizard
    Commented Jan 22, 2012 at 20:37
  • Given that C-derived languages use a 32-bit int even on 64-bit systems, I can imagine that they once decided to implement the floating-point format using int, and then didn't see the need to revise that decision because, after all, the numbers are large enough that you generally won't exceed them anyway. And every change has the potential to introduce new bugs. Of course that's just a guess; probably only Wolfram could give you a definitive answer.
    – celtschk
    Commented Jan 22, 2012 at 20:49
  • 4
    $\begingroup$ I believe you are correct, except that in C an int can be any size greater than 16 bits. WRI probably chose 32 bits because that's the size of an int on all the platforms they currently support. Incidentally, in Mathematica 5.2, which was the first 64-bit version, the multiprecision implementation is not GMP as it is in current versions, and as a result $MaxNumber is larger (2.0931173222699136*^646456781). $\endgroup$ Commented Jan 31, 2012 at 23:07