Jan 31, 2012 at 23:07 comment added Oleksandr R. I believe you are correct, except that in C an int can be any size of at least 16 bits. WRI probably chose 32 bits because that's the size of an int on all the platforms they currently support. Incidentally, in Mathematica 5.2, which was the first 64-bit version, the multiprecision implementation was not GMP as it is in current versions, and as a result $MaxNumber was larger (2.0931173222699136*^646456781).
Jan 23, 2012 at 10:06 vote accept Mr.Wizard
Jan 22, 2012 at 20:49 comment added celtschk Given that C-derived languages use a 32-bit int even on 64-bit systems, I can imagine that they once decided to implement the floating-point format using int and then saw no need to revise that decision: after all, the limits are so large that you generally won't exceed them anyway, and every change has the potential to introduce new bugs. Of course, that's just a guess; probably only Wolfram could give you a definitive answer.
Jan 22, 2012 at 20:37 comment added Mr.Wizard This sounds plausible. Is there any reason it needs to be the case, especially on a 64-bit system? I realize these are almost meaninglessly large numbers but Mathematica handles other fringe cases.
Jan 22, 2012 at 20:34 history answered celtschk CC BY-SA 3.0