8
$\begingroup$

What are the disadvantages of not being able to handle huge numbers, also known as having an integer limit, like in JS? I know it's not super common to run into problems like that while programming, so are there really no disadvantages?

Credit to justANewbie for some inspiration.

$\endgroup$
1
  • $\begingroup$ In the language or in the compiler/interpreter? $\endgroup$
    – Pablo H
    Commented Jul 13, 2023 at 14:47

6 Answers

14
$\begingroup$

Applications of "big numbers"

The last paragraph of my answer to Did the 2019 discovery of O(N log(N)) multiplication have a practical outcome? gives some examples of cases in which large integers are useful. Those cases, and other cases in which "big numbers" are useful, are listed in separate headings below (in no particular order).

Cryptography

One of the most widely known cryptographic protocols is the RSA system, and all RSA numbers are at least 100 digits long. However, RSA-100 (100 digits long) was factored in 1991 and RSA-250 (250 digits long) was factored in 2020. The recommended key size for RSA is at least 2048 bits, which is about 617 decimal digits (4096-bit keys, about 1234 digits, are also widely used)! An enormous amount of the computer security you benefit from every day involves computer programs that deal with numbers that are hundreds of digits long.
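In a language with bignums, this modular arithmetic is a one-liner. Here is a toy, deliberately insecure RSA round trip in Python, with tiny textbook primes standing in for the hundreds-of-digit primes real keys use (`pow(e, -1, phi)` needs Python 3.8+):

```python
# Toy RSA round trip -- illustrative only, NOT secure (no padding,
# tiny primes). Real keys use primes of ~300+ decimal digits each,
# which is exactly where bignum arithmetic becomes essential.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse)

m = 42                     # message, must be < n
c = pow(m, e, n)           # encrypt: m^e mod n
assert pow(c, d, n) == m   # decrypt: c^d mod n recovers m
```

With a real 2048-bit modulus the same three-argument `pow` calls work unchanged; only the sizes of `p`, `q`, and `m` grow.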

Working with polynomials with large coefficients/powers

Polynomials are used in many areas, including number theory/cryptography, but also for approximating other functions (Chebyshev polynomials, Taylor expansions, etc.). The more terms a Taylor series has (for example), the more accurately it approximates the original function. However, a Taylor polynomial with many terms raises the variable to large powers.

Even if we only have a 10-term polynomial, running from $a_0x^0$ to $a_9x^9$, what happens when we try to plug $x=1000$ into $x^9$? We get $10^{27}$, which is quite a huge number, and will certainly need "large integer arithmetic" capabilities if you want to keep things "exact" as we do in symbolic computation. But even if we resort to "numerical computation", the difference between $1000^9$ and $1000.000001^9$ is still about $9\times10^{18}$, which is quite big!
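A quick Python illustration of both points: the exact bignum value of $1000^9$, and how large the gap is between the two nearby inputs:

```python
# Python ints are arbitrary precision, so x**9 is computed exactly.
x = 1000
exact = x ** 9
print(exact)                 # 10^27, exactly

# A perturbation of only 1e-6 in the input moves the output
# by roughly 9e18 -- an enormous absolute change.
approx = 1000.000001 ** 9
print(approx - 1e27)
```

The absolute gap is about $9\times10^{18}$ because $(1+\epsilon)^9 \approx 1+9\epsilon$ for small $\epsilon$, and here $\epsilon = 10^{-9}$ relative to $10^{27}$.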

Quadruple precision and arbitrary precision arithmetic

I personally needed quadruple precision when I published this paper on the ZnO molecule. Measurements on such molecules can be made with a precision of $10^{-10}\,\mathrm{cm}^{-1}$. I was fitting a potential energy curve to describe the internuclear interaction between the Zn and O atoms in ZnO, to reproduce the experimental data, and it did not work with the state-of-the-art software at the time, so we had to switch from double to quadruple precision. Stack Overflow helped me with that, after I asked Converting a working code from double-precision to quadruple-precision: How to read quadruple-precision numbers in FORTRAN from an input file, and later I asked a follow-up question on the same topic, How do I declare the precision of a number to be an adjustable parameter?.

In my answer to How much does it cost (RAM and CPU) to calculate the energy of H2? you can see another example in which arbitrary-precision arithmetic is needed. I'll provide the screenshot here so that you can see how many digits are needed before you see differences in the numbers (this is the convergence precision that the authors wanted):

[Screenshot from the linked answer: energies reported to dozens of digits, with differences appearing only in the final digits.]

There are many other applications that need arbitrary-precision arithmetic. Another example is calculating the roots of a Rys polynomial for calculating electron repulsion integrals, which is crucial for tens of thousands of scientists. See the mpmath package, and where it's used, for more examples of uses for multi-precision or arbitrary-precision arithmetic.
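In Python, mpmath provides this kind of arithmetic; the standard-library decimal module exposes the same idea, a precision you can dial up at will. A minimal sketch:

```python
from decimal import Decimal, getcontext

# Ask for 50 significant decimal digits -- far beyond the ~16
# digits of a 64-bit float.
getcontext().prec = 50
root2 = Decimal(2).sqrt()
print(root2)              # 1.4142135623730950488... (50 digits)

# Dial the precision up and recompute; the first 50 digits agree.
getcontext().prec = 100
print(Decimal(2).sqrt())
```

mpmath works the same way (via `mp.dps`), and additionally covers transcendental functions, complex arithmetic, and interval arithmetic.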

Fun math

Using computers for "fun math" like breaking records for the largest number of digits of $\pi$, or computing Mandelbrot fractals, would also require support for "big numbers".
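For instance, the classic Machin formula $\pi = 16\arctan(1/5) - 4\arctan(1/239)$ can be driven entirely by integer (bignum) arithmetic. A minimal sketch in Python (the function names here are my own, not from any library):

```python
def arctan_inv(x, prec):
    """arctan(1/x) scaled by 10**prec, via the Taylor series
    arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ... using only ints."""
    term = 10 ** prec // x
    total = term
    n, sign, x2 = 3, -1, x * x
    while term:
        term //= x2
        total += sign * (term // n)
        sign, n = -sign, n + 2
    return total

def pi_digits(digits):
    """'3' followed by the first `digits` decimals of pi, as one int."""
    prec = digits + 10  # guard digits absorb the truncation error
    scaled = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    return scaled // 10 ** 10

print(pi_digits(10))  # 31415926535
```

Record-setting computations use far better formulas (Chudnovsky-type series), but they rest on the same foundation: exact arithmetic on integers with millions or trillions of digits.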

$\endgroup$
2
  • 5
    $\begingroup$ The bignum implementation that you'd want in a standard library is not suitable for cryptography. In most environments, cryptography requires protection against side channels such as timing. You can't protect a generic bignum implementation against leaking the size of numbers, and that's bad for cryptography. Cryptography generally needs large fixnums as a primitive (e.g. work on numbers that are exactly 4096 bits to implement 4096-bit RSA). $\endgroup$ Commented May 18, 2023 at 22:24
  • $\begingroup$ As a cryptography implementer, I do often make use of Python's convenient access to bignums, but only to generate test data. I would never use that in production code even if my application was written in Python. $\endgroup$ Commented May 18, 2023 at 22:25
9
$\begingroup$

The question asks a double negative, so I'll answer the simpler question of what advantages there are to supporting big numbers.

To me, the main benefit is peace of mind ─ sure, my auto-incrementing ID will probably never overflow, but what if someone runs my program for a year without rebooting? I don't want to have to think about this kind of edge case, it wastes brainpower. Then there are the situations where my integers will definitely overflow and I don't want them to, but the code to handle it when it happens is more complicated than it would need to be if there was no overflow.

If my integers are fixed width and can overflow, then everywhere I use arithmetic but don't want overflow (which is a lot of places!), I need to justify why the arithmetic in that place will give the correct result, and possibly change the code to ensure it will. In contrast, when I write in Python I don't need to think about this. In many circumstances, as a programmer I'm willing to trade a bit of performance for the benefit of knowing my program will produce correct answers without having to account for arithmetic operators which don't behave like their mathematical ideals.

Another upside is that big numbers can easily be used to simulate fixed-width numbers ─ just write (x + y) & 0xFFFFFFFF instead of x + y when you want overflowing behaviour. It's much easier to discard extra bits than it is to recover those discarded bits, or even detect that they were discarded. So just one integer type (e.g. bigint) is enough for the language to support whatever integer arithmetic behaviour you want.
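That trick can be sketched in a couple of lines of Python (the helper name add32 is my own):

```python
MASK32 = 0xFFFFFFFF

def add32(x, y):
    """32-bit wrapping (unsigned) addition, simulated with bignums:
    compute exactly, then discard everything above bit 31."""
    return (x + y) & MASK32

print(add32(0xFFFFFFFF, 1))  # 0 -- wraps around like a uint32
print(add32(3, 4))           # 7 -- small values are unaffected
```

Going the other way, recovering the high bits after a fixed-width addition has already discarded them, is impossible without redoing the computation at a wider width.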

$\endgroup$
8
$\begingroup$

"Super common" is a relative thing; I don't often use large numbers in my code, but I know people who do. It mostly depends on what your language is meant to be used for; tiny esoteric languages probably don't need to worry about it, but ones meant for general use should, because someone will inevitably end up needing it.

$\endgroup$
8
$\begingroup$

Actual high-magnitude numbers aren't usually needed, but precision is much more often an issue, and the two often go together.

For example, using 64-bit floats to represent money amounts can quickly lead to imprecision.

An example of limited range being a problem is seen when using 32-bit integers to index an array. This limits the array size to 2³² elements (2³¹ with signed indices), which is fewer than, say, the number of humans on Earth.
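A quick illustration of the money problem; this is generic binary-float behaviour, not specific to any language, and Python's standard decimal module is one common fix:

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False -- a classic source of money bugs

# Exact decimal arithmetic avoids the problem:
print(Decimal("0.10") + Decimal("0.20"))  # Decimal('0.30')
```

This is why financial code typically uses decimal or scaled-integer types (cents as integers) rather than binary floats.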

$\endgroup$
30
  • $\begingroup$ +1 but do you have a reference for "For example, using 64-bit floats to represent money amounts can quickly lead to imprecision" ? There's a saying in the numerical computing community: single-precision is good enough 90% of the time, double-precision is enough 99.99% of the time (or something like that). Anything beyond 64-bits is beyond double-precision, which is rarely needed for anything (but I'm not saying it's never needed, because there's many examples that do need quadruple precision for example). $\endgroup$ Commented May 16, 2023 at 20:20
  • $\begingroup$ @NikeDattani Yes, I've heard directly from one of our major customers in the finance industry that they in turn have portfolios with amounts that approach the precision of 64-bit floats. $\endgroup$
    – Adám
    Commented May 16, 2023 at 20:24
  • $\begingroup$ That's hearsay. I've heard many people claiming that they need quadruple precision, when in fact they were just using a bad algorithm or a poor implementation of a good algorithm. If quadruple-precision or anything beyond double-precision is needed in any financial application, then there ought to be a good reference for it that we can find online right? $\endgroup$ Commented May 16, 2023 at 20:28
  • 1
    $\begingroup$ @NikeDattani Sayings in the numerical community are not relevant to finance. Finance has very precise rules about calculations: you can't just round stuff the way you want. Precision is never a concern because all financial calculations are exact. (They might implement rules like “round down now to the nearest ¤.001 at this step”, but you have to follow this rule exactly, not round at the next step or round down to the nearest ¤0.0009765625.) $\endgroup$ Commented May 18, 2023 at 22:31
  • 1
    $\begingroup$ @NikeDattani (1) No, it's a legal requirement. (2) Indeed this doesn't mean that you need a larger range for numbers (most financial calculations don't need more than 64 value bits). It means that if you need a larger range, floating point is not a solution. $\endgroup$ Commented May 20, 2023 at 21:47
6
$\begingroup$

Almost none

Most major programming languages, like C#, Java, and, as you mentioned, JavaScript, don't use arbitrarily large integers as their default integer type. However, all of those languages make that kind of arithmetic available in another way (in another namespace/package, a third-party library, or even custom data structures you write yourself).

It does make sense to include it by default in a language designed for problems expected to deal with large integers (computer algebra systems come to mind).
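By contrast, a language can make big integers the default, as Python does, so nothing special is needed. A trivial illustration:

```python
# Python's default int is arbitrary precision: there is no overflow
# and no separate "bigint" type to opt into.
n = 2 ** 64          # already past the 64-bit range
print(n * n)         # 2^128, computed exactly
print(10 ** 100 + 1) # a googol plus one, exact to the last digit
```

The trade-off is that every integer operation pays a small check for whether the value still fits in a machine word.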

$\endgroup$
2
  • 4
    $\begingroup$ Modern JavaScript does have the “bigint” type, actually. I think it was introduced in ES 2017 or 2018? $\endgroup$
    – Bbrk24
    Commented May 16, 2023 at 20:14
  • 8
    $\begingroup$ C#, Java, and Javascript all have big integer datatypes built-in. It's true that they don't gracefully promote regular numbers to them like some languages, but they're definitely present. $\endgroup$ Commented May 16, 2023 at 20:14
2
$\begingroup$

Computers are designed to be maximally efficient at dealing with objects of known size, or at least a known fixed maximum size. By way of analogy, imagine one is managing a workshop with a large open floor area where people will be building various projects of different sizes.

If anyone requesting space to build a project must specify in advance how much space they will need, then one can assign a location for each project when the space is requested, and anyone who needs to order supplies will be able to say, when placing the order, where they should be delivered, allowing the delivery agent to go directly to that location.

If it isn't possible to specify the space requirement for a project in advance, and the project grows to the point that it will no longer fit in the space that's "boxed in" by the projects around it, it will be necessary for it to move somewhere else. If this might happen while anybody has a copy of the project's location and is expecting to go there or deliver something, it will be necessary to either:

  1. Find all copies of the location that exist anywhere in the universe and change them so they instead report the new location.

  2. Leave a marker at the old location, refrain from reusing the space as long as any references exist to the old location, and require that anyone making a delivery look for a note to see if the object has moved, rather than blindly dropping off the shipment.

To simplify #1, one might specify that nobody other than the main office is allowed to persist any information related to the project's location, and require that the delivery agent only ask for the project's location when it's ready to make an immediate delivery, and say that if a project would need to expand between the time a delivery agent has inquired about its location and the time delivery is complete, the expansion would need to wait until the delivery is done.

It's possible for memory-management systems to use a combination of the two approaches to manage flexibly-sized objects, but it makes many kinds of operations much more complicated than they would be if all objects were assigned a certain amount of space when they were created and would never need to expand beyond that.
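This is visible in CPython, where an int object's storage grows with its magnitude (the exact byte counts are implementation details), which is precisely why variable-sized numbers can't be given fixed slots the way machine-width integers can:

```python
import sys

# A bignum's memory footprint grows with its magnitude, so the
# runtime cannot reserve one fixed-size slot per number.
for n in (1, 2 ** 30, 2 ** 300, 2 ** 3000):
    print(n.bit_length(), sys.getsizeof(n))
```

CPython sidesteps the relocation problem by making ints immutable: a "bigger" result is a freshly allocated object, never an in-place expansion.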

$\endgroup$
2
  • $\begingroup$ I don't see how this is relevant to numbers, which are normally immutable. If e.g. a bigint object is allocated on the heap, then the number that object represents can't get bigger or need to be reallocated, so references to it won't need to be updated. Instead, if a bigger number is needed then a new bigint object will be allocated for it, and pre-existing references to the old number will stay referring to the old object. $\endgroup$
    – kaya3
    Commented Aug 15, 2023 at 16:04
  • $\begingroup$ @kaya3: Suppose a project has five numbers associated with it. If there's space within the project to hold all of the numbers, someone who wants to read them can go there and see them all. If numbers are variable-sized, someone visiting the project to read the five numbers would need to go first to the project, and then go to five more places to read the numbers. $\endgroup$
    – supercat
    Commented Aug 15, 2023 at 16:13
