Applications of "big numbers"
The last paragraph of my answer to Did the 2019 discovery of O(N log(N)) multiplication have a practical outcome? gives some examples of cases in which large integers are useful. Those cases, along with other cases in which "big numbers" are useful, are listed under separate headings below (in no particular order).
Cryptography
One of the most widely known cryptographic protocols is the RSA system, and all RSA numbers are at least 100 digits long. However, RSA-100 (100 digits long) was factored in 1991 and RSA-250 (250 digits long) was factored in 2020. The recommended key size for RSA is now 4096 bits, which is about 1234 decimal digits! An enormous amount of the computer security from which you benefit every day involves computer programs that deal with numbers that are hundreds of digits long.
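To see why big-number support matters here, the following sketch does RSA-style arithmetic with Python's built-in arbitrary-precision integers. The primes are tiny toy values chosen for illustration only (a real modulus is hundreds of digits long); the key-generation shortcut via `pow(e, -1, phi)` requires Python 3.8+.

```python
# Toy RSA-style arithmetic with Python's built-in big integers.
# The primes below are illustrative only, NOT a secure key.
p, q = 1000003, 1000033        # small toy primes
n = p * q                      # public modulus
e = 65537                      # common public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent = modular inverse of e

m = 42                         # "message" encoded as an integer, m < n
c = pow(m, e, n)               # encrypt: c = m^e mod n
assert pow(c, d, n) == m       # decrypt: c^d mod n recovers m

# A 4096-bit modulus really does have over a thousand decimal digits:
print(len(str(2**4096 - 1)))   # 1234
```

Languages without native big integers (C, Fortran, etc.) need a library such as GMP for exactly this kind of modular arithmetic on huge operands.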
Working with polynomials with large coefficients/powers
Polynomials are used in many areas, including number theory and cryptography, but also for approximating other functions (Chebyshev polynomials, Taylor expansions, etc.). The more terms in a Taylor series (for example), the more accurately it approximates the original function. However, a Taylor polynomial with many terms also means the variable gets raised to large powers.
Even if we only have a 10-term polynomial, which goes from $a_0x^0$ to $a_9x^9$, what happens when we try to plug $x=1000$ into $x^9$? We get $10^{27}$, which is quite a huge number, and will certainly need "large integer arithmetic" capabilities if you want to keep things exact, as we do in symbolic computation. But even if we resort to numerical computation, the difference between $1000^9$ and $1000.000001^9$ is still about $9\times10^{18}$, which is quite big!
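The arithmetic above can be checked directly. The sketch below evaluates $x^9$ exactly with Python's big integers, and uses `fractions.Fraction` to represent $1000.000001$ exactly so the difference is not swallowed by floating-point rounding:

```python
from fractions import Fraction

# Exact evaluation of x^9 at x = 1000: a 28-digit integer.
exact = 1000**9
print(exact)                    # 1000000000000000000000000000 (i.e. 10^27)

# The same power at x = 1000.000001, kept exact as a rational number.
nearby = Fraction(1000000001, 1000000)**9

# A perturbation of one part in 10^9 in the input moves the output by ~9e18.
print(float(nearby - exact))
```

This is why symbolic-computation systems carry exact big integers (or rationals) through polynomial arithmetic rather than rounding at each step.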
Quadruple precision and arbitrary precision arithmetic
I personally needed quadruple precision when I published this paper on the ZnO molecule. Measurements on such molecules can be done with a precision of $10^{-10}\,\mathrm{cm}^{-1}$, and what I was doing (fitting a potential energy curve that describes the internuclear interaction between the Zn and O atoms in ZnO, in order to reproduce the experimental data) did not work with the state-of-the-art software at the time, so we had to switch from double to quadruple precision. Stack Overflow helped me with that, after I asked Converting a working code from double-precision to quadruple-precision: How to read quadruple-precision numbers in FORTRAN from an input file, and later I asked a follow-up question on the same topic: How do I declare the precision of a number to be an adjustable parameter?
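To illustrate what the jump from double to quadruple precision buys, here is a small sketch using mpmath to emulate quad-like precision (roughly 33 significant decimal digits, versus about 16 for IEEE double). The example quantity is hypothetical; it just shows a difference that double precision cannot resolve at all:

```python
from mpmath import mp, mpf

# Emulate roughly IEEE quadruple precision (~33 significant decimal digits).
mp.dps = 33

a = mpf(1) + mpf(10)**-20      # 1 + 1e-20, representable at this precision
print(a - 1)                   # a tiny but nonzero difference survives

# The same computation in double precision loses the difference entirely:
print((1.0 + 1e-20) - 1.0)     # 0.0
```

In Fortran (the language of the linked questions) the analogous change is declaring variables with a quad-precision kind instead of double precision.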
In my answer to How much does it cost (RAM and CPU) to calculate the energy of H2? you can see another example in which arbitrary-precision arithmetic is needed. I'll provide the screenshot here so that you can see how many digits are needed before any differences between the numbers appear (this is the convergence precision that the authors wanted):
There are many other applications that need arbitrary-precision arithmetic. Another example is calculating the roots of a Rys polynomial for evaluating electron repulsion integrals, which is crucial for tens of thousands of scientists. See the mpmath package, and the projects that use it, for more examples of uses of multi-precision or arbitrary-precision arithmetic.
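As a sketch of the mechanics (the actual Rys-polynomial machinery for electron repulsion integrals is more involved), mpmath's `polyroots` can find polynomial roots to an arbitrary number of digits. Here it is applied to the simple polynomial $x^2-2$ at 50 significant digits:

```python
from mpmath import mp, mpf, polyroots

mp.dps = 50                     # work with 50 significant decimal digits

# Roots of x^2 - 2 = 0, i.e. +/- sqrt(2); coefficients in descending order.
roots = polyroots([1, 0, -2])
for r in roots:
    print(r)

# Each root satisfies the polynomial to essentially full working precision.
for r in roots:
    assert abs(r**2 - 2) < mpf(10)**-40
```

Raising `mp.dps` is all it takes to get more digits, which is exactly the kind of knob quadrature-root calculations need.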
Fun math
Using computers for "fun math", such as breaking records for the largest number of digits of $\pi$ or computing deep zooms of Mandelbrot fractals, also requires support for "big numbers".
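Record attempts compute trillions of digits, but the same idea can be tried at home: mpmath computes $\pi$ to any requested precision with one setting. A minimal sketch:

```python
from mpmath import mp

# Ask for 100 significant decimal digits of pi.
mp.dps = 100
print(mp.pi)    # 3.141592653589793238462643383279502884...
```

Deep Mandelbrot zooms work the same way: once the zoom exceeds double precision (around $10^{16}\times$), the iteration must be carried out in multi-precision arithmetic.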