
I need to multiply several big long integers as efficiently as possible.

I am trying to implement the Harvey & van der Hoeven 2019 algorithm for integer multiplication, but I am stuck on understanding the definition and mathematics behind it, especially the Agarwal–Cooley algorithm.

Any help to understand this algorithm, like a practical example or some pseudo-code would be highly appreciated.


2 Answers


Remember how Big O notation is defined: f(x) = O(g(x)) means there exist a constant C > 0 and a threshold x₀ such that |f(x)| ≤ C·g(x) for all x ≥ x₀.
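Written out formally, this is the standard textbook definition:

```latex
f(x) = O\bigl(g(x)\bigr)
\quad\Longleftrightarrow\quad
\exists\, C > 0,\ \exists\, x_0 \ \text{such that}\ \lvert f(x)\rvert \le C \cdot g(x)\ \text{for all}\ x \ge x_0 .
```

Note that the definition says nothing about how large C and x₀ are, which is exactly the loophole at play here.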

The problem with the Harvey &amp; van der Hoeven (2019) algorithm is that the x₀ involved is enormous. So for inputs of any size you could ever encounter in practice, their algorithm multiplies integers more slowly than existing methods. Only for astronomically large numbers does the O(n log n) bound actually pay off.

But how big are those numbers? David Harvey, one of the authors states:

The new algorithm is not really practical in its current form, because the proof given in our paper only works for ludicrously large numbers. Even if each digit was written on a hydrogen atom, there would not be nearly enough room available in the observable universe to write them down.

On the other hand, we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits. If so, it may well become an indispensable tool in the computational mathematician's arsenal.

Therefore, if you are serious about your stated goal of multiplying big numbers quickly, this algorithm is not the way to go about it.

  • Just to say, if my calculations are correct it's somewhere near 10^36 digits :D For comparison, the largest prime number currently known (and multiplying such huge numbers is the most common use case) is less than 10^8 digits long. I don't think we will ever have enough computing power to make that algorithm useful. Commented Sep 17, 2022 at 19:09
  • @ElChupacabra Actually, to be precise, the number of digits of the input would itself need to be 10^36 digits long, i.e. you need inputs with at least 2^(10^36) digits.
    – Max Meijer
    Commented Sep 26, 2023 at 9:40

If your long integers are less than about 10,000 bits and you are using a regular 32- or 64-bit computer, I suggest Karatsuba–Ofman. It can be sped up using parallelism, e.g. multi-threading or a GPU. If you want to make a custom chip to do it fully in parallel, use the identity 4XY = (X+Y)^2 - (X-Y)^2 and build a Karatsuba–Ofman squarer. That takes less chip area because the squarer has only n input lines instead of 2n.
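To make this concrete, here is a minimal software sketch of the Karatsuba recursion (the base-case threshold and the bit-level splitting are illustrative choices, not tuned values):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers via Karatsuba's divide-and-conquer."""
    # Base case: small operands are multiplied directly.
    if x < 2**32 or y < 2**32:
        return x * y
    # Split both operands at half the bit length of the larger one:
    # x = x_hi * 2^m + x_lo, y = y_hi * 2^m + y_lo.
    m = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> m, x & ((1 << m) - 1)
    y_hi, y_lo = y >> m, y & ((1 << m) - 1)
    # Three recursive multiplications instead of the four of schoolbook splitting.
    a = karatsuba(x_hi, y_hi)                              # high * high
    b = karatsuba(x_lo, y_lo)                              # low * low
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b        # cross terms
    return (a << (2 * m)) + (c << m) + b
```

Doing three recursive multiplications instead of four is what brings the cost down from O(n^2) to O(n^1.585). The hardware squarer mentioned above relies on the same idea, reached through the identity 4XY = (X+Y)^2 - (X-Y)^2.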

  • This does not help Understanding Harvey & van der Hoeven 2019 algorithm (or 2021, at that).
    – greybeard
    Commented Jun 27, 2022 at 8:58
    the accepted answer also doesn't help (on the algo understanding) .. but every input that helps OP to move on (efficient multiplication) should count. imho. /(^_^)
    – p._phidot_
    Commented Jun 15, 2023 at 11:26
