I thought it would be interesting to see how the number of bits of precision affects the error. I wrote the following using Arb, a C library for arbitrary-precision ball arithmetic (a midpoint-plus-radius form of interval arithmetic that tracks precision loss due to machine arithmetic). Instead of working with single numbers, Arb works with balls [m +/- r], and when an operation causes a loss of precision the radii grow.
// C++, using Arb (arblib)
#include <iostream>

#include "arb.h"

int main() {
    for (slong prec : { 32, 64, 128, 256, 512 }) {
        arb_t x;
        arb_init(x);
        arb_set_si(x, 5);     // x = 5
        arb_printn(x, 50, 0); // print x to at most 50 decimal digits
        std::cout << "\n";
        for (int i = 0; i < 73; i++) {
            arb_cos(x, x, prec);       // x = cos(x), computed at `prec` bits
            arb_mul_ui(x, x, 6, prec); // x = 6 * x
            std::cout << i << " ";
            arb_printn(x, 50, 0);
            std::cout << "\n";
        }
        arb_clear(x);
    }
}
With 32 bits of precision, the first lines of output look like
5.0000000000000000000000000000000000000000000000000
0 [1.70197311 +/- 3.48e-9]
1 [-0.78480545 +/- 8.22e-9]
2 [4.2451546 +/- 6.14e-8]
3 [-2.702513 +/- 4.55e-7]
4 [-5.43086 +/- 2.41e-6]
By the 10th iteration, it's completely hopeless.
9 [5.7 +/- 0.0451]
10 [5e+0 +/- 0.191]
11 [2e+0 +/- 0.809]
12 [+/- 7.01]
With 64 bits of precision, it works a bit better.
5.0000000000000000000000000000000000000000000000000
0 [1.701973112779357587 +/- 5.61e-19]
1 [-0.78480545237593126 +/- 2.53e-18]
2 [4.2451546016343455 +/- 2.28e-17]
3 [-2.7025128979996596 +/- 9.55e-17]
4 [-5.430859490891791 +/- 5.27e-16]
...
22 [5.0 +/- 0.0819]
23 [2e+0 +/- 0.325]
24 [+/- 3.62]
This is a different library, but the underlying loss of precision from finite-precision arithmetic will always be present.
With 128 bits of precision, we have
5.0000000000000000000000000000000000000000000000000
0 [1.70197311277935758679983502908134385000 +/- 9.16e-39]
1 [-0.7848054523759312613033562894196438204 +/- 9.13e-38]
2 [4.245154601634345486781290325089041750 +/- 7.45e-37]
3 [-2.70251289799965963668557149619904016 +/- 4.82e-36]
...
46 [5.66 +/- 8.81e-3]
47 [4.9 +/- 0.0749]
48 [+/- 1.14]
Much further! With 256 bits of precision, we have
5.0000000000000000000000000000000000000000000000000
0 [1.7019731127793575867998350290813438500065355535133 +/- 4.34e-51]
1 [-0.78480545237593126130335628941964382041108157876186 +/- 3.49e-51]
2 [4.2451546016343454867812903250890417494462853687055 +/- 1.75e-50]
3 [-2.7025128979996596366855714961990401635527246365811 +/- 1.16e-50]
...
70 [5.758713845316529420111 +/- 2.17e-22]
71 [5.193532385598491745861 +/- 8.95e-22]
72 [2.77675840848277346492 +/- 6.34e-21]
That is, the first several digits on an exact calculator should read 2.7767584... But even using 256 bits only gives about 21 decimal digits after all the iterations.
With 512 bits of precision (which we now expect to be far more than enough), we confirm
5.0000000000000000000000000000000000000000000000000
0 [1.7019731127793575867998350290813438500065355535133 +/- 4.34e-51]
1 [-0.78480545237593126130335628941964382041108157876186 +/- 3.49e-51]
2 [4.2451546016343454867812903250890417494462853687055 +/- 1.75e-50]
...
70 [5.7587138453165294201111165200927171983191371489036 +/- 1.65e-50]
71 [5.1935323855984917458612965663072073056805703605339 +/- 3.50e-51]
72 [2.7767584084827734649172502497247816301002663316858 +/- 1.81e-50]
The e-50 error terms here are because I was only printing the first 50 digits; they are not an indicator of the actual precision bound in the end.