"double-double" / "compensated" arithmetic uses unevaluated sums of floating point numbers to obtain higher precision.
One of the basic algorithms is TWOSUM
(a,b) = (x,y) where $x,y$ don't overlap each other (while $a,b$ might overlap), with all operations in machine-precision floating point:
x := a + b
z := x - a
y := (a - (x - z)) + (b - z)
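As a sanity check on concrete inputs, the three operations can be run in ordinary double precision and the resulting pair $(x, y)$ compared against $a + b$ computed in exact rational arithmetic. A minimal sketch in Maxima (assuming its default floats are IEEE-754 binary64; twosum is just a name I made up, and rationalize converts a float to its exact rational value):

twosum(a, b) := block([x, z, y],
    x : a + b,                          /* rounded sum */
    z : x - a,
    y : (a - (x - z)) + (b - z),        /* candidate rounding error */
    [x, y]
)$
res : twosum(0.1, 0.2)$
/* true iff x + y equals a + b as exact rational numbers */
is(equal(rationalize(res[1]) + rationalize(res[2]),
         rationalize(0.1) + rationalize(0.2)));

The final check returns true exactly when the two floats together represent $a + b$ without error.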
I have seen it claimed that this is an "error-free transform", i.e. that $x + y = a + b$ holds exactly. I have not been able to prove it: my attempt, multiplying the result of each operation by a factor $(1 + \epsilon_k)$ in (wx)Maxima, yields:
x : (a + b) * (1 + e[1])$                        /* e[1]: rounding of a + b */
z : (x - a) * (1 + e[2])$                        /* e[2]: rounding of x - a */
y : ((a - (x - z) * (1 + e[3])) * (1 + e[4])     /* e[3]: x - z,  e[4]: a - (x - z) */
     + (b - z) * (1 + e[5])) * (1 + e[6])$       /* e[5]: b - z,  e[6]: outer sum   */
expand((x + y) - (a + b));
-> -a*e[3] + terms containing more than one e[k]
That is, I think the transform is only error-free if e[3] is zero, i.e. when the difference x - z
is exactly representable, so the subtraction commits no rounding error. Is this always the case? Why?
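In case it helps with experimenting, here is one way to probe that last question numerically rather than symbolically (again only a sketch in Maxima; exact_diff_p is an ad-hoc name and the operand ranges are arbitrary). It checks, for a given input pair, whether the machine subtraction x - z returns the exact difference:

/* true iff the floating-point subtraction x - z is exact for these inputs */
exact_diff_p(a, b) := block([x, z],
    x : a + b,
    z : x - a,
    is(equal(rationalize(x - z), rationalize(x) - rationalize(z)))
)$
/* try a few pairs with very different magnitudes */
makelist(exact_diff_p(random(2.0) - 1.0, (random(2.0) - 1.0) * 1.0e-9), i, 1, 10);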