Most proof assistants assume that the hardware they run on is correct. However, there are many recorded instances of CPUs producing incorrect results for particular calculations (especially, but not limited to, floating-point calculations).
It's impossible in principle for a computer program to prove its own CPU correct: consider a CPU that, if it detects that it's running such a program, simply outputs a success message. However, most real-world CPU errors are inconsistencies, which can in principle be detected by a program running on the faulty CPU itself; indeed, most of these errors, when discovered, are demonstrated by presenting programs that visibly misbehave. The human brain is not infallible either, yet humans are nonetheless capable of noticing errors in their own calculations and proofs, even when the error stems from a mis-memorised arithmetic fact like 7×8=42.
Are there any theorem provers (or similar programs) that mitigate the impact of running on faulty hardware? (Perhaps by verifying the correctness of particular aspects of the hardware before relying on them, or by double-checking that an intermediate result doesn't imply a contradiction.)
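For concreteness, here is the flavour of self-check I have in mind. This is only a hypothetical sketch (the function `mul_checked` and the modulus `M` are inventions for illustration, not taken from any existing prover); it applies the same casting-out-nines idea a human might use to catch 7×8=42:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch: cross-check every multiplication against an
 * independently computed residue, the machine analogue of a human
 * "casting out nines" to catch slips like 7*8 = 42.
 * Assumes a * b does not overflow 64 bits. */
#define M 251u  /* any modulus > 1 works; this choice is arbitrary */

static uint64_t mul_checked(uint64_t a, uint64_t b) {
    uint64_t p = a * b;                   /* the result under test    */
    uint64_t r = ((a % M) * (b % M)) % M; /* residue via small inputs */
    if (p % M != r) {                     /* inconsistency detected   */
        fprintf(stderr, "CPU inconsistency: %llu * %llu gave %llu\n",
                (unsigned long long)a, (unsigned long long)b,
                (unsigned long long)p);
        abort();                          /* refuse to trust the CPU  */
    }
    return p;
}

int main(void) {
    /* A faulty multiplier returning, say, 42 for 7 * 8 would abort. */
    printf("%llu\n", (unsigned long long)mul_checked(7, 8));
    return 0;
}
```

Of course, the check itself still runs on the suspect CPU, so it raises confidence rather than proving anything; but data-dependent faults, where only rare operand patterns misbehave, are exactly the kind it could catch.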
Well-known CPU bugs include:
- Rowhammer: certain memory access patterns can corrupt other bits of memory (the classic access pattern is sketched after this list).
- Xbox 360's `xDCBT` bug: if your code contains a certain sequence of bytes, the branch predictor might decide to make your L1 cache inconsistent. (Or it might not. It depends on the pattern of code execution.)
- Pentium FDIV bug: Intel accidentally zeroed part of the division lookup table, so $\frac{4\,195\,835}{3\,145\,727} = 1.333\color{red}{739\dots}$ instead of the correct $1.333\,820\dots$ (a one-line probe for this bug also follows the list).
- The x86 trigonometric instructions are very wrong: doing range reduction correctly would require knowing 128 bits of π, but x86 only knows 68. An error δ in the stored π gets amplified by roughly x/2π when reducing a large argument x, so the result can be pure noise (demonstrated below). This is still the case on modern processors; pretty much everyone implements their own trig in software.
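The Rowhammer access pattern is simple enough to sketch. The loop below follows the shape published by Kim et al. (2014); actually flipping bits also requires choosing two addresses that map to different rows of the same DRAM bank, which is omitted here (`hammer` is a name I've made up):

```c
#include <emmintrin.h>  /* _mm_clflush (x86 SSE2) */

/* Repeatedly activate two DRAM rows: read both addresses, then flush
 * them from the cache so that the next reads go all the way to DRAM.
 * Done often enough between refresh cycles, this can flip bits in
 * physically adjacent rows on vulnerable DRAM. */
static void hammer(volatile int *a, volatile int *b, long iterations) {
    while (iterations--) {
        (void)*a;                      /* activate row containing a */
        (void)*b;                      /* activate row containing b */
        _mm_clflush((const void *)a);  /* evict so next read hits DRAM */
        _mm_clflush((const void *)b);
    }
}
```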
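The FDIV bug also shows how cheap detection can be. Below is the widely circulated 1994 probe (wrapped in a `main` of my own): on a correct IEEE 754 FPU the quotient rounds back exactly and the expression is 0, while a flawed Pentium reportedly returns 256:

```c
#include <stdio.h>

int main(void) {
    /* The classic FDIV probe.  On a correct IEEE 754 FPU the quotient
     * rounds back exactly, so r == 0; a flawed Pentium gets the
     * quotient wrong from the fourth decimal digit and reportedly
     * yields r == 256. */
    volatile double x = 4195835.0, y = 3145727.0;  /* volatile stops the
                               compiler folding the division away */
    double r = x - (x / y) * y;
    printf("r = %g (%s)\n", r,
           r == 0.0 ? "division looks correct" : "FDIV-style bug!");
    return 0;
}
```

Note the general pattern: verifying a division by multiplying back is far cheaper than the division itself, and exercises a different functional unit.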
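And to see why range reduction eats bits of π: reducing x modulo 2π multiplies any error in the stored π by about x/2π. The snippet below (my own illustration) reduces the same large argument with a 53-bit π and, on x86 where `long double` carries a 64-bit mantissa, a 64-bit π; the two reduced arguments come out completely different, so any sine computed from them is meaningless:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 0x1p60;  /* a large but exactly representable argument */
    /* fmod itself is exact; all of the error comes from the stored
     * approximation of 2*pi that we reduce by. */
    double      twopi_53 = 6.283185307179586;         /* 53-bit 2*pi */
    long double twopi_64 = 6.283185307179586476925L;  /* 64-bit 2*pi */
    printf("reduced with 53-bit pi: %.17g\n",  fmod(x, twopi_53));
    printf("reduced with 64-bit pi: %.17Lg\n", fmodl(x, twopi_64));
    return 0;
}
```

Even the 64-bit π leaves an absolute error of about 2^60 · 2^-64 ≈ 0.06 in the reduced argument here; getting a double-precision sine right across the whole fsin domain (|x| < 2^63) needs 53 correct bits after an amplification of up to 2^63, which is roughly where the 128-bit figure comes from.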