First, it is helpful to understand the various reasons that definitions are marked `noncomputable`.
Axiom of choice: If a definition uses choice, then it is noncomputable. This isn't necessarily surprising, since choice lets you choose an arbitrary element from a non-empty type, so you have no way of knowing what that element is.
Unique choice: A particular case of using choice is unique choice, where we still use choice to get a value but can also prove there is only one such value. There are some type theories, notably cubical type theory, where unique choice doesn't break computation, but in Lean it does (unique choice is simply implemented with choice).
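As a small illustration (a sketch with made-up names, using only core Lean), both forms of choice force the `noncomputable` marker:

```lean
-- Picking an arbitrary element of a nonempty type uses choice,
-- so Lean requires the definition to be marked `noncomputable`:
noncomputable def pick (α : Type) [h : Nonempty α] : α :=
  Classical.choice h

-- Unique choice: even when we can prove there is exactly one `x`
-- satisfying `p`, extracting it still goes through `Classical.choose`:
noncomputable def theUnique {α : Type} (p : α → Prop)
    (h : ∃! x, p x) : α :=
  Classical.choose h
```

Removing `noncomputable` from either definition makes Lean report a compilation error, since no executable code can be generated for `Classical.choice`.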
Quotients: While quotients don't necessarily break computability, they do make it more difficult. For example, let's say we define `IntMod7` as a quotient (the integers mod 7). Then any function `foo (x : IntMod7) : A` has to have the property that, whatever underlying representation of `x` we have, it computes the same `a : A` as any other equivalent representation.
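Here is a minimal sketch of what that obligation looks like (illustrative names; Mathlib's real version of this type is `ZMod 7`). Defining a function out of the quotient with `Quotient.lift` requires a proof that the function gives the same answer on any two related representatives:

```lean
-- Two integers are related when they differ by a multiple of 7.
instance intMod7Setoid : Setoid Int where
  r a b := (a - b) % 7 = 0
  iseqv :=
    ⟨fun a => by show (a - a) % 7 = 0; omega,
     fun {a b} h => by
       have h' : (a - b) % 7 = 0 := h
       show (b - a) % 7 = 0; omega,
     fun {a b c} h₁ h₂ => by
       have h₁' : (a - b) % 7 = 0 := h₁
       have h₂' : (b - c) % 7 = 0 := h₂
       show (a - c) % 7 = 0; omega⟩

def IntMod7 := Quotient intMod7Setoid

-- `Quotient.lift` demands a proof that the underlying function
-- respects the relation, i.e. agrees on equivalent representatives:
def canonical (x : IntMod7) : Int :=
  Quotient.lift (fun a => a % 7)
    (fun a b h => by
      have h' : (a - b) % 7 = 0 := h
      show a % 7 = b % 7; omega) x
```

The second argument to `Quotient.lift` is exactly the "same answer for every representation" requirement described above.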
Ease of defining functions: Many definitions in Mathlib are noncomputable because (1) the most natural definition was not computable (like the definition of `Polynomial`), or (2) any practical computation is near impossible, so it is best just to mark it noncomputable to speed up Mathlib.
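For example (assuming Mathlib is imported; `p` is just an illustrative name), even writing down a completely concrete polynomial requires the marker, because Mathlib's `Polynomial` is built on noncomputable `Finsupp` operations:

```lean
import Mathlib

-- A perfectly concrete polynomial, yet Lean insists on `noncomputable`
-- because the `Add`/`Pow` operations on `Polynomial` go through
-- noncomputable `Finsupp` machinery:
noncomputable def p : Polynomial ℚ := Polynomial.X ^ 2 + 1
```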
Computability on the reals
Now, since you seem interested in the real numbers, let's talk about some of the unique challenges there. While Lean just marks the reals as noncomputable, there is a field of computable analysis (closely related to constructive analysis) in which it does make sense to say that a function on the reals is computable. There, one represents a real as a (quotient of a) sequence of fast-converging rationals, and a computable definition (besides being computable) has to give the same answer for all equivalent fast-converging sequences. Note that many things are not computable with this representation of the reals. You can't (uniformly) compute the decimal expansion of a real: if it is, say, close to 0.99999, one can't be sure whether the first digits are 0.9 or 1.0. Similarly, you can't have a computable function `approx (r : Real) (n : Nat) : Rat` which approximates `r` within `2^(-n)`, since you would need an algorithm which comes up with the exact same rational approximation regardless of the representation of `r`.
Worse, there are real numbers that we can express in Lean which are just not computable at all. A simple example is the real number that encodes the halting problem in its bits. A more interesting example is Solomonoff's universal prior probability. (It should be noted that such definitions are inherently fiddly and depend on the specific representation of the universal Turing machine; I don't think there is a natural example of a non-computable real number.)
There are also real numbers that are likely computable, but we don't know for sure. My favorite example is the area of the Mandelbrot set.
Also, even for numbers we can compute, like `Real.pi`, it doesn't mean the original definition is naturally computable. For example, `Real.pi` is defined in Lean as $2x$, where $\cos x = 0$ and $x \in [1,2]$.
So all this means that even if one changed the definition of the reals, one still couldn't build a universal approximation function on the reals as a pure Lean function.
Working solutions
But hope is not lost! Even with the current way that Lean represents the reals and other definitions, there are workable solutions, albeit ones that are not yet available and require some work to build.
Metaprogramming
Let's say we want to approximate `Real.log (2 * Real.pi)` in Lean. One could do this given access to the expression itself. The expression is not a real, but just a formal Lean object specifying the formula. One could then write a set of rules for computing with common special functions (say, all the functions on the reals specified in the floating-point standard). So again, the input is an expression and the output is some approximation (in floating-point or rational numbers, say). This could be implemented as a tactic or a user command. Indeed, this isn't that different from the `norm_num` tactic, which can do some routine computations on the reals to simplify an expression.
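For instance (assuming Mathlib is imported), `norm_num` discharges routine numeric goals over the reals by manipulating the expressions symbolically, without ever "running" a real number:

```lean
import Mathlib

-- `norm_num` normalizes numerals at elaboration time; no real
-- number is evaluated at runtime:
example : (2 : ℝ) + 2 = 4 := by norm_num
example : (1 : ℝ) / 3 < 1 / 2 := by norm_num
```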
One could also create tactics that give provably correct bounds (say, using interval arithmetic or verified floating-point arithmetic) along with proofs of correctness. For example, such a tactic could add `h : Real.pi < 3.1415927` to the hypotheses of your goal. Indeed, Lean already has tactics like this for `Real.pi`, namely `pi_lower_bound` and `pi_upper_bound`, which can be used to automatically prove bounds like `Real.pi < 3.141593`.
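Mathlib also ships precomputed bounds proved with this machinery; assuming the lemma names below are still current (check the library, as names occasionally move), they can be used directly:

```lean
import Mathlib

-- Ready-made verified bounds on π from Mathlib:
example : Real.pi < 3.141593 := Real.pi_lt_3141593
example : 3.141592 < Real.pi := Real.pi_gt_3141592
```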
External implementations
In cases where Mathlib has a definition (like polynomials) that is marked noncomputable but doesn't have to be, one could create a computable implementation of the definition (using, say, a fast algorithm). That implementation can be used for computation in place of the original definition. One then has some nice options:
- One can prove the fast computable version is equivalent to the original.
- One can have Lean replace the original with the new version using `@[implemented_by ...]`.
- One can even have Lean replace the original with a function implemented in another language, like C, using `@[extern ...]`. (This is how natural numbers are so fast in Lean.)
It should be noted that `@[implemented_by]` and `@[extern]` are used only when executing code in Lean (running Lean code from the command line, `#eval`, and metaprogramming); they cannot be used when proving things about the function itself (which prevents inconsistencies).
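As an illustrative sketch of `@[implemented_by]` (function names made up): the naive doubly recursive Fibonacci below is exponential-time if executed directly, so we swap in a fast tail-recursive version at runtime, while proofs continue to see the naive equations:

```lean
-- Fast linear-time version, used by the compiler at runtime.
def fibFast (n : Nat) : Nat := (go n).1
where
  go : Nat → Nat × Nat
  | 0 => (0, 1)
  | n + 1 => let (a, b) := go n; (b, a + b)

-- Reference definition. `#eval fib 32` actually runs `fibFast`,
-- but the kernel still reasons about these equations.
@[implemented_by fibFast]
def fib : Nat → Nat
  | 0 => 0
  | 1 => 1
  | n + 2 => fib n + fib (n + 1)

#eval fib 32  -- fast, thanks to the replacement
```

Note that `implemented_by` does not require a proof that `fibFast` agrees with `fib`; you are trusted on that point, which is exactly why the attribute is invisible to the proof system.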
Using other data types instead of real numbers
Lean has `Rat` for rationals and `Float` for floating-point numbers. With these you can do practical computation à la Python.
I don't see a `Float.pi` yet, but there is, for example, `Float.exp 1`, which you can execute to get `2.718282`. Of course, it would be great if there were a complete set of theorems and tactics connecting `Float` to `Real`, but that is also a lot of work, since it involves proving things about the IEEE 754 standard. (And it may depend on the computer implementation; I'm not sure.)
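A couple of quick examples, runnable in plain Lean with no extra imports (outputs are approximate, as printing precision may vary by version):

```lean
-- `Float` is IEEE 754 double precision; these evaluate like
-- ordinary floating-point code in any language:
#eval Float.exp 1          -- ≈ 2.718282
#eval Float.sqrt 2.0       -- ≈ 1.414214
#eval (1.0 / 3.0 : Float)  -- ≈ 0.333333
```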
Conclusion
Blending provability and computability for real-world mathematical objects like real numbers and polynomials involves an interesting mix of motivations, and I think the best solutions are still being worked out and discussed. One day a language like Lean will have all the power of a proof assistant, a programming language, and a CAS, but right now there are rough edges, especially on the CAS side. Using metaprogramming, it is possible to work directly with Lean expressions. Using foreign function interfaces, it is possible to replace a noncomputable or slow definition with a fast one. But given that some definitions are simply non-computable, and that computability on the real numbers is subtle, one needs to be careful about one's expectations.