This answer by Kevin Buzzard talks, among other things, about incremental progress made in Lean's mathlib vs Isabelle.
There's a lot in that answer that I don't fully understand, but one thing that seems clear is that formalizing the canon of mathematics in different provers is slow going, and that monolithic libraries bring substantial benefits. Here's a quote from the conclusion of the answer:
> I don't see any reason why there can't be a Coq version of mathlib; again the reason it's not there is I suspect sociological rather than anything else. The point about making the library monolithic is that then you are not allowed to have more than one version of rings (or more than one version of the natural numbers; the last time I checked there were four or five variants available to Coq users) -- in a monolithic library you need a consistent collection of definitions all of which play well with each other.
This may be wrong, but my impression is that formalization seldom uncovers mistakes in widely accepted mathematical theorems. For cutting-edge results, formalization may uncover mistakes more frequently, but I'm not sure.
This makes me wonder whether there are any current attempts to produce useful monolithic libraries that formally state a large collection of mathematical theorems but then simply admit them (in the Coq sense of `Admitted`), with citations to the mathematical literature.
One practical application for such a library would be proving novel results with proof assistants ... using the proof assistant to formally check only the last mile between known results and the novel result.
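To make the idea concrete, here is a rough sketch of what an entry in such a library, and its "last mile" use, might look like. This is my own illustration in Lean 4 / mathlib-style syntax (the Coq analogue would end the known result with `Admitted.`); the theorem name, the import path, and the exact statement are my assumptions and haven't been checked against any particular mathlib version:

```lean
import Mathlib.Data.Nat.Prime.Basic

/-- Bertrand's postulate: for every positive `n` there is a prime `p`
with `n < p ≤ 2n`. Admitted here on the authority of the literature;
see the published proofs for justification. -/
axiom bertrand_admitted (n : ℕ) (hn : n ≠ 0) :
    ∃ p, Nat.Prime p ∧ n < p ∧ p ≤ 2 * n

-- The "last mile": a (deliberately trivial) consequence is formally
-- checked, with only the cited known result left unproved.
example : ∃ p, Nat.Prime p ∧ 100 < p ∧ p ≤ 200 :=
  bertrand_admitted 100 (by decide)
```

Whether one uses `axiom`, `sorry`, or Coq's `Admitted` matters for how the prover tracks and reports the trust assumptions, but the shape of the library entry is the same: a precise formal statement, no proof, and a citation.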
My question has several parts.
- Do any libraries like this exist?
- If libraries like this did exist, would they have any value?
- Would attempting this actually save time? From the linked answer, it seems a lot of the effort goes into choosing definitions so that they interact nicely within a monolithic library ... rather than into producing the proofs themselves.