Your question probably needs some clarification.
## Which theorems make sense in both ZFC and Lean
It isn't entirely unambiguous what it means for a Lean theorem to "hold in ZFC". Sometimes it comes down to how things are defined. For a silly example, in ZFC with the usual encoding, $\mathbb{Z} \neq \mathbb{N}$, but in Lean it is well known that $\mathbb{Z} = \mathbb{N}$ is independent. Of course, in both ZFC and Lean, one could have defined $\mathbb{Z} := \mathbb{N}$, and then we would have $\mathbb{Z} = \mathbb{N}$.
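To make the silly example concrete, here is a minimal Lean sketch (exact behavior may vary by Lean/Mathlib version):

```lean
-- `Int = Nat` is a perfectly well-formed proposition about the two types:
#check (Int = Nat : Prop)

-- But Lean's axioms prove neither `Int = Nat` nor `Int ≠ Nat`:
-- the two types are in bijection, and nothing in the logic pins down
-- whether equinumerous types are equal, so the statement is independent.
-- (In ZFC with the usual encodings, by contrast, ℤ ≠ ℕ is a theorem.)
```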
For a normal mathematical theorem, there is no ambiguity and it doesn't come down to definitions. In practice, a mathematician has a good sense of what a "normal mathematical theorem" means. However, just to be pedantic, for the rest of this answer I'll just talk about theorems of higher-order arithmetic (which is a common approach in logic to deal with these sorts of issues). This eliminates the ambiguity, and surprisingly you can encode a huge amount of mathematics into arithmetic.
## Not enough to look at the theorem statement itself
As François G. Dorais's answer suggests, you can't just look at the statement of the theorem itself. We can encode theorems like $\text{Con}(\mathsf{ZFC} + \text{$n$ inaccessibles})$ into arithmetic, and they are provable in Lean for any given $n$, but they are not provable in ZFC.
(Also, this comes down to Lean's `Prop` being impredicative. If it were not, then each universe would have its own `Prop` level, and it would likely be true that (arithmetic) theorems in `Prop 0` also hold in ZFC, and likely in weaker theories, from my limited understanding.)
## What if we restrict the universes used in a proof?
The picture changes if we can also look at the proof. In Lean, proving $\text{Con}(\mathsf{ZFC} + \text{$n$ inaccessibles})$ takes roughly $n+1$ universe levels: `Type 0`, `Type 1`, ..., `Type n`. Mario Carneiro's thesis roughly proves that if a theorem (of arithmetic) is provable in Lean with $n$ universe levels, then it is provable in ZFC plus $n$ inaccessible cardinals, "up to a fence post error" as Mario says in a Zulip discussion inspired by this PA.SE question.
And it probably shouldn't be that hard to get an upper bound on how many universes go into a proof. Indeed, a few years ago, Mario Carneiro wrote a script to measure this and determined that Mathlib used only four universe levels. I could even imagine a command analogous to `#print axioms` that counts how many universe levels a theorem needs.
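As a hedged sketch of what such tooling might start from: Lean's metaprogramming API exposes the universe parameters of a declaration. Everything below only inspects the statement of `List.map`, chosen as an arbitrary example; bounding the levels actually instantiated throughout an entire proof term (as Mario's script did) takes considerably more work.

```lean
import Lean
open Lean Meta

-- A crude starting point: print the universe parameters of a declaration.
-- This only looks at the declaration itself, not at which levels the
-- proof term instantiates, so it is an over- and under-approximation at once.
#eval show MetaM Unit from do
  let info ← getConstInfo ``List.map
  IO.println s!"universe params of List.map: {info.levelParams}"
```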
But this doesn't fully answer your question. The "fence post error" above hasn't been sufficiently worked out, so knowing that a proof uses zero universes does not by itself guarantee that the theorem holds in ZFC.
Also, Lean doesn't record every step of the proof. Some parts are computed on the fly through type checking, and as François G. Dorais pointed out in that Zulip discussion, hidden universe bumps may happen there that are hard to measure. (There is more discussion of this in that Zulip thread.)
Also, with Mario Carneiro's naive method for computing an upper bound on the universe levels, a lot of things would bump the level up unnecessarily. I think just using `List Nat` would already bump the universe level, since `List.{0}` has type `Type 0 → Type 0`, which lives in `Type 1`. (Note that the analogue of `List` in ZFC is not a set but a proper class, because it is too big.) But that universe bump is a bit silly if the proof never uses more than `List Nat`, which lives in `Type 0`. Mario discussed (again in that Zulip thread) better ways to bound the universe levels so as to truly capture whether something is provable in ZFC.
## Practically speaking
Given the practical concerns I mention above, I doubt (at least right now) you could easily figure out if a theorem in Lean holds in exactly ZFC. I think without doing a lot of meta-logic and meta-programming work, the easy upper bounds that would pop out would likely include at least one universe level for most theorems in Lean's Mathlib.
If you care about this sort of thing, I think it would be better right now to use HOL-Light or Isabelle/HOL whose logic is (I believe) strictly weaker than ZFC. (Isabelle's AFP is a very large theorem library.)
Also, this concern is not new to math. Lean's users have embraced universes and use them freely, just like most mathematicians have embraced the axiom of choice and the law of excluded middle. This may still be odd in some circles of math. I remember once someone telling me Fermat's last theorem wasn't currently a "theorem" since it used universes in its proof. (This has since been worked out.) But what counts as a "real theorem" is a social norm that may change over time. For example, some categorical fields of math have already widely embraced universes.
Still, one can of course ask what the minimal axioms needed to prove a theorem are. There are entire fields of logic devoted to these sorts of questions. But that investigation becomes harder and harder as you restrict the logic more and more. (One of the most extreme examples I've seen is Harvey Friedman's conjecture that all theorems in the Annals which can be stated in first-order arithmetic, including Fermat's Last Theorem, hold in elementary function arithmetic (EFA), which is an exceedingly weak theory. I doubt this will be proven any time soon.)