
This document showed that Lean's impredicative universe of strict propositions breaks normalization (of proofs) in a way that leaves canonicity and logical consistency unaffected, because the counterexample lives in Prop. I wonder how much trouble this counterexample causes in practice, both syntactically (in terms of Lean type checking) and semantically (in terms of Lean's metatheory).

To me, it seems that one simply needs to block reduction for terms in Prop and consider all such terms definitionally equal. This would at least recover termination of type checking (or, more precisely, defuse this particular counterexample to its termination), which would resolve the obvious problem pointed out by the paper.

Pierre-Marie Pédrot said:

Impredicative SProp breaks SN only when you can eliminate the SProp equality into a non-SProp sort. Otherwise, it's fine.

I'm also unsure about that. I believe it is established that equalities in SProp cannot be eliminated into non-SProp sorts: for example, a boolean in SProp satisfies true = false, and if we could eliminate it into non-SProp booleans, we would get logical inconsistency. So I assumed such elimination is disallowed in Lean, and yet the paper linked above claims that its counterexample breaks normalization.
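
For concreteness, here is the Lean analogue of that observation, as a minimal sketch: Lean's Prop is definitionally proof-irrelevant (playing the role of SProp here), and `PBool` is a name made up for the example.

```lean
-- A Prop-valued "boolean" with two constructors.
inductive PBool : Prop
  | t
  | f

-- Definitional proof irrelevance identifies the two constructors:
example : PBool.t = PBool.f := rfl  -- accepted

-- But large elimination is refused: with two constructors, PBool is not a
-- syntactic subsingleton, so its recursor only targets Prop, and the
-- following definition is rejected:
-- def toBool : PBool → Bool
--   | .t => true
--   | .f => false
```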

• My guess is Pierre-Marie was referring to the results of Gaëtan Gilbert's thesis. Also, what is the question? How are we supposed to answer "how much trouble"? Lean's type theory has a model (a set-theoretic one will do), so can I just say "no trouble semantically"? (Commented Mar 27, 2022 at 20:56)
• I meant to say subsingleton elimination (instead of small elimination). But still, see the small elimination section in that document. (Jason Rute, Commented Mar 27, 2022 at 21:31)
• One has to be very careful when talking about SProp, because the stuff you can eliminate from it into Type matters a lot. In addition to the elimination of equality, Lean also allows elimination of the accessibility predicate from SProp to Type. Regardless of whether impredicativity is allowed, this is an immediate breakage of either SN or SR, since it allows arbitrary unfolding under an inconsistent context. (Commented Mar 27, 2022 at 21:44)
• Which raises the question: why does Lean work at all? I suppose that's what the OP is asking. (Commented Mar 27, 2022 at 21:59)
• @JasonRute: What I am puzzling over is why Lean's algorithms are so good, given that they are incomplete and can be tricked into looping. Having a model, we know the algorithms won't lie to us, but they could fail to give answers at all. Wait, I do have to randomly restart Lean every once in a while. I wonder... (Commented Mar 28, 2022 at 6:48)

1 Answer


To address the most important point first, as suggested by Andrej Bauer: Lean's intended model is one where types are sets. Mario Carneiro showed in his master's thesis that Lean has such a set-theoretic model (specifically, in ZFC plus infinitely many inaccessible cardinals). So Lean is consistent (assuming no bugs in its implementation), and a theorem in Lean means what a classical mathematician would expect it to mean (once they interpret types as sets and understand universes).

The failure of normalization, however, means that one can't give a more computational model of Lean. This isn't a big deal, since Lean is mostly used as a proof assistant for classical mathematics. It should also be pointed out that Lean's reduction (in Lean 3 at least) is painfully slow anyway: it can barely add numbers with #reduce, so non-termination and slow termination look much the same to the user. This is discussed in this Zulip thread.

A related issue, which you also touch on, comes down to elimination out of Lean's Prop. Contrary to your expectation, Lean does have a limited form of large elimination, called (syntactic) subsingleton elimination.

As a quick aside, my understanding is that there are two interpretations of definitional equality in Lean. One is what is actually implemented in Lean; Mario's thesis gives a decidable approximation of it, but that approximation breaks transitivity, and broken transitivity of defeq implies broken subject reduction. The other definitional equality in Mario's thesis is a more ideal one: it satisfies transitivity and subject reduction, but it is undecidable.

The reason for this failure of subject reduction in Lean's implemented type theory, to my understanding, is that Lean has definitional proof irrelevance (similar to Coq's SProp) together with a syntactic form of subsingleton elimination, as described in this document. In particular, the latter lets you eliminate "syntactic subsingletons" such as true, false, and, eq (written =), and acc (the accessibility predicate underlying well_founded recursion) into types in Type. These are called "syntactic subsingletons" because they have at most one constructor. As Pierre-Marie Pédrot pointed out, this leads to the issues with definitional equality mentioned above. On Zulip, Mario gave an MWE that uses proof irrelevance of Prop and the syntactic subsingleton elimination of acc to explicitly break transitivity of defeq, break subject reduction, and fool Lean tactics into producing false terms (which the kernel still correctly rejects).
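
To make the shape of these eliminations concrete, here is a minimal Lean 4 sketch (the names `castAlong`, `ofFalse`, and `unwrapAcc` are made up for illustration):

```lean
-- Eq lives in Prop but eliminates into any sort: casting data along a
-- propositional equality is a large elimination from the subsingleton Eq.
def castAlong {α β : Type} (h : α = β) (a : α) : β := h ▸ a

-- False has no constructors, so it may eliminate anywhere:
def ofFalse {α : Type} (h : False) : α := h.elim

-- Acc is a Prop, yet its recursor can target Type; this is the elimination
-- that well-founded recursion relies on, and the one exploited in the
-- counterexamples mentioned above.
noncomputable def unwrapAcc {α : Type} {r : α → α → Prop} {x : α}
    (h : Acc r x) : α :=
  Acc.rec (motive := fun _ _ => α) (fun y _ _ => y) h
```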

These issues are relatively minor for Lean users. Again, they don't break consistency or change the set-theoretic model; they just make some things more annoying. Lean users rarely notice these issues (and likely couldn't point to them at all), but they do occasionally come up, as discussed here. Mario, however, likes to point out that one can run into the same issues even in a system without an explicitly broken definitional equality. In particular, it is easy to construct examples where a = b and b = c are fast to check definitionally, but a definitional proof of a = c is quite slow and will time out. While not technically a failure of definitional equality, it is the same sort of issue in practice: one must resort to an alternate proof of a = c in some cases.

(It should also be pointed out that even Coq has some issues with subject reduction. I think Mario would say the Coq issues are similar in scope to those in Lean, in that they aren't a large problem but should be watched out for. I think Coq users would counter that they have a plan to fix SR in Coq, whereas in Lean it is an accepted trait. I also have the sense that the SR issues in Coq are more compartmentalized, confined to certain Coq features.)

Note: I got a lot of my information from this thread on the Lean Zulip.

The big question now, of course, is why Lean has proof irrelevance and syntactic subsingleton elimination if they are known to lead to these (small) issues, and what big benefits proof irrelevance and syntactic subsingleton elimination provide to Lean users. I'm trying to figure that out, and I'll update this answer when I do.

• "why does Lean have proof irrelevance": Without proof irrelevance, I think it is hard to argue that your logic acts like classical mathematics, because subtypes don't act like subtypes anymore. Things are much more like HoTT at this level. (It doesn't have to be definitional proof irrelevance, though. Coq gets by just fine with axiomatic proof irrelevance. But definitional proof irrelevance is unquestionably convenient, and even Coq recognized this in adding SProp.) (Commented Mar 28, 2022 at 3:35)
• "why... syntactic subsingleton elimination": The main application of subsingleton elimination is constructing definitions by well-founded recursion, where the proof of well-foundedness is a Prop and hence erased by the VM/compiler, which is important in practice (see the sketch after these comments). If well_founded were data, then the noncomputable checker would have to get more sophisticated, with an effect system and ghost annotations, so that noncomputable proofs of well-foundedness could be used in computable recursive functions. (Commented Mar 28, 2022 at 3:39)
• @MarioCarneiro I'd say casting types along equalities is also a somewhat frequent use of syntactic subsingleton elimination in dependently typed code. We try to avoid it where possible, but sometimes it's not avoidable. (Commented Mar 28, 2022 at 12:24)
• @SebastianUllrich I was excluding Eq.rec from consideration because AFAIK it does not cause undecidability/SR problems, because it is not a recursive type. You could imagine restricting subsingleton elimination to only Eq and things like it ("K-like eliminators") and most of the problems would go away, so the question is what we would lose in so doing. (Commented Mar 29, 2022 at 1:06)
• @MarioCarneiro Yes, that might be sufficient. It's just hard to justify such an involved change right now if we expect basically zero impact in practice from it. (Commented Mar 29, 2022 at 8:09)
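
As a postscript to the comment above about well-founded recursion: here is a minimal sketch (recent Lean 4 syntax; `countdown` is a made-up example) of a definition whose termination witness lives in Prop and is erased by the compiler, which is exactly what subsingleton elimination of `Acc` buys.

```lean
-- Recursion on n - 1 is not structural, so Lean elaborates this via
-- well-founded recursion: the kernel term eliminates a Prop-valued
-- accessibility proof into Type, while the compiler erases that proof.
def countdown (n : Nat) : List Nat :=
  if h : n = 0 then []
  else n :: countdown (n - 1)
decreasing_by
  -- goal: n - 1 < n, with h : ¬ n = 0 in context
  omega

#eval countdown 5  -- [5, 4, 3, 2, 1]
```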
