
Related question on the semantic side: how much of a problem is Lean's failure of normalization, given that logical consistency is not obviously broken?

Suppose we have an impredicative universe of strict propositions (as in Lean); if we try to compute its terms, evaluation may loop.

So I wonder: if we just erase these proofs, treating them as dummy terms in the type checker, would that prevent the loop? One obvious problem comes to mind: eliminating a proposition into an ordinary type can be problematic, because we can no longer pattern-match on a proposition, since we know nothing about it except its existence. Is such elimination necessary for usability?


1 Answer


The ability to have this kind of "erasure" for propositions is indeed one of the major arguments in favour of having a proper sort of strict propositions (see e.g. Section 9.3 of this keynote).

As you noticed, the main question for this to be well-behaved is what kind of eliminations you allow for your propositions. If you cannot eliminate propositions into Type, then they are useless, since you are in effect building a separate universe with no communication with the Type one. But if you do not do things properly, you might endanger canonicity (because if you do not reduce enough, some terms might become stuck), normalization (if you reduce too much, you might diverge), or subject reduction (if the way you allow reduction is not uniform enough, you might allow conversion in one context but not in another, leading to terms being well-typed before reduction but not after).

This is an active research topic, see the first paper by Gilbert et al., the subsequent counter-example to normalization in a setting with strict UIP (which is part of the current experimental implementation in Coq), and the work around Observational Equality in that setting. I know that there is more to come, but I don’t think there is a preprint of that online yet. Let me try and summarize what they have learned.


I am not sure how it is in Lean, but I guess that, like Coq, it only allows sub-singleton elimination, i.e. elimination from Prop to Type only for those inductive types that have "at most one inhabitant" by construction. Some of these are OK to eliminate with a non-computing Prop; some aren't.
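To illustrate the criterion in Lean 4 (a sketch; the name `choose` is a hypothetical example of mine, not a library function): equality is a sub-singleton and eliminates into Type, while a proposition with two constructors does not, since a large elimination would let the proof influence the result.

```lean
-- Accepted: `Eq` is a sub-singleton, so it may eliminate into `Type`.
example (A B : Type) (h : A = B) (a : A) : B := h ▸ a

-- Rejected: `Or` has two constructors, so matching on a proof of
-- `p ∨ q` to produce a `Bool` would make computation depend on the
-- proof. The definition is shown commented out because Lean refuses it.
-- def choose (p q : Prop) (h : p ∨ q) : Bool :=
--   match h with
--   | .inl _ => true
--   | .inr _ => false
```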

The simplest one is elimination of falsity, i.e. having False : Prop and exfalso : forall T : Type, False -> T. If you are in a consistent context (and in particular the empty context if your theory is consistent), then there cannot be any inhabitant of type False, so that a term t : T : Type cannot be blocked on exfalso, and so you get canonicity out of consistency, without ever reducing a proposition.
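This is directly visible in Lean 4, where `False.elim` plays the role of `exfalso`; a minimal sketch:

```lean
-- `False` has no constructors, so eliminating it into any type is safe:
-- in a consistent context there is no closed proof of `False`, hence no
-- closed term of type `T` can ever be stuck on this elimination.
example (T : Type) (h : False) : T := False.elim h
```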

The borderline one is elimination of equality, i.e. having eq : forall A : Type, A -> A -> Prop and transport : forall A B : Type, A = B -> A -> B. The usual inductive equality, which computes on its sole constructor refl, is dangerous here; see the counterexample by Coquand and Abel. But if you are happy with an equality that computes on its type arguments rather than its proof of equality, aka Observational Equality, then all is well: again you never need to reduce the propositional content to ensure canonicity.
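For concreteness, here is the dangerous variant in Lean 4 (using the core `cast`, which transports along the standard inductive equality):

```lean
-- Standard inductive equality: `cast` computes only when the proof
-- reduces to `rfl`, so reduction must inspect the proof, which is
-- exactly the problematic case described above.
def transport {A B : Type} (h : A = B) (a : A) : B := cast h a

-- On a proof that is literally `rfl`, the transport reduces away:
example : transport (rfl : Nat = Nat) 0 = 0 := rfl
```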

The one you simply cannot have is the accessibility predicate Acc : forall (A : Type), (A -> A -> Prop) -> A -> Prop and its unique constructor

acc_intro : forall (A : Type) (R : A -> A -> Prop) (a : A), (forall a' : A, R a' a -> Acc A R a') -> Acc A R a

Basically, this is the stereotypical case where you need to look at the proof to know when to reduce, but if you disallow that then the only solution is to always reduce, meaning you will be non-terminating. A precise account is given in section 2.3 of the first paper.
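The shape of the predicate in Lean 4 syntax (primed here, since `Acc` is already in Lean's core library):

```lean
-- The recursor of this inductive peels off `intro` nodes one by one,
-- so reduction has to inspect the structure of the proof, which is
-- precisely what definitional proof irrelevance forbids.
inductive Acc' {A : Type} (R : A → A → Prop) : A → Prop where
  | intro (a : A) (h : ∀ b, R b a → Acc' R b) : Acc' R a
```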

You can try to have a smart way to reduce a bit but not too much, but this is bound to result in subject-reduction issues. As far as I understand, this is one of the reasons it is broken in Lean 3 (I am not sure how Lean 4 treats the issue). Basically, this means that if you want a well-behaved notion of strict propositions, you cannot use it to define functions by well-founded induction. Still, if you do things properly, most of the rest can be salvaged. So I guess the question boils down to whether you are ready to give up accessibility being definitionally irrelevant.
