18
$\begingroup$

Impredicativity greatly increases the logical strength of a formal system, and impredicative propositions are also a consequence of various axioms, including LEM and Zorn's Lemma. An impredicative sort of propositions is built into CIC. A propositional resizing axiom may also be included in type theories for its own sake.
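For concreteness, here is a minimal Coq sketch of what that built-in impredicativity amounts to: a quantification over *all* propositions itself lands back in `Prop`.

```coq
(* Impredicative Prop: this type quantifies over every P : Prop,
   yet is itself a member of Prop. *)
Check ((forall P : Prop, P -> P) : Prop).
```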

On the other hand, Andrej Bauer wrote that "those who feel uneasy about impredicativity, and those who want to carefully calibrate the logical strength of their formal systems, look for ways of avoiding impredicativity [...]"; Lawrence Paulson has "personally heard Martin-Löf criticise CIC, in particular because it is impredicative"; and unlike Coq, Agda's sort of propositions is predicative.

The advantages of adopting impredicative propositions seem clear, but despite the apparently widespread concern, it's not clear to me what the trade-off is. For example, classical axioms are powerful, but come at the cost of breaking properties like canonicity and strong normalization; and Cubical TT is powerful, but is a trade-off against alternative systems like Setoid TT.

What is the trade-off to accepting impredicative propositions?

$\endgroup$
2
  • 2
    $\begingroup$ This is probably a minor trade-off in the grand scheme of things, but I remember that impredicativity breaks the usual foetus-style approach to termination checking based on strict subpatterns because strict subpatterns are no longer necessarily smaller than the original pattern. $\endgroup$
    – James Wood
    Commented Mar 6, 2022 at 10:37
  • $\begingroup$ The advantages of impredicativity aren't really obvious to me. $\endgroup$ Commented Mar 12, 2022 at 4:51

2 Answers

9
$\begingroup$

I think this is mainly not a question of usability, but rather one of discomfort about the foundational status of theories incorporating impredicativity.

Indeed, as you mentioned, impredicativity greatly increases the logical power of your system. While this is a blessing when you are trying to use the system, it is a curse when you are trying to prove properties of the system. For instance, while induction-recursion can be used to prove meta-theoretical properties of MLTT (such as decidability of conversion), it is of no use when attacking an impredicative sort, because it crucially relies on having a nicely stratified hierarchy of universes, something that impredicativity breaks by allowing quantification over any sort in the hierarchy at the impredicative level.
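To see the stratification in question, compare (as a Coq sketch) how the predicative `Type` hierarchy behaves against the impredicative sort:

```coq
Set Printing Universes.

(* Predicative: quantifying over Type@{i} yields a type in Type@{i+1},
   so every quantification climbs the hierarchy... *)
Check (forall A : Type, A -> A).

(* ...whereas the impredicative sort re-enters itself, collapsing the
   stratification that induction-recursion-style arguments rely on. *)
Check ((forall P : Prop, P -> P) : Prop).
```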

Now couple this with the fact that impredicativity is a known recurring source of non-termination and inconsistencies (off the top of my head, let me mention System U, issues around impredicative Set, or more recently around impredicative proof-irrelevant equality), and you get a feature that is tricky to study at the meta-theoretical level, but begs for such a careful meta-theoretical study precisely because it is so risky. So I guess you can understand why one would want to avoid it.

Moreover, while it is certainly nice to have more logical power, there are not that many proofs that really require that much power. For instance, Agda seems to do perfectly well with its predicative propositions. So the cost of avoiding impredicativity is also not that high.

$\endgroup$
3
  • 3
    $\begingroup$ If you try to prove normalisation of System F in Agda, or prove the correctness of closure-conversion (you need impredicative existentials for this), then you're stuck! $\endgroup$ Commented Mar 7, 2022 at 11:05
  • 1
    $\begingroup$ I did not know about closure-conversion, thanks for the example! Do you have a reference for this? But as far as System F goes, it feels to me a bit like « pushing the issue in the meta-theory », in the sense that afaik the idea is to use impredicativity in the meta to reason about impredicativity in the object language. Of course this is an example, but it feels a bit weak to me. $\endgroup$ Commented Mar 7, 2022 at 12:44
  • 1
    $\begingroup$ See Minamide, Morrisett and Harper's POPL 1996 paper, Typed Closure Conversion. $\endgroup$ Commented Mar 7, 2022 at 14:04
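The closure-conversion example from the comments can be sketched in Coq. With only predicative sigma types, the existentially packaged environment pushes the closure one universe above the function type it encodes (the names `Clo`, `close`, and `apply` here are hypothetical, for illustration only):

```coq
(* A closure pairs a hidden environment type E with code expecting it. *)
Definition Clo (A B : Type) : Type :=
  { E : Type & (E * (E -> A -> B))%type }.
(* Because E ranges over Type@{i}, Clo A B lives in Type@{i+1}: the
   translation does not stay inside a single universe without an
   impredicative existential. *)

Definition close {A B : Type} (f : A -> B) : Clo A B :=
  existT _ unit (tt, fun _ a => f a).

Definition apply {A B : Type} (c : Clo A B) (a : A) : B :=
  match c with existT _ E (e, f) => f e a end.
```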
6
$\begingroup$

There's a general principle to keep in mind:

Weaker theories have more models

Martin Escardo, Paul Taylor or Andrej Bauer will probably be able to supply some cool geometric examples of this, but there is actually a very computer-science-flavoured example as well.

The realisability topos is a standard way of producing a model strong enough to interpret dependent types plus an impredicative sort of propositions. Morally, it's a fancy logical-relations model over an untyped universe of computations.

But it turns out that if you consider realisability over a typed universe of computations, then you only get a topos if the typed language has a universal type (something like Scott's $D \simeq D \to D$, or full recursive types). Otherwise you can interpret pi-types (and hence dependent types), but you don't get the full topos structure.
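As a sketch of why the universal type is a genuinely strong requirement: a predicative system like Coq rejects such a type outright.

```coq
(* A universal type D ≃ (D -> D) would embed the whole untyped
   λ-calculus, so Coq's strict-positivity checker refuses it. *)
Fail Inductive D : Set := lam : (D -> D) -> D.
```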

$\endgroup$
16
  • 2
    $\begingroup$ Why would you like a topos in the first place? Toposes validate unique choice, which is pretty much an extraction killer. By the same metric, the less the better so MLTT is better than your next door topos with universes. $\endgroup$ Commented Mar 7, 2022 at 14:01
  • 3
    $\begingroup$ Because it validates unique choice! It's hard enough to do everything constructively, but you're going to have to rewrite a lot of textbooks if functional relations aren't functions. $\endgroup$ Commented Mar 7, 2022 at 14:11
  • 1
    $\begingroup$ @Jonathan It is. It means that hProp cannot be erased, which means that there is no way to get rid of purely logical reasoning. So you get an extracted term cluttered with proofs, which are typically very inefficient. $\endgroup$ Commented Mar 8, 2022 at 8:22
  • 1
    $\begingroup$ @JonathanSterling It's not just type theory! In general, programmers want optimisations which can change the computational complexity of a program to be specified in simple, predictable ways. Otherwise the optimisations are too hard to use. (Eg, tail-call optimisation is specified by a syntactic criterion rather than a flow analysis telling you to eta-shrink continuations.) $\endgroup$ Commented Mar 8, 2022 at 9:50
  • 2
    $\begingroup$ @NeelKrishnaswami I like your comment about explainability. My understanding however is that nobody is doing a serious compiler where optimizations are only type directed and transparent to the programmer --- I assume such a thing just wouldn't work. With that said, I like how in OCaml you can write an annotation that checks whether a given optimization is actually going to be applied! This seems to be a happy medium. Let my compiler do the smart things it plans to do, but let me have some visibility into whether that is happening and where. With all that said, I'm not a compiler expert. $\endgroup$ Commented Mar 8, 2022 at 10:06
