10

We can derive eliminators (as functions) for inductive types and translate (structurally) recursive functions into invocations of eliminators. However, eliminators seem to be essentially recursive pattern matching. So, it seems that the only benefit of translation to eliminators is that it guarantees structural recursion, and the result of this translation is equivalent to recursive pattern matching.

Am I right? Do we have other applications of eliminators?

1 Comment

  • When you can make your own eliminator that's different... – Trebor, Feb 12, 2022 at 4:02

2 Answers

8

Indeed, there are parallels between definitions by pattern matching and eliminators. A typical eliminator amounts to shallow pattern matching. For example, the simple recursor for natural numbers can be defined by the shallow pattern match

rec x f 0 = x
rec x f (S n) = f n (rec x f n)

A non-shallow pattern would be something like

even 0 = true
even (S 0) = false
even (S (S n)) = even n

because it stacks constructors in the third clause.

One gets the idea that patterns can be translated back to eliminators. (Exercise: write down even using rec; one solution is sketched below.) However, the question of how precisely this can be done is not trivial, especially in type theories without axiom K. As it turns out, for Agda the answer is positive; see "Pattern matching without K" by Cockx, Devriese, and Piessens.
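
Here is a minimal Haskell sketch of that exercise (Nat, rec, and evenR are my names, transcribing the pseudocode above):

data Nat = Z | S Nat

-- The simple recursor, transcribed from the two clauses above.
rec :: a -> (Nat -> a -> a) -> Nat -> a
rec x _ Z     = x
rec x f (S n) = f n (rec x f n)

-- even via rec: flip the answer at every successor.
evenR :: Nat -> Bool
evenR = rec True (\_ r -> not r)

Note that evenR (S (S n)) now computes to not (not (evenR n)): the result is equal to evenR n, but not by a direct recursive call, which is typical of such translations.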

Eliminators are more convenient from a semantic point of view, as they correspond more closely to universal properties of semantic constructions. For example, the simple recursor for natural numbers is the direct translation of the fact that $\mathbb{N}$ is the initial algebra for $X \mapsto 1 + X$.
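
Continuing the Haskell sketch above, the connection to initiality can be read off quite directly (foldNat and recN are my names; this is an illustration, not a formal proof):

-- An algebra for X ↦ 1 + X on a carrier a is a point z :: a together
-- with a map s :: a -> a; initiality of Nat says foldNat z s is the
-- unique algebra homomorphism out of Nat.
foldNat :: a -> (a -> a) -> Nat -> a
foldNat z _ Z     = z
foldNat z s (S n) = s (foldNat z s n)

-- The simple recursor is recovered by folding with pairs,
-- threading the predecessor through the carrier (Nat, a).
recN :: a -> (Nat -> a -> a) -> Nat -> a
recN x f = snd . foldNat (Z, x) (\(m, r) -> (S m, f m r))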

As Neel points out in the comments, patterns are generally preferred in proof-theoretic analysis, because the normal forms need many fewer commuting conversions.

For the best of both worlds, we would like to describe new constructions in terms of eliminators, since they are easier to specify, but then use them through pattern matching. (And this answers your question: eliminators are useful in practice because it is easy for a user to explain what an eliminator is, but much harder for the user to extend pattern matching.)

It would be nice to have a proof assistant that could automate the passage from an eliminator to (non-shallow) patterns. I believe Epigram 2 was going in that direction?

2 Comments

  • Saying that eliminators are more useful in theory is too quick -- you have to say which theory! From the perspective of proof theory, pattern matching is a significantly better-behaved primitive, because the normal forms need many fewer commuting conversions. – Neel Krishnaswami, Feb 12, 2022 at 8:01
  • @NeelKrishnaswami: point well taken. I changed the answer to say that they are more convenient from a semantic point of view (rather than theoretical), and I took the liberty of incorporating your comment. – Feb 12, 2022 at 8:17
2

One practical use of induction principles is the Scott encoding of datatypes.
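
To make this concrete, here is a minimal Haskell sketch of Scott-encoded naturals (SNat, zero, suc, and predS are my names):

{-# LANGUAGE RankNTypes #-}

-- A Scott-encoded natural is its own case-analysis principle: it is
-- told what to return for zero and what to do with a predecessor.
newtype SNat = SNat { caseNat :: forall r. r -> (SNat -> r) -> r }

zero :: SNat
zero = SNat (\z _ -> z)

suc :: SNat -> SNat
suc n = SNat (\_ s -> s n)

-- Unlike a Church numeral, the predecessor is constant time.
predS :: SNat -> SNat
predS n = caseNat n zero id

Only case analysis is built in here; recursion has to come from elsewhere (e.g. a fixed-point operator), which is one way to see why induction is not automatic for this encoding.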

This isn't so useful directly in theorem proving, because making good use of Scott-encoded datatypes usually requires some form of extensionality. A bigger problem is that induction is not derivable in second-order dependent type theory; see the paper "Induction is not derivable in second order dependent type theory" by Herman Geuvers. I don't really understand the full details of the result, or how (or whether) you can hack around the problem in a stronger or weaker system, but it does at least put a roadblock in the way of a simple and straightforward use of Scott-encoded datatypes.

However, Scott-encoded datatypes are a reasonable extraction target or implementation detail for theorem provers.

I should point out that, while induction principles can be encoded using dependent product types in type theories with sufficient machinery, the core concept doesn't really require dependent products.

For example, you can have a type theory that directly axiomatizes induction over lists without even possessing dependent products. Something like

$$ \frac{ \Gamma , x \colon \textbf{list}(A) \vdash P \colon * \quad \Gamma \vdash e_0 \colon [x := \textbf{nil}_A] P \quad \Gamma, \, H \colon A, \, T \colon \textbf{list}(A), \, t \colon [x := T] P \vdash e_1 \colon [x := \textbf{cons}(H, T)] P \quad \Gamma \vdash e_2 \colon \textbf{list}(A)}{\Gamma \vdash \textbf{ind}_A(x. P, e_0, H \, T \, t. e_1, e_2) \colon [x := e_2] P} $$
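
For comparison, this rule is essentially the type of the built-in list eliminator in existing proof assistants; a minimal Lean 4 check (with P as the motive):

-- The rule above, instantiated as Lean 4's List.rec.
example (A : Type) (P : List A → Prop)
    (e₀ : P [])
    (e₁ : ∀ (H : A) (T : List A), P T → P (H :: T))
    (e₂ : List A) : P e₂ :=
  List.rec (motive := P) e₀ e₁ e₂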

In practice you'd probably axiomatize gadgets like W-types or identity types this way. It's kind of silly and difficult to get right, but technically the concepts of dependent product and induction principle aren't quite the same.

