
Coq includes let-expressions in its core language. We can translate let-expressions to applications like this:

let x : t = v in b   ~>   (\(x:t). b) v

I understand that this does not always work, because the value v would not be available when typechecking b. However, this can easily be fixed by special-casing the typechecking of applications of the form (\(x:t). b) v. This allows us to remove let-expressions at the cost of one special case during typechecking. Why does Coq still include let-expressions? Do they have other advantages (besides not needing the special case)?
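For example, here is a simple non-dependent case where the translation seems to work fine:

(* Both forms typecheck, with the same type and the same value. *)
Check (let n : nat := 0 in n + 1).
Check ((fun n : nat => n + 1) 0).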

Your suggestion sounds like adding a hack to avoid needing let-expressions, but (a) there is no reason to avoid let-expressions, and they are also convenient, and (b) adding hacks to your core language is not a great idea. (Commented Sep 11, 2019 at 19:02)

2 Answers


It is a common misconception that we can translate let-expressions to applications. The difference between let x : t := b in v and (fun x : t => v) b is that in the let-expression, during type-checking of v we know that x is equal to b, but in the application we do not (the subexpression fun x : t => v has to make sense on its own).

Here is an example:

(* Dependent type of vectors. *)
Inductive Vector {A : Type} : nat -> Type :=
  | nil : Vector 0
  | cons : forall n, A -> Vector n -> Vector (S n).

(* This works. *)
Check (let n := 0 in cons n 42 nil).

(* This fails. *)
Check ((fun (n : nat) => cons n 42 nil) 0).

Your suggestion to make application (fun x : t => v) b a special case does not really work. Let us think about it more carefully.

For example, how would you deal with this, continuing the above example?

Definition a := (fun (n : nat) => cons n 42 nil).
Check a 0.

Presumably this will not work because a cannot be typed, but if we unfold its definition, we get a well-typed expression. Do you think the users will love us, or hate us for our design decision?
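Indeed, in current Coq the abstraction fun (n : nat) => cons n 42 nil is ill-typed on its own, so the definition of a is rejected outright; a quick check, using the Fail vernacular to record the expected failure:

(* n is opaque inside the abstraction, so nil : Vector 0 cannot be used
   where a Vector n is expected. *)
Fail Definition a := (fun (n : nat) => cons n 42 nil).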

You need to think carefully about what it means to have the "special case". If I have an application e₁ e₂, should I normalize e₁ before I decide whether it is a $\lambda$-abstraction? If yes, this means I will be normalizing ill-typed expressions, and those might cycle. If no, the usability of your proposal seems questionable.
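For instance, the classic self-application Ω is ill-typed, yet it reduces to itself forever, so a checker that normalizes before typing could loop on it; Coq never reduces it, because it rejects it during typechecking:

(* Ω = (fun x => x x) (fun x => x x) reduces to itself, so normalization
   would never terminate; Coq rejects the term as ill-typed instead. *)
Fail Check ((fun x => x x) (fun x => x x)).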

You would also break the fundamental theorem which says that every sub-expression of a well-typed expression is well-typed. That's as sensible as introducing null into Java.


For anyone whose first reaction to Andrej Bauer's response was to wonder, like me, whether this is caused by inductive types or is also present in the baseline calculus of constructions without them, here is another example that does not use inductive types at all:

Definition T := forall P : Prop, P -> P.
Definition x : T := fun P x => x.
                                                
(* With the let, P unfolds to T during type-checking, so x P x has type T. *)
Check (let P := T in x P x) : T.
(* As an application, P is opaque inside the abstraction, so this fails. *)
Fail Check ((fun P => x P x) T).
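Annotating the binder does not rescue the application form, since the real problem is that x : T has to be used at the opaque type P:

Fail Check ((fun P : Prop => x P x) T).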
