
Is there something special about weak head normal form as it is used in proof assistants?

What I am seeking does not have to be specific; just something to act as a compass, giving me an indication of what to pay attention to as I progress in learning.


This used to be before the question. Based on a comment, it was moved here rather than deleted.

In learning the inner workings of a proof assistant based on type theory, one learns that typing rules are needed.

One of the better ways to learn type theory for programming languages is "Types and Programming Languages" by Benjamin C. Pierce (Site) (WorldCat).

The book is essentially Lambda-calculus on steroids. (ref)

There is also a follow-up book "Advanced Topics in Types and Programming Languages" (Site) (WorldCat)

In learning the lambda calculus, one learns about reduction strategies and normal forms, and so becomes attentive to those terms.

In looking at the source code for Lean, or following along in the forum, weak head normal form starts to appear more and more frequently.


At the time of the edit, two answers were already provided. I have no problem with those and see no reason for the authors to change them.

  • The first by Neel Krishnaswami
  • The second by Maximilian Doré
Comments:

  • I think the main reason is efficiency: as normalization is used for typechecking, computing full normal forms can be quite costly, leading to a sluggish experience. (Feb 10, 2022)
  • Just a suggestion: the first five paragraphs of this question have nothing to do specifically with the actual question being asked, and the question isn't very clear about what exactly you want to know about WHNF and why. I would suggest leaving off the initial material and saying more about what you've learned so far about WHNF and what you want to know about it. (Feb 10, 2022)

3 Answers

The first answer (by Neel Krishnaswami):

The application rule in a dependent type system looks a bit like this:

$$\newcommand{\judge}[3]{{#1} \vdash {#2} : {#3}} \newcommand{\jeq}[3]{{#1} \vdash {#2} \equiv {#3}} \frac{\displaystyle \judge{\Gamma}{e_1}{A} \quad \jeq{\Gamma}{A}{\Pi x:B.\,C[x]} \quad \judge{\Gamma}{e_2}{B} }{ \displaystyle \judge{\Gamma}{e_1\,e_2}{C[e_2]} } $$

If type checking tells you that $e_1$ has type $A$, you have to convert $A$ to a function type before you can check the argument.

Weak head reduction is intended to do as little evaluation as you can get away with while still getting to a function type constructor.
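
Here is a minimal sketch in Haskell of what such a reducer can look like. This is an illustration, not Lean's actual `whnf`; the `Term` type and the naive `subst` are simplifying assumptions:

```haskell
-- Untyped lambda terms with string-named variables. A real kernel
-- would use de Bruijn indices and capture-avoiding substitution;
-- this is just enough to show the shape of the algorithm.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving Show

-- Reduce only as far as needed to expose the head constructor:
-- stop as soon as a lambda (or a stuck application) appears.
whnf :: Term -> Term
whnf (App f a) =
  case whnf f of
    Lam x body -> whnf (subst x a body)  -- contract the head redex
    f'         -> App f' a               -- head is stuck; leave the argument alone
whnf t = t  -- variables and lambdas are already in WHNF

-- Naive substitution; fine for the closed terms used here.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   | x == y    = s
                    | otherwise = Var y
subst x s (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)
```

Note that `whnf` never goes under a binder and never evaluates the argument `a`, which is exactly why it is both "weak" and "head": `Lam "x" (App (Lam "y" (Var "y")) (Var "x"))` is already in weak head normal form even though it contains a redex.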

The second answer (by Maximilian Doré):

Despite their quite technical appearance, weak head normal forms also have a philosophical genesis. In Martin-Löf's meaning explanations, the canonical form of a term (its meaning) is obtained by evaluating it until the outermost term constructor stems from an introduction rule. For instance, suc(add(suc(0), suc(0))) as well as λx.(λy.add(x,y))0 are considered canonical forms. In particular, weak head normal forms are canonical, so their meaning is already fully laid out (the converse does not hold, as we can see with the second example).
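
As a rough illustration in Haskell (the `Nat`, `add`, and `isPositive` definitions are my own transliteration of the example, not from Martin-Löf), pattern matching drives evaluation only far enough to expose the outermost constructor:

```haskell
-- Peano naturals; Suc is the introduction form for successor.
data Nat = Zero | Suc Nat

add :: Nat -> Nat -> Nat
add Zero    m = m
add (Suc n) m = Suc (add n m)

-- Matching forces only to weak head normal form: the match on Suc
-- succeeds as soon as the outermost constructor appears, without
-- evaluating the recursive sum underneath it.
isPositive :: Nat -> Bool
isPositive (Suc _) = True
isPositive Zero    = False
```

Here `isPositive (add (Suc Zero) (Suc Zero))` evaluates the sum one step, to `Suc (add Zero (Suc Zero))`, and stops: the outermost constructor is enough to decide the match.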

Weak head normal forms are central not only to theorem provers, but to all functional programming languages. This is pointed out by Abramsky in his paper on the lazy lambda calculus (p.2). That paper also nicely lays out some of the tensions between the theory and the implementation of functional programming languages.
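
In Haskell, for instance, the primitive `seq` evaluates its first argument only to weak head normal form, which is easy to observe:

```haskell
main :: IO ()
main = do
  -- seq forces its first argument to WHNF only. The pair constructor
  -- (,) is the outermost constructor, so the erroring components
  -- inside it are never evaluated.
  let pair = (error "left", error "right") :: (Int, Int)
  pair `seq` putStrLn "forced to WHNF without touching the errors"
```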

The third answer:

Note that weak head normal form is not important in all proof assistants. The HOLish systems (including Isabelle/HOL) implement a more-or-less simply typed lambda-calculus, and their kernels don't rely on being able to normalise these terms at all.

