I'm reading Practical Foundations for Programming Languages by Robert Harper and am confused by the exercises in Chapter 4, "Statics", where he discusses a (monomorphic) type system.

The exercise asks us to give an "operational" reading of typing judgements by refining the $e : \tau$ relation into two: an "analytic" judgement $\Gamma \vdash e \downarrow \tau$, indicating that the judgement is derivable when the type $\tau$ is supplied, and a "synthetic" judgement $\Gamma \vdash e \uparrow \tau$, saying that the type can be determined from the context $\Gamma$ and the expression $e$ alone.

I'm unsure whether I've grasped the author's intent correctly. I replaced the $:$ in the usual rules with the "synthetic" $\uparrow$, and added a rule saying "analyzable given synthesizable", i.e., $\Gamma \vdash e \downarrow \tau$ whenever $\Gamma \vdash e \uparrow \tau$.

For the following exercise, I constrained the rules so that all the elimination forms are "synthetic" while the introduction forms are "analytic".
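For concreteness, the rules I have in mind can be written like this (my own reconstruction of the notation, not necessarily the book's exact rules). The first pair handles mode switching: anything synthesizable is analyzable, and an annotation turns an analyzable term into a synthesizable one:

$$
\frac{\Gamma \vdash e \uparrow \tau}{\Gamma \vdash e \downarrow \tau}
\qquad
\frac{\Gamma \vdash e \downarrow \tau}{\Gamma \vdash (e : \tau) \uparrow \tau}
$$

The second pair shows the intro/elim split for functions, with the introduction form analytic and the elimination form synthetic:

$$
\frac{\Gamma, x : \tau_1 \vdash e \downarrow \tau_2}{\Gamma \vdash \lambda x.\, e \downarrow \tau_1 \to \tau_2}
\qquad
\frac{\Gamma \vdash e_1 \uparrow \tau_1 \to \tau_2 \quad \Gamma \vdash e_2 \downarrow \tau_1}{\Gamma \vdash e_1\, e_2 \uparrow \tau_2}
$$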

My interpretation is that the purpose of this set of exercises is to augment the syntax with type annotations (ascriptions) in the let-construct, streamlining the type-checking procedure and avoiding a full unification algorithm. Is this a correct understanding of the author's intentions?

1 Answer


Your understanding is right but not quite the full picture.

What this is leading up to is bidirectional typechecking, where typechecking is split into these two judgements. This helps with mechanisation because it annotates the typing rules with modes. That is, in the checking judgement $\Gamma \vdash e \downarrow \tau$, the context $\Gamma$, term $e$, and type $\tau$ all have input mode, while in the synthesis judgement $\Gamma \vdash e \uparrow \tau$, the type $\tau$ is an output. These modes help derive a syntax-directed algorithm that tells you whether to do checking or synthesis at each step.

You switch between synthesis and checking when you hit an annotation, as you’ve seen. You also correctly surmised that in a “pure” bidirectional system, you should do checking of introduction forms and synthesis for elimination forms. But in practice, you will want to add some extra rules, otherwise you won’t be able to derive “obvious” things like $\Gamma \vdash \mathrm{true} \uparrow \mathrm{Bool}$ without annotations.
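A pure bidirectional system of this kind is straightforward to turn into code. Below is a minimal sketch, assuming a tiny language of booleans and functions; the encoding and the names (`synth`, `check`, `Ann`, and so on) are mine, not Harper's:

```python
from dataclasses import dataclass

# Types: Bool and arrow types.
@dataclass(frozen=True)
class TBool: pass

@dataclass(frozen=True)
class TArrow:
    dom: object
    cod: object

# Terms.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:          # introduction form: checked
    param: str
    body: object

@dataclass(frozen=True)
class App:          # elimination form: synthesized
    fn: object
    arg: object

@dataclass(frozen=True)
class Ann:          # annotation (e : T): switches from checking to synthesis
    term: object
    type: object

@dataclass(frozen=True)
class BoolLit:      # introduction form, given an "extra" synthesis rule below
    value: bool

def synth(ctx, e):
    """Synthesis Gamma |- e ↑ T: ctx and e are inputs, the type is the output."""
    if isinstance(e, Var):
        return ctx[e.name]
    if isinstance(e, App):
        fn_ty = synth(ctx, e.fn)          # synthesize the function's type,
        if not isinstance(fn_ty, TArrow):
            raise TypeError("applying a non-function")
        check(ctx, e.arg, fn_ty.dom)      # then check the argument at the domain
        return fn_ty.cod
    if isinstance(e, Ann):
        check(ctx, e.term, e.type)        # check against the annotation,
        return e.type                     # then synthesize the annotated type
    if isinstance(e, BoolLit):
        return TBool()                    # extra rule: Gamma |- true ↑ Bool
    raise TypeError(f"cannot synthesize a type for {e}")

def check(ctx, e, ty):
    """Checking Gamma |- e ↓ T: ctx, e, and the type are all inputs."""
    if isinstance(e, Lam):
        if not isinstance(ty, TArrow):
            raise TypeError("lambda checked against a non-arrow type")
        check({**ctx, e.param: ty.dom}, e.body, ty.cod)
        return
    # Subsumption: anything synthesizable is checkable at the same type.
    if synth(ctx, e) != ty:
        raise TypeError("type mismatch")
```

For example, `synth({}, App(Ann(Lam("x", Var("x")), TArrow(TBool(), TBool())), BoolLit(True)))` returns `TBool()`: the application synthesizes, the annotation lets the lambda be checked, and the argument is checked against the domain. Note that a bare `Lam` cannot synthesize, which is exactly why annotations are needed.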

You may still do unification to solve equality or subtyping constraints between types involving unknown metavariables. The difference is that in the traditional Damas–Hindley–Milner / Algorithm W style, you first accumulate a set of constraints with no particular direction, and then solve them by unification. A folklore result is that bidirectional typechecking tends to make it easier to produce helpful type error messages, and this is the reason: with HM, the flow of type information is harder to follow, because it can change direction at any point during unification and travel far from where it originated.
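For contrast, the direction-free HM-style pipeline can be sketched as follows. This is an illustrative toy with invented names (`infer`, `unify`, `resolve`) and deliberate simplifications (no occurs check, no let-generalization):

```python
import itertools

_fresh = itertools.count()

def fresh():
    """Allocate a fresh metavariable, represented as ("tvar", n)."""
    return ("tvar", next(_fresh))

def is_var(t):
    return isinstance(t, tuple) and t[0] == "tvar"

def infer(ctx, e, constraints):
    """Walk the term, emitting equality constraints; the result may contain
    metavariables that only unification will pin down."""
    kind = e[0]
    if kind == "var":                      # ("var", x)
        return ctx[e[1]]
    if kind == "bool":                     # ("bool", b)
        return "Bool"
    if kind == "lam":                      # ("lam", x, body)
        a = fresh()
        b = infer({**ctx, e[1]: a}, e[2], constraints)
        return ("arrow", a, b)
    if kind == "app":                      # ("app", f, arg)
        f = infer(ctx, e[1], constraints)
        arg = infer(ctx, e[2], constraints)
        r = fresh()
        constraints.append((f, ("arrow", arg, r)))  # direction-free equation
        return r
    raise TypeError(f"unknown term {e}")

def resolve(t, subst):
    """Chase a metavariable through the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(constraints):
    """Solve the accumulated equations, returning a substitution."""
    subst = {}
    while constraints:
        s, t = constraints.pop()
        s, t = resolve(s, subst), resolve(t, subst)
        if s == t:
            continue
        if is_var(s):
            subst[s] = t                   # no occurs check, for brevity
        elif is_var(t):
            subst[t] = s
        elif isinstance(s, tuple) and isinstance(t, tuple) \
                and s[0] == t[0] == "arrow":
            constraints += [(s[1], t[1]), (s[2], t[2])]
        else:
            raise TypeError(f"cannot unify {s} and {t}")
    return subst
```

Running `infer` on $(\lambda x.\, x)\,\mathrm{true}$ first produces a metavariable as the result type plus an undirected constraint, and only the later `unify` pass discovers that the result is `"Bool"`; by then, the connection to the application site where the constraint arose is gone, which is what makes error reporting harder in this style.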

  • Thank you for the comprehensive answer, and especially for providing the pointer "bidirectional typechecking"! I found the thorough survey Bidirectional Typing on this topic. It would have been great if the intent had been a little more explicit in the textbook, though.
    – Jay Lee
    Commented Apr 21 at 15:18
