
I have used Coq's bidirectionality hints (placement of & in a call to Arguments) to some effect, mostly by trial and error. Despite familiarity with bidirectional type checking, I can't connect the explanation given in the documentation with what I actually observe. So, precisely what change does & make to the checking of an application? To start, I understand the default bidirectional typing rule for applications to be, more or less (read clockwise, with ∈ denoting type synthesis and ≤ denoting subtyping (including unification)):

f ∈ ∀ (x1 : A1) ... (xn : An), B
M1 ∈ A1'    ...    Mn ∈ An'
A1' ≤ A1    ...    An' ≤ An
----------------------------------
f M1 ... Mn ∈ B[M1/x1, ..., Mn/xn]

Some questions I'd like to be answered as corollaries of the explanation:

  1. Is Arguments foo & x y. equivalent to Arguments foo x y.?
  2. Is Arguments foo x y &. equivalent to Arguments foo x y.?
  3. Is Arguments foo {x} & y. equivalent to Arguments foo & {x} y.?
  4. What happens for partial applications (of Curried functions)?
  5. Is the Arguments f ... & ... . mechanism reasonably complete for bidirectional typing strategies you may want/expect?

1 Answer

I'm familiar with bidirectionality hints in Coq, but I have only superficial knowledge of bidirectional typing, so some details may be off.

As our running example, consider this binary function with a bidirectionality hint in the middle:

Parameter f : forall (x : A), B x -> C x.
Arguments f x & y.
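
Here A, B and C are left abstract; to actually load this snippet, something like the following (an assumed minimal context) would have to come first:

Parameter A : Type.          (* some type *)
Parameter B : A -> Type.     (* family of argument types *)
Parameter C : A -> Type.     (* family of result types *)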

Bidirectionality hints affect type checking of an application.

Normally, function application is associated with a synthesis (↑) rule, which turns into checking (↓) via the standard subsumption/unification rule (denoted ⊑, but in Coq it's more like an =, except that coercions make it asymmetric again):

f : forall (x : A), B x -> C x
a ↓ A     b ↓ B a
----------------- app-syn
  f a b ↑ C a

e ↑ T'     T' ⊑ T
----------------- syn-check
      e ↓ T

They combine into the following rule for checking an application f a b:

f : forall (x : A), B x -> C x
a ↓ A     b ↓ B a    C a ⊑ T
---------------------------- (1)
         f a b ↓ T

Notably, the unification with the required type T happens at the end.

A bidirectionality hint moves that unification earlier, to the &:

f : forall (x : A), & B x -> C x      (* ← f with a bidir. hint after 1 argument *)
a ↓ A     C a ⊑ T     b ↓ B a
----------------------------- (2)
         f a b ↓ T

Of course, rules (1) and (2) are equivalent when read purely declaratively. The difference becomes observable when those rules are interpreted in a stateful context. In Coq, we are actually type checking terms containing metavariables (holes), which get instantiated by unification, possibly creating redexes, so the order in which the premises are evaluated matters.

In this example, if we type check f ?a b, containing a hole ?a, then the standard type checking algorithm (rule 1) will try to check b ↓ B ?a first, which may get stuck on that hole. If we take the bidirectionality hint into account (rule 2), we first unify C ?a ⊑ T, which will hopefully instantiate ?a so that the subsequent check b ↓ B ?a succeeds.


As a concrete example, consider the following definitions. So is a predicate on Booleans which is inhabited by I when the Boolean is true, and uninhabited when the Boolean is false. From a proof of So b, So_eq extracts a proof that b = true. We use So as a means of proof automation when using auto_fin – we can cast a concrete natural number m into a bounded natural number of type Fin.t n whenever m <? n computes to true.

Require Fin.
Require Import PeanoNat.

Definition So (b : bool) : Prop := if b then True else False.

Definition So_eq {b : bool} : So b -> b = true :=
  match b with
  | true => fun _ => eq_refl
  | false => False_rec _
  end.

Definition auto_fin {n} m (prf : So (m <? n)) : Fin.t n :=
  Fin.of_nat_lt (proj1 (Nat.ltb_lt _ _) (So_eq prf)).
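
Note that the So guard computes away as soon as both numbers are concrete; this is exactly what lets the trivial proof I serve as prf:

(* With concrete numbers the guard reduces to True. *)
Eval compute in So (3 <? 5).    (* = True : Prop *)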

We want to apply auto_fin as follows, checking it against a finite type with a concrete bound (5 in this case). However, the error message shows that this fails because the implicit argument n (which should eventually be 5) has not been instantiated by the time the prf argument I is checked.

Fail Check auto_fin 3 I : Fin.t 5.

(* The term "I" has type "True" while it is expected to have type "So (3 <? ?n)" *)
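
Supplying n by hand should make the check go through, since So (3 <? 5) then reduces to True; this confirms that the problem is only about when ?n gets solved:

Check (auto_fin (n:=5) 3 I).    (* : Fin.t 5 *)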

To fix this, we want to make sure that n will propagate from the type we're checking against to the type of prf before we check prf. A bidirectionality hint can provide this behaviour:

Arguments auto_fin {n} m & prf.
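
With the hint in place, the previously failing check should now succeed:

Check (auto_fin 3 I : Fin.t 5).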

The old and new type checking rules for auto_fin are shown below. With the instantiations n = ?n, m = 3, prf = I, T = Fin.t 5, we see that the new rule is able to solve for ?n by unifying Fin.t ?n with Fin.t 5, and then to reduce So (3 <? 5) to True, against which I checks. The key feature is that solving n by unification against the result type allows the type of the later argument prf to reduce.

Old:
n ↓ nat    m ↓ nat    prf ↓ So (m <? n)    Fin.t n ⊑ T
------------------------------------------------------
                @auto_fin n m prf ↓ T

New:
n ↓ nat    m ↓ nat    Fin.t n ⊑ T    prf ↓ So (m <? n)
------------------------------------------------------
                @auto_fin n m prf ↓ T

  1. Is Arguments foo & x y. equivalent to Arguments foo x y.?
  2. Is Arguments foo x y &. equivalent to Arguments foo x y.?

As explained above, bidirectionality hints move a unification step from the end to the middle of an application. So (1) no, (2) mostly yes. Only "mostly" because with dependent types the "arity" of a function might depend on the arguments, so in full generality there is no way to specify a bidirectionality hint (essentially a number of arguments) that is equivalent to no hint.

  3. Is Arguments foo {x} & y. equivalent to Arguments foo & {x} y.?

Type checking happens after implicit arguments have been elaborated into metavariables (foo y becomes @foo ?x y). Furthermore, you can always bypass implicit arguments altogether with the @foo syntax. So these hints differ.

  4. What happens for partial applications (of Curried functions)?

Since the effect of a bidirectionality hint is to insert a unification after some number of arguments, this works smoothly with currying. If a function is applied to fewer arguments than the hint's position, it is checked as if there were no hint.

  5. Is the Arguments f ... & ... . mechanism reasonably complete for bidirectional typing strategies you may want/expect?

🤷

Comments:
  • Thanks a lot! On a skim read, this looks like a great answer. I'll make one or two further comments this evening.
    – James Wood, May 25, 2022 at 8:22
  • Fun fact: thinking about this question led me to find an unhandled exception in the implementation of bidirectionality hints: github.com/coq/coq/issues/16063
    – Li-yao Xia, May 25, 2022 at 14:51
  • (1) Am I right to understand that bidirectionality hints don't affect synthesis rules for that function (e.g. in the context Check foo x y.)? (2) Perhaps I should refine question 3 to be “Is ... equivalent to ... when used in the way suggested by the marking of the implicit argument?” I.e., in @foo ?x y, I don't think it should matter whether the unification happens before or after checking ?x, because checking ?x makes no difference to the state. (3) Maybe this answer would be improved by the addition of a more concrete example. Maybe I can think of one myself.
    – James Wood, May 25, 2022 at 22:01
  • (4) A partial answer to question 5 could be that bidirectionality hints don't affect synthesis rules, so obviously can't capture the standard synthesis rules for record projections. Whether this makes a difference, or whether conversely unification will always cover for us, is less clear.
    – James Wood, May 25, 2022 at 22:10
  • (1) Right, bidirectionality hints are only about checking. (2) Now that I think about it again, I'm not sure of the answer.
    – Li-yao Xia, May 26, 2022 at 0:34
