
Conor McBride proposed a seemingly promising approach to universe polymorphism: https://mazzo.li/epilogue/index.html%3Fp=857&cpage=1.html As I understand it, the idea behind the proposal is:

You build things starting at level 0 and going up to whatever constant level you need, but then you can lift the whole thing up any number of other levels.
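For concreteness, here is a sketch of that "build low, lift later" idea in Lean 4 notation (this is Lean's `ULift`, not McBride's own syntax or redtt, so take it only as an analogy):

```lean
-- `ULift.{r} : Type s → Type (max s r)` lifts a type up by any number
-- of levels, as in "build at level 0, then lift the whole thing".
def Nat₀ : Type 0 := Nat            -- built at the base level
def Nat₅ : Type 5 := ULift.{5} Nat  -- the same type, lifted to level 5

-- Operations transport along the lift by unwrapping and rewrapping:
def succ₅ (n : Nat₅) : Nat₅ := ⟨n.down.succ⟩
```

The point of the proposal is that this lifting should be systematic rather than done by hand with wrappers as above.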

However, it seems not to be implemented in any system except redtt. Is it flawed somehow, or are people simply unaware of its existence?

Is it sufficiently powerful? Is it formalized?


1 Answer


I don’t know for sure, but I don’t think this has been explored, no. At least I can say for sure it’s not around in Coq’s world (yes, time has passed and Coq now has universe polymorphism), although I think the way Coq does things subsumes this.

Basically, how polymorphism works in Coq is that when you have a (polymorphic) definition, the system collects all constraints the universe levels in the definition need to satisfy for it to be well-typed. When you use it later on, the definition gets instantiated with new levels, and the system checks at that point that the constraints on those are satisfiable.
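Lean 4 works similarly at use sites, so it can serve as an illustration of "instantiate with fresh levels at each use" (Coq additionally records ≤/< constraints between the levels, which Lean replaces with explicit `max`/`imax` expressions):

```lean
universe u

-- A polymorphic definition: the level `u` is a parameter, not fixed.
def twice {α : Sort u} (f : α → α) (a : α) : α := f (f a)

-- Each use site instantiates `u` with whatever level fits the context:
#check @twice Nat       -- u := 1, since Nat : Type 0 = Sort 1
#check @twice (Type 3)  -- u := 5, since Type 3 : Type 4 = Sort 5
```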

This is quite similar to choosing the right "universe lifting" as McBride proposes, but the proposal gives up one piece of flexibility: the relative distances between the universes in a definition remain fixed, while in the constraint-based approach the "distance" between them can also vary from one use to the next. I'm really not sure how annoying that would be in practice, but it might bite, forcing you to duplicate definitions at different distances (rather than at different levels). On the other hand, universe constraints in Coq are becoming a performance bottleneck in quite a few places, so this might be a way to mitigate that…
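To make the "distance" point concrete, here is a Lean 4 sketch (again as an analogy, not Coq or McBride's system): with independent level variables the gap between the two universes in a definition varies per use, whereas uniform lifting shifts all levels together and keeps the gap fixed.

```lean
-- Explicit universe binders pin the parameter order for `.{_, _}` below.
-- `u` and `v` vary independently, so the "distance" between the level
-- of `A` and the level of the result is not fixed.
def Fam.{u, v} (A : Type u) : Type (max u (v + 1)) := A → Type v

#check Fam.{0, 0}  -- Type 0 → Type 1 : gap of 1
#check Fam.{0, 3}  -- Type 0 → Type 4 : gap of 4

-- A lifting-style scheme can only shift both levels by the same amount,
-- e.g. from (0, 1) to (n, n + 1), never from (0, 1) to (0, 4).
```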

