9
$\begingroup$

I got the impression that there is quite a bit of justified hype around the new(ish) strict propositions as proposed by Gilbert, Cockx, Sozeau and Tabareau in their paper Definitional Proof-Irrelevance without K. My personal reason for finding them tantalizing is that they allow us to carry out a presheaf translation.

Strict propositions are related to HoTT's hProps, i.e. types $P$ such that $\prod_{x,y:P} (x = y)$. hProp-truncation satisfies quite a generous elimination principle: we obtain $\|X\| \to Y$ from a function $X \to Y$ whenever $Y$ happens to be an hProp. This principle allows us to recover positive information from hProps; for instance, we can deduce that a function $f : \mathbb{N} \to \mathbb{N}$ actually has a zero from the mere proposition that it merely has one, by searching for the unique smallest zero.
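To illustrate the "search for the smallest zero" trick, here is a sketch in Lean 4, whose `∃` is propositional; `Nat.find` performs exactly this search for the least witness, using decidability of `f x = 0` (this assumes a recent Lean where `Nat.find` is available):

```lean
-- A sketch of recovering positive information from a mere proposition.
-- `h : ∃ x, f x = 0` is a proposition, yet we can extract the least
-- witness because the predicate `fun x => f x = 0` is decidable.
def leastZero (f : Nat → Nat) (h : ∃ x, f x = 0) : { x : Nat // f x = 0 } :=
  ⟨Nat.find h, Nat.find_spec h⟩
```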

I'm wondering whether I can mimic that when squashing to strict propositions. A specific question is: Is there a function $$\prod_{f:\mathbb{N}\to\mathbb{N}} \Bigl(\Bigl\|\sum_{x:\mathbb{N}} (f(x) = 0)\Bigr\| \to \sum_{x:\mathbb{N}} (f(x) = 0)\Bigr),$$ where $\|\cdot\|:\mathsf{Type}\to\mathsf{Prop}$ now denotes strict truncation? More generally, I'm wondering whether strict propositions can be used to show termination of functions whose codomain is an arbitrary type (not necessarily a strict proposition nor an hProp).

$\endgroup$

2 Answers

4
$\begingroup$

Sadly, a big issue with strict propositions is that, in their current form, they validate very few choice principles.

For instance, I don't think that your example is provable. Indeed, Pujet and Tabareau argue in their paper on impredicative observational equality that, because one can prove normalisation of their type theory CCObs in MLTT extended with an impredicative universe of strict propositions, all functions of type $\mathbb{N} \to \mathbb{N}$ definable in CCObs are also definable in MLTT. In particular, one cannot define a normalisation function for System F (something which takes a System F term encoded as a natural number and returns a code for its normal form) in CCObs. But one can still prove that System F normalises, using the impredicativity of the sort of propositions.

Now take $f : \mathbb{N} \to \mathbb{N} \to \mathbb{N}$ to be a function such that $f~t~n$ is $0$ if reducing $t$ (viewed as the encoding of a System F term) for $n$ steps (say, with a fixed deterministic strategy) reaches a normal form, and $1$ otherwise. One should be able to show in CCObs that $$\prod_{t : \mathbb{N}} \left\| \sum_{n : \mathbb{N}} f~t~n = 0 \right\|.$$ But if your choice principle were provable, then one could deduce that $$\prod_{t : \mathbb{N}} \sum_{n : \mathbb{N}} f~t~n = 0,$$ and from that build a normalisation function for System F, which is impossible.
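For concreteness, the step-indexed test function $f$ could be sketched as follows in Lean; here `Term` and `step` (one deterministic reduction step, returning `none` at a normal form) are assumptions of the illustration, not definitions from the paper:

```lean
-- Hypothetical fuel-based sketch of f: `f step e n = 0` iff reducing
-- e with the deterministic strategy `step` reaches a normal form
-- within n steps. `Term` and `step` are assumed, not real library code.
def f {Term : Type} (step : Term → Option Term) : Term → Nat → Nat
  | e, 0 => if (step e).isSome then 1 else 0  -- out of fuel unless already normal
  | e, n + 1 =>
    match step e with
    | none    => 0            -- normal form reached
    | some e' => f step e' n  -- spend one step of fuel
```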

More generally, the big issue with Coq's SProp compared to the current Prop is that you lose the ability to eliminate the accessibility inductive type from Prop into Type, which means precisely that you can no longer define functions by well-founded recursion.
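For comparison, here is a sketch in Lean 4, whose `Prop` is also definitionally proof-irrelevant but which keeps the large elimination of `Acc`: definitions like the one below elaborate to `WellFounded.fix`, i.e. to `Acc.rec` eliminating the Prop-valued `Acc` into Type, which is exactly the ability SProp gives up.

```lean
-- Well-founded recursion in Lean 4. This elaborates via WellFounded.fix,
-- whose engine is Acc.rec eliminating the Prop-valued Acc into Type;
-- Coq's SProp disallows precisely that elimination.
def gcd (m n : Nat) : Nat :=
  if h : n = 0 then m
  else gcd n (m % n)
termination_by n
decreasing_by exact Nat.mod_lt _ (Nat.pos_of_ne_zero h)
```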

Loïc Pujet has thought quite a bit about this sort of issue. I don't think there is a paper out yet, but there are slides on this subject on his webpage.

$\endgroup$
4
$\begingroup$

It's consistent to interpret the type of strict propositions as the type of double negation stable propositions (or more generally as $j$-stable propositions for any given Lawvere-Tierney topology $j$). If you do this, then squash is just double negation, and $\prod_{f:\mathbb{N}\to\mathbb{N}} \Bigl(\Bigl\|\sum_{x:\mathbb{N}} (f(x) = 0)\Bigr\| \to \sum_{x:\mathbb{N}} (f(x) = 0)\Bigr)$ is exactly Markov's principle. Hence it's a reasonable axiom to assume (in my opinion), but you can't expect it to hold in general.
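Stated concretely, here is a hedged Lean rendering of that reading, with `∃` standing in for the truncated existence, `Σ'` (`PSigma`) for the strong sum, and `¬¬` as the squash under this interpretation:

```lean
-- Markov's principle, with the squash read as double negation:
-- from the double-negated existence of a zero, produce an actual
-- witness. Σ' (PSigma) is the strong, Type-valued sum.
def MarkovsPrinciple : Type :=
  ∀ f : Nat → Nat, ¬¬(∃ x, f x = 0) → Σ' x : Nat, f x = 0
```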

$\endgroup$
1
  • 3
    $\begingroup$ Note that this is the sort of addition that causes technical problems, even if your intended model justifies it. If it's just an axiom, then you lose canonicity. And if you use the typical unbounded search that doesn't look at the proof term (since you have to treat every proof term as judgmentally equal), then in inconsistent contexts you can lie about the search actually being terminating. So open evaluation becomes undecidable. Unbounded search seems (to me) a better match to runtime-only computational irrelevance. $\endgroup$
    – Dan Doel
    Commented Apr 27 at 16:33
