
There are many tools regarding the semantics of type theory. On one hand, we may organize the structure of substitutions explicitly, resulting in notions such as CwFs, CwAs, natural models, display map categories, etc. These are algebraic notions, and they naturally organize themselves into (higher) categories in which the syntactic category is the initial object, giving induction principles over syntax: we can construct a suitable CwF $\mathcal C$, and the initiality of the syntax guarantees a structure-preserving functor from the syntactic category to $\mathcal C$, which we can use to prove useful properties.
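To make the initiality idea concrete, here is a deliberately tiny first-order analogue in Python (all names here are made up for illustration; this is nowhere near a full CwF). The syntax of a small theory is an inductive set of terms, a "model" is any interpretation of its operations, and evaluation is the unique structure-preserving map out of the syntax:

```python
from dataclasses import dataclass

# Syntax: terms over a signature with one constant `e`
# and one binary operation `op`.
@dataclass(frozen=True)
class E:           # the constant
    pass

@dataclass(frozen=True)
class Op:          # the binary operation
    left: object
    right: object

# A model is anything providing interpretations of `e` and `op`.
class Model:
    def e(self): ...
    def op(self, x, y): ...

# Initiality, in miniature: there is exactly one map
# Syntax -> Model commuting with `e` and `op`.
def evaluate(t, m: Model):
    if isinstance(t, E):
        return m.e()
    return m.op(evaluate(t.left, m), evaluate(t.right, m))

# Two different models of the same signature.
class AddModel(Model):      # interprets e as 1, op as +
    def e(self): return 1
    def op(self, x, y): return x + y

class ListModel(Model):     # interprets e as ["x"], op as concatenation
    def e(self): return ["x"]
    def op(self, x, y): return x + y
```

For example, `evaluate(Op(Op(E(), E()), E()), AddModel())` yields `3`. In the actual setting, the role of `evaluate` is played by the functor out of the initial CwF, and "being a model" carries much more structure (contexts, types, terms, substitution), but the proof pattern is the same.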

On the other hand, we may rely on the metalanguage for variable binding, giving us higher-order models. We may define these not only using sets, but also in arbitrary categories with enough structure, e.g. topoi. However, we need alternative methods to express the universal property of the syntax, because these models no longer organize themselves into useful categories. A very general approach is to produce a category $\mathcal T$ such that functors $\mathcal T \to \mathsf{G}$ preserving suitable structure correspond to internal models in $\mathsf{G}$. Now, if we can construct an internal model in a suitable category, a unique functor is guaranteed, analogous to the case of first-order models. These "suitable categories" $\mathsf G$ are often produced by a technique called (Artin) gluing.

The above puts the study of semantics in a categorical language. However, explicitly constructing models is still very bureaucratic, so numerous frameworks exist to simplify the process. Synthetic Tait computability is a way to construct new models internal to an Artin gluing of topoi, using a simple language equipped with modalities. The paper "Internal Sconing is Enough" reduces general gluing to sconing in some topos.

In this framework, we still need to relate the strings-and-rules syntax to the categorical syntax. Here we encounter a major complication: syntax generated from strings naturally forms sets, whose elements can be compared for equality, whereas in the categorical formulation we may only talk about isomorphisms of objects. This discrepancy necessitates the study of "strict" categorical structures, where we talk about equalities stricter than usual, and of coherence theorems, which state that a class of structures weakly satisfying some equations can be improved to strict ones. The local universes construction is one example.

The question is:

Is the above an accurate summary of the "big picture" of the study of semantics in type theory? What else should be added? Which parts are inaccurate or wrong? Which parts should be emphasized or de-emphasized?


1 Answer


There are many tools regarding the semantics of type theory. On one hand, we may organize the structure of substitutions explicitly, resulting in notions such as CwFs, CwAs, natural models, display map categories, etc. These are algebraic notions, and they naturally organize themselves into (higher) categories in which the syntactic category is the initial object, giving induction principles over syntax: we can construct a suitable CwF $\mathcal C$, and the initiality of the syntax guarantees a structure-preserving functor from the syntactic category to $\mathcal C$, which we can use to prove useful properties.

So far so good. I would probably revise "syntactic category" to "syntactic model"; there is, after all, more data in the initial CwF than just its base category.

On the other hand, we may rely on the metalanguage for variable binding, giving us higher-order models. We may define these not only using sets, but also in arbitrary categories with enough structure, e.g. topoi. However, we need alternative methods to express the universal property of the syntax, because these models no longer organize themselves into useful categories. A very general approach is to produce a category $\mathcal T$ such that functors $\mathcal T \to \mathsf{G}$ preserving suitable structure correspond to internal models in $\mathsf{G}$. Now, if we can construct an internal model in a suitable category, a unique functor is guaranteed, analogous to the case of first-order models. These "suitable categories" $\mathsf G$ are often produced by a technique called (Artin) gluing.

This is a bit unclear to me. The purpose of a logical framework (LF) with variable binding is to aid in the definition of what a model should be. For instance, one arrives at the definition of a CwF by unfolding what one obtains after specifying type theory in a particular logical framework (that of generalized algebraic theories).

Some LFs are crafted to make variable binding easier, using a function space in the LF to model binding in the object theory. This, however, is not at all incompatible with the story of models we have told so far. In fact, up to some unimportant points, Taichi Uemura's framework both uses metafunctions to encode binding and generates ordinary CwFs as models. It is defined using the idea of "functorial semantics", just as you alluded to in your paragraph: a model is built as a particular functor from one representable map category to another. Importantly, however, one still has a syntactic model initial among all models.

I am also not sure the distinction between "models in $\mathsf{Set}$" and "models in a topos" is a useful one to draw at this stage. What you seem to be conflating here are "one specific way of obtaining a model, through an internal universe in a nice category closed under some stuff" and "the definition of a model induced by the LF".

To summarize this long comment: I think one should view LFs as a machine for (1.) producing a category of models (2.) such that the initial object adequately represents syntax.

NB. In work by Jon Sterling and in some of my own work, a "model" of type theory is defined to be an LCC functor from a certain LCCC $\mathcal E$ to another LCCC, but this is a fairly atypical definition. We have shown that it is sufficient for proving syntactic results through an adequacy result on initial objects, but the definition of "morphism of models" one obtains from such "models" is too peculiar to be regarded as the correct definition of model. In fact, in some of our recent work we have avoided this approach in favor of some of Uemura's technology.

The above puts the study of semantics in a categorical language. However, explicitly constructing models is still very bureaucratic, so numerous frameworks exist to simplify the process. Synthetic Tait computability is a way to construct new models internal to an Artin gluing of topoi, using a simple language equipped with modalities. The paper "Internal Sconing is Enough" reduces general gluing to sconing in some topos.

Seems good. Indeed, both STC and the new internal sconing paper give ways of constructing a nice internal universe which can then be externalized to a model.
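An extremely compressed toy analogue of the gluing/sconing idea, far below the topos-level machinery (all names here are invented for illustration): to prove canonicity for a tiny boolean language, interpret each closed term not just by a value but by a *pair* of the term and a canonical boolean it computes to, i.e. syntax glued to semantics.

```python
from dataclasses import dataclass

# Syntax of a tiny boolean language.
@dataclass(frozen=True)
class TT: pass
@dataclass(frozen=True)
class FF: pass
@dataclass(frozen=True)
class If:
    cond: object
    then: object
    else_: object

# The "glued" interpretation: every closed term is sent to a pair of a
# canonical-form term and the boolean it computes to. That such a pair
# always exists is (a shadow of) canonicity.
def glue(t):
    if isinstance(t, TT):
        return (t, True)
    if isinstance(t, FF):
        return (t, False)
    _, b = glue(t.cond)
    return glue(t.then) if b else glue(t.else_)
```

For instance, `glue(If(FF(), TT(), If(TT(), FF(), TT())))` returns `(FF(), False)`: the nested conditional computes to the canonical form `FF`. The real construction glues the syntactic model to a semantic one along a functor and proves the analogous statement for an entire type theory, but the shape of the argument is the same.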

In this framework, we still need to relate the strings-and-rules syntax to the categorical syntax. Here we encounter a major complication: syntax generated from strings naturally forms sets, whose elements can be compared for equality, whereas in the categorical formulation we may only talk about isomorphisms of objects. This discrepancy necessitates the study of "strict" categorical structures, where we talk about equalities stricter than usual, and of coherence theorems, which state that a class of structures weakly satisfying some equations can be improved to strict ones. The local universes construction is one example.

I think this is incorrect. The reference to "syntax generated from strings" is a red herring, because in theories specified as generalized algebraic theories, representable map categories, or locally Cartesian closed categories one still has issues of strictness. The fundamental issue is that in type theory there are equations on types. If you realize types as something involving objects in a category, you almost invariably end up with things being isomorphic when they're supposed to be equal. Strictification theorems basically address this by re-encoding types as morphisms in some way, so that one has a hope of getting strict equalities again.

  • I endorse this answer as well. One thing is the model theory of type theory, and the other is the kinds of categories which serve as useful technical means for working with type theory and its models. It is by no means required that these two things coincide. Different LFs do different things, and in general, we should be open to using whatever tool most readily exposes the essence of the problem we are solving. Sometimes that means formulating things in the language of the model theory of type theory, and other times that means (e.g.) completing the syntactic category to an LCCC or a topos, etc. Commented Apr 2, 2023 at 14:43
  • I can't quite parse "either using a function space in the LF to model binding in the object theory", is that a typo somewhere? – Trebor, Commented Apr 11, 2023 at 14:22
