
This is a follow-up to a question I had about foundationalism, which seems paradoxical inasmuch as it is a thesis that has been argued for (perhaps it is just the historical argumentation that is paradoxical, not the thesis itself). Here, it seems that coherentism involves rejecting the existence of foundational non-inferred premises; rather, any premise can be viewed as inferred (not necessarily deductively!) from something else, after all.

However, it seems to me that coherentism cannot avoid incorporating some non-inferred claims into itself. For example, we need a definition sentence for talk of "coherence" in the first place. On top of that, we need a sentence stating that entering into the rightly defined coherence relations provides justification for beliefs in the first place. And then we need a method of exhibiting these relations.

Another way to illustrate the issue is in terms of the graph-theoretic account of regress-solution types. Presumably, we have beliefs about graphs: how they are defined and how they work. Wouldn't defining a regress-solution type graph-theoretically pre-found (so to speak) all the types in graph theory?* So foundationalism would end up being inescapable, in a sense. (This seems to be along the lines of Alessio Moretti's point of view regarding the philosophical side of his geometrization of logic.) (I would say that this reasoning applies to infinitism, too: we will need a foundational definition of infinitism, a proposition of infinitary justifiers, methods of infinite regression...)

Does coherentism collapse into a form of foundationalism where the fundamental premises are about coherence relations?

*And then, would such a foundation of knowledge types generally turn graph theory into the foundation of mathematical knowledge, too, after all? I am not against this thesis, all things considered, but I am not for it in the way that I was a few years back, either.

  • It would be my impulse to say that a coherent formal system relies on a meta-language, and therefore the coherence of the object language is derivative of the axiomatic foundations of the meta-language. Does this appeal to your intuitions?
    – J D
    Commented Dec 12, 2021 at 16:33
  • Of all the many and varied distinctions that philosophers have found suspicious, I find the distinction between an object language and a metalanguage to be one of the suspicious ones. That being said, put in those terms, the issue just seems to be that the coherentism of the object language collapses into the foundationalism of the metalanguage, "eventually"? Commented Dec 12, 2021 at 16:39
  • I'll answer below, but do tell about these suspicions?
    – J D
    Commented Dec 12, 2021 at 16:43
  • Maybe I'm misreading the material (right now I'm looking at "Tarski's Truth Definitions" in the SEP), but it seems the purpose of introducing these language-tiers is to have truth predicates/values on different levels, to avoid generating the liar paradox. However, I have a totally different belief about how to avoid said generation, one which doesn't require different levels of truth. On top of that, the internal content of this belief seems to rule out the formation of Gödel sentences (at least in natural language), leading to compromised (at least) incompleteness theorems. Commented Dec 12, 2021 at 16:53
  • I.e. in the theory I'm working with, the analogue of the Gödel sentence would be something like, "This sentence is not justifiable," or, "S: j(S) = 0." What then of j(S: j(S) = 0)? But so if "this sentence" is unjustifiable, it doesn't "go anywhere," does not have the traditional incompleteness consequences, it seems to me. Commented Dec 12, 2021 at 16:55

1 Answer


Caveat

I'm not a logician, so this will represent my best effort. Criticism of the claims is encouraged.

Short Answer

Does coherentism collapse into a form of foundationalism where the fundamental premises are about coherence relations?

Yes. A model in mathematical logic is, roughly, the use of one formal system to ground the truths of a second formal system, by translating the truths of the second into the first in a manner similar to the use-mention distinction in natural language. The inner system is the object language of the outer, the meta-language, where "language" is taken in a formal sense, as a syntactic construction of a formal grammar that ensures well-formedness. The relationship between the object language and the meta-language is that the grammar of the meta-language has to be more expressive than the object grammar. This is the nature of the grounding of truth: the object formal system is used to prove truths deductively, whereas the meta formal system is used to prove, deductively, the consistency of the deductions of the object system. Reread that, because it's confusing just to write.

So, take the prime example: in naive set theory, the basic entities, relations, and operations can be used to prove theorems. What they cannot do is prove theorems consistently, since the system produces contradictions. The alternative approach is to provide axioms that do exclude sets containing themselves, ZFC being the historically inspired standard form. This works because set theory is one language and the logic of the axioms is in a second language; set theory and arithmetic are said to be grounded in logic. Thus, set theory produces consistent set-theoretic truths (philosophical coherence) when it is translated into the foundational truths of FOPC (philosophical foundationalism).
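The contradiction in naive comprehension can be shown in miniature. Here is a hedged Python sketch (the variable names and the truth-functional framing are mine, not part of any formal apparatus): naive comprehension licenses the set R = {x | x is not a member of x}, and asking whether R is a member of R forces that membership claim to be equivalent to its own negation, which no truth value satisfies.

```python
# Russell's paradox reduced to a truth-functional check: naive comprehension
# forces r_in_r == (not r_in_r), where r_in_r is the truth value of
# "R is a member of R". We search for a truth value satisfying it.
consistent_values = [v for v in (True, False) if v == (not v)]
assert consistent_values == []  # no valuation makes the membership claim coherent
```

The empty list is the whole point: the object-language question has no coherent answer inside the naive system, which is why the repair has to come from reworked axioms rather than from more deduction in the same system.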

Long Answer

Formal Systems and Languages

Generally, logicians take a formal system to be little more than a collection of sentences that, through logic, output a single sentence, a project started by Frege. But the notion of a formal system is itself computable, and that might shed some insight, since you talked about signs. Signs in the intuitive sense are best represented, for computational purposes, by strings of characters, grounding the notion of a sign in the computer-science notion of a string. We can consider this one possible formalism for representing a formal system. (It's possible to formalize the notions of alphabets, formal languages, and automata with far more sophistication than what follows, which is a summary.)

Let's start with the formal notion of a formal system. A formal system can be thought of as a collection of grammar-determined strings (sentences) constructed syntactically from a formal language, which concatenates characters from an alphabet into strings. In computer science, one popular way to express context-free grammars (examine the Chomsky hierarchy for a better idea of what that means) is Backus-Naur form, which gives a basic example of how well-formedness can be determined computationally. Once a formal language has logical connectives incorporated into its grammar, it can use something like modus ponens iteratively, reducing strings, or rather sentences, to a final sentence. Thus from antecedents to consequents we go.
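The two halves of that picture, a grammar deciding well-formedness and a rule of inference reducing sentences, can be sketched in a few lines of Python. This is an illustrative toy of my own devising, not standard machinery: the object language has atoms p, q, r and implications (A->B), in the spirit of the Backus-Naur rule `<wff> ::= "p" | "q" | "r" | "(" <wff> "->" <wff> ")"`.

```python
def well_formed(s):
    """Decide grammaticality -- the syntactic side of the formal system."""
    if s in ("p", "q", "r"):
        return True
    if s.startswith("(") and s.endswith(")"):
        body, depth = s[1:-1], 0
        for i in range(len(body) - 1):
            if body[i] == "(":
                depth += 1
            elif body[i] == ")":
                depth -= 1
            elif depth == 0 and body[i:i + 2] == "->":
                # split at the top-level arrow and check both sides
                return well_formed(body[:i]) and well_formed(body[i + 2:])
    return False

def modus_ponens_closure(sentences):
    """Iteratively apply modus ponens: from A and (A->B), derive B."""
    derived, changed = set(sentences), True
    while changed:
        changed = False
        for imp in list(derived):
            if imp.startswith("(") and imp.endswith(")"):
                body, depth = imp[1:-1], 0
                for i in range(len(body) - 1):
                    if body[i] == "(":
                        depth += 1
                    elif body[i] == ")":
                        depth -= 1
                    elif depth == 0 and body[i:i + 2] == "->":
                        a, b = body[:i], body[i + 2:]
                        if a in derived and b not in derived:
                            derived.add(b)
                            changed = True
                        break
    return derived

assert well_formed("(p->(q->r))") and not well_formed("p->q")
theorems = modus_ponens_closure({"p", "(p->q)", "(q->r)"})
assert {"q", "r"} <= theorems  # first q, then r, become derivable
```

Note that `well_formed` knows nothing about truth; it is pure syntax, and the closure operation is exactly the "antecedents to consequents" reduction described above, iterated to a fixed point.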

Currently, mathematical logic ensures the rigor of a formal system by relying on a meta-language whose expressivity is greater than that of the object language; the coherence of the object language is therefore established by the axiomatic foundations of the meta-language. The object language is generally characterized as syntactic and uses the syntactic turnstile1; it is abstracted and deals in provability rather than satisfiability. The metalanguage is semantic and uses the semantic turnstile; it is more specific and deals in the consistency and decidability of the object language. An object language, then, is a deductive tool for examining a claim extending from one axiomatic base, built primarily to demonstrate the satisfiability of sentences, which is, philosophically speaking, an instance of truth derived from propositions of the system. The metalanguage, by contrast, looks to secure claims about the claims of the object language, i.e. that they are consistent (mathematical coherence), with an eye not only on the validity of object-level deduction (provability) but on the validity of the entire system over a range of variables in the domain of discourse, showing that the system isn't inconsistent at proving truths (consistency). The bridge between the two languages comes from the Tarskian theory of truth, which uses the T-sentence to show that there is a translation of truth from the object language to the metalanguage; this is where the notion of deflationary truth derives.
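The semantic turnstile can be made concrete by brute force. In this hedged sketch (the function name and encoding are my own assumptions), premises |= conclusion holds iff every valuation of the atoms that satisfies all the premises also satisfies the conclusion; formulas are plain Python predicates over a valuation dict, so this is a model check over the whole domain of discourse, not a proof.

```python
from itertools import product

def entails(atoms, premises, conclusion):
    """Semantic consequence |= checked by enumerating all valuations."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # a countermodel: premises true, conclusion false
    return True

# modus ponens is semantically valid: p, p->q |= q ...
assert entails(["p", "q"],
               [lambda v: v["p"], lambda v: (not v["p"]) or v["q"]],
               lambda v: v["q"])
# ... but q alone does not entail p (countermodel: q true, p false)
assert not entails(["p", "q"], [lambda v: v["q"]], lambda v: v["p"])
```

The contrast with the previous syntactic sketch is the point: provability manipulates strings, whereas `entails` quantifies over every valuation, which is why it lives at the meta level.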

Now, between two languages there are necessarily two distinct grammars, and the important thing to remember is that the meta-language grammar has to be more expressive than that of the object language. In the language of formal languages, this simply means that the well-formed strings of the object language must be a subset of the well-formed strings of the meta-language. Remember that in a T-sentence, string delimiters (sometimes called escape sequences, quotifiers, etc., such as apostrophes or quotation marks) are used to contain sentences of the object language within a sentence of the meta-language; the T-sentence (the Tarskian method of grounding truth in one language via another) is thus an instance of the use-mention distinction. Tarski's example from Logic, Semantics, Metamathematics, p. 156:

(3) 'it is snowing' is a true sentence if and only if it is snowing.

You can see that 'it is snowing' is a proposition and is being evaluated for veracity using the biconditional logical connective, which needn't be part of the conversation, that is, the language used, when discussing the state of the weather. The challenge in parsing this sentence is made easier by quotification, but that is obviously not part of spoken language. (In linguistics, the phenomenon is called center embedding and, without delimiters, can lead to confusion.)
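Programming languages make the use-mention split unusually vivid, so here is a hedged miniature (the world state and function are invented for this sketch): Python plays the meta-language, strings play the object language, and each branch of the truth predicate is one instance of the T-schema, mentioning a sentence on the left and using it on the right.

```python
world = {"snowing": False, "raining": True}  # hypothetical state of the weather

def is_true(sentence):
    """A toy Tarskian truth predicate for a two-sentence object language."""
    # T-sentence: 'it is snowing' is a true sentence if and only if it is snowing
    if sentence == "it is snowing":
        return world["snowing"]
    if sentence == "it is raining":
        return world["raining"]
    raise ValueError("not a sentence of the object language")

assert is_true("it is raining")       # mentioned on the left, used on the right
assert not is_true("it is snowing")
```

The quotation marks are doing exactly the delimiter work described above: they let a meta-language sentence contain an object-language sentence without confusing the two levels.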

Now, the advantage of using model theory is obvious. It allows paradoxes of one set of axioms to be resolved by the addition of further axioms instead of modifying the original axioms of the formal system, and at the same time allows one to speak to the range of results of the formal system while fully accommodating the exploration of recursion, decidability, computability, and so on. The origin of this increased complexity was a response to Russell's paradox (a set-theoretic cousin of the Liar) and the attempt to ground set theory in the logic of its axioms, resulting in ZF and later, by extension, ZFC. From there, other set theories like NBG flourished.

So, it doesn't matter whether you pull an example from set theory, graph theory, or even geometry. When you have one language, for example FOPC, and you begin to examine whether or not the conclusions arrived at in that language are consistent, you need to introduce new ideas to prove consistency, ideas that necessarily lie outside FOPC. And the moment you start formalizing this process, you wind up tapping into ideas like meta-mathematics, meta-logic, and meta-languages, because of the recursive nature of using the propositions of the first language inside a more expressive second language that evaluates it. So, kudos to you for recognizing that epistemological coherentism "collapses" into a form of foundationalism where the fundamental premises are about coherence relations. That's the very essence of using models to evaluate the semantics of a system.

1 The single-double turnstile is the current norm in mathematical logic, but the same ideas might be conveyed in natural language, single-double arrows, or according to WP, single-single turnstile convention.

  • I wish I could confirm this answer twice. As an exposition of the concept/role of metalanguages, it is also a solid defense of the same concept. Commented Dec 13, 2021 at 9:02
  • I expect that as I reflect on this, I will have some better understanding of Hamkins' response to my MathOverflow post about "the justifiable universe." He said something about bisimulation facts undermining the apparent point of V = J, but I was at a loss to respond to that counterproposal... Commented Dec 13, 2021 at 9:15
  • Regarding your above "The object language is generally characterized as syntactic and uses the syntactic turnstile...whereas the metalanguage is semantic and uses the semantic turnstile...", generally the syntactic turnstile is also at the meta level not object language level. See reference: In metalogic, the study of formal languages; the turnstile represents syntactic consequence (or "derivability"). Commented Dec 13, 2021 at 22:45
  • @DoubleKnot I read the entry and text. The metalogic article claimed that mathematical logic and the model-theoretic approach have largely subsumed metalogic, which would suggest that the single tee isn't used any more. I checked Tarski, and he uses a cup, and I have three other works: Chang's text on Model Theory (double turnstile), Boolos et al. on Computability (English, double turnstile), and Ono's text on Proof Theory and Sequent Calculus (double arrow, double turnstile), but there was some use of a subscript to show provability within a system. I could see a system of notation that uses a script...
    – J D
    Commented Dec 15, 2021 at 9:03
  • of course, there's no reason you couldn't just determine from context, but that would be quite the cognitive burden. Anyway, thanks for sharing, but I don't know that a digression into variations of notations to express syntactic and semantic have much value. I will put a footnote in, however. Thx!
    – J D
    Commented Dec 15, 2021 at 9:04
