
It seems that in most everyday cases, taking conditionals to have material truth conditions suffices for us to reason with them correctly (in the sense that material truth conditions will most often, and uncontroversially, take us from true premises to true conclusions, even if they are, strictly speaking, incorrect as a general account of the conditional).

No doubt there are many cases where material truth conditions for indicative conditionals seem harmful to our ability to reason effectively. For example, if I uttered the conditional "If I had a counter-example to Fermat's Last Theorem, nobody would care", then material truth conditions would force this utterance to be true. There are many other examples in the literature. Fortunately, we don't often deal with bizarre conditionals like these (outside of the logical literature). On top of this, even when we are dealing with bizarre conditionals, we can use natural language to mitigate ambiguity issues by specifying our own semantics. We could say something like "in an impossible world where I had a counter-example to Fermat's Last Theorem, people would definitely care, and not vacuously so." In a sense, we have used natural language to specify our own conditional semantics. Are there philosophical issues with doing this? Sure. Are there practical issues (even within the context of philosophy minus logic)? It's not so clear to me.

My question then is the following: why should we care as philosophers what the true semantics for indicative conditionals are, especially given that we can always mitigate ambiguity/logical issues with natural language (in particular, by specifying our own semantics for the conditionals we utter on a case-by-case basis)? I agree conditional semantics is an interesting topic in its own right, but there is only so much time in the day, and there are many other interesting philosophical questions to think about as well. So why this one above others?

PS -- For the record, I've spent a great deal of time thinking about conditional semantics. I'm asking this question in good spirit (not a negative one).

1
  • I assume you mean to restrict attention to indicative conditionals? In which case, the Fermat example isn't even relevant, since it's a subjunctive/counterfactual. (But that's just more grist to your mill.)
    – J.P.
    Commented Jun 30, 2014 at 13:19

2 Answers

1

There seem to be two questions implicit here:

  1. Why should we care about the truth conditions of indicative conditionals in particular, especially when potential counterexamples are bizarre?

  2. Why should we care about natural language semantics in general?

1: Why should we care about the truth conditions of indicative conditionals?

I'm not an expert on this matter, but there is some good information on the Stanford Encyclopedia of Philosophy: Conditionals (SEP).

But the reasons for being unhappy with the material conditional essentially stem from the 'paradox' of material implication: that 'p→q' is true whenever p is false.

Note that we don't require p to be necessarily false to get strange results -- as your Fermat's Last Theorem example seems to suggest -- necessary falsehood is what typical analyses of counterfactual conditionals require to make the conditional vacuously true. For the material conditional, all we need is that the antecedent is actually false.
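
For concreteness, here is a minimal sketch (Python, purely illustrative -- not part of the original answer) that enumerates the truth table for the material conditional. Every row with a false antecedent comes out true:

```python
# Truth table for the material conditional: 'p -> q' is defined as (not p) or q.
for p in (True, False):
    for q in (True, False):
        material = (not p) or q
        print(f"p={p!s:5} q={q!s:5}  p->q={material}")

# Whenever p is False, 'p -> q' is True regardless of q --
# this is the 'paradox' of material implication mentioned above.
```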

Then consider a family of such conditionals (to pick an unoriginal example):

If Oswald didn't kill Kennedy, then p.

According to the material conditional analysis, all of these are true (since Oswald did in fact kill Kennedy, pace conspiracy theorists). But that just seems wrong. It seems that of the following, (1) is true, but (2) is false:

  1. If Oswald didn't kill Kennedy, then somebody else did (p='somebody else did')
  2. If Oswald didn't kill Kennedy, then the Queen of England did (p='the Queen did')

So, the argument goes, we should replace the material conditional analysis with something else. Note also that the example isn't particularly bizarre. (OK, (2) is a little bit bizarre, but that's to make it more obviously false -- obviously false statements tend to be a bit bizarre, since nobody is inclined to assert them. But you can replace (2) with other statements which are still just as false -- perhaps less obviously so.)

Now, a potential counterexample to the foregoing might be as follows: If 'Oswald didn't kill Kennedy' is false, and we know that it is false, aren't the conditionals in the family above all a bit, well, bizarre? After all, how can I contemplate who else might have killed him given that I know that it was Oswald? In that case, why does it matter what truth conditions I assign to (1) and (2)?

An answer concerns degrees of belief. I'm pretty darn certain that Oswald killed Kennedy, but I'm not 100% certain, more like 99%. There's a chance I'm wrong. In that case, it's worth me considering the conditionals (1) and (2); I want to know how to revise my beliefs if I were to find out that I'm wrong. I have high confidence in (1) (perhaps higher than 99%, since all that requires is that I'm confident that Kennedy was actually killed). But I have very low confidence in the conditional (2) (pretty close to 0%). Yet the material conditional analysis says that, since I have high certainty in the falsehood of the antecedent, I should have high certainty in the truth of the conditional as a whole -- for (2) just as much as for (1).
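
To make the clash vivid, here is a small worked comparison. The numbers and abbreviations are mine, purely for illustration: O for 'Oswald killed Kennedy' and Q for 'the Queen of England killed Kennedy'.

```latex
% Illustrative numbers only.
\begin{align*}
  P(O) &= 0.99 \\
  P(\neg O \rightarrow Q) &= P(O \lor Q) \;\geq\; P(O) = 0.99
    && \text{(material reading of (2))} \\
  P(Q \mid \neg O) &\approx 0
    && \text{(the credence one actually has in (2))}
\end{align*}
```

So on the material reading you ought to be at least 99% confident in (2), while the credence you actually have in it (and plausibly ought to have) is close to zero.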

2: Why should we care about the semantics of natural language in general?

The second question above seems to suggest something like this: why should we care about the semantics of natural language in general, especially if we can avoid ambiguity ourselves?

I don't have much to say on this, but I will make one remark and give one link.

Remark: There seems to be a presupposition that the reason we should care, as philosophers, about the truth conditions of sentences is so that we can be clear about what we're saying when we say it. This is perhaps why we want well-defined semantics for formal languages; it helps us be precise when we want to make a statement with particular truth conditions. So, for example, if we assert a conditional 'if p then q' which gets different truth conditions on different accounts of the semantics of conditionals, this would be problematic. But, as you say, in such a case we can always say something less ambiguous somehow or other.

But why not ask about the semantics of natural language? It's a pretty good question. Perhaps there is some demarcation to be done, such as whether this is philosophy or linguistics, but that, I take it, isn't the issue. And why particular bits of natural language? I think the only good answer here will have to be something like: 'because it's more interesting' -- it throws up more problems, interesting subquestions and so on. There's also no doubt an element of fashion involved.

A link: There's a recent interview with John Searle in which he expresses what I take to be similar reservations to yours. Philosophy of language has a habit of looking at a piece of natural language and then suggesting all manner of formal models of it, supplying truth conditions. In doing so, we can have a lot of fun, but it's easy to miss why we're doing it. Here's a relevant quote:

JS: Well, what has happened in the subject I started out with, the philosophy of language, is that, roughly speaking, formal modeling has replaced insight. My own conception is that the formal modeling by itself does not give us any insight into the function of language.

Any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers. And that doesn’t happen now. What happens now is that many philosophers aim to build a formal model where they can map a puzzling element of language onto the formal model, and people think that gives you an insight.

He goes on to give theories of counterfactuals as an example, but indicative conditionals could easily be another example. Here's a link to the interview.

1

One reason you might find this interesting is if we draw a distinction between Logic as a mathematical field of study and the philosophical study of the foundations of Argumentation. Whatever you might want to say about the apparent formal restrictiveness of the idea of an indicative conditional, there appears to be something incredibly potent about many of the old Aristotelian Syllogism rules that defies a purely formal account.

Specifically, I'm thinking about the convincing nature of Modus Ponens arguments; that is, arguments of the form:

If P then Q

P

Therefore, Q

Unless we think that the "if" here has some kind of foundational mathematical or logical interpretation, this 'following' isn't a logical guarantee. Indeed, as has been pointed out in the literature on truth, Modus Ponens seems to struggle as a law if we take "if" to mean material implication and take truth to be governed by the rule that we may deduce that the sentence 'P' is true whenever P is the case, and vice versa. Yet it seems to capture the very point of what it means to say "if" in casual discussion -- we want the conclusion to "follow from" the premise in an authoritative mode of assertion.
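
For what it's worth, under the material reading Modus Ponens is at least truth-table valid; the worry above is whether that formal fact captures the intuitive "follows from". A quick illustrative check (my sketch, not part of the answer):

```python
# Look for a counterexample row to modus ponens under the material reading:
# both premises true ('p -> q' and 'p') with the conclusion 'q' false.
def material(p, q):
    return (not p) or q

counterexamples = [
    (p, q)
    for p in (True, False)
    for q in (True, False)
    if material(p, q) and p and not q
]
print(counterexamples)  # [] -- no counterexample row, so truth-table valid
```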

What we are looking for is some notion of a logical consequence relation suited to argument and reasoning. Picking apart the foundations of Argumentation in some kind of methodical, logical framework that can function as a public standard requires certain modelling assumptions about what each of the supposedly logical terms we appeal to is taken to mean. Alfred Tarski, in his paper "On the Concept of Logical Consequence" (originally published in 1936; the English translation appears in the 1983 edition of Logic, Semantics, Metamathematics -- sorry, I can't find an online version!), discusses how we pick out certain words or fragments of language as logical constants to represent functions in our semantics, which enables us to form and apply a mathematical model of how we in fact reason using mathematical structurings of the world. Other analytic philosophers have tried to do something similar by using tools from proof theory to represent how we transform statements or propositions in more combinatorial ways.

This was a pretty live debate even fairly recently, though I'm not sure any definitive consensus has been reached as to whether the work on consequence has any useful philosophical applications. One major challenge for this field has been to address Curry's Paradox, which, much like Gödel's incompleteness argument, seems to show that a logical system cannot fully capture its own consequence relation.
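
For readers who haven't met it, here is the standard informal derivation behind Curry's Paradox (the usual textbook presentation, not something from this answer), where C is a sentence saying "if C is true, then X" and X is arbitrary:

```latex
% Standard informal sketch of Curry's paradox; X is an arbitrary sentence.
\begin{enumerate}
  \item Assume $C$ is true. \hfill (for conditional proof)
  \item Then `if $C$ is true, then $X$' is true. \hfill (that is just what $C$ says)
  \item So $X$. \hfill (modus ponens, 1, 2)
  \item Therefore, if $C$ is true, then $X$. \hfill (conditional proof, 1--3)
  \item But (4) is exactly what $C$ says, so $C$ is true.
  \item Therefore $X$. \hfill (modus ponens, 4, 5)
\end{enumerate}
```

Since X was arbitrary, a system whose conditional supports conditional proof and modus ponens, and which contains such a sentence C, proves everything -- which is why a theory of consequence has to say something about it.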

