
First, let's define fallibilism as the view that no belief, no matter how much credence we lend to it, is immune to turning out false.

This implicitly entails that we take ourselves not to be omniscient. And if we are not omniscient, there may always be defeaters for our beliefs that we haven't yet considered, or that we perhaps cannot consider at all because they are somehow inaccessible to us.

Now, we have no clue what the nature of these defeaters is, whether they exist, what the probability is of any given belief being defeated by them, and so on. So every judgement we make has a variable of unknown value attached to it, one that could change the result of the whole calculation. And that applies to all beliefs simultaneously.

If there are no beliefs we take to be infallible, wouldn't total skepticism (in the sense of global suspension of judgement) be the necessary result of the fallibilist's commitments?

I therefore suspect that describing fallible beliefs as constituting knowledge is a pragmatic decision more than anything else.

  • Skeptics doubt all beliefs and suggest that all of them are groundless. Fallibilists doubt all beliefs but suggest that many of them are true and justified nonetheless; we just cannot state with complete certainty which ones (see IEP). But there is no need to take any one belief in particular to be infallible. For limited beings like us, complete certainty is the wrong standard to begin with. There is no collapse, because the skeptical criteria for it are rejected.
    – Conifold
    Commented Oct 26, 2023 at 19:52

1 Answer


It comes down to how you choose to define the word "knowledge."

Naively, we might define "knowledge" as something you can be absolutely sure of with no chance of being proved wrong.

But when we consider that we could always have made a mistake, even when we are dealing with pure deduction/mathematics, this definition of "knowledge" becomes useless. We can never correctly apply the term to anything we believe.

But even fallible knowledge is still a useful idea that we need a term for. We could define a new term, "f-know" (for "fallibly know"), and say that we f-know things, never that we know things. We could do the same for the word "judgment": we always suspend judgment (because we're never rationally perfectly certain), but we do make f-judgments.

But this is annoying! Why introduce a whole new set of words when we've stopped using the old ones, especially since we use the new words exactly where we would have used the old ones? "f-know" and "f-judge" are drop-in replacements for "know" and "judge," except when we start asking about perfection.

We can define words any way we like. So to make our use of language less annoying, instead of saying "f-know," we should just say "know," and instead of saying "f-judge," we should just say "judge." And if we want to talk about perfection (and just about the only thing we can say about it is that it doesn't happen), we can make ourselves clear by saying things like "perfectly, certainly know" or "infallibly judge."

We can compare this to what is done with compatibilist free will. We find that the naive notion of free will doesn't make sense and doesn't happen. As a result, we could discard the term "free will" entirely, and perhaps also refuse to say that people make "choices." But that robs us of useful terms. So it is better to say instead that people do make choices and do act freely, but to define these terms in a different way, one that matches what actually happens in reality.

  • Thanks for your response. Two things in reply: I understand that we continue this weaker concept of knowledge under the same name as the "original" concept for practical reasons, but I asked for a theoretical justification. I find it especially problematic that the weaker concept borrows much of its authority from the original concept. Second, it's not really about absolute certainty. EVERY belief, no matter how certain, is threatened by unknown defeaters in just the same way, and that includes the degrees of certitude we ascribe. So really no belief is more or less certain than any other.
    – Numa
    Commented Oct 26, 2023 at 19:03
  • @Numa I said "rationally perfectly certain." If you could ever have a rational justification for being perfectly certain (you can't), then there could not be any unknown defeaters; and if there could be unknown defeaters, then it wouldn't be rational to be perfectly certain. Moving on, when you talk about beliefs being "more or less certain," we should understand this as being about f-certainty (fallible certainty) rather than perfect certainty. Two beliefs may both be defeasible, yet one may be much more certain than the other, because the chance of each belief being defeated is not the same.
    – causative
    Commented Oct 26, 2023 at 19:19
  • The degree of warranted certainty derives from the method used to produce the belief. If that method, used in the way we used it, produces very reliable results in general, then we may say that we are very certain of the specific belief. If the method produces less reliable results in general, then we may say we are only fairly certain of the specific belief, or just moderately confident. Probabilistic reasoning is an example of a method that can evaluate its own reliability; other methods are typically evaluated by their track record.
    – causative
    Commented Oct 26, 2023 at 19:25
  • That some belief is more certain or that some method is more reliable is precisely what I dispute. Imagine your belief-forming processes as calculations with different numerical results: belief A got a value of 4, belief B got a value of 6789. Belief B has a much higher value and is therefore seemingly much more certain. But every result has a variable of unknown value attached, and this is the unknown defeater. That variable could radically alter the results, and you have no idea to what extent. How could you say that any belief is more or less certain, justified, or whatever?
    – Numa
    Commented Oct 26, 2023 at 19:37
  • @Numa You look at the track record. If, in the past, your belief-forming process has reliably (i.e. most of the time) assigned high numbers to beliefs that were rarely defeated, then you can have warranted confidence that a belief with a high assigned number is less likely to be defeated. Another way to look at it is from a reinforcement learning perspective: you select for belief-forming processes that have steered you towards high rewards in the past, and the better the rewards a process has (most of the time) steered you towards, the more trust you rationally grant to that process.
    – causative
    Commented Oct 26, 2023 at 20:03
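The track-record idea in the last comment can be made concrete with a small, purely illustrative Python sketch. It is not part of the original exchange; the class name `BeliefMethod`, the sample numbers, and the simple confidence-times-reliability rule are all assumptions made only for illustration. The sketch treats a belief-forming method as something with a history of (claimed confidence, was it later defeated) pairs, and discounts a new confidence claim by how well the method's past claims held up.

```python
# Illustrative sketch only: a toy "track record" calibration for a belief-forming method.
# Names and numbers are hypothetical, not taken from the thread.

from dataclasses import dataclass, field

@dataclass
class BeliefMethod:
    """A belief-forming process with a history of (claimed_confidence, was_defeated) pairs."""
    history: list = field(default_factory=list)  # e.g. [(0.9, False), (0.8, True), ...]

    def record(self, claimed_confidence: float, was_defeated: bool) -> None:
        self.history.append((claimed_confidence, was_defeated))

    def reliability(self) -> float:
        """Fraction of past beliefs produced by this method that were NOT defeated."""
        if not self.history:
            return 0.5  # no track record yet: maximal uncertainty about the method
        survived = sum(1 for _, defeated in self.history if not defeated)
        return survived / len(self.history)

    def calibrated_confidence(self, claimed_confidence: float) -> float:
        """Discount the method's claimed confidence by its observed reliability.
        A high number from a method that is rarely defeated deserves more trust
        than the same number from a method that is often defeated."""
        return claimed_confidence * self.reliability()

# Usage: two methods assign the same high number, but their track records differ.
careful_inference = BeliefMethod(history=[(0.9, False)] * 18 + [(0.9, True)] * 2)
wishful_thinking = BeliefMethod(history=[(0.9, False)] * 6 + [(0.9, True)] * 14)

print(careful_inference.calibrated_confidence(0.9))  # ~0.81: high claim, good record
print(wishful_thinking.calibrated_confidence(0.9))   # ~0.27: same claim, poor record
```

This only operationalizes what "more or less certain" could mean relative to a track record; it does not by itself answer the objection that the track record is just another fallible belief with the same unknown variable attached.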
