causative

Model of an argument

I have the thought that an informal argument is fundamentally about building a justification graph: a directed acyclic graph from premise propositions to intermediate and conclusion propositions, where each proposition is justified by its parents in the graph according to some informal, malleable notion of defeasible justification.

The justification graph may be good or bad. It is bad if the premises or inferences are insufficiently justified. It is bad if it contains a contradiction, or if a contradiction can be derived from it using the same informal notion of justification. It is bad if some of the propositions in the graph can be defeated by the introduction of more evidence. It is also bad if the justification rules used in the graph are themselves insufficiently justified, or yield contradictions when applied to other topics.

When people argue, they are trying to show that their conclusion can be supported by a good justification graph, or that the justification graph the other person is building is bad. The point of making a good justification graph is to persuade the other person of the conclusion, so each of the premises and justifications in the graph should already be accepted by the other person; otherwise the graph lacks the power to persuade.

If we are concerned with making a persuasive argument, then we can reduce the question of whether a premise is justified to the question of whether the other person accepts that premise.

Let's use some notation. Say that A is a set of premise propositions and b is a conclusion proposition. J(A, b) is a true-or-false value saying whether a person considers it justified to conclude b from A in the absence of any other evidence. J is defeasible: if a is an additional premise, J(A, b) may be true while J(A ∪ {a}, b) is false.

An instance of "J(A, b) = true" can be called a "judgment" or "inference." It may be read as "b is justified by A," or "we judge b, based on A." Here b is the conclusion of the judgment, and A is its set of premises.

These judgments are not necessarily valid or reasonable, and may be self-contradictory; they appeal to a particular person at the time they were made, and that's all.

One person's J might not be the same as another person's J, or one person's J might vary as he learns more things. To reflect this variation we want to parameterize J. So now instead of J(A, b) we may say J(A, b ; θ) for some parameter vector θ. We could think of J like a neural network, θ being the weight vector. We may also think of θ as a person's "background beliefs" that they bring with them into the argument.

A proposition p that is accepted as a premise would be reflected in J as J({}, p ; θ) = true, i.e. it is justified to conclude p from the empty set.
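
As a toy sketch of these definitions (all judgments and proposition names here are hypothetical illustrations; in practice θ is mostly unknown and not enumerable), one can picture θ as an explicit table of the judgments a particular person accepts:

```python
# theta modeled as an explicit lookup table of accepted judgments,
# keyed by (frozenset of premises, conclusion). Toy data for illustration.
theta = {
    (frozenset(), "birds exist"): True,  # accepted premise: J({}, p ; theta) = true
    (frozenset({"tweety is a bird"}), "tweety flies"): True,
    (frozenset({"tweety is a bird", "tweety is a penguin"}), "tweety flies"): False,
}

def J(A, b, theta):
    """Does this person judge b to be justified by exactly the evidence A?"""
    return theta.get((frozenset(A), b), False)

# Defeasibility: adding a premise can flip a judgment from true to false.
assert J({"tweety is a bird"}, "tweety flies", theta)
assert not J({"tweety is a bird", "tweety is a penguin"}, "tweety flies", theta)
# An accepted premise is just a judgment from the empty evidence set.
assert J(set(), "birds exist", theta)
```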

We need a notion of contradiction between propositions. We might be able to derive this from J, but for now it's easier to use a second symbol C. C(a, b ; θ) = true if propositions a and b contradict, given the parameters θ.

A justification graph that would persuade someone with background beliefs θ is then an ordering of a set of propositions P, together with a set of inferences. Each inference is a pair (A, b), where A ⊂ P, b ∈ P, b occurs later in the ordering than every member of A, and J(A, b ; θ) is true. Each proposition b ∈ P must appear as the conclusion of at least one inference (the graph contains no unjustified propositions).
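
This definition can be checked mechanically. A minimal sketch, reusing the toy table representation of θ from above (a hypothetical simplification, since a real θ is not an enumerable table):

```python
def is_justification_graph(ordering, inferences, J, theta):
    """True iff (ordering, inferences) meets the definition: each inference
    (A, b) draws its premises A from strictly earlier propositions,
    J(A, b; theta) holds, and every proposition in the ordering is the
    conclusion of at least one inference."""
    pos = {p: i for i, p in enumerate(ordering)}
    concluded = set()
    for A, b in inferences:
        if b not in pos:
            return False
        if any(a not in pos or pos[a] >= pos[b] for a in A):
            return False
        if not J(A, b, theta):
            return False
        concluded.add(b)
    return concluded == set(ordering)

# Toy theta/J for illustration (hypothetical judgments):
theta = {
    (frozenset(), "p"): True,
    (frozenset({"p"}), "q"): True,
}
def J(A, b, theta):
    return theta.get((frozenset(A), b), False)

assert is_justification_graph(["p", "q"], [(set(), "p"), ({"p"}, "q")], J, theta)
# Putting "q" before "p" violates the ordering condition:
assert not is_justification_graph(["q", "p"], [(set(), "p"), ({"p"}, "q")], J, theta)
```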

A justification graph, combined with parameters θ, is bad if it leads to any contradiction. That is, if it is possible to extend the graph with additional judgments in such a way that the extended graph contains two propositions a, b where C(a, b ; θ) = true.

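A minimal sketch of this refutation-by-extension, assuming (unrealistically) that the relevant slice of θ is available as an enumerable table of judgments, with a toy contradiction relation C:

```python
from itertools import combinations

def find_contradiction(initial_props, J_table, C, theta):
    """Sketch of 'extending a graph into a contradiction': close the set
    of derived propositions under the (toy, enumerable) judgment table,
    then scan for a contradictory pair. Returns the pair, or None."""
    derived = set(initial_props)
    changed = True
    while changed:
        changed = False
        for (A, b), ok in J_table.items():
            if ok and b not in derived and set(A) <= derived:
                derived.add(b)
                changed = True
    for a, b in combinations(sorted(derived), 2):
        if C(a, b, theta):
            return (a, b)
    return None

# Hypothetical judgments revealed by someone's graph: p justifies q,
# and q justifies not-p.
J_table = {
    (frozenset({"p"}), "q"): True,
    (frozenset({"q"}), "not-p"): True,
}
C = lambda a, b, theta: {a, b} == {"p", "not-p"}  # toy contradiction relation

assert find_contradiction({"p"}, J_table, C, theta=None) == ("not-p", "p")
```
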
Here we must be careful about defeasibility as distinct from contradiction. J, remember, may include defeasible judgments where J(A ∪ {a}, b ; θ) = false while J(A, b ; θ) = true. We don't want to throw out a justification graph merely because we extended it with judgments, not in the original graph, that can be defeated. Let's say more specifically that evidence A2 defeats evidence A1 with respect to b1 and b2 if J(A2, b1 ; θ) = false, J(A1, b2 ; θ) = true, A1 ⊂ A2, and C(b1, b2 ; θ) = true. We may say that if we extend a justification graph to form a contradiction, it doesn't count unless none of the propositions used to extend the graph, beyond the initial propositions in the graph, can be defeated in this way.

The full θ is generally unknown in practice, but parts of it are revealed through the justification graph. If the justification graph contains a judgment that b follows from A (given θ), we might use this information about θ to extend the graph into a contradiction, thus refuting it. For example, if someone reasons fallaciously, then we know their θ allows the fallacious inference. We can then apply the fallacy to a different context to produce a contradiction, thereby refuting the θ that allowed the fallacious inference.

When a justification graph persuasive to someone with beliefs θ is contradictory, then we understand the person needs to revise their θ so that the contradictory justification graph cannot be formed. I don't describe how this revision might happen.

Is this a good model of how rational argument works, or ought to work? Are there any elements of rational argument that cannot be fit into this structure?