  • These are some good points. (1) Degrees of credence can be modeled as propositions that express the degree, e.g. P = "I am highly confident that the sky is cloudy." Then we just assert P. So there's no need for explicit probability in the argument model, only inside the propositions. (2) "X is likely" can be judged as standing in plain black-and-white contradiction with "X is unlikely." (3) Ultimately we either state a proposition in the argument or we don't state it. That's binary even if the beliefs themselves are a matter of degree.
    – causative
    Commented Sep 19, 2023 at 15:44
  • (4) Despite (1)-(3), it might be a good idea to have explicit probability in the argument model (as well as fuzzy logic) and partial contradiction, but how? Most importantly, when can the person judge that the argument is "flawed enough" that their θ must be revised? That's ultimately what it's all about. (5) Yes, propositions are defeasible; I mentioned how that fits in. A proposition is defeated if its contradiction can be obtained via a judgment based on a superset of the premise set for the proposition [see the first sketch after these comments]. (6) θ does do a lot of work; it's basically "your whole mind." That seems okay to me.
    – causative
    Commented Sep 19, 2023 at 15:51
  • (7) "The distinction between an unjustified proposition and a proposition whose justification proceeds from the empty set seems a rather fine one" - I mentioned, "If we are concerned with making a persuasive argument, then we can reduce the question of whether a premise is justified, to the question of whether the other person accepts that premise." The idea here is that we don't worry about ultimate epistemic justification, only about whether our argument persuades the other person. So if they accept the premise (via J({}, b, θ)), that's justified enough for the argument to work [see the second sketch after these comments].
    – causative
    Commented Sep 19, 2023 at 15:53
  • My point about θ doing a lot of work is that without some content, it is not useful. It would be like saying, "I have an equation that describes everything that happens in the universe: it's Φ = 0." The value of a successful model is that it exhibits the component parts, and the relations between them, in enough detail to be highly informative.
    – Bumble
    Commented Sep 19, 2023 at 16:47
  • Well, I'm thinking of θ as something like the weights of a neural network. In ML it's fine to represent that with a single symbol. It's a black box, not internally understandable, whose meaning is revealed only by plugging in inputs and seeing what comes out (in this case, J() and C() are what come out). I can imagine some neural network actually being used to produce judgments, evaluate arguments, and revise its parameters, based on something like this model. To do that, the θ-revision process would need to be fleshed out, probably involving some sort of error function and gradient descent [see the last sketch after these comments].
    – causative
    Commented Sep 19, 2023 at 16:53
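
A minimal sketch of the defeat condition in comment (5), assuming J(premises, p, θ) is a boolean judgment function, C(p) returns the contradiction of p, and the judge can draw extra premises from some available pool. The names is_defeated, candidate_supersets, and pool are illustrative, not part of the original model.

    from itertools import combinations

    def candidate_supersets(premises, pool):
        """Every premise set that contains `premises`, extended with members drawn from `pool`."""
        extras = [x for x in pool if x not in premises]
        for r in range(len(extras) + 1):
            for combo in combinations(extras, r):
                yield set(premises) | set(combo)

    def is_defeated(p, premises, pool, J, C, theta):
        """p is defeated if some superset of its premise set yields a judgment of its contradiction."""
        return any(J(S, C(p), theta) for S in candidate_supersets(premises, pool))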
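
A hedged illustration of the reduction in comment (7), again treating J as boolean and modeling "the other person accepts b" as J({}, b, θ_other); the helper name is_persuasive is mine.

    def is_persuasive(premises, conclusion, J, theta_other):
        """The argument works on the other person if they accept every premise from the
        empty premise set and judge that the conclusion follows from those premises."""
        accepts_premises = all(J(set(), b, theta_other) for b in premises)
        return accepts_premises and J(set(premises), conclusion, theta_other)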
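
Finally, a toy version of the neural-network reading of θ in the last comment: θ as a parameter vector, J as a sigmoid score over a crude encoding of (premises, proposition), and θ-revision as one gradient-descent step on a squared-error function. The encoding, architecture, and loss are all assumptions made for illustration, not part of the model in the post.

    import numpy as np

    def encode(premises, proposition, dim=16):
        """Toy encoding: hash each statement into a fixed-length count vector.
        (Purely illustrative; a real system would use a learned encoder.)"""
        v = np.zeros(dim)
        for text in list(premises) + [proposition]:
            v[hash(text) % dim] += 1.0
        return v

    def J(premises, proposition, theta):
        """Judgment as a sigmoid score over the encoding; read > 0.5 as "judged to follow"."""
        x = encode(premises, proposition, dim=len(theta))
        return 1.0 / (1.0 + np.exp(-x @ theta))

    def revise_theta(theta, premises, proposition, target, lr=0.1):
        """One gradient-descent step on the squared error between J's score and the target
        (target = 1.0 if the judgment should be accepted, 0.0 if it should be rejected)."""
        x = encode(premises, proposition, dim=len(theta))
        score = 1.0 / (1.0 + np.exp(-x @ theta))
        grad = 2.0 * (score - target) * score * (1.0 - score) * x   # chain rule through the sigmoid
        return theta - lr * grad

    theta = np.random.default_rng(0).normal(size=16)
    theta = revise_theta(theta, {"The sky is cloudy"}, "It may rain", target=1.0)

On this reading, the "flawed enough" question in comment (4) might correspond to taking a revision step only when the error term crosses some threshold, though that goes beyond what the comments themselves say.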