
This question is cross-posted here on the advice of the statisticians' Stack Exchange site - see also https://stats.stackexchange.com/questions/178857/bayesians-positions-on-inductive-skepticism

Philosopher Marc Lange gives an overview (pdf) of the debate on Hume's problem of induction. Chapter 9 (starting on p. 80) is called "Bayesian approaches". As I understand it: the justification for induction might be the updating of beliefs from a Bayesian point of view. Lange continues with a fictional dialogue between a Bayesian (B) and an inductive skeptic (S), which I summarize:

B: If you admit that Bayesian approaches are valid, what kind of prior do you suggest that fundamentally makes the updating of beliefs a non-justification of induction?

S: Any distribution with "no degree of confidence to which we are entitled regarding predictions regarding unexamined cases" (Lange), where "no degree of confidence" means not the value zero but no value at all [e.g. a NULL in the statistical programming language R].

B: Such a prior violates the probability axioms - it is not a distribution [and is not implementable in R either].
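B's point can be made concrete with a minimal sketch (my own illustration, not from Lange): Bayes' rule is an arithmetic operation on a numeric prior, so a "no value at all" prior leaves the update simply undefined. The function name and numbers here are hypothetical.

```python
# Sketch: Bayes' rule P(H|E) = P(E|H) * P(H) / P(E) needs a numeric prior.
# The skeptic's 'NULL' prior (None) makes the update undefined.

def bayes_update(prior, likelihood, evidence):
    """Return the posterior P(H|E) given numeric inputs."""
    if prior is None:
        raise ValueError("Bayes' rule is undefined without a numeric prior")
    return likelihood * prior / evidence

print(bayes_update(0.5, 0.8, 0.6))  # a well-defined update

try:
    bayes_update(None, 0.8, 0.6)    # the skeptic's 'no value at all' prior
except ValueError as err:
    print(err)
```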

My questions are:

Does B's last claim reflect the working Bayesian's position?

Can the skeptic S, by her construction of a prior distribution, consistently defend her skeptical position while still accepting Bayesian techniques?

2 Answers


The way I could see S being consistent is by distinguishing the cases where a prior cannot be defined from the cases where it can. Only in the latter cases would he consider Bayesian approaches valid. This would amount to S "solving" the problem of induction by delineating the cases where (Bayesian, statistical) induction is justified from those where it is not.

  • please check my comment on @Bumble's answer, I tried to combine your views in the light of Popper's approach
    – Statos
    Commented Nov 4, 2015 at 20:01
  • @Statos in the original question, the phrase "still including the acceptance of Bayesian techniques", given the rest of the question, implied to me that a close relationship between Bayesian updating and induction was assumed.
    – Dave
    Commented Nov 4, 2015 at 21:11
  • yes, discussing a potentially close relationship and even justification was intended (and discussed in Lange's paper)
    – Statos
    Commented Nov 5, 2015 at 7:31

Lange discusses several approaches to justifying induction. The specific one referred to here is that Bayesian reasoning provides us with a mechanism for updating our credences in the light of new information and hence this constitutes a form of inductive reasoning. David Stove in his book The Rationality of Induction takes the argument even further and claims that an inductive skeptic would in effect be committed to believing that for any hypothesis H and evidence E, P(H | E) = P(H), which is absurd.
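Stove's point is easiest to see numerically. Here is a toy calculation (my own numbers, for illustration only) showing that, on any ordinary assignment of likelihoods, conditioning on evidence moves the credence away from the prior - which is exactly what the skeptic, on Stove's reading, must deny.

```python
# Toy Bayes update: P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]

p_h = 0.3            # prior credence in hypothesis H
p_e_given_h = 0.9    # likelihood of evidence E if H is true
p_e_given_not_h = 0.2  # likelihood of E if H is false

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of E
posterior = p_e_given_h * p_h / p_e

print(posterior)  # ~0.659: the evidence raises the credence from 0.3
```

The skeptic's position P(H | E) = P(H) would require the likelihoods for H and not-H to be exactly equal for every hypothesis and every piece of evidence.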

Lange now considers a counter argument from the skeptic. Can one retain a commitment to Bayesian updating but deny that it leads in general to justifiable credences because Bayesian updating requires that one start from priors, and there is no objectively neutral way to assign priors in the absence of any information?

Lange for some reason doesn't say much here about uninformative priors. It is a commonplace in Bayesianism that one must choose one's priors carefully in order to be 'epistemically modest' - that is, to avoid making assumptions for which one has no evidence and hence ensure one's priors are maximally uncertain or equivocal. There has been quite a lot of work in this area and it is surprising that Lange makes no reference to Jeffreys priors, or to Edwin Jaynes' approach using the maximum entropy principle, or to formal approaches to defining equivocation using Kullback–Leibler divergence.
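As a concrete instance of an epistemically modest prior: for a Bernoulli (coin-flip) parameter, the Jeffreys prior is the Beta(1/2, 1/2) distribution, and conjugacy makes the update a one-line calculation. The observed counts below are hypothetical.

```python
# Jeffreys prior for a Bernoulli parameter: Beta(1/2, 1/2).
# Beta-Bernoulli conjugacy: posterior is Beta(a + successes, b + failures).

a, b = 0.5, 0.5              # Jeffreys prior parameters
successes, failures = 7, 3   # hypothetical observed data

a_post, b_post = a + successes, b + failures
posterior_mean = a_post / (a_post + b_post)

print(posterior_mean)  # 7.5 / 11, about 0.68
```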

Given that (I would say) there are pretty good ways of assigning objective priors, at least in a wide variety of typical cases, the skeptic is checkmated here. They can of course refuse to assign any value to a prior, but such a refusal amounts to declining to allow that credences can be represented as probabilities. In effect that would be avoiding checkmate by saying, "I refuse to play any more and I'm going home."

Not only that, but convergence results have demonstrated that under a wide range of plausible conditions, posteriors will converge when updated with information from a large number of independent trials. This means that even in the absence of objectively agreed priors, one can still progress, inductively, to agreement on a posterior given enough evidence.
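A small simulation (my own, with made-up parameters) illustrates the convergence point: two agents starting from very different Beta priors, updated on the same large sample of Bernoulli trials, end up with nearly identical posterior means.

```python
# Posterior convergence: different priors, same large data set.

import random

random.seed(0)
theta = 0.7            # true (unknown to the agents) success probability
n = 10_000             # number of independent trials
heads = sum(random.random() < theta for _ in range(n))

# Two agents: a Jeffreys prior vs. a strongly opinionated prior.
priors = [(0.5, 0.5), (20.0, 2.0)]
means = [(a + heads) / (a + b + n) for a, b in priors]

print(means)  # the two posterior means differ by a tiny fraction
```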


Update in response to a further question from Statos: The main point here is that Bayesian updating is a process in which adding information, in the form of evidence terms E, allows one to replace a prior P(H) with a posterior P(H | E) and so to progress towards an assessment of the probability of H that reliably reflects the evidence. That is what inductive reasoning is all about. The skeptic is committed to denying that this process works. One way they could do that is to say that a different choice of priors would lead to a different posterior, hence the issue of objective priors.

I said above that for a wide variety of typical cases there are some pretty good ways of assigning objective priors. This does not contradict the response on Cross Validated that there is no completely general way to assign priors that assumes no information at all. If you would like some more reading on this, there is a good paper here, and you might also like Jon Williamson's book, In Defence of Objective Bayesianism.

  • do I understand you right, that you basically view Bayesian updating on grounds of objective priors as a justification of induction? See stats.stackexchange.com/questions/20520/… for a view on objective priors. Can we say (see also @Dave's answer) that we sometimes choose "subjective" priors and are actually still in Popper's framework - and sometimes choose "objective" priors and maybe leave his framework because of a more general justification of induction?
    – Statos
    Commented Nov 4, 2015 at 20:00
