Lange discusses several approaches to justifying induction. The specific one referred to here is that Bayesian reasoning provides a mechanism for updating our credences in the light of new information, and hence constitutes a form of inductive reasoning. David Stove, in his book The Rationality of Induction, takes the argument further and claims that an inductive skeptic is in effect committed to believing that for any hypothesis H and evidence E, P(H | E) = P(H), which is absurd.
Lange now considers a counterargument from the skeptic: can one retain a commitment to Bayesian updating but deny that it leads, in general, to justifiable credences, on the grounds that Bayesian updating requires one to start from priors, and there is no objectively neutral way to assign priors in the absence of any information?
Lange, for some reason, doesn't say much here about uninformative priors. It is a commonplace of Bayesianism that one must choose one's priors carefully in order to be 'epistemically modest' - that is, to avoid making assumptions for which one has no evidence, and hence to ensure one's priors are maximally uncertain or equivocal. There has been a good deal of work in this area, and it is surprising that Lange makes no reference to Jeffreys priors, to Edwin Jaynes' approach using the maximum entropy principle, or to formal approaches to defining equivocation using Kullback–Leibler divergence.
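To make the maximum entropy idea concrete, here is a minimal sketch of the simplest finite case. The distributions and the four-outcome setup are made up purely for illustration: with no constraints on a variable with n possible outcomes, the uniform assignment maximises Shannon entropy, which is one precise way of cashing out 'maximally equivocal'.

```python
import math

def entropy(p):
    # Shannon entropy H(p) = -sum p_i log p_i (in nats)
    return -sum(x * math.log(x) for x in p if x > 0)

def kl(p, q):
    # Kullback-Leibler divergence D(p || q), a measure of how far
    # p departs from q (zero iff p equals q)
    return sum(x * math.log(x / y) for x, y in zip(p, q) if x > 0)

# Four possible outcomes, no background information at all
uniform = [0.25] * 4          # the 'epistemically modest' assignment
skewed = [0.7, 0.1, 0.1, 0.1] # an assignment smuggling in an assumption

# The uniform prior has strictly higher entropy than any alternative...
assert entropy(uniform) > entropy(skewed)
# ...and, equivalently, zero KL divergence from the equivocal measure
assert kl(uniform, uniform) == 0 and kl(skewed, uniform) > 0
```

Jaynes' principle generalises this: when one does have partial information (say, a known mean), one picks the distribution of maximum entropy among those consistent with it.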
Given that (I would say) there are pretty good ways of assigning objective priors, at least in a wide variety of typical cases, the skeptic is checkmated here. They can of course refuse to assign any value to a prior, but such a refusal amounts to declining to allow that credences can be represented as probabilities. In effect that would be avoiding checkmate by saying, I refuse to play any more and I'm going home.
Not only that, but convergence results have demonstrated that, under a wide range of plausible conditions, posteriors will converge when updated by information from a large number of independent trials. This means that even in the absence of objectively agreed priors, one can still progress, inductively, to agreement on a posterior given enough evidence.
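This 'washing out of the priors' is easy to demonstrate in a toy case. The following sketch uses made-up numbers: two agents start with wildly opposed Beta priors about a coin's bias and both update, by conjugacy, on the same long run of tosses from a coin biased at 0.7. Their posterior means end up practically indistinguishable.

```python
import random

def beta_posterior_mean(alpha, beta, data):
    # Beta(alpha, beta) prior on a coin's bias with Bernoulli trials;
    # by conjugacy the posterior is Beta(alpha + heads, beta + tails),
    # whose mean is (alpha + heads) / (alpha + beta + n)
    heads = sum(data)
    return (alpha + heads) / (alpha + beta + len(data))

random.seed(0)
data = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]

optimist = beta_posterior_mean(50, 1, data)   # prior mean ~ 0.98
pessimist = beta_posterior_mean(1, 50, data)  # prior mean ~ 0.02

# After 10,000 tosses, both posterior means sit near the true bias
# of 0.7, and the gap between the two agents has all but vanished
assert abs(optimist - pessimist) < 0.02
```

The disagreement that remains shrinks roughly in proportion to 1/n, which is the informal content of the convergence theorems.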
Update in response to further question from Stratos:
The main point here is that Bayesian updating is a process in which adding information, in the form of evidence terms E, allows one to replace a prior P(H) with a posterior P(H | E), and so to progress towards an assessment of the probability of H that reliably reflects the evidence. That is what inductive reasoning is all about. The skeptic is committed to denying that this process works. One way they could do that is to say that a different choice of priors would lead to a different posterior; hence the issue of objective priors.
I said above that, for a wide variety of typical cases, there are some pretty good ways of assigning objective priors. This doesn't contradict the response on Cross Validated that there is no completely general way to assign priors that assumes no information at all. If you would like some more reading on this, there is a good paper here, and you might also like Jon Williamson's book, In Defence of Objective Bayesianism.