This answer should be read as a kind of extended comment on alanf's answer, which I broadly agree with, but would like to qualify. Deutsch argues that probabilities can be eliminated from physical theories, in other words, that we have no need of stochastic processes in a physical theory. In particular, he is concerned to maintain that quantum theory, which has often been interpreted to involve fundamental indeterminacies, can be understood instead as a deterministic account of how particles and fields behave across a multitude of worlds, according to the many-worlds interpretation.
Deutsch then proceeds to dismiss the epistemic notion of credences, but this need not follow from the rejection of physical probabilities. We, as cognitive agents with limited and imperfect capabilities, never possess complete information about anything. Whenever we make decisions, which is all the time, we make them under uncertainty, and unless we have some way of quantifying that uncertainty, we will be prone to making bad decisions. This is why probabilities show up in decision theory: it does not mean we are making decisions about stochastic events, merely that we are making decisions with incomplete or imperfect information. The probabilities are there to quantify the uncertainty. Bruno de Finetti showed, using Dutch book arguments, how we can start from a very innocuous and plausible notion of what constitutes a bad or irrational decision and derive probability theory from it. Others, including Richard Cox and Edwin Jaynes, have shown how a primitive notion of plausible inference can be used to derive the rules of probability.
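To make the Dutch book idea concrete, here is a minimal sketch (my own illustration, not de Finetti's formalism) of how incoherent credences expose an agent to a sure loss; the event names and stakes are arbitrary:

```python
# A minimal sketch of a Dutch book: an agent whose credences in an event
# and its complement sum to more than 1 violates the additivity axiom,
# and a bookmaker can exploit this for a guaranteed profit.

def dutch_book_loss(cred_a: float, cred_not_a: float, stake: float = 1.0) -> float:
    """Guaranteed loss for an agent who buys a bet on A and a bet on
    not-A, each priced at credence * stake.

    Each bet pays `stake` if its event occurs and 0 otherwise; an agent
    who treats their credences as fair prices accepts both purchases.
    Exactly one of A and not-A occurs, so the total payout is `stake`.
    """
    total_price = (cred_a + cred_not_a) * stake  # what the agent pays up front
    total_payout = stake                         # exactly one bet pays out
    return total_price - total_payout            # positive means a sure loss

# Incoherent credences: P(rain) = 0.6 and P(no rain) = 0.6 sum to 1.2,
# so the agent loses 0.2 per unit stake whatever the weather does.
print(f"{dutch_book_loss(0.6, 0.6):.2f}")  # 0.20

# Coherent credences summing to 1 admit no such guaranteed loss.
print(f"{dutch_book_loss(0.6, 0.4):.2f}")  # 0.00
```

De Finetti's result runs this in reverse: the only credence assignments immune to such sure-loss books are exactly those that obey the axioms of probability.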
The upshot of this is that inductive reasoning using epistemic probabilities is very much alive and well in statistical practice, particularly in the machine learning and artificial intelligence domains. As to your specific question of what kinds of tests statisticians perform, there is no general agreement about methodology. The three main camps are classical (frequentist), Bayesian, and likelihoodist. The classical approach broadly involves forming null hypotheses, designing experiments to test them, and rejecting the null hypothesis if the results are statistically significant (Fisher), or testing hypotheses according to their false positive and false negative error rates (Neyman and Pearson). Bayesians specify prior probability distributions over hypotheses and use data to update them into posterior distributions. Likelihoodists take two rival hypotheses and calculate a likelihood ratio that indicates which hypothesis the data support relative to the other.
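To illustrate how these camps diverge in practice, here is a toy Python sketch of my own using scipy. The data (60 heads in 100 coin tosses) and the rival value p = 0.6 in the likelihood comparison are assumptions purely for illustration:

```python
# A toy comparison of the three camps on the same data: 60 heads in
# 100 tosses of a possibly biased coin. H0: p = 0.5 (fair coin);
# the rival hypothesis H1: p = 0.6 is chosen purely for illustration.
from scipy import stats

heads, n = 60, 100

# Classical (Fisher): compute a two-sided p-value under H0 and reject
# the null hypothesis if it falls below a significance threshold.
p_value = stats.binomtest(heads, n, p=0.5).pvalue
print(f"p-value under H0 (p = 0.5): {p_value:.3f}")

# Bayesian: start from a uniform Beta(1, 1) prior on p and update on
# the data; by conjugacy the posterior is Beta(1 + heads, 1 + tails).
posterior = stats.beta(1 + heads, 1 + (n - heads))
lo, hi = posterior.interval(0.95)
print(f"Posterior mean: {posterior.mean():.3f}, "
      f"95% credible interval: ({lo:.3f}, {hi:.3f})")

# Likelihoodist: the likelihood ratio of H1 to H0; values above 1 mean
# the data support H1 relative to H0 (here roughly 7.5).
lr = stats.binom.pmf(heads, n, 0.6) / stats.binom.pmf(heads, n, 0.5)
print(f"Likelihood ratio L(0.6) / L(0.5): {lr:.2f}")
```

Nothing in the output settles which methodology is right: each camp reads the same numbers through its own inferential standard, which is precisely the point about the lack of general agreement.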