Bayes
If you take the Bayesian view of the world, all knowledge is inherently statistical. It's just that some knowledge involves statistics at the extremes (probabilities 0 and 1). Human brains seem to be wired for what psychologists call "folk theories". We have "folk physics", "folk biology", "folk chemistry", etc. These are what many people call "intuition". You don't need to teach a child algebra or ballistics to teach them how to catch a baseball. After observation and many attempts, they learn automatically. And the way they learn is inherently statistical.
That's because the learning machine is literally a brain in a vat. It's a little 3 lb. computer swimming in a pool of cerebrospinal fluid, and it's doing its business entirely on the basis of electrical impulses coming in from the rest of the body. When this brain is first trying to catch a ball, it sends signals to the muscles, which cause body motions that may or may not get the body closer to catching it. When it succeeds, the signal pathways involved are strengthened. When it fails, they are weakened. But at no point does the brain start with a precise mathematical description of ballistic trajectories and then issue commands to the muscles in accordance with precise geometry and mechanics. Rather, the brain just issues motor patterns similar to those from other scenarios in which a comparable task was performed successfully. When the brain has no experience, it just issues motor commands randomly, because that is what its structure is primed to do (meaning, the neural connections themselves are created randomly). That's why babies spend a lot of time just moving their limbs around aimlessly. They are learning to control them.
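The strengthen-on-success, weaken-on-failure rule described above can be sketched as a toy trial-and-error loop. This is an illustration, not a model of real neurons; the commands, weights, and update factors are all invented:

```python
import random

# Toy version of "strengthen pathways that succeed, weaken ones that fail".
# Two candidate motor commands; the brain starts with no preference.
weights = {"reach_left": 1.0, "reach_right": 1.0}

def choose(weights):
    # Pick a command with probability proportional to its weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for cmd, w in weights.items():
        r -= w
        if r <= 0:
            return cmd
    return cmd

random.seed(0)
for _ in range(1000):
    cmd = choose(weights)
    success = (cmd == "reach_right")  # pretend the ball is always on the right
    if success:
        weights[cmd] *= 1.05   # strengthen the pathway
    else:
        weights[cmd] *= 0.95   # weaken it

# After many trials, the successful pathway dominates.
print(weights["reach_right"] > weights["reach_left"])
```

No table of trajectories, no mechanics: just a statistical preference that accumulates from feedback.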
Eventually, the brain encounters many examples of ballistic motion, and these experiences implicitly form the folk physics that it uses going forward. If you toss an apple to that brain's owner as an adult, and they deftly catch it absent-mindedly, and you ask, "How were you able to catch that apple?", they will say: "I didn't even think about it. I just knew how to do it. Catching things is just intuitive." But "intuitive" just means "learned so well it no longer rises to the level of conscious experience", which is why we can't rationalize intuition. The reasons motivating it are lost to us, because they are no longer necessary.
Science
What science does is simply formalize the learning process to make things explicit. Newtonian physics would not be possible without folk physics. A scientist locked in a room, strapped to a chair with no objects to interact with would almost certainly not derive a system of mechanics. All knowledge is based on observation, and all models are a compression of those observations into a more compact representation. That is what folk physics is, after all: a compressed representation of the prediction of the next observation of a ballistic object in motion. It would simply not do to generate all possible ballistic trajectories and objects and store them in one's brain, then consult this table to decide where the object will land. That would take up too much space.
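The compression point can be made concrete. A thrown object's height over time can be stored as a hundred raw measurements, or as three polynomial coefficients that reproduce (and extrapolate) all of them. A minimal sketch with synthetic, noise-free data (the numbers are invented for illustration):

```python
import numpy as np

# Synthetic observations of a ballistic trajectory: height sampled 100 times.
t = np.linspace(0, 2, 100)
h = 1.5 + 10 * t - 4.9 * t**2          # the "world" generating the data

# The "model": 3 numbers instead of 100 stored observations.
coeffs = np.polyfit(t, h, deg=2)

# The model reproduces every observation it compressed...
print(np.allclose(np.polyval(coeffs, t), h))
# ...and predicts an observation we never stored.
print(np.polyval(coeffs, 1.23))
```

The lookup table of all possible trajectories is replaced by a rule that regenerates any of them on demand, which is the whole trade.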
So science is ultimately about patterns: things that occur more than once. The Mongol invasion of Asia is not a scientific phenomenon because it does not happen regularly; it's a one-time historical event. Science is also about objective phenomena: art criticism is not a scientific endeavor because it depends partly on the subjective feelings of the viewer, and scientists cannot observe or measure these feelings, only the way the viewer reports them (which is notoriously unreliable). And finally, science is about claims that can be wrong: religion is in fundamental tension with science because it makes claims that must be accepted and cannot be challenged.
That is why experiment is central to science: it is the test which tells us if a scientific claim is wrong. The problem is that an experiment can only tell us if a claim is wrong. It cannot tell us if a claim is right. That's because the positive outcome might have occurred for a reason having nothing to do with the model. And thus, the most strictly accurate statement one can make about a scientific claim is: "The probability that the hypothesis is wrong given this positive experimental outcome is less than epsilon." When scientists insist that some demonstration proves a model beyond all doubt, what they are really claiming is: "I believe that if this experiment were repeated an arbitrary number of times, you would get the same result." At the end of the day, it's all about making predictions. And experiments are the tailor-made events designed to test those predictions.
But a prediction is ultimately a single event in spacetime. Just because you predicted one event doesn't mean you can predict 2 or 1,000 or a googol. Each successful prediction increases the confidence in your model, which is why Newtonian mechanics is regarded as "true" in its domain of applicability: it has been validated so many times in so many ways that physicists would be shocked to observe a bona fide violation. Basically all of human technology depends on it.
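The claim that each successful prediction increases confidence is literally a Bayesian update. A toy sketch, where every number is an assumption chosen for illustration: suppose the model, if right, predicts an experiment's outcome with probability 0.9, while a wrong model would match the outcome only by chance, with probability 0.5.

```python
# Repeatedly update P(model is right) after each successful prediction.
# All numbers are illustrative assumptions.
p_success_if_right = 0.9   # the model predicts the outcome when it is right
p_success_if_wrong = 0.5   # a wrong model matches the outcome by luck
posterior = 0.5            # start agnostic

for n in range(1, 11):
    # Bayes' rule: P(right | success) =
    #   P(success | right) * P(right) / P(success)
    numerator = p_success_if_right * posterior
    evidence = numerator + p_success_if_wrong * (1 - posterior)
    posterior = numerator / evidence
    print(n, round(posterior, 4))

# Confidence climbs toward 1 but never reaches it:
# epsilon shrinks, it never vanishes.
```

Ten successes in a row and the posterior sits above 0.99, which is why a bona fide violation of Newtonian mechanics would be so shocking: the accumulated odds against it are enormous.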
Model Building
When people say that science is more than statistics, they are objecting to the implication that nothing is going on beyond mere bean counting. The process of building a model feels like high art, and for some models that is indeed a fair description. But at the end of the day, a hypothesis is just a compression of a large number of observations. The trick is that the model does not represent the entire observation. It only represents some particular dimensions of interest. Science is as much about choosing what to ignore as it is about choosing what to observe. And deciding such things seems more sophisticated than mere counting. Or is it? Every set of observations contains an innumerable number of irrelevant details. Surely picking the interesting observational dimensions requires deep intelligence? Perhaps not.
At the end of the day, observations must be measured, and there are only so many dimensions we can measure reliably. Thus, our models are constrained to things we can measure in one way or another. When we try to determine how fast a horse can run, we don't bother to measure the effect of the movement of individual tail hairs, because the effect is small, and because we don't have a feasible way to measure that even if we wanted to. In many cases, the relevant dimensions to include in the model are somewhat obvious because there are only a handful of practically measurable observables.
And even if we naively recorded every dimension we can measure precisely and reliably, it would still be possible to infer the most predictive dimensions...statistically. Principal Components Analysis does exactly this. It is not beyond reason to think that a robot scientist could simply collect all the data without a starting hypothesis, perform an analysis, determine that several dimensions are strongly predictive, and then build an obvious and straightforward model based on those dimensions. On some level, this is what human scientists are doing, too. They just aren't being honest about it. The PCA is running in their own heads, below the level of consciousness. So even the process of building a model is ultimately statistical in nature.
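Here is a minimal sketch of that idea, using the SVD (which is what PCA computes under the hood) on synthetic data where only two of five measured dimensions carry real variation. The dataset and all of its numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic dataset: 5 measured dimensions, but only the first 2 vary much;
# the other 3 are tiny noise (the "movement of individual tail hairs").
signal = rng.normal(size=(n, 2)) * [5.0, 3.0]
noise = rng.normal(size=(n, 3)) * 0.1
X = np.hstack([signal, noise])

# PCA via SVD on the centered data.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first two components carry essentially all the variance,
# with no starting hypothesis required.
print(np.round(explained, 4))
```

The robot scientist reads off the top components, discards the tail hairs, and builds its model on what remains.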
Logic
Now, saying that "science is just statistics" is being too glib. There is an entire branch of "data science" that deals with the special edge cases of statistics: things that are always true or always false. And we call that field "Logic". Logic is powerful because it enables deduction. If A implies B and B implies C, then A implies C. But if A implies B 20% of the time, and B implies C 15% of the time, does that mean that A implies C 3% of the time? No. It depends on which cases of B imply C, because there may be a correlation which makes the A->C implication larger or smaller. Trying to do inference over uncertainty is difficult in the best of circumstances.
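The failure of naive chaining can be shown with a concrete toy population (the counts below are invented so the arithmetic comes out in round numbers): P(B|A) is 20% and P(C|B) is 15%, yet P(C|A) is 15%, not 3%, because among the Bs, C happens to cluster with A.

```python
# A population where P(B|A) = 20% and P(C|B) = 15%, yet P(C|A) != 3%.
# Construction (illustrative): 100 individuals with A, of whom 20 have B;
# 80 more individuals with B but not A; C holds exactly for 15 of the
# A-and-B individuals, i.e. C is correlated with A among the Bs.
population = (
    [{"A": True,  "B": True,  "C": True}]  * 15 +
    [{"A": True,  "B": True,  "C": False}] * 5  +
    [{"A": True,  "B": False, "C": False}] * 80 +
    [{"A": False, "B": True,  "C": False}] * 80
)

def p(event, given):
    # Conditional probability P(event | given) by counting.
    cond = [x for x in population if given(x)]
    return sum(event(x) for x in cond) / len(cond)

print(p(lambda x: x["B"], lambda x: x["A"]))  # 0.2
print(p(lambda x: x["C"], lambda x: x["B"]))  # 0.15
print(p(lambda x: x["C"], lambda x: x["A"]))  # 0.15, not 0.03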
Quite a bit of model building does not entail observations themselves, but rather combining other models together. And in this case, we can use logic to constrain how the unified model must look, thus allowing us to eliminate a large number of candidates without running a single experiment. I think this process is part of why scientists bridle when you suggest that science = statistics. A pedant could argue that logic is just "degenerate statistics" because it's just statistics over the probabilities {0, 1}, and you could certainly derive logic from that foundation if you so desired. But emergent phenomena are things that can only exist at a particular level of complexity, and logic is one of those things. Framing it in terms of statistics is not helpful, which is why nobody does it.
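For what it's worth, the pedant's derivation is short: restrict probabilities to {0, 1} and the product rule becomes AND, complementation becomes NOT, and inclusion-exclusion becomes OR. A sketch (the function names are mine):

```python
# Boolean logic recovered as statistics over the degenerate
# probabilities {0, 1}.
def AND(p, q): return p * q          # product rule (independence is moot at 0/1)
def OR(p, q):  return p + q - p * q  # inclusion-exclusion
def NOT(p):    return 1 - p          # complement

# The truth tables fall out of the arithmetic.
for p in (0, 1):
    for q in (0, 1):
        print(p, q, "->", AND(p, q), OR(p, q), NOT(p))
```

It works, and nobody teaches logic this way, which is the point: the derivation is possible, but the emergent framing is the useful one.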
Conclusion
To the extent that science is about building models that stand up to experimental test, I think it's fair to say that science is statistical. The fact that the process of science involves some things that are not statistical in nature (like using logic to build models) means that it is perhaps too strong to say that science is entirely based on statistics. But if you absolutely wanted to insist that it is, and you are not embarrassed by an excessive degree of pedantry, then you could probably make this case.