Let us clarify some terms, so we do not turn our ankle in some linguistic rabbit-hole before we start. All experimental science used to be called 'Natural Philosophy'. Modern usage re-labels this as 'Science', suggesting that the rest of philosophy may be something else. If science is observation and experiment, perhaps philosophy is - or at least includes - the study of these experimental methods themselves.
What do we know, and how do we know it? We have the word of other people, which we should not wholly trust. We can do experiments ourselves. We can assume the 'tidiest' or 'most probable' explanation for the parts of the universe we cannot measure. As experiments advance, we modify our views accordingly. This has been a dramatic part of astronomy in my lifetime. But this all relies on human intelligence, which is fallible. It has trouble adapting to new ideas, such as quantum physics. It tends to reject or not process the unfamiliar. Are there other things that our human bias does not allow us to see?
From the earliest days of computer science, people believed a machine could be intelligent if it were sufficiently complicated and fed with enough data. Recent machine learning models are just a collection of simple pattern-matching algorithms, but with more layers they show some intriguing abilities. They are not intelligent as I write this, but they raise our expectations of what machines can do, and possibly lower our reverence for our own smartness. When ML fails through under-training or over-training, we see parallels in our own understanding: we seize on some observations and reject others. With ML we can control the training, and have a true experiment rather than random anecdotes.
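Under- and over-training can be demonstrated in a few lines. The following is a minimal sketch, not any particular ML system: it fits polynomials of increasing degree to noisy data with NumPy. A degree that is too low misses the pattern (under-training); a degree that is too high memorises the noise, scoring well on the training points but badly on fresh ones (over-training). All names and numbers here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine wave, with separate training and test points.
x_train = np.linspace(-1, 1, 20)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(-0.95, 0.95, 20)
y_test = np.sin(3 * x_test) + rng.normal(0, 0.2, x_test.size)

def fit_error(degree):
    """Fit a polynomial of the given degree; return (train, test) RMS error."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_err = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train_err, test_err

under = fit_error(1)    # too simple: misses the curve entirely
good = fit_error(5)     # about right for this wiggle
over = fit_error(15)    # enough freedom to memorise the noise

print("under:", under)
print("good: ", good)
print("over: ", over)
```

Because we control every step of the 'training', the failure modes are reproducible on demand, which is exactly the experimental handle we lack when studying our own seize-and-reject habits.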
I note 'experimental philosophy' is already a term relating to thoughts on the experimental method. This is worthy, but not quite what I had in mind. I wonder, rather, whether the giant AI machines of Google and friends may become the Large Hadron Collider of Philosophy.
A few follow-up notes:
I skipped a lot on neural nets. Modern neural nets separate 'training' and 'working'. We can train a network on a giant computer then stick it in our cellphones to clean up pictures. This is also a good thing for our understanding. Our brains cannot separate training from working, but we can do exact experiments on neural nets: we can change any value in the model, repeat an experiment, and see how the results change.
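This 'change any value and repeat' experiment can be sketched in plain NumPy, with no ML library. The example below is a hypothetical toy, not a real deployed model: it trains a one-neuron network on logical AND, freezes the weights (the 'working' phase), shows that repeated runs are exactly identical, then zeroes a single weight to see how the behaviour changes.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy task: logical AND on two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

# --- Training phase (done once, on the 'giant computer') ---
rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, 2)
b = 0.0
for _ in range(5000):
    p = sigmoid(X @ w + b)
    grad = p - y                       # gradient of the cross-entropy loss
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

# --- Working phase (the frozen model, as shipped to the phone) ---
def predict(weights, bias):
    return sigmoid(X @ weights + bias)

baseline = predict(w, b)
repeat = predict(w, b)                 # no hidden state: identical every run

# The exact experiment: alter one value in the model and rerun.
w_mutated = w.copy()
w_mutated[0] = 0.0                     # 'lesion' a single connection
mutated = predict(w_mutated, b)

print("baseline:", baseline.round(3))
print("mutated: ", mutated.round(3))
```

Unlike a brain, the frozen network gives bit-identical answers on every repeat, so any difference after the mutation is attributable to that one changed value.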
The Turing Test was a thought experiment that tried to identify 'intelligence' by what it does, rather than what it 'is'. Modern neural nets write papers, create case law for lawyers, and argue that they are real people. There is no 'lying' or 'deception'; they just provide more data that matches their training data. AI will pass the Turing Test thanks to its tireless ability to imitate long before it passes due to actual intelligence.
What type of intelligence might we miss? The world telephone network has complexity, memory, distributions of long-range and short-range connections, and an evolved structure similar to the human brain's. The magnetic fields in the Sun have a lot of fine structure, and a long-term structure that gives an 11-year sunspot cycle. Can we prove they are not intelligent? And if so, how?