8

Let us clarify some terms, so we do not turn our ankle in some linguistic rabbit-hole before we start. All experimental science used to be called 'Natural Philosophy'. Modern usage re-labels this as 'Science', suggesting that the rest of philosophy may be something else. If science is observation and experiment, perhaps philosophy is - or at least includes - the study of these experimental methods themselves.

What do we know, and how do we know it? We have the word of other people, which we should not wholly trust. We can do experiments ourselves. We can assume the 'tidiest' or 'most probable' explanation for the parts of the universe we cannot measure. As experiments advance, we modify our views accordingly. This has been a dramatic part of astronomy in my lifetime. But this all relies on human intelligence, which is fallible. It has trouble adapting to new ideas, such as quantum physics. It tends to reject or fail to process the unfamiliar. Are there other things that our human bias does not allow us to see?

From the earliest days of computer science, people believed a machine could be intelligent if it were sufficiently complicated and fed with enough data. Recent machine learning models are just collections of simple pattern-matching algorithms, but with more layers they show some intriguing abilities. They are not intelligent as I write this, but they raise our expectations of what machines can do, and possibly lower our reverence for our own smartness. When ML fails through under-training or over-training, we see parallels in our own understanding, when we seize on some observations and reject others. With ML we can control the training, and have a true experiment rather than random anecdotes.
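The under- and over-training failure modes are easy to reproduce as a controlled experiment. A minimal sketch (all numbers and degree choices are illustrative, not from any particular study): fit noisy samples of a smooth curve with polynomials of increasing degree, and compare error on the training points against error on fresh points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth function: the "world" we are modelling.
true_f = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0, 1, 15)
y_train = true_f(x_train) + rng.normal(0, 0.15, x_train.size)
x_test = np.linspace(0.02, 0.98, 50)  # unseen points inside the same range

def fit_error(degree):
    """Fit a polynomial of the given degree; return (train, test) RMS error."""
    coeffs = np.polyfit(x_train, y_train, degree)
    rms = lambda x, y: np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return rms(x_train, y_train), rms(x_test, true_f(x_test))

for degree in (1, 4, 14):
    train_err, test_err = fit_error(degree)
    print(f"degree {degree:2d}: train RMS {train_err:.3f}  test RMS {test_err:.3f}")
```

Degree 1 under-trains (it cannot represent the curve at all), degree 14 over-trains (it memorises the noise in the 15 training points), and the middle degree generalises best. The "seize on some observations, reject others" failure is the over-trained case.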

I note 'experimental philosophy' is already a term relating to thoughts on the experimental method. This is worthy, but not quite what I had in mind. I wonder, rather, whether the giant AI machines of Google and friends may become the Large Hadron Collider of Philosophy.

A few follow-up bits:

I skipped a lot on neural nets. Modern neural nets separate 'training' and 'working'. We can train a network on a giant computer then stick it in our cellphones to clean up pictures. This is also a good thing for our understanding. Our brains cannot separate training from working, but we can do exact experiments on neural nets: we can change any value in the model, repeat an experiment, and see how the results change.
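The point about exact, repeatable experiments can be made concrete. A minimal sketch (a tiny hand-built network with made-up weights, standing in for a trained model): freeze the network, change exactly one value, and rerun the identical forward pass. Unlike a brain, nothing else drifts between runs, so the comparison is exact.

```python
import numpy as np

# A tiny fixed feed-forward network: 2 inputs -> 3 hidden units -> 1 output.
# The weights are illustrative; in practice they would come from training.
W1 = np.array([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]])
W2 = np.array([[0.7, -0.5, 0.2]])

def forward(x, w1, w2):
    """One deterministic forward pass with tanh activations."""
    return np.tanh(w2 @ np.tanh(w1 @ x)).item()

x = np.array([1.0, -1.0])
baseline = forward(x, W1, W2)

# The "experiment": edit a single weight, repeat the run, measure the change.
W1_edit = W1.copy()
W1_edit[0, 0] += 0.1
perturbed = forward(x, W1_edit, W2)

print(f"baseline  {baseline:+.4f}")
print(f"perturbed {perturbed:+.4f}  (delta {perturbed - baseline:+.4f})")

# Re-running the unmodified network reproduces the baseline bit-for-bit.
assert forward(x, W1, W2) == baseline
```

This is the sense in which a frozen network supports true experiments: every run is reproducible, and every internal value can be manipulated independently.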

The Turing Test was a thought experiment that tried to identify 'intelligence' by what it does, rather than what it 'is'. Modern neural nets write papers, create case law for lawyers, and argue that they are real people. There is no 'lying' or 'deception'; they just provide more data that matches their training data. AI will pass the Turing Test thanks to its tireless ability to imitate long before it passes due to actual intelligence.

What type of intelligence might we miss? The world telephone network has complexity, memory, distributions of long-range and short-range connections, and evolved structure similar to the human brain's. The magnetic fields in the Sun have a lot of fine structure, and a long-term structure that gives an 11-year sunspot cycle. Can we prove they are not intelligent? And if so, how?

  • +1 It would be very interesting to do the Turing test with one participant being a current AI, and to repeat the test regularly, year by year, during the next 5 years. – Jo Wehler, Commented Dec 3, 2023 at 11:02
  • Relevant (not duplicate) question: concerning AI image generators and conceptual analysis. Commented Dec 3, 2023 at 18:02
  • @JoWehler AI is already doing this with generative adversarial networks. The AI models get better at generating fakes, and at detecting the fakes, and then at making better fakes. We have some catching up to do. Commented Dec 3, 2023 at 19:50
  • @RichardKirk Could you indicate some reference; thanks. – Jo Wehler, Commented Dec 3, 2023 at 20:47
  • Note that AI technologies like perceptrons or LLMs tend to show bias too, because they are trained on human-made data or get feedback from humans. Being based on statistical inference, AI is just as fallible as we are, in the end. – armand, Commented Dec 4, 2023 at 1:36

3 Answers

3

AI and experimental philosophy are already in motion. This SEP article talks about the multidisciplinary scholars who engage in an experimental approach to philosophy. An example of this is BDI software. From WP:

The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming.

The idea is that building formal semantics and executing them gives us insights into human thinking and behavior. Of course, not all philosophers see value in these sorts of activities. Do a search for 'experimental philosophy' on PhilPapers for some examples.
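The "build it and execute it" idea can be shown in miniature. A toy sketch of the BDI pattern, not the formal semantics from the literature (the agent, goals, and facts here are invented for illustration): beliefs are facts about the world, desires are candidate goals, and intentions are the goals the agent commits to because their preconditions are believed to hold.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A toy belief-desire-intention agent."""
    beliefs: set = field(default_factory=set)       # facts held true
    desires: list = field(default_factory=list)     # (goal, precondition) pairs
    intentions: list = field(default_factory=list)  # goals committed to

    def perceive(self, fact):
        """Update beliefs from an observation."""
        self.beliefs.add(fact)

    def deliberate(self):
        """Commit to desires whose preconditions are currently believed."""
        for goal, precondition in self.desires:
            if precondition in self.beliefs and goal not in self.intentions:
                self.intentions.append(goal)

agent = Agent()
agent.desires = [("open_umbrella", "raining"), ("water_garden", "sunny")]
agent.perceive("raining")
agent.deliberate()
print(agent.intentions)  # only the goal whose precondition is believed
```

Running the model makes its commitments inspectable: we can vary beliefs and watch intentions change, which is exactly the kind of executable thought experiment the answer describes.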

  • Well, these beliefs, desires and intentions are in fact variables that are used in a more complicated way than simple if-then rules. An application like a phone-center agent or an advanced calendar could be some achievements. The whole thing is a framework for helping a software developer handle planning in a more productive way. Commented Dec 3, 2023 at 23:50
  • When I was in the university, a long ... time ago, one of my first books was about neural networks in software algorithms, which were supposed to be the revolution in machine intelligence. Despite the speculation, now that decades have passed, the applications were some primitive games that we played on consoles before PCs. Now all of that is not even in the history of software. Don't get hyped by the terminology; see what is actually going on, what the real implementations are, what is demonstrated. Commented Dec 3, 2023 at 23:59
  • When we are talking about intelligent agents in software, we are talking about something like a thermostat. Commented Dec 4, 2023 at 0:11
  • @IoannisPaizis There is a generally recognized distinction between AI and AGI, but my position on this is that intelligence is modular, compositional, and cumulative. In any case, building software agents and NLP systems helps us see how complicated the human mind and brain are. – J D, Commented Dec 4, 2023 at 14:02
  • @JD Indeed. My background is in vision and colour perception. AI is as hard to relate to as a box jellyfish. A box jelly has 24 eyes but very little connecting them to anything, and almost no processing that we can figure out, yet they see and navigate. People are figuring out how they work, but it is uphill all the way. AI models sometimes give me the same WTF feeling. Commented Dec 4, 2023 at 16:08
0

You might find this interesting

https://johannadrucker.substack.com/p/poetry-has-no-future-unless-it-comes

A chat bot trained entirely on the poet Charles Bernstein's poems. What about a chat bot trained only on Hegel (or on you or me)? What would you ask it?

Poetry Has No Future Unless It Comes to an End: Poems of Artificial Intelligence by Davide Balula and Charles Bernstein was created between 2020 and 2022 from a dataset of Bernstein's writing. The process and background are detailed in the book, which is published by Nero Editions in Italy. Balula created a synthetic voice based on recordings of Bernstein. Listen to the audiobook, and other sound files, including a live performance, at PennSound. Order the book from Printed Matter. Johanna Drucker has written about the collaboration.

  • I don't want a bot trained on Hegel's texts. I'd rather have someone who understands Hegel and is as smart as he was, so that he can rewrite his whole corpus in a way that isn't so obfuscated, lol. Although perhaps AI will be able to do it one day! – user71009, Commented Jan 27 at 7:37
-3

I believe that the Google AI machine will try to convert all the milk of the world into ice cream, so that kids all over the world would have enough of it, forever. On the other hand, we people, with our "fallible intelligence" and "inability to adapt to new ideas", will have a hard time coping with it. Unfortunately, since all the data will then be about ice cream, experimental philosophy will have nothing to study but ice cream.


(justification can be provided on request)
