
Questions tagged [artificial-intelligence]

Artificial intelligence means making a computer do something that appears clever to humans. Fully general artificial intelligence remains an elusive and far-off goal, but many relatively 'intelligent' behaviors are now common even in consumer devices, for instance recognizing a human face or playing a difficult game of chess.

31 votes
18 answers
13k views

Why is it impossible for a program or AI to have semantic understanding?

I'm relatively new to philosophy. This question is based on John Searle's Chinese Room Argument. I find it odd that his main argument for why programs could not think was that programs could only ...
Abraham • 503
23 votes
8 answers
2k views

Could a sentient machine suffer?

I was considering this closed question very intently, and I found that I'm not at all fluent in the idea of modern slavery. Many philosophers have spoken on slavery. On this forum, someone has already ...
davidlowryduda
19 votes
10 answers
9k views

What are the retorts to Searle's Chinese Room?

Searle's Chinese Room basically argues that a program cannot make a computer 'intelligent'. Searle summarises the argument as: "Imagine a native English speaker who knows no Chinese locked in a room ...
dorzey • 353
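
For readers who want the setup in the excerpt above made concrete, here is a minimal Python sketch of the kind of rule-following the thought experiment describes: pure symbol lookup with no representation of meaning. The rule table and phrases are invented purely for illustration.

    # A minimal "Chinese Room": map input symbols to output symbols by rule
    # lookup alone. Nothing below represents the *meaning* of the symbols;
    # the function only matches shapes, which is Searle's point.

    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I'm fine, thanks."
        "今天天气好吗?": "今天天气很好.",  # "Is the weather good today?" -> "The weather is good."
    }

    def room(symbols: str) -> str:
        """Return whatever response the rule book dictates, or a fixed fallback."""
        return RULE_BOOK.get(symbols, "请再说一遍.")  # "Please say that again."

    print(room("你好吗?"))

Whether such lookup could, at sufficient scale, amount to understanding is exactly what the answers below dispute.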
19 votes
5 answers
6k views

How can one refute John Searle's "syntax is not semantics" argument against strong AI?

There are many refutations of John Searle's Chinese Room argument against Strong AI. But they seem to be addressing the structure of the thought experiment itself, as opposed to the underlying ...
Alexander S King
19 votes
7 answers
2k views

Does compatibilism imply that a chess program has free will?

I am puzzled by compatibilism and am trying to understand what it means using a test example. Given that a typical chess program generates several choices, evaluates them with a goal of winning and ...
Harshavardhan
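
As a concrete reference point for the excerpt above, here is a sketch of the "generates several choices, evaluates them, picks one" loop as a one-ply move chooser. It assumes the third-party python-chess package and a bare material count as the "goal of winning"; both are illustrative choices, not part of the original question.

    # One-ply move chooser in the shape the question describes:
    # generate the legal moves, score each against a goal, pick the best.
    # Requires: pip install python-chess

    import chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def material(board: chess.Board, colour: bool) -> int:
        """Material balance from `colour`'s point of view."""
        score = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == colour else -value
        return score

    def choose_move(board: chess.Board) -> chess.Move:
        """Evaluate every legal move and return the highest-scoring one."""
        colour = board.turn
        def score(move: chess.Move) -> int:
            board.push(move)   # try the move
            value = material(board, colour)
            board.pop()        # undo it
            return value
        return max(list(board.legal_moves), key=score)

    print(choose_move(chess.Board()).uci())

The compatibilist question is whether this deterministic generate-evaluate-select loop differs in kind, or only in degree, from deliberation.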
18 votes
9 answers
6k views

Does the success of AI (Large Language Models) support Wittgenstein's position that "meaning is use"?

By 'success' we mean current AI/LLMs' capacity to produce text that is regarded as coherent, informative, even convincing by human readers [see for instance Spitale et al. and Salvi et al.] ...
ac15 • 1,761
16 votes
9 answers
4k views

Why are humans and AI often treated differently in cases where they perform nearly identical processes?

With respect to AI, some people appear to object to the idea of feeding an AI with other people's works and then claiming all the output as one's own. Let's create the following ...
user4574 • 269
15 votes
11 answers
4k views

I prompt an AI into generating something; who created it: me, the AI, or the AI's author?

I've been struggling with this question recently: Question: I prompt an AI into generating something; who created it? I can think of arguments in quite a few directions: I created it: The AI is an ...
Rebecca J. Stones
15 votes
6 answers
5k views

Why do some physicalists use the Turing Machine as a model of the brain?

It has always puzzled me when people casually make comments like "Since the brain is a Turing Machine...". Just to clarify: I'm talking about generic discussions, not philosophical journals ...
David Gudeman
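
To make the comparison in the excerpt above concrete, here is a complete (if tiny) Turing machine simulator: a tape, a head, a finite state, and a transition table. The unary-increment program is an invented example; the point is only how spare the formal model being compared to a brain actually is.

    # A one-tape Turing machine. The example table appends a 1 to a unary number.

    from collections import defaultdict

    def run(transitions, tape, state="start", blank="_", steps=1000):
        """Run the machine until it halts or the step budget runs out."""
        tape = defaultdict(lambda: blank, enumerate(tape))
        head = 0
        for _ in range(steps):
            if state == "halt":
                break
            state, write, move = transitions[(state, tape[head])]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape)).strip(blank)

    # (state, read) -> (new state, write, move): scan right past the 1s, write one more.
    UNARY_INCREMENT = {
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("halt", "1", "R"),
    }

    print(run(UNARY_INCREMENT, "111"))   # -> "1111"

The physicalist claim at issue is that brain activity is, at some level of description, equivalent to running such a table, which is what the answers debate.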
14 votes
22 answers
8k views

What do humans do uniquely, that computers apparently will not be able to?

The question is often raised of what computers will be able to do as well as or better than humans. We could ask a more definitive question: what do humans do that we never expect computers to do, no ...
Scott Rowe • 1,678
13 votes
6 answers
2k views

Does Gödel's argument that minds are more powerful than computers have the inconsistency loophole?

In "Raatikainen, P., 2005, “On the Philosophical Relevance of Gödel's Incompleteness Theorems,” , the author argues that Penrose's and others use of Gödel's theorem as an argument against mechanism (...
Alexander S King
12 votes
8 answers
3k views

Does the computational theory of mind explain anything?

In science, when you theorize that X reduces to Y, you propose a theory that links X and Y in some causal way. Physicists don't just say, "What you experience as a gas is really a swarm of fast-...
David Gudeman
12 votes
7 answers
9k views

On the difference between "knowing" and "understanding"?

Intuitively, there is a clear difference between knowing something and understanding something. We speak of someone 'getting' or 'internalizing' a concept, of developing a 'gut feeling' for something, ...
Alexander S King
11 votes
8 answers
2k views

Is the simulation of emotional states equivalent to actually experiencing emotions?

According to the 'Mario Lives!' video, researchers have been able to develop an AI unit that is able to experience emotional states, such as greed, hunger, and curiosity. If the AI is currently ...
Left SE On 10_6_19
11 votes
3 answers
1k views

On Wittgenstein's family resemblance and machine learning

Wittgenstein proposed in his later philosophy the concept of family resemblance to describe groups which cannot be defined by a single common feature (or simple set of features) but instead display (from the ...
Alexander S King
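
A toy set-theoretic illustration of the notion raised in the excerpt above, with invented feature labels: every pair of "games" below shares some feature, yet no single feature is common to all of them.

    # Family resemblance, stated with sets: pairwise overlap, no global overlap.

    from itertools import combinations

    GAMES = {
        "chess":     {"board", "competition", "skill"},
        "solitaire": {"board", "luck", "solo"},
        "poker":     {"competition", "luck", "stakes"},
        "darts":     {"skill", "stakes", "luck"},
    }

    print(all(a & b for a, b in combinations(GAMES.values(), 2)))  # True: every pair overlaps
    print(set.intersection(*GAMES.values()))                       # set(): nothing shared by all

Whether learned classifiers that track such overlapping features vindicate Wittgenstein's picture is the question's open issue.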
