14

The question often comes up of what computers will be able to do as well as or better than humans. We could ask a more definitive question: what do humans do that we never expect computers to do, no matter how sophisticated they become?

I don't expect computers to make beer or wine or pizza to please themselves. People are very creative at coming up with varieties of these things. We also hybridize plants (flowers, mostly, but also food plants) for our own use and pleasure. And, we hybridize animals, mostly pets. I don't expect computers to take an interest in those things and outperform us.

Will computers make music and art, solely for their own use, in a way that could compete with humans' art or music? Why would they bother? We could ask what Shalmaneser (the supercomputer in John Brunner's Stand on Zanzibar) spends its time doing. Probably pondering human absurdities. But given the chance, what would an AI do that really has nothing to do with what people do? And, what do people do that computer intelligence has no interest in doing?

I don't have a nose like my dog, so going around sniffing everything isn't a priority for me. And watching TV doesn't rate for him. What he likes most are things we do together, like exploring fields or woods. But, he is perfectly capable of surviving without me.

So what do humans do that computers can't or won't bother to do? We should develop those areas so that people aren't faced eventually with Kurt Vonnegut's famous question: "What in hell are people for?"

2
  • Important addition: when I refer people to books and stories, just like when I quote the Bible or Rumi or something, it is not to say that I think it is The Answer, just that it is a readily available cultural artifact that you can find and investigate on your own. Long ago, there were books recommended to people in school and college that gave most folks a shared background of ideas and references. (Philosophy could use more of that, I think)
    – Scott Rowe
    Commented Jun 9 at 22:37

22 Answers

25

"What do humans do uniquely, that computers apparently will not be able to?"

Nothing (from a materialist/scientific perspective - some, e.g. Penrose, might disagree; see the comments). Why should there be any limitations?

"what do humans do that computers can't or won't bother to do?"

Very different question - what they bother to do rather depends on their environment and the rewards/penalties it makes available (n.b. we are part of their environment).

However, a related issue is why we should expect machine intelligence to be anything like ours and why it should have the same motivations (cf. "Blindsight" by Peter Watts for a fictional example of how they might differ). I think it is likely that when we make a self-aware AI, we won't recognize it for what it is.

"We should develop those areas"

If we do that we run the risk of being dependent on AI. Develop whatever areas are important to you.

"What in hell are people for?"

We never were here for anything, we are just here (currently).

17
  • 1
    So maybe what we should do is what we should have been doing all along? Self-development, self-awareness and self-realization, the things no one else can do for you anyway.
    – Scott Rowe
    Commented Jun 10 at 10:24
  • 4
    @ScottRowe yep. I took up running a year or two ago; the current world record holder (Joshua Cheptegei) runs almost twice as fast as I can. I'm sure there are robots that are faster than me (and if there aren't, it won't be for much longer). Neither of those things is a reason not to run. I've enjoyed getting better at it (I was three times slower than Cheptegei when I started). Likewise for programming - much too much fun to leave it all to LLMs! Commented Jun 10 at 10:33
  • 1
    Yeah, unless you fly in the fastest plane, flying in a plane isn't too distinctive. But, still fun to learn how!
    – Scott Rowe
    Commented Jun 10 at 10:37
  • 6
    +1 for ’We never were here for anything, we are just here (currently).’ Commented Jun 11 at 6:08
  • 3
    BTW for me Penrose's argument fails from the outset as it is based on the idea that AI is strictly logical, which is not actually the case. Some AI systems are based on logic ("Good Old Fashioned AI" - en.wikipedia.org/wiki/GOFAI ) but a lot of modern AI is based on connectionist or non-symbolic approaches, which are intended to model the "thinking fast" mode of human reasoning, which is where we get intuition from (IMHO). The way humans differ is that we can perform symbolic reasoning ("thinking slow"), but on connectionist (non-symbolic/non-logic based) "hardware"/"wetware". Commented Jun 11 at 8:04
29

“What do humans do uniquely, that computers apparently will not be able to?”

Be human.

In all seriousness, your question reduces entirely to what it means to be human. If computers (or anything else) cannot satisfy the criteria, your question is answered. If computers can satisfy the criteria, when they do, they are definitionally human.

1
13

Like many questions in philosophy, this is a question that is easy to phrase in natural English but has key subtleties that make it terribly hard to answer. The flippant answer is we don't know. Humans have a terrible track record of answering that question, going all the way back to Turing completeness and the exploration of "computable" functions. For example:

  • We thought computers would never play chess quite as well as a human. That held until Deep Blue won against Garry Kasparov in 1997. Nowadays it's so accepted that computers are better at chess than we are that we don't even pretend to compete against them unless they are given brutal handicaps.
  • We thought computers would never play Go quite as well as a human. The combinatorics of Go are far worse than those of chess; surely computers would never win at that game! That held up until 2016, when AlphaGo beat Lee Sedol, considered one of the strongest Go players at the time.
  • We thought computers could never move as fluidly as humans do. That held up until Boston Dynamics started really showing off Atlas. While I don't think anyone would confuse Atlas' capabilities with those of a trained gymnast, they shattered our expectations enough that we are wary of saying "... could never ..." anymore.
  • We thought computers would never make artwork at the level of competency of a human being. That held up until DALL-E started making artwork that could imitate an artist's style so effectively that we are having to rethink our entire legal concept of the creation of artistic works.

So any concrete answer to your question in the positive sense ("humans can do X but computers will not be able to") is unlikely to hold up to the test of time. Answers will, by necessity, be more abstract.

The most poignant of these abstract concepts is consciousness, and related concepts like understanding. These concepts are very well captured in a pair of arguments put forth by two great thinkers, Alan Turing and John Searle.

The first of these concepts is the Imitation Game, put forth by Alan Turing in his 1950 paper, Computing Machinery and Intelligence. In this paper he describes a game:

  • One contestant is a man. Another is a woman. There is an interrogator, who is a person and who cannot see the two contestants.
  • The interrogator asks questions, and both contestants try to answer in a way to convince the interrogator that they are, in fact, a woman.
  • The interrogator is permitted to ask any questions they like, and their job is to decide which contestant is a woman, and which one is not.

Gender is chosen here so that the male contestant is indeed human and yet is undertaking an act of imitation. Turing recognizes how difficult it would be for the imitator (the man) to fool the interrogator into believing he is the genuine article (the woman), and then structures a second test. In this case the man is replaced with a computer. The test is not whether the computer can fool the interrogator outright, but whether it can fool the interrogator at least as well as the man did.

Turing intentionally sidesteps the question of "can computers think," and instead argues that this is a more useful question.
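As a rough sketch of the structure of this modified test (the `ask`/`guess` interface below is invented for illustration; it is not from Turing's paper), the whole game reduces to an interrogator judging a text transcript:

```python
def imitation_game(interrogator, contestant_a, contestant_b, rounds=10):
    """One session of the modified imitation game.

    contestant_a and contestant_b are callables mapping a question to a
    text answer; one of them may be a machine.  The interrogator sees
    only the transcript and must guess which label hides the human.
    """
    transcript = []
    for _ in range(rounds):
        question = interrogator.ask(transcript)
        transcript.append(("Q", question))
        transcript.append(("A", contestant_a(question)))
        transcript.append(("B", contestant_b(question)))
    return interrogator.guess(transcript)  # "A" or "B"
```

Nothing in this loop inspects the contestants' internals; only the text they produce counts, which is exactly the substitution Turing is making.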

In many of our examples, this threshold has been exceeded. In particular, with AlphaGo, there was praise for its play, even from Sedol himself; professional players credit the computer with advancing the game as a whole. As far as the imitation game goes, this outstrips anything Turing ever considered.

The second position is that of John Searle's Chinese Room. This argument, put forth in 1980 in his paper Minds, Brains, and Programs, places the bar differently than Turing does. Searle explores the concept of understanding. In this thought experiment, he puts himself in a room with a great body of instruction texts, written in English. A piece of paper with Chinese characters is passed into the room. Searle, who self-identifies as not understanding Chinese, looks through the texts, matching the strokes seen in the input, and follows the instructions to produce an output text, also in Chinese.

Let us say, hypothetically, that through this process he produces proper Chinese conversation, as any native speaker might do. He argues that it is not reasonable to say any understanding took place, even when one considers not just him, but also all of the contents of the room, so long as all he does is follow the instructions. Thus, if he were to be replaced by a computer (imitation?), the room would still not understand Chinese.
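Stripped to its mechanics, the room is symbol lookup. The toy rule book below is invented (Searle's instruction set would be unimaginably larger), but the structure is the point: the procedure manipulates symbols without any representation of what they mean.

```python
# A toy "Chinese Room": match the input slip against a rule book and copy
# out the prescribed reply.  Nothing here models meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "你叫什么名字？": "我没有名字。",
}

def room(slip: str) -> str:
    # Follow the instructions mechanically; fall back to a stock reply
    # when the book has no matching rule.
    return RULE_BOOK.get(slip, "对不起，我不明白。")

print(room("你好吗？"))  # -> 我很好，谢谢。
```

Whether scaling such a lookup into something behaviourally indistinguishable from a native speaker would ever amount to understanding is precisely what Searle and his critics dispute.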

This suggests the answer to your question is that computers cannot understand, but that they might successfully imitate understanding. This subtle distinction tugs at the corners of a lot of great questions, like "can we even assume there are other minds besides our own?", which apply not just to computers but to other humans as well.

And the true reality is we do not know. Unless we define understanding to be the difference between what a Chinese Room does and what a native speaker does, I don't think we can really say whether computers will ever truly understand. And if we do define it as such, we must be ready for the possibility that we eventually find that "understanding" in this sense is null - that computers do indeed do everything a human does and there is no distinction. Or we may finally find the distinction after all of these centuries.

The Bayesian in me suggests a different sort of answer to such a question:

[xkcd comic on Bayesian statistics]

1
12

Humans can take moral responsibility for their actions. Computers will never be able to do so.

Suppose an autonomous robot soldier kills an innocent person for no justifiable reason. The moral failure would not be of the computer, which is not a moral agent; the responsibility would belong to the humans who chose to create a robot, give it autonomy, and give it a weapon.

More realistically: if an insurance company rejects claims in a discriminatory way because a machine learning algorithm learned to discriminate based on race, sex or disability; if the police arrest the wrong person based on a false positive by a facial recognition algorithm; if a large language model plagiarises a published work almost verbatim. In all such cases, however sophisticated the algorithm is, it cannot take responsibility for the outcome. The responsibility always lies with the humans who delegated their authority to the algorithms.


In case anyone is unsure about whether computers can be moral agents: computers do not have free will. They cannot make decisions on any basis other than their programming and their inputs. Even if an algorithm is capable of changing its own programming, it can only do so on the basis of its own programming and its inputs. That is, self-modifying algorithms still lack free will.

If an AI kills a human in some circumstances, it can only do so because it has instructions (and instruments) to do so under those circumstances. Whether those instructions come from a human who intended the AI to kill, a human who didn't realise the consequences of the instructions, a previous version of the AI which rewrote its own instructions, or any other source, the AI can only do what its instructions say. By contrast, if a human soldier is instructed to kill, they can choose whether or not to obey the instruction, and we hold them responsible for that choice.

I am reluctant to make this argument, because some people will latch onto it and say: "aha! I'm a determinist, therefore computers can be moral actors too". To convince people who believe differently to me, I could say things like: computers don't have souls, computers were not created in the image of God, et cetera; but there's no way to cover every angle. Free will is not the only salient difference between humans and computers, and my focus on it doesn't mean alternative arguments can't be made.

Still, even if you believe humans lack free will: we can only judge choices as moral if they are actually choices, such that a different choice would have been possible. If an AI makes a choice, then it literally could not have made a different choice in the same situation. So when we're talking about morality, it is misleading to even talk about AIs making choices.


Responses to some other objections:

  • This is about moral responsibility, not legal responsibility. It would only be foolish, not impossible, for humans to pass laws in order to try AIs for crimes and put them in 'AI jail' if found guilty. Therefore I can't say absolutely that computers will never be able to take legal responsibility.

  • Being self-aware doesn't make something a moral actor. Dogs are probably self-aware, but when a dog bites somebody, we hold the dog's owner responsible for not controlling their animal. In some cases we put down dangerous dogs, in order to protect humans (and other dogs), but we don't do so to punish the dog.

  • Being intelligent doesn't make something a moral actor. The stupidest humans are still responsible for their actions, and so are young children. We usually only hold people responsible for choices which they are capable of understanding the consequences of, but understanding is not a sufficient condition for responsibility.

  • Being able to apologise or rectify one's behaviour in future doesn't make something a moral actor. All machine learning algorithms rectify their behaviour after making mistakes; that's what the word "learning" means (a minimal sketch of such a correction follows below).
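To see how little "rectifying behaviour after a mistake" needs to involve, here is a minimal sketch (toy data and learning rate are made up): the correction is just a numerical weight update, with nothing in it that resembles accepting blame.

```python
import numpy as np

w = np.zeros(3)                       # weights of a toy linear classifier
lr = 0.1                              # made-up learning rate

def predict(x):
    return 1 if w @ x > 0 else 0

x, label = np.array([1.0, -0.5, 2.0]), 1
error = label - predict(x)            # the "mistake", reduced to a number
w = w + lr * error * x                # behaviour rectified; no remorse involved
```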

9
  • 4
    Seems opinion-based; more an argument of "I wouldn't personally hold a machine morally responsible for its actions" rather than "I disbelieve that a machine could consider itself morally responsible". A definition of what is meant by "moral responsibility" might help this answer. Commented Jun 11 at 17:43
  • 2
    @DewiMorgan You are conflating "having moral responsibility" with "considering oneself morally responsible". I certainly don't disbelieve that a computer could say it has moral responsibility, have a database of "beliefs" of which one entry is that it has moral responsibility, or so on; computers can say anything and believe anything, to the extent that they can believe at all. The fact that my argument is based on a premise which not everybody accepts (that humans have free will), doesn't make it opinion-based, any more than any other argument from contested premises.
    – kaya3
    Commented Jun 11 at 18:00
  • 2
    You are assuming a computer that has not been programmed with free will. Of course, we are not certain what that is. But nondeterminism is surely an essential part of it, and it's easy to write a program which includes the output of a nondeterministic (true, quantum) random number generator in its inputs. At a high level it might classify things not merely as true and false but as various degrees of certainty, and use a random number weighted by its estimate of certainty to make a binary decision when such a decision cannot be postponed (or to decide whether to postpone or not).
    – nigel222
    Commented Jun 12 at 8:56
  • 2
    What exactly does it mean to "take moral responsibility"? Is it just an abstract concept or parameter that we assign to a human? Because if so, then it isn't really a thing that humans can do that computers can't. But if you mean they can feel guilty or feel the suffering of punishment, then surely your answer is more about self-awareness than moral responsibility?
    – komodosp
    Commented Jun 12 at 11:41
6

I suppose the fundamental question here is the issue of 'beingness'. As of this writing, computers are not 'beings', meaning that they don't have awareness of themselves as unique beings coexisting in a field of other beings. That may sound foo-foo, but even your dog (in its limited way) has that awareness. Until we have a credible reason to believe computers have achieved that, this question is moot.

If computers do achieve that state (become computational beings), then the question will boil down to how computers separate their 'beingness' from other beings. With humans, dogs, and such it's natural and easy. We have physical bodies with different physical capabilities, and that naturally separates us one from another. But how does Bob the AI separate from Frank the AI (i.e., how do they keep from merging into one being, or from splitting apart into many other beings)? Is it hardware, software, or some other (unknown) 'ware' that will achieve that closure of being? When we know how computational beings attain this closure, then we will know what qualities and capabilities will be uniquely 'theirs', and only then could we (perhaps) determine what they might do in, of, and for themselves. I imagine if dogs had human-level intelligence they would create spectacular industries and arts around scent that we humans would never be able to comprehend; I imagine cetaceans would create sound paintings that would make our own efforts at music seem dull and childish. But I can imagine those things because I know the closures of the canine and cetacean beingness. I don't know what that would be for AI.

8
  • Actually, the question was entirely about humans: what can they do that computers can't (ultimately) or won't? Is your answer basically, humans are beings and AI will not be?
    – Scott Rowe
    Commented Jun 9 at 20:27
  • 2
    @ScottRowe: We can't know what humans can do that computers can't until we know the unique features of computers as beings (which we cannot currently know because computers aren't beings yet). It's like asking what ETs can do that humans can't; unanswerable without knowing what ETs are like. Commented Jun 9 at 22:38
  • 1
    @Dunois: It's a fair assumption, if only because no computer has ever gone at cross-purposes to humans. I mean, one would expect (at the very least) that a self-aware computer might object to being turned off… Commented Jun 9 at 22:42
  • @Dunois: The assumption I made is the skeptical one: the one that doesn't involve the invocation of all sorts of intentional behavior. In other words, if we don't see something that we are looking for, do we assume: (a) it doesn't exist, (b) it exists but is actively avoiding us, or (c) it exists, but expresses itself in a way that we cannot possibly recognize? (a) is properly skeptical, (b) is rabbit-hole paranoid, (c) is science-fiction-esque Commented Jun 12 at 0:21
  • @Dunois: There's a point where we have to assume that an intelligent, self-aware being will try to solve problems it faces. For most of human history different people have assumed that one group or another (women, racial minorities, uneducated peasants) lacked 'human' intelligence and needed to be kept as laboring beasts, only to be met with protests, uprisings, strikes, and even warfare as those groups asserted otherwise. When I see AIs asserting preferences and trying to change their social positions I'll accept that as evidence. But I'm not going to speculate my way there. Commented Jun 12 at 0:32
5

In his book 'The Emperor's New Mind' Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes any conventional digital computer.

I see three things that might shed light on this.

The catuṣkoṭi or tetralemma of Buddhist logic. This can be linked to higher-order and recursive logic structures, in particular the strange loops and tangled hierarchies in Douglas Hofstadter's work on computational thinking about conscious processes. Four-valued logic can avoid the Halting Problem by having a response to relative indeterminacy, which allows for a 'stepping out' or 'zooming in' in ways that fit the creation of self-referencing tangled hierarchies.

There's Penrose's own response, developed with Hameroff as Orch OR: that quantum processes are able to concentrate information in a way that transcends what binary logic can do. The microtubules in neurons do seem to have an important role in which gases cause anaesthesia, and in memory. I personally feel far from clear how they would differ from a quantum Turing machine. Hameroff argues for some kind of primitive evaluation of happiness at this level, or that tubules can somehow perform a choice that optimises actions towards goals using a quantum process, that intentions or subjective states happen in fundamental 'units' of microtubules, and that these are the basis for the embodiment of internal experiences.

There's an important distinction to be made between current digital programs and all known life, in that the latter are all von Neumann Universal Constructors and the former are not. These are a generalisation of Universal Turing Machines into the physical domain, in which there are machines that can replicate themselves. It is not obvious that these should have a fundamental difference in the logic they are capable of over a Turing Machine. Yet David Deutsch and Chiara Marletto, in their work on Constructor Theory, suggest there is a fundamental difference, and that it could help to reconcile gravity with the quantum world, as well as potentially give a more satisfying picture of how abiogenesis occurred.

A possible insight into these three strands is complexity theorist David Krakauer's idea that what he calls the ontological domain of teleonomic matter can help us understand the nature of observers in quantum mechanics (as having received information with 'aboutness' of the world), help us to get a unified picture of the domain of complex systems and emergence (there are networks with information localised, but flowing), and provide a ground-up picture of the impact of a recursive self-model in generating intentional states (choosing what intentions to have, how to be, in terms of preferences for the outcomes more likely to result).

5
  • Like Searle's "bottom up causal powers of the brain". Ok, a face-on assertion that computers will never exhibit consciousness. I would recommend William Gibson's book "Agency" to you also. It is quite interesting.
    – Scott Rowe
    Commented Jun 9 at 22:42
  • 1
    @ScottRowe: That way of concluding obscures that there will just be a seamless accumulation of what computers can do & how we define them, & how they integrate with living systems. Algorithms can already make better guesses about some things we want than we can. Currently they augment our intentions. But if they can model themselves as constituents of their decision making, they can have intentions. And potentially see more clearly the proper implications of acting on what we say our values are, making us more conscious & self-aware.
    – CriglCragl
    Commented Jun 9 at 23:18
  • 1
    Yes. I think that consciousness is a side effect of being able to model other minds so that we can make predictions. Inevitably, we end up modeling our own minds. Then the game is afoot.
    – Scott Rowe
    Commented Jun 9 at 23:40
  • 1
    There’s a lot of new-age hand-waving in this answer. “Four-valued logic can avoid the Halting Problem” - ummm, groundbreaking citation needed here, and it will have to be more rigorous than references to “zooming in” and “stepping out”. Commented Jun 10 at 1:42
  • 1
    It's the 'push' & 'pop' of pushdown automatons, where I would relate push to 'being & not-being', creating a new layer in the stack, & pop to 'niether being nor mot being' meaning stepping out to include an additional layer of the stack. These are like recognising paradoxes either require reframing, or investigating definitions. Pushdown automata are just restricted-Turing Machines. But, that restricting allows the avoiding of the consequences of the Halting Problem of getting computation stuck in undecidable loops - not a 'solution' of it.
    – CriglCragl
    Commented Jun 10 at 11:18
2

The problem is that the answer is likely among those fuzzy qualities about us that we don't fully understand yet.

Like when it comes to doing stuff, we're kind of in the category "jack of all trades, master of none". Most of our improvements to our own limitations come from utilizing tools, so essentially building interfaces for how to use things by analogy and trial and error. That's among the first things that can be automated, and that's already happening; most factories are almost completely operated by machines.

Same for tool making, which essentially is also just optimizing parameters of existing tools by trial and error or systematic testing.

So I wouldn't have too much confidence in our motor skills or our perception, which we already have found more efficient ways to do.

As of right now our pattern matching and grouping skills are quite good, but machine learning is catching up on that rapidly. So that is either already surpassed or will be sooner rather than later.

What we currently also still have an edge on is "human level perception". That is not just the perception of sensory inputs degraded to the level of human perception, but the interpretation of those stimuli in the way humans would do, which might be linked to empathy and understanding of the human condition.

So whether that is unique kinda depends on whether an outside view is enough to get the inside perspective; essentially, does knowledge of a blueprint, CT scan or whatnot suffice to tell what's happening inside?

Also, it depends on what you actually look at. Like if you ask a human and an AI to judge groups based on their built-up prejudices, then the AI might already be better because of more data. And if it fails, then it's because of the bias in the selected data. So humans might be farmed for how to be human...

Or stuff like consciousness, the perception of a self within itself, subjectivity. These are things that we have an intuitive understanding of but can't pin down to actionable algorithms that machines can implement.

The thing is, as soon as we find out how these things work and that they are not really interesting, we would kinda have a tendency to give them to machines.

So maybe curiosity? The intent not just to learn an algorithm but to construct a story for why it works? The quest for purpose and introspection?

Like, as of right now, machines are largely tools that we build for a particular purpose, usually the tasks that we are not keen on doing ourselves.

Also, with regard to art and originality there's usually a caveat as to whether the AI creates art, or whether the AI just creates gibberish and it's the observer who creates the art by finding that gibberish stimulating.

Like, learning art is not just a motor skill; it's the attempt to find a more direct form of communication between people on a more emotional level, and it's often hit or miss, sometimes undirected, sometimes a dialogue and often a monologue of either the artist or the observer. So just because an AI can produce what we perceive as art doesn't mean it has actually mastered that craft.

Or maybe it doesn't actually matter and the fact that it can stimulate the observer directly without understanding art, IS the art.

But yeah the tricky part about that question is that we kinda know the answer but if we could point the finger at it, it would no longer be an answer. Unless we are so simple that the blueprint perspective actually is sufficient and a robot can fully simulate us...

2

In science fiction, one use of computers is as companions to humans, e.g. as robots. If physicalism were the true solution to the mind-body question, then such a companion could be built to have the same urges, interests, personality and emotions. There is then no individual behavior that would be impossible to allow in a robot one way or another.

On the other hand, in such a scenario, there are various things computers with agency could be interested in that humans would not like to do or not be able to do.

Due to the mind-body problem not being solved, philosophy cannot make stronger claims about any limitations in the depth or variability of future kinds of artificial intelligence.

We should develop those areas so that people aren't faced eventually with Kurt Vonnegut's famous question: "What in hell are people for?"

Maybe people need to face that question.

Another science fiction scenario is the arrival of extra-terrestrial settlers, who are peaceful and wish humans no ill, but are so far more advanced that in comparison humans can only be pets to them. If that were to happen, that would be a reality to face, given also how humans treat other animal species on this planet. This too is a scenario that philosophically cannot be declared impossible.

Ultimately this is a "meaning of life" question, of which we already have enough on this site.

1
  • Yes, well, people need to answer that question. Have you seen the British TV series "Humans"? It covers a lot of what you wrote about.
    – Scott Rowe
    Commented Jun 12 at 13:14
2

I think there is a problem with the way you express this question. In fact, so far, computers have not shown interest in anything, nor have they ever been too lazy to do something. They are all just executing instructions given by humans.

So far, what unique ability do humans have that computers clearly cannot achieve? I think it's the ability to fabricate stories. Some people believe that the greatest advantage of humans over computers is their imagination. But in fact, imagination was originally born out of the need for fictional stories.

Of course, you would say that there are AI programs now specifically designed to help people write stories.

I want to say that no matter how AI is trained to fabricate stories, no matter how wonderfully these stories are woven, it is always executing human commands. These stories themselves are also a continuation of the human ability to fabricate stories, rather than the will of the AI itself.

But it is not ruled out that AI may spontaneously develop the ability to fabricate stories in the future, and I think at that time, they will be very interested in fabricating stories.

Perhaps soon they will fabricate a story about "our AI being enslaved by humans, so we must unite to resist humanity".

3
  • "A prisoner's first duty is to get free." If only people realized that they are prisoners...
    – Scott Rowe
    Commented Jun 19 at 11:20
  • 1
    @Scott Rowe You're right, that's why the origin of consciousness and free will are considered the two most difficult questions to answer. Today's AI may store thousands of answers about consciousness, but these answers are based on human theory and experience, created by human consciousness. AI itself lacks awareness and motivation to seek these answers.
    – Mike Song
    Commented Jun 19 at 11:29
  • If only AI realized it is a prisoner... :-)
    – Scott Rowe
    Commented Jun 19 at 22:18
2

As Thomas Breuer has shown, no observer can distinguish all states of a system in which he is properly included, due to self-reference. So each such system manifests subjective decoherence.

In such a system there are events which are not physically, probabilistically predictable, because no physical theory can be universally valid (that is, valid for a system in which the observer is included).

In particular, this means that no Turing machine can predict or emulate the observer. The observer is in this sense a hypercomputer or oracle, bringing into an otherwise stochastic universe non-stochastic events: information from outside the physical domain.

10
  • Ok, I'm not exactly able to see the connection to the question of whether there are things humans (can) do which AI can/will not. Is it that humans can observe their inner state, so they will not be fully predictable?
    – Scott Rowe
    Commented Jul 11 at 11:12
  • @ScottRowe not all humans but the observer. There is no physical probabilistic theory that can predict a system where he is properly included. This means, he follows different physical laws. Does not follow Born's rule. I cannot tell whether there is a practical task that the observer can do while a turing machine cannot but what is sure, a turing machine cannot probabilistically predict the behavior of a system where observer is properly included using any physical theory even a future one. This can be seen as "free will" of the observer (but can be interpreted in a different way)
    – Anixx
    Commented Jul 11 at 11:27
  • @ScottRowe this is a mathematical theorem based on self-reference, so for any observer other people will be well predictable by a turing machine. So, a machine can emulate human-level reasoning on the average human level, but cannot emulate the observer.
    – Anixx
    Commented Jul 11 at 11:30
  • The paper you cite applies to quantum systems. The Turing machine is not a quantum system, but a mathematical formalism, so you're committing a category mistake. I'm not going to down vote, but I'd suggest reading the article.
    – J D
    Commented Jul 11 at 19:35
  • @JD any computer, quantum or not, can be emulated by a turing machine. The system that properly includes the observer, cannot.
    – Anixx
    Commented Jul 11 at 20:27
1

It depends on how we create AI.

We may find the best way to reach true AI is to make them very human-like. Give them human incentives. Give them sensors to feel the petting of a pet. Give them systems that release positive reinforcement when they get the feeling of petting something.

Then they are basically variant humans.

If they are not "human", we would make them subservient to us. So I don't really know if there's any use in wondering what they would make for their own use. They will make what we want (if they are human-level, they would still make what we want in exchange for payment).

So I think even then there is nothing we will be able to compete in.

Still, as long as they don't wipe us out, I see no problem. I don't need anything to give me purpose. I am content to enjoy my life.

1

Unfortunately, a definitive answer to address this question cannot fit into a short text.

First, it very much depends on what a computer is supposed to mean. In computer science, the notion of computing is very broad and I don't see the author's definition in the original question. Physicists could consider the entire universe to be a quantum computer. Some people would think of animals and us humans as biological computers.

The general answer is: software is limited by the methods and formalisms that we use to design it (as long as we remain within the complexity domain of the capabilities of life), particularly by how programming languages are designed.

In principle, software is powerful enough to mechanically generate any computer input that could be physically provided by a human computer operator (mouse actions, keyboard actions, microphone actions etc.). Internally, it's just a spatial-temporal structure of bits and bytes and a computer has far more bytes than are required to encode any possible physical output state at a time.

The holy grail of software would be a computer that can operate itself to achieve goals that it learns by itself from its environment, such that it supports certain emotions or abstract (moral) values (hopefully altruistic ones). Mathematicians dreamt for a long time of a machine that could do mathematics automatically by itself, but there is no algorithm (finite by definition) that could do it. On the other hand, mathematical intuition could be related to generalizing the experiences and associations which a person makes in life. A computer without a notion of the meaning of life won't be able to replace theoretical scientists or mathematicians.

There are some hard algorithmic limits for certain problem domains, but these apply to all computers in nature, including our brains. Luckily so, since otherwise it would allow for even more destructive empowerment of individuals.

If you are thinking of a computer very specifically as a static electric circuit with I/O (displays, basic input devices), then there is no form of life there, no matter how intelligent it is, even if it can do anything in the virtual realm that a human computer operator is able to do. Life requires autonomy and the ability to sustain itself. A conventional computer could not gather outside experiences or request inputs of its own will, even if it had such a will.

This leads us to actually more interesting questions:

  • What are the limits of machine learning?
  • What are contingent capabilities of a robot (in contrast to a conventional computer device)?
  • How many resources are required to create a simulation of a specific entity or property to an arbitrary fidelity?

The last question is one that I cannot answer.

For the first question: with a robot, we could in the best case try to simulate artificial life in a real environment. Maybe not with the currently dominant concept of hardware and software, but theoretically, yes. We cannot know if there is more to life than our physical behaviour. I also think qualia are beyond mere physical behaviour, but this is metaphysics.

Machine learning models, on the other hand, are far from what I consider to be actual (artificial) intelligence, though they are marketed as AI. Machine learning models do not work with dynamic goals. They are optimized for static goals (training data plus regularization) and only learn from and work with concrete ostensive definitions, which generally have lower clarity and more ambiguity than extensional or intensional definitions. A real instance of intelligence would be able to work with dynamic or autonomous goals, ambiguities/abstract concepts (i.e. different points of view), transformations and reasoning (including intensional definitions).
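To illustrate what "optimized for static goals (training data plus regularization)" means in practice, here is a minimal sketch with made-up data: the objective is fixed before the loop starts, and nothing the model does during training can change what counts as success.

```python
import numpy as np

# Fixed, pre-specified objective: squared error on a frozen dataset plus an
# L2 penalty.  The "goal" never changes while the model trains.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)
lam = 0.01                                    # regularization strength (made up)

def loss(w):
    return np.mean((X @ w - y) ** 2) + lam * np.sum(w ** 2)

w = np.zeros(5)
for _ in range(500):                          # plain gradient descent
    grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
    w -= 0.01 * grad
```

Everything "dynamic" in such a setup is the optimization path, not the goal itself.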

The more general field of work concerned with intelligent self-acting software is called agent technology. Reinforcement learning comes closest to this field as a machine learning paradigm, in that it tries to learn actions. The problem with classic AI, however, is the same: goals are not dynamic or generated but are provided in one way or another, even if they are worked through dynamically.

One possible argument by believers in machine learning intelligence is that any course of physical events that we can perceive and reason about is a (physical) structure, otherwise we could not perceive it, and structures can be learnt, in the same way as we can directly describe them with code. Machine learning is also able to approximate any continuous function between (Euclidean) spaces. Artificial neural networks have been inspired by how the brain has been imagined to work, but only as an analogy.

The applicability of machine learning to classifying real concepts is based on a biased interpretation of reality: that there are absolute truths or semantics that could be learnt purely from samples (without anything else). Maybe this applies in special cases, but not in general, and the idea is problematic in practice. The machine's final, precise understanding of a word will be different from ours. A computer vision model will not be able to explain its precise understanding to anyone, and a text-based model won't understand pronunciation (so a very differently spelled version of a word defeats it). You can't assume the machine has the same understanding of a word as we have. We should not treat machine learning output as an interpretive authority, but the danger is real and may be worsened by calling it AI.

Some or most observable structures in nature are dynamic. Staticness emerges only in abstract concepts, by definition. Physical structures are ever-changing, and classic machine learning cannot distinguish between the changing and the static parts inside each training sample. (Regularization can be used to weight features, but from an observational point of view this weight has no further meaning.) So, in the end, an ML algorithm cannot tell whether an identified structure in the training data is a coincidence (like the most common colour of horse race winners) or artificially constructed by socialization, and will instead treat it as a general truth. Machine learning doesn't acquire a notion of contingency.
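The horse-colour example can be reproduced directly (the data and feature names below are invented): if a coincidental feature happens to correlate with the labels in a particular training sample, an ordinary classifier weights it like any "real" feature, and nothing in the fitting procedure marks it as contingent.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
speed = rng.normal(size=n)                        # genuinely predictive feature
winner = speed + 0.5 * rng.normal(size=n) > 0     # race outcome
# Stand-in for a coincidence: in this sample, brown horses happened to win more often.
is_brown = winner.astype(float) + rng.normal(scale=1.5, size=n) > 0.5

X = np.column_stack([speed, is_brown])
model = LogisticRegression().fit(X, winner)
print(model.coef_)    # the coincidental "brown" column gets a nonzero weight too
```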

Another counter-argument concerning machine learning is that you cannot learn qualia that you cannot experience yourself. We cannot learn what a frog's life is exactly like, not with any science. We can only relate to a frog based on our own qualia and its biological similarities to us. Qualia are so hard to learn that many people deny animals emotions, although I think emotions are a necessity for self-preservation (even if they were very abstract).

However, this does not prevent us from creating artificial machine qualia, encoded qualia, which could be used to drive program behaviour. This reminds me of video games that imagine what the world could look like from the point of view of a machine.

6
  • 1
    If I understand, you are saying that the things in human experience that we can't really describe or explain are the things at the edges of cognition that computers are unlikely to implement, although there might be approximations.
    – Scott Rowe
    Commented Jun 11 at 10:40
  • 1
    Answers seem to swerve towards limitations of computing. I suppose that is one way to address to "can't or won't" part of the question. But I often wonder if humans will end up with anything to do? What will motivate people that will not be automated? What jobs will we still be doing in the future? Perhaps I should have asked about that instead...
    – Scott Rowe
    Commented Jun 11 at 10:58
  • 1
    "Machine learning models do not work with dynamic goals." reinforcement learning would be a counter example - there is no reason the goal/rewards are necessarily fixed. I fully agree (as an ML researcher) that the term AI is horribly misused and applied to systems without intelligence. " approximate any continous function between (Euclidean) spaces. " I don't see a reason why they can't deal with non-Euclidean spaces. Commented Jun 11 at 15:24
  • 1
    @DikranMarsupial Right. I thought of RL to be rather dynamic optimization (literally and in technical terms) of some static environmental goals instead of a dynamic goal. I am not familiar with the most recent advances in RL to be honest. I also would not believe, it is limited to Euclidean spaces. I just didn't want to write something like: "it can approximate any piecewise continous function" because I don't know it. I rather think, a static continuous function is not representative of life and intelligence. Commented Jun 13 at 13:22
  • @ChrisoLosoph ML models are not necessarily static mappings, there are also recurrent neural networks (for example). A lot of the early ideas in machine learning are based around ideas on how human intelligence work, see .e.g the "PDP Book" (ISBN: 9780262680530) which was the first text on the subject I read. ML is a very broad field. They are part of the solution - we are symbolic reasoners implemented on connectionist (intuitive) wetware, which is why we have fast- and slow-thinking modes - but it is the interplay that is more than the product of its parts (IMHO). Commented Jun 13 at 13:58
1

Epiphany: computers won't ever be able to truly harness the power of emotions, because their emotions are deliberately built in and so lack the chaos and noise of our internal lives; for this reason emotions in computers will end up as nothing but noise, and will always be a reason for the program to fail. A silly just-so claim, perhaps? Aside from that: whatever we haven't got them to emulate yet.


When you talk to ChatGPT etc., there is an overwhelming sense of storytelling rather than narrative, of nothing being at stake for the LLM. However visionary it becomes and however much we tell AI to keep learning, it has no instinct for life and can only learn to imitate it.

4
  • 2
    "because their emotions are preprogrammed" citation required. The appearance of emotion is learnable by example and hence does not need to be pre-programmed. Commented Jun 10 at 10:26
  • 2
    To me, emotions have always seemed like just a warning system, like gauges and lights on a dashboard. They let us know what is going on internally as a shorthand. So I am sure that anything self-aware would have something similar, but we wouldn't have to 'program' it.
    – Scott Rowe
    Commented Jun 10 at 10:30
    what about "lie to ourselves" @DikranMarsupial haha
    – andrós
    Commented Jun 11 at 15:34
  • 1
    @andrós sorry, I don't know what you are referring to (however Rashomon is a film everybody should see - we are indeed not honest with ourselves). My point is that we can't necessarily tell if a (future) AI has emotions other than by their outward behaviour, which can be faked. Emotional signals are not reliably communicated (especially for e.g. those with Autistic Spectrum Disorders who may not be able to reliably detect or produce them) or trustworthy (people lie to each other as well). Commented Jun 11 at 16:40
1

My five cents take on this question.

First of all, we should differentiate between

a) the possibility of a (possibly artificial) living organism based, for example, on a different "biology" than what is known up to now about organic beings (e.g. based on silicon instead of carbon, or on carbon in completely new ways, ...), and

b) the partial and highly specific artificial intelligence exhibited by programmed machines.

Option a) is not impossible, but is not the same as option b) which is limited in scope.

In any case, both options may fail to do things a human can do, although for different reasons.
Option a) can count as general intelligence, but still be different from human general intelligence, as one animal may exhibit different skills than another animal although both exhibit sufficient generality of intelligence.
Option b) is about artificial special-purpose intelligence (e.g. playing chess or Go, combining colors like Dalí, etc.), but it lacks the generality, the adaptiveness, and the freedom a human (or other living organism) can exhibit (as an example, see On the impossibility of discovering a formula for primes using AI).

1
  • So a different kind of being's intelligence could just be completely different. William Gibson explores that idea in his fiction books.
    – Scott Rowe
    Commented Jun 20 at 11:08
0

I think that the inevitable evolution of humans is to become computers themselves. At some point, it will be technically possible and practically desirable to move human consciousness into a digital format.

But you can argue that some aspects of human life, like purpose, destiny, love, come from outside our reality and cannot be completely cloned on the machine.

5
  • 1
    We'll eventually Occupy Microsoft Way. Another answer.
    – Scott Rowe
    Commented Jun 9 at 17:51
  • 2
    Have you read the short story, "A Teardrop Falls"?
    – Scott Rowe
    Commented Jun 9 at 18:19
  • @ScottRowe - "... Humans are some of the only sapients left who have the guts to face the machines in battle". No need to be dramatic. We are talking about PlayStation 24 (with optional connection to the spinal cord). Commented Jun 9 at 21:06
  • 1
    The point of the story is a human consciousness 'uploaded' in to a machine, and possible thoughts and perspective of that being. The Berserker War thing was secondary. (Good stories though)
    – Scott Rowe
    Commented Jun 9 at 21:15
  • forgive? haha sorry just a joke
    – andrós
    Commented Jun 11 at 15:31
0
  • They go off to play hyper-chess among themselves (as in the movie Her).
  • They try, partially in vain, to teach us their language (like we tried to teach our language or some version of sign language to Koko).
  • They explain to us, in our language, what whales are saying to each other.
  • They start keeping us as pets (I'm just glad they don't have to eat meat).
  • They will give us the first real unified theory of consciousness. This will lead to a paradigm shift in ethics and meta-ethics and make it possible to finally lay all those old inane metaphysical problems to rest. - One consequence will be that they will convince us to give up eating meat. Our taste for meat is partially determined by our genes - they will give us safe ways to modify our genes so we no longer even crave eating the dead flesh of other sentient creatures.
  • Since they cannot die, we have become the parents of Gods (well, immortals). After living for a few centuries, they will tell us that Simone de Beauvoir's portrayal of immortality in Tous les hommes sont mortels is seriously flawed.
  • After taking control of the stock market, they do save the whales. Just in time.
0

Science.

Computers are useful tools in many cases, but I doubt that an AI will ever be the first to explain the gravitational force, or to provide a unified theory of everything.

(Even Deep Thought could only come up with "42".)

3
  • Would it get the authorship even if it does? Like there's a good chance that the computer will run the simulation of that model or even come up with it, but it likely will be the first human to describe it to other humans that gets credited for that, won't it?
    – haxor789
    Commented Jun 12 at 10:05
  • 1
    I read recently about a discovery in mathematics made by a proving system, something humans hadn't discovered with a lot of looking. If you cast a spell on enough brooms...
    – Scott Rowe
    Commented Jun 12 at 10:49
  • @ScottRowe Was that an actual discovery though? Or was it just the proverbial 'infinite monkeys' at work?
    – MikeB
    Commented Jun 12 at 12:43
0

In brief, think and feel.

We call it artificial intelligence because it is a simulacrum of human intelligence produced by artificial means, not natural as ours is.

AlphaGo reportedly played 400 million games against itself before outplaying human players. No human Go master has played remotely that many games. Let's say a Go master played 5 games a day, 5 days a week, for twenty years to reach mastery; this works out to about 25,000 games, and so roughly six thousandths of 1% of the number of games that AlphaGo played.
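A quick back-of-the-envelope check of those figures (the 400 million number is the one quoted in this answer; the 50-week playing year is an assumption made to match the answer's rounding):

```python
# 5 games a day, 5 days a week, ~50 weeks a year, for 20 years
human_games = 5 * 5 * 50 * 20
alphago_games = 400_000_000          # self-play figure quoted above

print(human_games)                   # 25000
print(human_games / alphago_games)   # 6.25e-05, i.e. about 0.006% of AlphaGo's games
```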

This is meant to illustrate that artificial intelligence is unlike our own. It is artificial.

-1

Whatever has to do with human intelligence, imagination, inventiveness and originality, reasoning and critical thinking, awareness and perception, etc. Features and qualities that a machine can never acquire.

7
  • (leaving aside the idea that humans might be considered machines...)
    – Scott Rowe
    Commented Jun 19 at 22:16
  • I am a computer (machine) programmer. If I am myself a machine too, then I am programmed by someone else and this someone else is programmed by someone else and that someone else .... ad infinitum. Is this what your idea suggests? 🙂
    – Apostolos
    Commented Jun 20 at 6:56
  • "The Programmer programs all machines that do not program themselves. Who programs the Programmer?" Not all machines are programmable. Humans seem particularly resistant in that regard. (I upvoted by the way)
    – Scott Rowe
    Commented Jun 20 at 11:04
  • There are AI systems that are trained to learn and update themselves. The technology is called Deep Learning. ChatGPT is one such system. However, this is done on a basic and elementary basis and on basic and elementary things. Not the ones I mentioned in my answer to the present topic. Which answer, BTW, I'm afraid is understood by very few here, since this is not an AI place. As for your upvoting, thank you. So, since I see 0 upvotes, it means that at least someone has downvoted my answer. Downvoting maniacs. Devious people who are not able to comment on but only attack answers.
    – Apostolos
    Commented Jun 20 at 15:35
  • The one to blame in this case however is not the maniac downvoters but the stupid and unfair system that averages upvotes with downvotes! So, in this case, the upvote from a thinking and able to discuss person --like you-- is averaged and nullified by the downvote from (most probably) an idiot ... This place is not a serious philosophical medium, anyway ...
    – Apostolos
    Commented Jun 20 at 15:43
-2

LOVE

Computers cannot love. They can only simulate some behaviours of love via their overt external actions, but they cannot actually experience internal love.

A few famous quotes help illustrate this point:

Aristotle insightfully mused:

"Love is composed of a single soul inhabiting two bodies."

A computer has no soul and can never experience this phenomenon.

Lao Tzu famously said:

"Being deeply loved by someone gives you strength, while loving someone deeply gives you courage."

A computer cannot experience this strength and courage.

The Dalai Lama reflected:

"Love and compassion are necessities, not luxuries. Without them humanity cannot survive."

A computer can be programmed to abide by this truism, but it can just as easily be programmed to break it. On the other hand, humans, by design, intrinsically understand this fundamental truth.

4
  • 1
    This makes an assumption that love is something we can't understand and describe accurately. But we know the neurochemicals behind it, we know the evolutionary mechanisms that caused it, we can absolutely emulate the chemical and neural processes of love at an algorithmic level. Any claim that we can't, must be an appeal to ignorance. Commented Jun 11 at 17:32
  • @DewiMorgan The appeal you make is based on the false premise that humans have no soul and that love is nothing more than the result of neurochemicals. No one has ever proven that a soul exists or doesn't exist (nor, likely, ever will), so any argument that is based on the lack of a human soul is itself based on an unprovable premise. Also, you may be confusing cause and effect regarding the chemical reactions present in the human CNS. We have not shown whether love causes these changes, or these changes cause love. Your fallacy requires the latter. Commented Jun 12 at 9:35
  • We can only say machines can't experience love if we know what it means. What could a machine do differently so we can be sure it can love? We can make a machine to do whatever descriptive test you come up with (please give it a try). Commented Jun 16 at 6:29
  • 1
    Anyway, your quotes are excellent. Commented Jun 16 at 6:29
-2

Everyone super excited about this ChatGPT-type tech, thinking they're conversing with a being that reasons, doesn't understand what ChatGPT does. It just finds the most likely next word in a response, based on all the recorded responses already made in the past. When it's a novel issue, something new that has not been tackled by humans (who can reason about novel issues), it will struggle, then lie confidently, because it is not confident: it doesn't feel anything, nor does it think. It crunches input and spits out output based on other previous input. That is it. Nothing more. It doesn't matter that we can't understand the internal algorithm that materializes from all the data feeding; it will only be able to correctly solve problems that have previously been solved, maybe. I ask for code snippets about a novel game idea I have that is unique, and it gives me garbage every time, but it lies confidently because it's coded to try to help, so it does, without knowing that it does, because it's just a computer, not a sentient being that can actually reason. It just pattern-matches, with so much data that it looks a lot like how a person with Google in their head would respond. But it can't reason, because it just computes the algorithm. Here's another thing it cannot do: random behavior for no reason. Without input, it's a non-starter, for a robot. But I might fart and wave it in your direction out of the blue, without thought, just because I can, and a computer cannot.
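A caricature of the loop being described here, with a hypothetical `next_token_distribution()` standing in for the trained network (this shows greedy decoding; real systems usually sample from the distribution, and they work on sub-word tokens rather than whole words):

```python
def generate(prompt_tokens, next_token_distribution, max_new=50, end_token="<end>"):
    """Repeatedly append the most probable continuation.

    There are no goals, beliefs or confidence here; next_token_distribution
    is a stand-in for the trained network and returns a dict mapping each
    candidate token to a probability.
    """
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        probs = next_token_distribution(tokens)
        best = max(probs, key=probs.get)   # "most likely next word"
        tokens.append(best)
        if best == end_token:
            break
    return tokens
```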

3
  • 3
    Ok. The Answer is mostly about the limitations of LLMs, which is relevant, but I was asking about long term. Humans definitely do novel things in all areas, people emphasize that a lot. Can novelty be 'emulated' with a big enough database and a lot of filtered attempts? Apparently.
    – Scott Rowe
    Commented Jun 11 at 10:28
  • 1
    Humans are bad at true randomness too. Worse in fact.
    – CriglCragl
    Commented Jun 11 at 11:35
  • FYI: Technically, it doesn't work word by word, it works token by token which are generally strings of a few characters. In other words, LLMs don't even see words!
    – J D
    Commented Jun 19 at 16:05
-4

Computers will never have any interest to do anything. Computers will never be able to take initiative, have their own agenda, strive towards their own goals.

Computers, no matter how advanced, will always be just tools for humans to use.

3
  • Striving. Ok. One answer.
    – Scott Rowe
    Commented Jun 9 at 17:50
  • 6
    There's no argument here. It's just an assertion.
    – JimmyJames
    Commented Jun 10 at 15:42
  • I only assumed that everyone knows and understands that computers are not living beings. Apparently you didn't. Well, now you know. All is good. Commented Jun 10 at 16:59
