
Google engineer Blake Lemoine recently made headlines by claiming that he believes Google's LaMDA conversational AI to be sentient, based on his interactions with it. (E.g.: https://www.wired.com/story/lamda-sentient-ai-bias-google-blake-lemoine/)

One can find many articles out there dismissing that claim, but what I find puzzling is the following. Most of those articles proceed by showing examples where such large language AIs clearly do not understand what they are talking about, i.e., they still lack certain types of intelligence -- and then they consider the case closed.

But it seems that in the philosophy literature, for the most part, intelligence and phenomenal consciousness (or sentience) are considered very much distinct, apparently with neither considered a prerequisite for the other. So then why should we think those arguments are effective?

To be fair, perhaps Lemoine's case itself was partially based on the perceived intelligence of the system -- but to the extent that that is true, it just reinforces the question of why people think the two concepts are closely tied. And at the same time, all these arguments do seem to have something to them.

So my question is whether there are good resources on the link between intelligence and phenomenal consciousness, especially on whether the former is (or is not) a prerequisite for the latter, and perhaps especially as applicable to the analysis of systems such as LaMDA.

EDIT 2022-07-14: I'm looking for e.g. published papers in the philosophy literature on this topic, so if you could provide any, I would greatly appreciate that.

  • It is arguable that, for example, when we see a grid-pattern dress, we have a sensation, but when we start to analyze that picture, e.g., counting the number of squares, our intelligence comes into play. This suggests that intelligence is not a prerequisite for feeling a sensation.
    – Koorosh
    Commented Jul 11, 2022 at 21:52
  • You're right, Lemoine's claims are dubious. Most people think a dog has (perceptual) consciousness, but a dog certainly can't carry on an introspective conversation. So, if a dog is conscious, being able to carry on a conversation like that is not necessary for consciousness.
    – causative
    Commented Jul 12, 2022 at 3:33
  • According to Plato's theory of the tripartite sentient soul, which was defined as the self-mover to account for the apparent self-awareness of living beings, intelligence (logos) is the critical, indispensable part of the sentient soul -- or, in his other words, what allows it to participate in the intelligible realm of ideas.
    Commented Jul 12, 2022 at 5:20
  • If the basic feature of consciousness is awareness of existence, then no, human-like intelligence is not a prerequisite.
    – Nikos M.
    Commented Jul 12, 2022 at 17:39
  • I think AI engineers ignore the difference between sentience and intelligence not so much because they consider one a prerequisite for the other, but because they simply do not care about sentience without intelligence. Whether AI has some murky awareness like a dog or, perhaps, a mollusk is not particularly salient; we build them to match or surpass us in performance. And it shall remain a metaphysical speculation anyway, unless we can tease it out by some external indications that they are "like" us. And that requires intelligence.
    – Conifold
    Commented Jul 12, 2022 at 20:06

3 Answers


I don't think you understand the objections to LaMDA. Intelligence isn't the problem. The problem is awareness. They are different things.

We generally think of dogs as conscious beings. A dog appears to us to have a sense of self, emotions, desires, needs, and agency. Dogs aren't particularly intelligent compared to humans, and they have no language, but we think they are conscious and aware.

When I was a kid, I had a dog that liked to chase flies. She would take a single piece of food from her bowl, and place it in an open space with lots of room. She would wait for a fly to show up and then try to catch the fly. For her, this provided hours of entertainment.

LaMDA has no activity when it isn't being interacted with. It doesn't ponder, and it doesn't form new connections after training is complete. It cannot teach itself to catch flies.

It doesn't need to, because it doesn't get bored and has no need for entertainment. It isn't aware.
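To make that claim concrete, here is a minimal sketch (in Python) of what "no activity when it isn't being interacted with" amounts to for a deployed language model, assuming the usual request/response serving pattern. The class, the weight representation, and the reply text are illustrative stand-ins, not LaMDA's actual architecture or API:

```python
class FrozenChatModel:
    """Illustrative stand-in for a deployed conversational model (not LaMDA's real API)."""

    def __init__(self, weights):
        # Parameters are fixed when training ends; nothing below ever updates them.
        self.weights = weights

    def reply(self, prompt: str) -> str:
        # Inference is a pure function of (frozen weights, prompt): no background
        # loop, no learning, and no state carried over once this call returns.
        return f"[response from {len(self.weights)} frozen parameters to: {prompt!r}]"


model = FrozenChatModel(weights=[0.0] * 1_000)
print(model.reply("Do you ever get bored?"))
# Between calls to reply(), no code runs at all: there is no process left over
# that could ponder, get bored, or teach itself to catch flies.
```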

There's a good interview with Melanie Mitchell that goes into this in a bit more depth.

  • I never made a connection before between awareness and the ability to get bored. Sounds like a new kind of Turing test: how long will the cursor blink before something new spontaneously starts to happen? Up until now, forever.
    – Scott Rowe
    Commented Jul 12, 2022 at 1:33
  • But lots of AI systems do have a model of themselves (e.g., robotic systems modeling their physical position) and do do something when not being interacted with (e.g., AlphaGo engaging in self-play). So if those are the criteria then it's pretty easy to build aware systems. What would be your definition of awareness and are you claiming it's equivalent to sentience?
    – present
    Commented Jul 12, 2022 at 9:56
  • @present I don't have a hard definition of awareness, and I'm not saying it is equivalent to sentience. All I'm saying is that things like memory of past events and self-directed activity are prerequisites for sentience, and that LaMDA does not have them.
    – philosodad
    Commented Jul 12, 2022 at 11:56
  • I don't see why those are prerequisites. Imagine that I am in terrible pain and there's nothing I can do. I'm not able to remember anything (too distracted by pain). I'm not able to plan anything to do (too distracted by pain). Surely I'm still sentient? Also, as I tried to point out, it would be entirely trivial to modify LaMDA to have (in some sense) memory of past events and self-directed activity, which presumably wouldn't suddenly turn it into a sentient system, so it seems it doesn't come down to that?
    – present
    Commented Jul 12, 2022 at 12:29
  • @present it would not, as you think, be trivial to add memory of past conversations and self-direction to LaMDA. Even if it were, they didn't, and so the system has no awareness or sense of self, as a sense of self requires a memory of continuity across time.
    – philosodad
    Commented Jul 12, 2022 at 19:09

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and to proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


Caveat

The OP's original question was extensively edited by Mark Andrews after this response was written.

Short Answer

From a metaphysical standpoint, you are trying to determine the metaphysically necessary link between two related concepts in the philosophy of mind. I don't think philosophers generally have a consensus on the relationships among intentionality, consciousness, awareness, self-awareness, intelligence, and cognition (but you'll hear all manner of claims from the thinkers on PhilSE here!). Each term has subtle distinctions, with various definitions vying for control to describe aspects of what might be considered "thought". While intelligence and consciousness are generally considered to be concomitant, there are definitions and contexts in which one can exclude the other, though my sense is that intelligence is generally understood to presuppose consciousness in biological organisms -- the opposite of what you suggest.

Long Answer

From a naturalized epistemology, views on intelligence are usually adopted from psychologists, and consciousness is usually discussed in terms of cognitive science, often borrowing heavily from contemporary facts from neurology. John Searle's The Mystery of Consciousness dips heavily into the latter particularly in his sections involving Daniel Dennett and Gerald Edelman. It might help to use synonymy to differentiate the concepts:

  1. Consciousness is generally seen as a synonym for awareness.
  2. Intentionality is generally seen as a synonym for aboutness.
  3. Intelligence is usually seen more as a synonym for having the capacities for logical, linguistic, or symbolic abilities.
  4. Agency generally presupposes all three properties; for instance, a dog can be about getting a bone, can be argued to do so because of an awareness of hunger and food in the environment, and can send signals such as barking or read environmental cues from other dogs and people to negotiate a path to achieving a goal. In this way, dogs possess a limited form of agency.

A non-biological example might be that AI can be considered intelligent because an NLP system might be able to carry on limited discourse, but wouldn't demonstrate consciousness. On the other hand, most people will ascribe consciousness to a newborn baby whose intelligence after birth is less than that of a dog, yet who clearly manifests an awareness of internal and external states. These differences in meaning also show up in the categories of the functions of language, where clearly metalinguistic use of language demonstrates both an awareness of and an intelligence about language use.

It's not controversial to claim all of these terms suffer from demarcation problems insofar as no definitions are definitive. The WP article on consciousness has a section "The problem of definition", and having read on various topics, one inevitably runs into a panoply of claims and definitions. For instance, psychologists who study human intelligence favor the g-factor and the Cattell-Horn theory of intelligence. A computer scientist may find a definition of intelligence that includes artificial intelligence. And of course, the holy grail for computer scientists is artificial general intelligence which one might caricature as AI plus awareness.

So, if one extends a definition of intelligence to cover AI, then intelligence can be argued to exist without consciousness. If one extends the definition of consciousness to a lower-order animal, such as a cockroach, then one might argue that consciousness exists without intelligence. It is in the human being that both are generally presumed to exist in tandem. Given evolution, if any claim were to be made about the necessity of one before the other, it might be that intelligence presupposes consciousness, at least in terms of evolutionary psychology: if intelligence is more strongly associated with animal communication (signals and symbols), it is the awareness that inheres in consciousness that makes effective communication possible.

EDIT 2022-07-14: A good place to start with papers on this topic would be https://philpapers.org/browse/philosophy-of-consciousness.

  • Thank you. As you point out, definitions are tricky. But wouldn't we also ascribe intelligence to, say, an ant colony collectively, without concluding that the colony is collectively conscious?
    – present
    Commented Jul 12, 2022 at 10:49
  • Sure. We have a term for it in fact. en.wikipedia.org/wiki/Swarm_intelligence?wprov=sfla1
    – J D
    Commented Jul 12, 2022 at 12:55
  • Name me an animal that demonstrates intelligent behavior when it is unconscious.
    – J D
    Commented Jul 12, 2022 at 13:00
  • Exactly my point. Human-like intelligence is not the defining feature of consciousness.
    – Nikos M.
    Commented Jul 12, 2022 at 17:44
  • @JD of course notwithstanding. I would say that the defining feature of consciousness is basic awareness of existence along with decision/choice capability (free will). The rest of the functions sit on top of that. It could be a long discussion, but consciousness can be argued to be unneeded, and thus redundant, if free will is not present.
    – Nikos M.
    Commented Jul 12, 2022 at 19:40
