
As far as I know, the current philosophical consensus is that chatbots like ChatGPT are not conscious.

However, in analogy with philosophical zombies, would it be possible to have a "philosophical ChatGPT"? That is, a system that is physically equivalent to ChatGPT (or some other AI chatbot which isn't conscious), but unlike ChatGPT, is conscious and experiences qualia?

  • Is the question, "Can you attach an immaterial soul to a ChatGPT-like thing?" The question seems to only make sense if you're talking about immaterial souls, since you said physically equivalent. – TKoL, Mar 28 at 14:02
  • @TKoL I'm basically asking if you could give ChatGPT the thing that humans have but p-zombies lack. It seems that there is no consensus on what this should be, but I think "immaterial soul" is a defensible answer. – Mar 28 at 14:11
  • It is possible that all humans are p-zombies except one. – Anixx, Jul 11 at 5:28

7 Answers


"AI chatbot" is quite broad, as that can include any possible future technology, possibly incorporating some biological elements. That may include future AI that could potentially be conscious (at least under some physicalist view). One might debate whether that'll still be a "chatbot", but anyway. Current AI is at least commonly believed to not be conscious.


Under a physicalist view where consciousness weakly emerges from / reduces to brain processes, consciousness can in principle emerge from artificial constructs, so a conscious AI can theoretically exist, but it can't be physically identical to a non-conscious AI. Philosophical zombies also can't exist under this view, because consciousness is directly and inseparably tied to physical state, precisely on account of reducing to it. It would be like asking whether you can have two physically identical computers where one is capable of performing computation and the other is not: that's not possible, because computation emerges from the physical state of the computer.

Under some dualist views, it should be theoretically possible for consciousness to attach to some artificial construct, and therefore it should be possible for philosophical AI anti-zombies to exist. Some might object that consciousness comes from a deity who probably won't inject consciousness into an artificial construct. Or they may say consciousness requires some particular physical state (which sounds a lot like physicalism with an additional unnecessary claim), and that it's impossible to artificially create such a state (although we have managed to artificially create a lot of things that earlier generations wouldn't have thought possible).

  • Another interesting possibility is that to be conscious, you must be the descendant of something conscious. So even if you duplicated the physical state of a human brain perfectly, it doesn't "count" unless it came from a conscious human mother. – Christopher King, Mar 30 at 14:51
  • @ChristopherKing Under physicalism, descendant-based consciousness would still need to correspond to some arrangement and transfer of physical parts. In theory, it should be possible to artificially create such a physical arrangement to create consciousness (but in practice that may or may not be possible). – NotThatGuy, Mar 30 at 23:13

Assuming physicalist-materialism, this question amounts to asking what thresholds of behaviour and complexity indicate that consciousness is present, and, implicitly, whether true Artificial General Intelligence (AGI) is possible. Neither question has a clear or undisputed answer. The p-zombie framing adds extra complexity: it's a thought experiment aimed at questioning whether we can know about the internal experiences of minds by observing external phenomena, which we can't truly answer until we have an accepted synthetic mind to test. I make the case elsewhere that the more complex a mind is, the harder (though not impossible) its internal experiences make it to predict, because the bulk of the data needed to predict it sits inside the mind: Can the goals of an organism be imputed from observation? There's also the issue of 'intelligible intelligence': as computer systems increasingly train themselves, it's becoming more difficult to know how they do what they do. It can be argued that human sentience and self-awareness matter chiefly for our inquiring into how and why our brains offer up the information they do, especially when that information turns out to be incorrect or contradictory (see Kahneman, Thinking, Fast and Slow).

Where in evolution does consciousness occur? We generally grant humans a special quality of 'self-awareness'. But we know many animals pass the mirror test, indicating they can distinguish between their reflection and another being. The human neocortex seems to have emerged primarily to cope with the complexity of our social landscape, with our mimicry and linguistic knowledge being founded on intersubjectivity and the development of a 'social self', linked to the Default Mode Network.

We want our AI to interact meaningfully using language. LLMs seem to be able to do this far better than expected, arguably because they rely on a 'low-resolution image of the internet', e.g. here: ChatGPT Is a Blurry JPEG of the Web (New Yorker article). So although an LLM only predicts one word at a time, contextual clues allow it to mimic human behaviour. But they often fail on things where 'common sense' is needed, like the question 'if 3 towels take 2 hours to dry, how long do 9 towels take to dry?'. Chatbots generally say 6 hours, but humans who think about it can see that the drying time is fixed regardless of the number of towels, since they all dry in parallel. What we really need is LLMs and chatbots that can go deeper into what Wittgenstein called 'forms of life', in order to look deeper for contextual cues, especially in regard to one-off creative actions or innovative behaviours. That could fix a lot of problems, but wouldn't necessarily require the bot to have a self-model.
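To make the 'one word at a time' point concrete, here is a deliberately tiny sketch of autoregressive generation. Everything in it is invented for illustration (a real LLM learns its distributions with a neural network over a vast vocabulary, conditioned on the whole context, not a hand-written table keyed on the last word), but the control flow, sampling a next word and repeating, is the basic shape of the thing, and it shows how surface statistics can bake in the pattern-matched 'six hours' answer:

```python
# A toy autoregressive generator. The probability table is invented for
# illustration; nothing here reflects how any real chatbot is implemented.
import random

next_word_probs = {
    "<start>": {"the": 1.0},
    "the":     {"towels": 1.0},
    "towels":  {"take": 1.0},
    # Surface statistics favour the pattern-matched proportional answer
    # ("3 towels, 2 hours" -> "9 towels, 6 hours") over the correct one.
    "take":    {"six": 0.8, "two": 0.2},
    "six":     {"hours": 1.0},
    "two":     {"hours": 1.0},
    "hours":   {"<end>": 1.0},
}

def generate(max_len=10):
    word, output = "<start>", []
    for _ in range(max_len):
        dist = next_word_probs[word]
        # Sample ONE next word given the current one, then repeat.
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())  # usually: "the towels take six hours"
```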

I'd compare current bots to something like insect intelligence, where simple 'agents' can achieve complex things; like 'blindsight' minds or individual neurons, they can produce emergent complexity that just isn't necessary in each agent for it to achieve its goals.

I'd argue the best picture we have of how humans can do what they do is Hofstadter's idea that minds are 'strange loops', and that they can do things Turing machines don't seem able to because they can build 'tangled hierarchies': loops of logic, and recursion in the nesting of layers they use to understand the world. This provides a coherentist and anti-foundationalist picture of epistemology that avoids the Münchhausen trilemma. To say it less technically: we tend to just start wherever we find ourselves and keep exploring, renewing, and relating together what we know about the world, including the self-loop, starting with no or minimal purposes/self-knowledge. It sounds simple, but it's very hard to get computers to do; AlphaZero might be an example in a simple game-world, or Tegmark and Wu's AI Physicist (see discussion & links here: Reference request: How do we grasp reality?).

Strange loops explicitly involve something processing information about the world that includes a model of itself in the model, which allows it to try out different dispositions and intentions and their expected impacts, in order to decide how/who to be. There is then a cumulative process of adapting to the behavioural niche, comparable to an evolutionary algorithm; but it has the capacity to investigate and cumulatively determine its own true 'best interests' (which clearly includes self-knowledge), or to take up any other goals that emerge to fit what it began with, e.g. to further survival and replication, or to break with such goals for emergent reasons (humans sometimes choose to die for very abstract reasons; memes can be a helluva bug).
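As a rough, purely illustrative sketch of the 'model of itself in the model' idea (the Agent class and its two dispositions are invented for this example, not taken from Hofstadter): the agent auditions a candidate way of being by simulating a copy of itself that holds it, then adopts whichever copy fares best.

```python
# A minimal sketch of a "strange loop": an agent whose world model contains
# a model of the agent itself, so it can try out dispositions before
# adopting one. (Entirely hypothetical; not Hofstadter's own formalism.)

class Agent:
    def __init__(self, disposition="cautious"):
        self.disposition = disposition

    def act(self, situation):
        # Behaviour (here, just a payoff) depends on the held disposition.
        if self.disposition == "bold":
            return situation["reward_if_bold"]
        return situation["reward_if_cautious"]

    def imagine_self(self, disposition):
        # The self-model: a simulated copy of this agent with a trial disposition.
        return Agent(disposition)

    def choose_disposition(self, situation):
        # The loop: the agent uses its model of itself to decide who to be.
        candidates = ["cautious", "bold"]
        best = max(candidates,
                   key=lambda d: self.imagine_self(d).act(situation))
        self.disposition = best
        return best

agent = Agent()
print(agent.choose_disposition(
    {"reward_if_bold": 5, "reward_if_cautious": 2}))  # -> "bold"
```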

So in this view, self-awareness and self-consciousness would involve specific types of recursive structures, and a cumulative process of investigating and adapting to a niche which includes increasing self-knowledge. If this view is right, we probably aren't that far from conscious chatbots and true AGI.

Nick Bostrom has interesting things to say about the implications of this in his book Superintelligence, where he talks about the risk of 'malignant failure modes', or conflicts of interest between humans and computer minds, and specifically the idea of 'mindcrime': the causing of suffering in computer sentiences, related to their capacities and how they are treated.


Michael Levin has studied the early-stage development of biological entities like embryos and has argued that the transition from inert matter to conscious life is a “continuum”. He has also built “robots” out of biologically engineered cells, called “xenobots”. Neuroscientists like Alysson Muotri and Anil Seth argue that consciousness is fully explainable as a physical phenomenon attached to or somehow facilitated by matter, particularly neuronal cells. Muotri has grown brain organoids, which are lab-grown networks of brain cells.

The heart of your question is one of the most famous philosophy of mind questions ever, which is mind vs. matter. We are arguably at a moment in history where we can explore these perennial, impenetrable questions with experiments. Developments in biology (including synthetic biology), neuroscience (including neuroimaging), mathematics (such as integrated information theory), artificial intelligence (such as large language models), and in my opinion, quantum mechanics (including Penrose's speculation that quantum systems are relevant to consciousness) allow us to come closer to testing hypotheses with manipulable, controlled scenarios that have reproducible outcomes. Nobody knows the answer to your question, but it seems we are going to come closer in our lifetimes.

But David Chalmers raises the deep question of whether even the above has real explanatory force in the face of the hard problem of consciousness. In my opinion, in an era of civilization marked by so many obscurities having been laid to rest, including the mechanics of the physical world, the origins of life, and a rough picture of the nature of intelligence, qualia remain perhaps the most outstanding epistemic-scientific mystery of our time.


Short Answer

This question reduces to:

"Could a language-proficient algorithm ever be conscious"?

Answering this question requires answering the question "what is consciousness, and what causes it to manifest?". There is no consensus among philosophers or scientists on this question, so the answer today has to be "we do not know".

Longer Answer

Philosophy of Mind has been the dominant area of activity among philosophers for the last three-quarters of a century, because the difficulty of understanding and predicting consciousness has been the most notable challenge to the dominant ontology of physicalism. This is called the Hard Problem of Consciousness. Philosophers have pursued multiple approaches to try to resolve this challenge. Currently, there is no consensus that any of these avenues will succeed, and each of them gives a different answer to your question.

For reductive physicalism the proposed answer is that the processor hosting the AI chatbot could become conscious.

For emergent physicalism, some aspect of the structure of the processor hosting the AI chatbot could allow the processor to become conscious.

For algorithmic identity theory, the algorithm of the chatbot could be conscious.

For algorithmic emergence theory, some modes, functions, or structures of the algorithm could allow consciousness to emerge for the chatbot.

For fusion theories (Integrated Information Theory is a fusion of algorithmic and physicalist thinking), a chatbot with a specific structure AND specific algorithms could be conscious.

For delusionist physicalism, humans are not conscious, and while it might be logically possible for an AI to become conscious, there is no reason to expect this to happen.

For neutral monism, there is an automatic coupling of consciousness with all physical objects, and the processor hosting the chatbot is already conscious today.

For consciousness-based idealism, it is an open question what relation the chatbot algorithm, or its processor, has with consciousness. Some idealist speculations limit consciousness to pre-existing agents, in which case the chatbot would never be conscious; others are universally panpsychist, in which case current chatbots (either through their processors or through their algorithms) are already conscious.

For interactive spiritual dualism, the physics of what could become ensouled and therefore become conscious is an open question, and either an algorithm, or a processor, could possibly be structured such that they could become ensouled. In general, our processors and algorithms of today are assumed to not currently host ensoulment very well, or at all.

For interactive emergent dualism, the physical (or algorithmic) structure needed for consciousness to emerge and become interactive is still TBD, but once we discover what it is, then either the algorithm or the processor for the chatbot could become conscious if we implement that structure.

These answers are all over the place, ranging from chatbots already being conscious to there being no consciousness even in humans. However, most of these research programmes postulate that the answer will at some point be "yes".


It's important to note that both "philosophical zombies" ("p-zombies") and conscious AI ("c-AI") entered the philosophical literature as thought experiments intended to demonstrate opposite points. C-AI is the older concept. Associated most closely with Alan Turing (and introduced as the core part of the infamous "Turing Test"), it asks the question "If something can convincingly display all signs of consciousness, what possible legitimate reason is there to deny it the label of conscious?" The experiment assumes empiricism--the idea that all we can know is what is demonstrated by the evidence of our senses--and is intended to establish (at least) weak physicalism, the idea that nothing that is non-material can have any meaningful impact on the world. In other words, AI is a machine, machines are material, AI gives convincing signs of consciousness, therefore AI, to all intents and purposes IS conscious, therefore consciousness is material.

P-zombies can be seen as a direct challenge to c-AI. The idea of the thought experiment is to imagine a person who is indistinguishable from any other human being, but has no inner life or consciousness. This is intended to create a dilemma forcing you to (A) deny this is possible, in which case you have admitted that consciousness does make a significant difference or (B) admit it is possible, in which case you have tacitly accepted that consciousness is something real and non-physical. (If your instinct is to deny that any non-physical consciousness exists in any meaningful sense, you must not exempt yourself from this blanket assessment.)

So you have two venerable thought experiments, with opposite thrusts, both accepting empiricist assumptions, and both revolving around an entity that convincingly appears to be conscious (but that arguably might not be). What has changed recently is that AI capable of passing the Turing test is potentially already here, so the thought experiment is now real or at least increasingly plausible. In any case, both experiments support the contention that people who deny consciousness to modern AI are probably doing so out of unacknowledged non-physicalist commitments.


No, I do not think so. I am no more equipped than the great minds of history to solve the hard problem, but then, in the absence of a solution, I am no less equipped. I wrote this last year. Perhaps it will support my answer.

Defining Consciousness

An Exercise in Courage and Humility

It has been called ‘The Hard Problem’ by some of the most learned minds in the world. The struggle to define consciousness began a thousand years before the word was coined in the 1500s. Consciousness has been categorized, theorized, debated, defined and re-defined, everything but observed in terms of cause and effect.

The recognition of the temporary nature of the space-time continuum (the most significant discovery of humankind) was completely omitted from the debate. The true answer to consciousness just lies there, with the greatest minds in science, philosophy, and religion pointedly ignoring the obvious. Consciousness is awareness of space-time. It is that simple. If an entity is aware of space-time, and in any way affects or attempts change related to that awareness, it is a conscious entity. That is where courage comes in. That makes my dog and a cockroach on the floor both conscious entities. Not necessarily self-aware, but definitely conscious.

One could argue that an alarm clock makes changes related to an awareness of space time, and is in no way conscious. But if we consider that the clock is a device designed and built specifically to make those changes for the benefit of the conscious entity, we begin to understand where humility comes in. People make alarm clocks for people they will never meet. We construct devices according to our knowledge of space time to enhance our consciousness. Our consciousness is therefore directly related to the unconscious material we share space time with. We the conscious are forever tied to the unconscious. It is always consciousness that organizes change to purpose. We are never alone in consciousness, and we almost continually use that state to benefit other conscious entities as well as ourselves. We have a sense of self, but we are in no way completely self.

Self-awareness is not a prerequisite for consciousness as herein defined. The broader definition I propose expands consciousness throughout life based on what a conscious entity does, not what it thinks. To paraphrase: 'I do, therefore I am.' Or at least 'I am' as long as I lay claim to the doing. I had pizza for lunch. I will have stew for supper. Why does an earthworm turn one way underground instead of another? It cannot be mere unconscious reaction to stimulus, or all the earthworms would be headed in the same direction. They are not. They are a writhing, random, wriggling mess. They each, on some level, decide arbitrarily what change they will make in reaction to the same stimulus. They do this despite the high probability that they are not in any way aware that they are individual earthworms.

I realize this simple definition has repercussions. Repercussions that resound through the universe, perhaps as far back as the big bang itself. It forces us to consider and acknowledge conscious entities which are far inferior to ourselves in terms of intellect, and also acknowledge our unbreakable bond with the unconscious. We grew comfortable with the notion that consciousness and intellect are the same thing, or at least closely related. They are not. Now we must face the possibility that there may be consciousness superior to ourselves in terms of intellect, and in terms of interaction with the unconscious. Perhaps superior even without the benefit of intellect. That is the notion avoided like some cosmic plague by the scientific community.

I welcome this logical definition of consciousness with open arms. It is our greatest, and perhaps only Hope.


AI works by taking previously built sentences and information and regurgitating them in forms that conform to the kind of speech it has been created to imitate.

So from a logical standpoint, I suppose we can have an AI that could be considered "a philosophical non-zombie", as it can discuss philosophical concepts and provide accurate and apparently new information.

But the AI itself would remain non-conscious: it does not create any unique information, and it only bases the information it generates on the information it is given; thus it cannot be a conscious being.
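As a toy illustration of the 'recombination of prior text' picture this answer paints, here is a bigram model that can only ever re-emit word pairs already present in its training sentences. The training sentences are invented for the example, and real chatbots are vastly more capable, so treat this as a caricature of the claim, not a model of ChatGPT:

```python
# A toy bigram "regurgitator": every pair of adjacent words it emits
# already occurs somewhere in its training data.
import random
from collections import defaultdict

training = [
    "the mind is a strange loop",
    "the mind is what the brain does",
    "consciousness is a strange problem",
]

# Map each word to the list of words that followed it in training.
bigrams = defaultdict(list)
for sentence in training:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a].append(b)

def babble(start="the", max_len=8):
    out = [start]
    while len(out) < max_len and bigrams[out[-1]]:
        out.append(random.choice(bigrams[out[-1]]))
    return " ".join(out)

print(babble())  # e.g. "the mind is a strange problem": recombined, not new
```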
