9

Consciousness doesn't reveal itself except through behaviour. We can't see other people's minds, but we can hear their voices and what they say, and from that the observer concludes that their interlocutor has a conscious mind. We can't observe consciousness directly; it manifests itself only through observable, external actions.

AI is getting really good at acting like it understands what's going on around it. It learns and changes its behavior. So, do we say it's conscious? Or is there something more, something special, about what we feel on the inside that a machine just can't have?

Do we have to start rewriting the book on what consciousness really means?

7
  • 2
    Probably the question means to ask about LLMs, not all of AI.
    – tkruse
    Commented Nov 5, 2023 at 15:06
  • 1
    AI breaks all three of Aristotle's Laws of Reason. It is called intelligence, while it is not. Technically, it is just an extremely powerful autocompletion system. Other assumptions in this post are also quite subjective.
    – RodolfoAP
    Commented Nov 6, 2023 at 6:32
  • 2
    AI itself likely not; the debate over whether machines are conscious is not bound to AI: there were a lot of debates before AI was near the level it is at right now, and arguments like the Turing test and the Chinese Room (counter-argument) can live "independent" of current AI. Commented Nov 6, 2023 at 8:59
  • 1
    How do you know every other person is conscious? Maybe every other person is also just a 'facade' that acts as if it's conscious but is "empty inside"? I think consciousness cannot be observed except by the person/thing itself.
    – Yalla T.
    Commented Nov 6, 2023 at 16:10
  • @YallaT. A strong case can be made from observing NCCs.
    – J D
    Commented Nov 6, 2023 at 18:07

10 Answers

8

Wikipedia: Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving.

Unfortunately AI has none of these capabilities.

AI is just a fancy term for the capacity of a machine to process huge amounts of data while simultaneously performing statistical and algorithmic processing upon it.

AI has not in fact delivered on its initial goal (the ability to solve an arbitrary problem). So,

Wikipedia: The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display.

But, contrary to that, in a conscious brain the whole brain thinks; it is not the case that specific abilities are performed by specific parts in isolation. Consciousness is an emergent phenomenon.

What we are now trying to understand is how this emergence occurs, and to that question AI - even in its theoretical form - is of no relevance or help.


Modification

By saying that AI has none of the attributes of intelligence I do not, of course, imply that it cannot be made to appear as if it has them. It is that these attributes are hard-coded into the machine in such a way that the machine can only "respond" to specific problems.

Without going into detail about why these attributes cannot be inherently "embedded" in an AI system, I will address the capability of abstraction, which is a fundamental property underlying all of these attributes. The analogue of an AI system that exercises abstract thinking would be a computer that can create parts of its own software: the software would refactor and change itself according to the data it receives, in such a way that the aggregated data would become part of the software itself.
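
To make that picture concrete, here is a minimal, purely illustrative Python sketch (my own toy example, not something any existing AI system does) of a program that turns the data it has observed into new source code and installs that code into itself:

    # Toy illustration only: a program that turns observed data into new code
    # and makes that code part of itself. This just shows what "data becoming
    # software" could literally mean.

    class SelfExtendingProgram:
        def __init__(self):
            self.observations = []   # raw data seen so far

        def observe(self, x, y):
            self.observations.append((x, y))

        def rewrite_self(self):
            # Synthesise source code for a lookup function from the data,
            # then compile it and attach it as a new method of this class.
            table = dict(self.observations)
            source = "def learned_response(self, x):\n"
            source += f"    table = {table!r}\n"
            source += "    return table.get(x, 'unknown')\n"
            namespace = {}
            exec(source, namespace)                      # data has become code
            setattr(SelfExtendingProgram, "learned_response",
                    namespace["learned_response"])

    p = SelfExtendingProgram()
    p.observe("hello", "greeting")
    p.rewrite_self()
    print(p.learned_response("hello"))   # -> greeting

Even in this toy, the rule for generating the new code is itself fixed in advance, which is exactly the point made below about machines only acting in the way they were constructed to act.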

Now, besides this, there exists a concern whether a specific AI system can develop consciousness.

It is obvious that in attributing consciousness to a machine we are interested in its behavioural aspect; saying that something is conscious in the sense that it has an awareness of existence, but without the ability to act, has only theoretical/theological ramifications.

A machine can only act in the way it was constructed to act. Irrespective of its complexity, there still exists a specific way in which it operates; even if it has "decision-making" attributes, those decisions are enforced in predefined ways, and so on. The "purpose", so to speak, of the machine is hard-coded in the way it is constructed; there is no escape from this, only bugs. Be sure that your car won't attack you; if it does, it's a bug, or a hijacking.

10
  • 2
    For me it is not quite clear why two thirds of your answer deal with intelligence and only a short part states a thesis about consciousness, without any argument. How does the capability of intelligence relate to the property of consciousness? Even when denying AI intelligence, why not study AI systems as smart systems which could possibly develop consciousness?
    – Jo Wehler
    Commented Nov 6, 2023 at 10:58
  • 2
    You just assert that AI has none of the "intelligence" capabilities. Seriously, have you followed the developments lately at all? Sure there can be debate in the details, but saying for example GPT-4 can't do any learning is frankly hilarious. Commented Nov 6, 2023 at 11:36
  • 3
    @leftaroundabout GPT-4 literally cannot do any learning: it's a generative pre-trained transformer. We can argue over other things, but this one's pretty cut-and-dry. (Note that OpenAI's GPT models are optimised to be convincing simulacra: you have to seriously think, not just intuit, when trying to determine what properties they have.)
    – wizzwizz4
    Commented Nov 6, 2023 at 12:30
  • 1
    The idea that AI algorithms don't do logic is just wrong, and machine learning is clearly a form of learning. And how does GPS not solve problems?
    – J D
    Commented Nov 6, 2023 at 14:04
  • 1
    ChatGPT, whether conscious or not, takes natural language questions and produces semi-reliable answers. How is that not one of the initial goals of AI?
    – J D
    Commented Nov 6, 2023 at 14:09
6

Today it is an open question whether machines can develop consciousness.

Neuroscience is searching for the neuronal correlates of consciousness (NCC), i.e. for neuronal structures and activated circuits in the human brain which are necessary and sufficient for creating conscious mental processes. Hence neuroscience searches, from the third-person stance, for explanations of conscious experience, which is given from the first-person stance. A good overview is Christof Koch: Consciousness. The book also contains different working definitions of consciousness.

Currently we lack an accepted theory which makes the OP's question operational for neuroscience or informatics. A mathematical model that approaches this question is Integrated Information Theory (IIT), due to Tononi and collaborators.

IMO an important trigger for a smart system to develop consciousness is the task of orienting itself and acting in a given environment in a way advantageous to the system. From the viewpoint of the human brain there are many advantages to processing new and important stimuli in the conscious mode and others in an unconscious mode: the former mode needs much more resources and processing power, but the latter can handle new processes only on a trial-and-error basis.

It looks plausible to generalize this principle to AI systems and to analyse the decisions made by such systems from the third-person viewpoint.

3

It depends on who you ask. This is a live question, and different thinkers have different intuitions about it.

We can start with a few relatively uncontroversial observations:

  1. Consciousness is still a "black box." We don't understand much about how it works. We know it's tied deeply to the physical structure of the brain, and we understand some of the relationships, but not at a very deep level.
  2. Modern AI is also somewhat of a "black box." We know a lot about how it works, but it's not like old-school programming, which follows a set of discrete, knowable instructions. We created it, but we don't completely understand it.
  3. We know that the two black boxes are constructed differently (the way AI is trained is modeled on how people learn, but is significantly different from it), and have differences in how they work (human utterances are not purely predictive, the way many LLM AIs are).

That's the stuff 90% of us agree on. Then we start to get into the controversy: Most, but not all, people think that there is an internal state that corresponds with what we call "consciousness," and that it is only directly accessible to the person who owns it. However, there are people who believe consciousness doesn't really exist, and others who think it's only an "epiphenomenon" (a side effect). At the opposite pole of thought, there are people who believe that we can directly connect with others' consciousness, psychically perhaps, or spiritually. There are also people, generally religious, who identify "consciousness" with the soul.

Based on such differences, people can come to widely different answers to your question. Your question seems to take the strict empiricist view championed by computing pioneer Alan Turing. His famous "Turing Test" was originally a thought experiment. The idea of it is that consciousness exactly equates to the empirical footprint of consciousness, and therefore that there is no valid reason to deny the label of "conscious" to any purely mechanical device that can convincingly simulate consciousness. Turing would likely feel vindicated by modern AI, and would likely

  • deem it conscious and
  • view that as evidence that human consciousness is likely also purely mechanical at its roots.

But other people would vehemently disagree with such conclusions, and many of them would deny that current AI necessitates any change in our beliefs about consciousness.

1
  • Current LLMs have obvious tells that prevent them from passing the imitation game if the interrogator is competent, and Turing would certainly recognize that. In section 6(4) of Computing Machinery and Intelligence he discussed sophisticated interrogation "to discover whether some one really understands something or has ‘learnt it parrot fashion’." If you allow for an unskilled interrogator then Eliza/Doctor passed the test 60 years ago.
    – benrg
    Commented Nov 8, 2023 at 0:08
2

Welcome, Stas,

You ask:

How AI is Changing Our View of Consciousness?

The most important way that AI affects the study of consciousness is that it helps to make philosophy of mind experimental in a new way. AI allows us to continue to simulate consciousness in more and more sophisticated ways. Such an approach in philosophy is called experimental philosophy (SEP).

Two hundred years ago, long before brain imaging techniques and electromechanical computers, the philosophy of mind had to be conducted in an almost purely speculative manner. In philosophy, this is called speculative metaphysics. But with Wilhelm Wundt's first psychology lab, philosophy of mind began a track of naturalized epistemology by using the findings of science to reach philosophical conclusions. Today, of course, philosophers in the Continental and analytical traditions rely heavily on the findings of science.

One way that science helps us to understand consciousness is the use of brain-imaging techniques like functional MRI. We are now able, with some level of precision, to see where brain activity is and is not during this sort of research, and some researchers have theories about what consciousness is, such as Tononi and Edelman's neural re-entry. Cognitive neuroscience has greatly informed our understanding of consciousness, for instance by showing, in a rough way, where language processing takes place.

But from the other direction, instead of looking inside minds, we can attempt to build machines that do what minds do. For instance, in natural language processing and knowledge representation, computer scientists continue to build more and more sophisticated software that approximates human consciousness. Such models give us insight into our own conscious experience and use of language, so much so that new philosophical disciplines emerge, as in the case of natural language ontology (SEP). From SEP:

Natural language ontology is a sub-discipline of both philosophy and linguistics, more specifically, of metaphysics and natural language semantics. It was recognized as a separate field of study relatively recently, through the development of natural language semantics over the last decades. At the same time, natural language ontology can be considered a practice that philosophers have engaged in throughout the history of philosophy when drawing on language in support of a metaphysical argument or notion.

Thus, how we characterize intelligence and what we consider consciousness to be continue to change in light of new methods that bear on it, either directly or indirectly.

1

Current AI often "appears" smart, but Large Language Models (LLMs) are not very smart. I don't claim that products like ChatGPT are not impressive, but at best they are "mimicking" the output of what an intelligent (conscious) device produces. They do not mimic the thought process.

What we see now is basically an instance of the Chinese room argument [wiki], a counter-argument to the Turing test. This argument is not very new; in fact, older, far less capable chat bots like ELIZA could already trick humans into believing they were actual humans. ELIZA did that with a small set of "rewrite rules" for English. So imagine someone says:

I am quite happy.

It can build a syntax tree [wiki] out of this sentence, swap "I" for "you", and rearrange the verb and the noun phrase. So, without understanding any of it, it can generate a response:

Why are you quite happy?

This of course fails from the moment you make a sentence that is grammatically sound, but semantically makes no sense. In that case the rewrite rules will generate questions that are also grammatically sound, but semantically make no sense either, whereas a normal person would understand that this is invalid.
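
To make the idea concrete, here is a minimal Python sketch of an ELIZA-style rewrite rule of the kind described above; the rule set is invented for illustration and is far cruder than ELIZA's actual scripts:

    # A minimal sketch (not ELIZA's actual rule set): swap pronouns and turn a
    # statement into a question without any understanding of its meaning.
    import re

    PRONOUN_SWAPS = {"i": "you", "am": "are", "my": "your", "you": "i"}

    def rewrite(sentence: str) -> str:
        words = re.findall(r"[\w']+", sentence.lower())
        swapped = [PRONOUN_SWAPS.get(w, w) for w in words]
        if swapped[:2] == ["you", "are"]:
            # "I am X." -> "Why are you X?" -- pure symbol shuffling
            return "Why are you " + " ".join(swapped[2:]) + "?"
        return "Please tell me more."

    print(rewrite("I am quite happy."))
    # -> Why are you quite happy?
    print(rewrite("I am a green idea sleeping furiously."))
    # -> grammatical but senseless question, exactly the failure described above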

Now LLMs are a more advanced generation of this. They are also not created by humans writing rewrite rules; they are the result of applying statistics to huge amounts of data. This gives the system some "understanding" of words: word2vec, for example, maps each word onto a large vector, and the distance between two vectors gives an indication of how related the two words are. A system can thus, to some extent, detect that something is nonsensical, simply because it can "calculate" how likely a sentence is to occur naturally, based on statistical analysis of huge amounts of data. This is also what humans sometimes do to check whether a phrase is idiomatic: you search Google for two or more variants and compare the number of search results. The statistical models have turned that trick into a more statistically sound model.
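
As a toy illustration of the "how likely is this sentence" idea (my own sketch, not how an actual LLM or word2vec is implemented), one can score a sentence by how familiar its word pairs are in a small corpus:

    # Count word pairs (bigrams) in a tiny corpus and score a new sentence by
    # how familiar its word pairs are. Real LLMs are vastly more sophisticated,
    # but the principle of scoring text by learned statistics is the same.
    from collections import Counter

    corpus = "i am happy . you are happy . i am tired . you are kind .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))

    def plausibility(sentence: str) -> int:
        words = sentence.lower().split()
        return sum(bigrams[pair] for pair in zip(words, words[1:]))

    print(plausibility("i am happy"))   # > 0: both word pairs were seen before
    print(plausibility("happy am i"))   # 0: same words, but an unseen order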

But these models thus don't in any way really "understand" what is asked or what the answer means. They are machines that just aim to produce the same result a human would, but with a different set of tools: statistical analysis to predict the sequence of words that makes up a response.

I think what AI might have taught humans, in the philosophical sense, is that it does not take that much to mimic intelligent behavior. But to some extent we already knew that from the world of animals, where many animals have no brain, or at most a small one, but can still show behavior that often looks planned and organized. For example, most insects have a very small brain (on the order of 200,000 neurons).

This is essentially the difference between weak AI and strong AI. Weak AI aims to mimic intelligent behavior, not necessarily with an "intelligent" toolkit: statistical analysis alone might be sufficient. Strong AI, on the other hand, tries to come up with a toolkit that does not only mimic intelligence but actually has it. Likely such strong AI devices, if they are eventually produced, will also give rise to what one would call consciousness, as specified by the strong AI hypothesis [wiki], but the current generation of AI is (almost) completely "weak AI": an algorithm uses a statistical model to produce content that may look as if it was produced by an intelligent device.

Probably the main problem is to design a "waterproof" test for what consciousness exactly is. A classic test is recognizing oneself in a mirror. But that also has caveats: algorithms exist to detect yourself in a mirror. You change the state of the machine in such a way that the mirror will reflect that change, and then you check whether the image in the mirror changed accordingly. If you repeat this an arbitrary number of times with the same outcome, the probability that the mirror is showing yourself grows. Multi-agent systems [wiki], for example, deal with a system of (individual) agents that, with certain algorithms, can organize themselves into a larger system; detecting oneself and building "relations" with the other agents happens there, but often with (simple) deterministic algorithms that don't require intelligence per se, just as ants don't need a lot of high-level intelligence to organize a colony: they have a number of hard-wired "algorithms" that let the colony run smoothly.
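
A rough sketch of that self-detection procedure, with invented names and a simulated mirror, might look like this:

    # Toy sketch of the mirror self-detection idea described above: an agent
    # repeatedly changes its own visible state, checks whether the "mirror"
    # reflects the change, and grows more confident that it is seeing itself.
    import random

    def run_trial(is_really_me: bool) -> bool:
        my_state = random.choice(["arm_up", "arm_down"])
        mirror_state = my_state if is_really_me else random.choice(["arm_up", "arm_down"])
        return mirror_state == my_state

    def confidence_it_is_me(trials: int, is_really_me: bool) -> float:
        matches = sum(run_trial(is_really_me) for _ in range(trials))
        # Matching every trial by pure chance has probability ~0.5**trials,
        # so confidence grows quickly with the number of consistent trials.
        return 1 - 0.5 ** matches if matches == trials else 0.0

    print(confidence_it_is_me(20, is_really_me=True))   # close to 1.0
    print(confidence_it_is_me(20, is_really_me=False))  # almost always 0.0

Note that this is a simple deterministic procedure: it "passes" the mirror test without anything we would want to call consciousness, which is the caveat raised above.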

1

I think that the AIXI model by Marcus Hutter explains general intelligence very well, and that psychology and consciousness are just approximations (heuristics) of this general intelligence, which is not attainable exactly because it is incomputable. The AIXI model is defined for computable environments (science believes that physics is computable in the sense of Turing), but ordinal computation could be a path to generalizing AIXI towards uncomputable environments as well.
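
For context, AIXI's action choice can be written (roughly, following Hutter's formulation) as an expectimax over all programs q for a universal Turing machine U, weighted by a Solomonoff-style prior over program length:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \bigl(r_k + \cdots + r_m\bigr)
           \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here a, o, r are actions, observations and rewards, m is the agent's horizon, and \ell(q) is the length of program q; because the inner sum ranges over all programs, the expression cannot be evaluated exactly, which is the incomputability referred to above.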

So, IIT and other models of consciousness should be special cases of AIXI computations, although current consciousness models have not yet reached this perspective.

There is the Association for the Mathematical Models of Consciousness (https://amcs-community.org/), which organizes "Models of Consciousness" conferences; four have already been held.

So, no mysteries here. AGI will be able to have intelligence and functional capabilities beyond consciousness; more emergence should be possible beyond consciousness.

This is a topical question because consciousness studies can contribute to AI safety, as the Association's letters have stated.

1

Do we have to start rewriting the book on what consciousness really means?

Consciousness may mean two very different things: First, an organism is said to be conscious of its environment if it has a perception allowing it to navigate this environment successfully. There is no difficulty in principle for machines to be conscious in this sense.

Second, consciousness may be the subjective experience of the contents of our own mind in terms of qualia. For example, we may be conscious of pain, redness, fear, hunger etc. By definition, we know our consciousness and no technological evolution is ever going to change this.

The fact that some people mistake one for the other is not new. However, the only empirical evidence we have is that other people possess consciousness in the first sense. The only organism which we know has consciousness in the second sense is ourselves. Nobody is ever going to know if a machine has consciousness in this second sense.

The current enthusiasm for AI machines is grounded not in the illusion that they have consciousness in the second sense, but in the fact that humans will be lured by their promise of material benefits.

Even in the extremely unlikely event that AI machines could have consciousness in the 2nd sense, only they would know it.

Whether we would then believe that they do is irrelevant.

2
  • 1
    While the interest in AI is certainly economic, in the sense of automating things that were previously too complex to automate, it's nonetheless interesting from the perspective of consciousness. If our second-sense consciousness is physical, then there is a theoretical chance to replicate it, and hence a chance for an AI to be conscious in the 2nd sense. That would pose ethical challenges to human supremacy, and/or it could be insightful in terms of understanding second-sense consciousness in simpler toy examples where we can follow every step of the process, unlike for ourselves.
    – haxor789
    Commented Nov 6, 2023 at 12:35
  • @haxor789 "the chance for an AI to be conscious in the 2nd sense." See my edit. Commented Nov 6, 2023 at 16:46
1

A lot of debate and arguments around consciousness have been about how to define it and how to measure/detect it.

You wrote, "Consciousness doesn't reveal itself except through behaviour."

So, let's say you want to find out whether someone over the internet is conscious. You come up with a written test to tell you whether something exhibits consciousness; maybe it's 10,000 true/false questions.

What if I fill out the test by flipping a coin for each question? And I'm lucky enough to pass the test.

Would you say the coin is conscious?

If you think the metal coin is conscious, why couldn't a more complicated device be conscious for behaving in the same way?

If you think the coin isn't conscious, how would you come up with a perfectly accurate behavioral test to detect consciousness? Bringing us back to the question of how to define it and measure/detect it.

--

Or from the other direction, start with a conscious human, and try to think how much could be removed while remaining conscious. You probably don't need arms and legs to be conscious. You are probably conscious while you are alone in your bedroom, although no one else would be able to vouch for it, since they can't see your behavior. Do you need a human brain? or would a machine simulating a human brain be good enough? Is an injured or defective brain still conscious? How accurate would the simulation have to be to meet your threshold for consciousness?

--

That said, I don't think it's necessary to re-write the book on consciousness, but I wouldn't be surprised if the wording changes to start talking more about Human Consciousness, just as we already sometimes talk about human and animal consciousness.

0

First of all, your question contains a lot of assertions that are certainly not everybody's opinion (about how to detect consciousness etc.).

But to your main question: as of now, all our AIs are nothing qualitatively new. They use mostly statistical and machine learning technologies and - this is the new bit - gargantuan amounts of training data.

And granted, the AI algorithms themselves are pretty spectacular as well, and the culmination of many years of research as well as advances driven by more commercial avenues. They are a highlight of how far the scientists/developers have come since the fledgling days of "AI" in the 50s/60s/70s. They are awesome tools right now, but the awesomeness is in the people creating them, not in themselves.

As to the question of whether the current crop (ChatGPT, Midjourney, etc.) have any consciousness or are anywhere close to it, the answer is clearly no.

So, do we say it's conscious?

Absolutely not. We can easily "see" what it is doing by going into the source code. The lights are not on for sure.

Or is there something more, something special, about what we feel on the inside that a machine just can't have?

That is a very complex question, and as far as I can tell nobody has an answer to it. Many very intelligent philosophers are thinking and writing about this, but I have not heard of any kind of consensus.

But even if we do not know what consciousness is, we can for sure tell that a specific instance does not qualify (e.g., I can pick up a stone and tell with confidence that it is not conscious, even though I do not know what consciousness is). In my opinion (founded on some insight into how they work), the current machine-learning based AIs definitely do not qualify.

Do we have to start rewriting the book on what consciousness really means?

We can start to rewrite that book when we have it. Nobody knows what consciousness means. There are plenty of opinions and so on and forth, but nothing even close to a "book".

-1

AI is like a mirror: it mirrors the intelligence of its creator. AI is only as intelligent as its creator's ability to process data smartly. AI processes data as it has been programmed to do. It is not that the creator could not do the processing himself; it is just that it would take him a large amount of time to follow the steps of processing the data. AI is a smartly programmed machine.

Now the question arises: can we smartly program a device in such a way that it mimics or becomes conscious? Consciousness is conditioned by pain, pleasure, love, hatred, and delusions. These terms are alien to machines. Machines do not love or hate or feel pain or pleasure. That level of conditioning is not possible in machines. Therefore, even if machines became conscious, they would never understand what love or hatred or pain or pleasure is.

7
  • Well, what are pain, pleasure, love, hatred and delusion exactly, and why can humans/animals have these while machines can't? What is fundamentally different between humans and machines in that respect? Commented Nov 7, 2023 at 10:43
  • Pain, pleasure, etc. are associated with feelings. Animals, including humans, have feelings. Therefore it is important not to show cruelty towards them. If machines could have feelings then they would cease to be machines. Feelings cannot be programmed into machines. Feelings can be irrational or spontaneous, like anger. Commented Nov 7, 2023 at 11:07
  • But what is the device that "runs" feelings? Given that these are, for example, biochemical reactions, one could build a biochemical computer; and if it is biochemical, why don't single-cell organisms have such feelings? Commented Nov 7, 2023 at 11:09
  • @willeM_VanOnsem There are different kinds of feelings and different intensities of feeling, from subtle pain to extreme pain. How do you know single-cell organisms don't have feelings? The device is an organic body which runs on organic food and not on electricity. I had asked a question about whether it is possible to create a machine which runs on vegetables, but the question was closed and probably deleted; no discussion was allowed. Commented Nov 7, 2023 at 13:28
  • Yes, these are wetware computers: en.wikipedia.org/wiki/Wetware_computer. But regardless, the mechanisms of a transistor and a neuron are not that different. In fact, the working principle of a neuron can be mimicked; that is the working principle of an artificial neural network: en.wikipedia.org/wiki/Artificial_neural_network Commented Nov 7, 2023 at 13:32
