
The question of how consciousness arises and what effect, if any, it has on our behaviour is clearly both fascinating intellectually and of great practical and ethical significance. One very common view, perhaps the prevailing view, on how consciousness arises seems to be that consciousness is evoked whenever a physical system carries out a suitable computation. I don't know if this position has a name, so I will call it the computational view of consciousness for the rest of my question. I am in no way a philosopher, nor am I familiar with the great deal of thought that people have no doubt put into these issues, but the computational view of consciousness seems to me to be quite a strange position to hold. I would therefore be very interested to learn what others have had to say about the lines of argument I will describe here against this view.

My first reason for being skeptical of this view is that it seems to me to be at odds with the idea that mental events can have a causal influence on the physical world. That our behaviour is influenced by mental events seems to me hard to deny. Surely when I talk about being conscious, the fact that I am conscious is at least part of the cause of my talking about it. How could we have any conception of "consciousness" but for our direct experience of it? However, this seems to sit poorly with the claim that consciousness is only evoked by computations being carried out. After all, the operation of a computer can be explained in terms of logic gates, which can be understood in terms of simple mechanical laws, apparently leaving no place for any mental states evoked by the computer to influence the computer in turn¹.

I have tried to formulate counter-arguments to the above but haven't yet succeeded in finding anything I find particularly convincing. The claim that mental events can have a causal influence on the physical world seems particularly hard to dispute. It just doesn't seem plausible to me that our brains could "know" that they had brought about mental events without being influenced by them somehow. However, the idea that the computational view of consciousness is incompatible with mental events influencing the physical world seems more open to attack. Perhaps we could argue that the brain carries out computations differently from a digital computer, such that its operation can't be explained in terms of pieces governed by mechanical laws, like logic gates, leaving room for the mental events evoked by the brain's computational activity to play an important part in the brain's operation in turn. However, by introducing a fundamental difference between the causal role played by mental events in the brain and the role they play in a digital computer, we seem to have at least violated the spirit of the computational view of consciousness, and it would seem quite strange that brains were influenced by the mental events they bring about but digital computers weren't.

The other concern I have with the computational view of consciousness is that it's not clear to me that we can objectively say which physical systems are carrying out computations, or which computations they are carrying out. When we say that a computer is performing a particular computation, is that not merely a matter of perspective? Imagine a physical system consisting of a clock and some number of beans. The only thing that changes as the system evolves in time is the number on the clock, which counts the number of seconds that have passed since some initial time. Then, given a Turing machine M, could we not simply identify the state of the physical system with n beans and t seconds displayed on the clock with the state that M would be in after t steps if it were given the nth allowed input, and then claim that the system was a physical instantiation of M because its dynamical evolution matched the evolution of the Turing machine's computation? It would therefore seem that, depending on our perspective, this very simple system could be seen as an implementation of whatever Turing machine we liked. Is it possible to come up with a sensible definition of what it means for a physical system to carry out a computation such that silly arguments like this don't work?
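
To make the worry explicit, here is a minimal Python sketch of the clock-and-beans mapping (the toy Turing machine, the encoding of the nth input, and the helper names are just illustrative choices of mine, not anything forced by the argument). The point it illustrates is that the interpretation dictionary, built simply by running M in advance, does all the computational work; the physical system only supplies a counter.

```python
def run_turing_machine(transitions, initial_tape, steps):
    """Simulate a one-tape Turing machine for a fixed number of steps and
    return its configuration (state, head position, tape) after each step."""
    tape = dict(enumerate(initial_tape))          # sparse tape; blank symbol is '_'
    state, head = 'q0', 0
    trace = [(state, head, dict(tape))]
    for _ in range(steps):
        symbol = tape.get(head, '_')
        if (state, symbol) not in transitions:    # the machine has halted
            break
        new_state, new_symbol, move = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == 'R' else -1
        state = new_state
        trace.append((state, head, dict(tape)))
    return trace

# A toy machine M that just flips bits and moves right (purely illustrative).
M = {('q0', '0'): ('q0', '1', 'R'),
     ('q0', '1'): ('q0', '0', 'R')}

# The "physical system": n beans plus a clock; only the clock reading t changes.
n, steps = 3, 5
nth_input = format(n, 'b')                        # an arbitrary encoding of the nth input

# The claim that the system "implements" M is carried entirely by this mapping,
# which pairs the physical state (n beans, clock = t) with M's configuration at step t.
trace = run_turing_machine(M, nth_input, steps)
interpretation = {(n, t): config for t, config in enumerate(trace)}

for t in range(len(trace)):
    print(f"beans={n}, clock={t}  ->  M's configuration: {interpretation[(n, t)]}")
```

Since the same lookup table could be built for any Turing machine and any input, it is the mapping, not the physical dynamics, that is doing the work; that is exactly the worry.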

My Questions

I have only begun to think about these issues so I would be keen to hear what people who have thought about these questions much more thoroughly have had to say. How do people who adhere to the computational view of consciousness respond to these kinds of arguments? Are they generally thought of as serious challenges?

Also, it seems that nothing in our current understanding of the laws of physics would predict the occurrence of mental events, let alone their having a causal influence on the physical world. Therefore, if mental events can indeed exert a causal influence on the physical world, it would seem that our current physical theories fail to predict not just mental phenomena but also physical phenomena that depend on the influence of mental events. Is there any consensus on whether this should be seen as an inadequacy of our current theories of physics, and whether investigating the laws that govern such phenomena might be a promising route to better understanding consciousness?

¹ I suppose you might claim that those mechanical laws subtly depend on the influence of mental events. However, given that you could, for example, build a NAND gate out of dominoes and account for its operation in terms of gravity and Newton's laws, you would have to draw the somewhat far-fetched conclusion that basic mechanical laws such as gravity or Newton's laws depend on the influence of mental events.

  • This is not what the computational theory of mind claims; it claims that the mind functions on the model of a computer, physical or otherwise. Mental causation is a separate issue, but I do not follow why CTM is incompatible with it. Mental states and computations can be just abstracted redescriptions of what physically happens in the brain, for example; then both of them can "cause" physical changes in the sense that what they are descriptions of causes them. The second objection is known as the triviality argument, see above link.
    – Conifold
    Commented Nov 23, 2023 at 4:24
  • @MBar2269 See also philosophy.stackexchange.com/questions/104500/… The answer provides some references to the literature.
    – Jo Wehler
    Commented Nov 23, 2023 at 6:24
  • Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking.
    – Meanach
    Commented Nov 23, 2023 at 7:13
  • See en.m.wikipedia.org/wiki/Argument_from_incredulity
    – tkruse
    Commented Nov 23, 2023 at 7:23
  • @Conifold Thank you. My use of the term "computational view of consciousness" now seems misleading. I was not arguing against the view that the mind is ultimately a computational system, in that you could, in principle, simulate it on a computer. I was instead arguing against the idea that carrying out computation evokes consciousness. In other words, I was arguing against the position that a complete computational simulation of the brain would bring about consciousness. Is there a term for that position?
    – MBar2269
    Commented Nov 23, 2023 at 15:20

5 Answers


My first reason for being skeptical of this view is that it seems to me to be at odds with the idea that mental events can have a causal influence on the physical world.

After all, the operation of a computer can be explained in terms of logic gates, which can be understood in terms of simple mechanical laws, apparently leaving no place for any mental states evoked by the computer to influence the computer in turn.

People who subscribe to ideas like the "computational view" tend to hold that:

  1. Yes, mental events do have a causal influence on the world.
  2. The mental events are also part of the world and can be understood in terms of simple mechanical laws.

I am hungry, therefore I eat a sandwich. There's a mental event (hunger) and a causal influence (sandwich gets eaten). But the mental event is also a set of neural events. To say it is the neural events - which obey simple mechanical laws - that cause the sandwich-eating, or to say it is the mental events which cause the sandwich-eating, is to say the same thing from two different perspectives.

As a metaphor, instead of "mental event" and "neural event," let's talk about "wave in the ocean" and "water molecule." The wave in the ocean has a causal effect on the world; it lifts boats, erodes the shore, and so on. But the wave is nothing more than an interaction between water molecules, which obey simple, mechanical laws. It is the water molecules that lift the boats; it is the water molecules that erode the shore. Do we say, based on this, that there is really no "room for" there to be a wave, and that there are only water molecules? No, we say that there is both the wave and the molecules, and that they are two perspectives on the same thing.

  • It seems that a distinction here is that when we talk about a wave, what we're talking about is molecules collectively moving in an appropriate way. It doesn't make sense to imagine the water molecules moving in a wave-like way but there not being a wave. Therefore, all influences of the wave are ultimately due to influences of the molecules. In contrast, we can imagine a computer performing any computation without being conscious. Hence, we can talk about additional causal influences that would have been there if the physical system had been the same but there had also been a mental event.
    – MBar2269
    Commented Nov 23, 2023 at 17:34
  • @MBar2269 Can you really imagine a computer performing any computation without being conscious? What does it mean to imagine something not being conscious? What picture is in your head when imagining it? Any time you imagine anything, you do so via qualia in your head. Which qualia can picture a lack of qualia?
    – causative
    Commented Nov 23, 2023 at 17:38
  • By the computer not being conscious, I mean that there's nothing that it's like to be the computer. In the case of the wave, it's contradictory to talk about the water molecules collectively moving in a wave-like way and there not being a wave, but it seems like there is no such contradiction in talking about a computer carrying out a computation yet not experiencing anything. It is the additional influence of mental events, in comparison to what would have happened if there had been no experience, that seems to me to be needed to explain why we talk about being conscious.
    – MBar2269
    Commented Nov 23, 2023 at 17:59
  • @MBar2269 So you can't imagine the computer not being conscious, you can only talk about it not being conscious. I can talk about molecules moving in a wave-like way without there being a wave; I just did. On what basis do you claim there is no contradiction between a computer carrying out a computation, and the computer not experiencing anything? The real story is, you just can't think of a contradiction. But before we had molecular theory, people didn't think ocean waves depended on molecules, either; they couldn't think of a contradiction there either.
    – causative
    Commented Nov 23, 2023 at 18:22
  • I think we're using the term imagine in two different ways. I'm using it to mean we can talk about something without contradiction. When we talk about something having an experience, we do not mean that it has carried out an appropriate computation. No description of a computation could ever tell you what it's like to experience the colour red. It might be the case that, in our universe, the carrying out of a certain computation induces the experience of red but surely there could be another universe in which there would be no experience induced by the carrying out of the same computation.
    – MBar2269
    Commented Nov 24, 2023 at 14:10

Philosophical writers on theories of the mind fall into two categories: reductionists and non-reductionists. Neither camp has yet presented definitive proof or a definitive way to disprove the other.

However, currently (over roughly the last 100 years) reductionism has been gaining popularity, while non-reductionism has been falling out of favor.

This is likely due to our ever-increasing understanding of both the brain and the potential of computers, as well as a better appreciation that there is still much more to learn about both through reductionist science alone. Beliefs in anything magical, in demons, angels, fate, prophecy, divine intervention, religious scripture, and so on, are also waning (in the Western world at least).

Similarly, advances in cosmology and the certainty of biological evolution make the religious idea of a soul gifted exclusively to humans difficult to sustain. So any non-reductionist theory has to answer awkwardly difficult questions about when mental events arise, both in the growth of the individual and in evolutionary history.

Currently philosophy has no better answer.


A point of weakness in your position is the assumption that your awareness influences whatever processes happen in your brain. To take a random example: while driving, you might see a traffic jam ahead and consciously decide to take another route, or you might realise that you have left your wallet at home and consciously decide to turn back for it. In those two cases, we assume that our mental awareness has taken a decision which has then had a physical effect. However, you have to concede the possibility that the decisions were taken subconsciously, and that your awareness was simply correlated with them in some way.

Suppose you had an electronic tag in your wallet and a self-driving Tesla programmed to periodically check for the presence of the tag and return to your house if the tag is not present (far-fetched, I know). The Tesla sets off towards your office, performs a scan, finds the wallet missing and takes you back to the house, all automatically. The Tesla could also be programmed with an electronic voice which tells you what is happening. "Uh oh," the voice might say, "looks like we've forgotten the wallet. I suppose we had better go back for it." You would therefore gain the impression that the Tesla had remembered something and made a decision.

You might object that you are not as pre-determined as that - for example, you might decide not to turn back for your wallet because you have an important meeting with the boss, or just because you can't face the tedium of repeating part of the trip. Well, in principle the Tesla could be programmed to behave in the same way. With access to your diary, the Tesla voice might say "we'll just have to forget the wallet, as we need to see Stalla at nine so we can't be late." Or the Tesla could be programmed to refuse to repeat journeys unless some suitable interval had passed, so the voice might say "I can't be bothered to go back for the wallet." Whatever pattern of decision-making you think you are free to make could in principle be programmed into the Tesla, with a suitable comment from the Tesla voice giving the impression of a conscious decision. Given that, I am not convinced by your assumption that non-physical awareness in your brain is independently taking decisions - it could be just reflecting programmed decisions in the way that the Tesla voice did in my thought experiment.
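
To make the thought experiment concrete, here is a minimal Python sketch of such a set-up: a purely rule-based decision procedure plus a narration layer that merely describes the decision after the fact. The rules, names, and phrases are illustrative assumptions of mine, chosen to mirror the examples above, not anything essential to the argument.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    wallet_tag_detected: bool      # did the periodic scan find the wallet's tag?
    urgent_meeting: bool           # e.g. a nine o'clock meeting in the diary
    recently_repeated_trip: bool   # has this journey been repeated too recently?

def decide(s: Situation) -> str:
    """Deterministic decision rules; no awareness is involved anywhere."""
    if s.wallet_tag_detected:
        return "continue"
    if s.urgent_meeting:
        return "continue"          # can't afford to be late for the meeting
    if s.recently_repeated_trip:
        return "continue"          # refuse to repeat the journey
    return "return_home"

def narrate(s: Situation, decision: str) -> str:
    """A voice layer that only describes the programmed decision,
    giving the impression of a conscious choice."""
    if decision == "return_home":
        return "Uh oh, looks like we've forgotten the wallet. We'd better go back for it."
    if not s.wallet_tag_detected and s.urgent_meeting:
        return "We'll just have to forget the wallet; we can't be late for the meeting."
    if not s.wallet_tag_detected and s.recently_repeated_trip:
        return "I can't be bothered to go back for the wallet."
    return "All good, carrying on."

s = Situation(wallet_tag_detected=False, urgent_meeting=True, recently_repeated_trip=False)
print(decide(s), "->", narrate(s, decide(s)))   # continue -> "We'll just have to forget the wallet..."
```

Notice that the narration never feeds back into decide; the commentary is purely after the fact, which is the sense in which awareness here merely reflects the programmed decision.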

That said, the 'computation causes consciousness' idea has some inherent implausibility if you assume it is computation alone that is the cause. I get the impression that people imagine vastly powerful computers when that idea is raised. However, computations can be performed by steam-powered machines with wooden levers and cogs. I could imagine such a machine of enormous complexity, chugging away and performing whatever calculations the fast computer performs - is it conscious? How about if we split the calculation over n steam-powered machines - are they individually a bit conscious, or is the ensemble conscious? How about if the machines were separated by great distances, and instructions passed between them by post - does that make a difference? How about if we replaced the wooden machines by a vast army of people each performing a tiny step in the calculation - does the army collectively become conscious? How about if we halt the calculation for six months between every step - does that influence the emergence of consciousness? I am sure that with the luxury of another fifteen minutes of cogitation I could double that list of obvious questions about the idea of computation alone causing consciousness. If and when there is a theory that provides all the answers, I will take it more seriously, but for now it just seems like vague speculation.

  • In response to your argument against mental events influencing physical processes in the brain, I agree that we can explain the examples you gave without invoking some kind of influence of our minds on our brains. However, I think other phenomena, such as the fact that we talk about being conscious, are much harder to explain without supposing our minds influence our brains. When you talk about the fact that you're conscious, is that not caused by the fact that you're conscious? If the mind didn't influence the brain, how could the brain "know" that it had brought about a conscious experience?
    – MBar2269
    Commented Nov 23, 2023 at 15:46
  • @MBar2269 Why do you insist that something must influence the brain (the substrate)? One can write a program that enumerates random true statements about arithmetic (or any other formal system), but it would be a stretch to say that the truth of those arithmetic statements affects the computer's hardware.
    Commented Feb 7 at 13:02
  • You talk about being conscious, and your brain controls your speech, so presumably your brain in some sense knows you are conscious. How could your brain know this if it were not influenced in some way by the mental events it produces?
    – MBar2269
    Commented May 28 at 16:07

There is nothing special about computability. The capacity to compute, in mind and in machine, arises, changes, and vanishes.

The capacity to compute arises in a baby, who has little or no understanding of concepts. It also arises in machines as the basic ability to compute (like a calculator or a primitive processor that can only add or subtract).

The capacity to compute changes as the baby grows older, learns new concepts, and matures into an adult able to make fair decisions. Changes in computers are visible to us in the form of the latest hardware and software updates.

The capacity to compute vanishes in organic life as the mind grows feeble and dies; in some cases the mind can also go mad and make irrational decisions. Computing machines, on the other hand, can have bugs (whether inadvertent errors or deliberately malicious programs), and there is an inevitable crash of the system due to a software or hardware failure.

Does consciousness vanish if the capacity to compute is lost or diminished? Will the irrational person stop feeling pain and disgust? No. A definition of consciousness based on computability is insufficient. But we can make a better definition, and a better distinction between the consciousness of organic life and that of machines, based on feelings. All those things which have some depth of feeling are surely conscious. Feeling is a subjective experience, but it can also be deduced by observing behaviour. A “thing” which feels will not put its fingers in a fire, but a “thing” which does not feel can go through it easily (and you will not feel any guilt if you throw the “machine” into the fire).

I think we need to move away from a definition of consciousness based on computability. A better way to describe the depth of consciousness is by measuring the depth of feeling. A “thing” which deeply feels pain or disgust is deeply conscious. Feelings and consciousness have depth, just as suffering has depth: from subtle suffering to deep suffering, from a pinch to rib-cage-breaking pain. The same is true of consciousness.


I understand that your main question is about how and why consciousness exists. This is a problem that science has not solved, formulated in modern terms as the hard problem of consciousness.

My belief is that a computer does not and cannot possess any kind of consciousness. Computational power is a behavioural aspect of a system, whether a living being or a machine. Consciousness, on the other hand, is an inherent property of being: it cannot be "attached to" something.

