18

By the “hard problem”, I’m referring to the exposition by David Chalmers.

He phrased the hard problem as “why objective, mechanical processing can give rise to subjective experiences.” I find it difficult to think of this as hard.

Imagine the following. People are really pre-programmed computers coupled with various sensory inputs. The computer has a "task manager" that monitors and controls all the software being run: the visual recognition software, the arithmetic software, the emotional perception and expression software, etc. Then, it seems like this task manager is "conscious". Only the task manager itself is aware of the programs being run; others don't see the program status. Thus, the awareness is "subjective".
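To make the picture concrete, here is a minimal, purely illustrative sketch of what I have in mind (all the names and subsystems are made up; this is not meant as a model of a real brain or operating system):

    # Minimal sketch of the "task manager" picture described above (illustrative only).

    class Process:
        def __init__(self, name):
            self.name = name
            self.state = "idle"  # internal status, visible only through the manager

        def run(self, stimulus):
            self.state = f"processing {stimulus}"
            return f"{self.name} output for {stimulus}"

    class TaskManager:
        """Monitors every subsystem; only it can read their internal states."""

        def __init__(self):
            self.processes = {
                "vision": Process("visual recognition"),
                "math": Process("arithmetic"),
                "emotion": Process("emotional perception"),
            }

        def handle(self, channel, stimulus):
            # What outside observers can see: inputs in, behaviour out.
            return self.processes[channel].run(stimulus)

        def introspect(self):
            # What only the manager can see: the status of its own programs.
            return {name: p.state for name, p in self.processes.items()}

    mind = TaskManager()
    mind.handle("vision", "a red apple")
    print(mind.introspect())  # only the manager has this view of itself

The point of the sketch is only that the introspect() view is available to the manager alone, while outsiders only ever see the outputs of handle().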

The way David Chalmers talks about the problem of consciousness makes it seem that I must be missing something in my description. What am I missing?

18
  • 13
    What the behavior of other people or computers seems like is irrelevant to the hard problem: reproducing human behavior with AI is an "easy" problem. The reason we believe other people are conscious is the analogy with ourselves, and in our own case we experience "what it is like" first hand. The hard part is to explain why physical processes in computers or our brains should be accompanied by such first-person feels at all, when zombies lacking them could follow the same physical laws and manifest the same outward behavior without them.
    – Conifold
    Commented Nov 18, 2020 at 15:38
  • 9
    That is not what it means at all. Private content can be and is easily explained by neuroscience models. People talking about the hard problem of consciousness are talking about something else: the "experienced quality" nature of first-person feels, which seems orthogonal to any third-person descriptions of what they might accompany. Publicity/privacy is just one such description; that they happen to be private is just a side effect.
    – Conifold
    Commented Nov 18, 2020 at 23:25
  • 1
    What is your reason for thinking the task manager has subjective experiences? Commented Nov 19, 2020 at 2:57
  • 4
    @AmeetSharma Yes. I think we have arrived at a conclusion in this discussion. The "hard" problem of consciousness is essentially equivalent to "we think human experience cannot be explained by mechanical processes" -- which again, is an assumption, even though a very intuitive assumption to all of us (it "feels" like the right assumption). Whether this problem is "hard" depends on the subsequent progress in cognitive sciences.
    – J Li
    Commented Nov 19, 2020 at 3:28
  • 3
    Feels may well be perfectly correlated with neurons firing, but that would do nothing for the hard problem. It is not about relations and correlations. Their argument is simple: scientific explanation is based on modeling, and models are matched to third-person descriptions; while feels can be correlated with something so describable, they themselves are not, and therefore science cannot explain them as such. That's why the problem is "hard". It would require inventing some new mode of explanation, either to bridge the feel/description gap or to explicate why no explanation is called for.
    – Conifold
    Commented Nov 19, 2020 at 5:44

12 Answers

21

What matters is not the fact that the experience is subjective per se, what matters is that there is no way to share the quality or quale of that subjective experience with anybody else.

If you see a shade of red, how do you know how others experience it? Some people have variant photoreceptor pigments for red and will experience it as different shades. Others are red-green colourblind but cannot tell the rest of us whether they experience all reds as greens or all greens as reds or something else. Some octopuses are sentient and have colour vision; their brains and eyes evolved entirely separately from ours (the last common ancestor was probably a flatworm), so how do they experience the redness of, say, a sea anemone?

Other animals have other senses - electric, magnetic, etc. - which we do not. Some birds sense geomagnetic fields with their eyes, and some birds are sentient. But we Homo sapiens have no processing pathways for magnetic senses and so can never know, not even in principle, the subjective quale of looking at a magnetic field.

We sometimes talk of the "neural correlate" of a quale. Such correlates may be measured and recorded by an electroencephalograph (EEG), which is objective. But the mapping from EEG to quale is not as simple as that. No two brains are wired identically. There is no such thing as a blow-by-blow, synapse-by-synapse correspondence between two brains, so a comparison of neural correlates can never be an exact match. Rather, we have to identify the information carried by those signals. The quale is thus more correctly understood to be the subjective experience of that information, not of the physical signal per se.

Even so, all the encephalography, signal reconstruction and computational simulation or sentient AI in the world cannot enable any quale of redness to be identified, recorded and communicated.

Consequently no law of physics, nothing founded on the laws of physics, nothing reducible to the laws of physics, can describe qualia (the plural of quale). There is no way you can objectively capture subjective experiential qualities, in order to compare them and see if they are the same or not. They are simply not open to objective science in the way that their neural correlates and information content are.

That is what is hard about the hard problem.

25
  • 6
    This goes too far in the direction of impossible: "Consequently no law of physics, nothing founded on the laws of physics, nothing reducible to the laws of physics, can describe qualia (the plural of quale). They are simply not open to objective science." Perhaps, perhaps not. We don't know whether we will ever understand "qualia". Of course you can fight over definitions, but what if someday we manage to really reproduce your vision of a painting in someone else's mind (through, for example, a complete understanding of neurons and their states and how to read/write them to a specific state)?
    – Kvothe
    Commented Nov 18, 2020 at 18:10
  • 4
    My point being, this is clearly a field of study where as of yet there are many things we cannot understand or test due to technical problems. I would therefore not lightly conclude we already know what we can ever know. For example perhaps the ability to simulate new conscious beings, i.e. beings convincingly claiming to be conscious, from scratch, will teach us a completely new understanding of the origin of consciousness. It is definitely a hard problem, but perhaps not impossible.
    – Kvothe
    Commented Nov 18, 2020 at 18:16
  • 10
    @JLi The key point is that the perceptual nature of the qualia is not open to confirmation. You and Mary have not the slightest idea how the subjective qualities of your experiences compare. For a scientific rationalist (often a materialist), that makes it a problem - and, worse, a hard problem with no apparent solution even in principle. Of course, if you are not an atheistic scientific rationalist then there is no problem, hard or soft. Commented Nov 18, 2020 at 19:38
  • 4
    "But we homo sapiens can never know, not even in principle, the subjective quale of looking at a magnetic field." Are you sure about that?
    – Joshua
    Commented Nov 18, 2020 at 20:52
  • 8
    Enough has been said. Those who choose not to buy the hard problem are at liberty to disagree, but I have described it as best I can. Commented Nov 19, 2020 at 11:21
12

Q: … He phrased the hard problem as “why objective, mechanical processing can give rise to subjective experiences.” I find it difficult to think of this as hard. …

... Then, it seems like this task manager is "conscious". Only the task manager itself is aware of the programs being run; others don't see the program status. Thus, the awareness is "subjective".

The way David Chalmers talks about the problem of consciousness makes it seem that I must be missing something in my description. What am I missing?

A: You seem to be missing the most important word: "experiences".

What is hard about the hard problem of consciousness is why subjective experience occurs with consciousness (1-5), not why awareness or subjective awareness occurs with consciousness (which is how you seem to understand it). Chalmers himself says:

“The hard problem of consciousness is the problem of experience. Human beings have subjective experience: … There is something it is like to see a vivid green, to feel a sharp pain, to visualize the Eiffel tower, to feel a deep regret, and to think that one is late. Each of these states has a phenomenal character, with phenomenal properties (or qualia) characterizing what it is like to be in the state.

There is no question that experience is closely associated with physical processes in systems such as brains. It seems that physical processes give rise to experience, at least in the sense that producing a physical system (such as a brain) with the right physical properties inevitably yields corresponding states of experience. But how and why do physical processes give rise to experience? Why do not these processes take place "in the dark," without any accompanying states of experience? This is the central mystery of consciousness.” (1)

and

“For any physical process we specify there will be an unanswered question: Why should this process give rise to experience? Given any such process, it is conceptually coherent that it could be instantiated in the absence of experience.” (2)

For example, when we see a house, listen to a song, or smell a rose, in addition to the awareness (similar to the computer awareness) of those things, we have subjective experiences of what it is like to see the house, to hear a song, and to smell a rose occurring in our mind (see figure below) (6). The hard problem is “Why do these subjective experiences occur in our mind – why do we not just process these kinds of information in the dark without subjective experiences occurring as computers do in their information processing?”

[Figure: Subjective experiences]

You are right that computers can be subjectively aware of the image of the house, the sound of the song, and the smell of the rose; but so can we. Thus, subjective awareness is not what makes the hard problem of consciousness hard, and it does not differentiate us from computers. By contrast, at present there is no evidence that computers have subjective experiences as we do. Therefore, it is subjective experience that makes the hard problem of consciousness hard and that differentiates us from computers.

This is in contrast to the easy problems of consciousness:

“The easy problems of consciousness include those of explaining the following phenomena: the ability to discriminate, categorize, and react to environmental stimuli; the integration of information by a cognitive system; the reportability of mental states; the ability of a system to access its own internal states; the focus of attention; …

Although we do not yet have anything close to a complete explanation of these phenomena, we have a clear idea of how we might go about explaining them. This is why I call these problems the easy problems. Of course, "easy" is a relative term. Getting the details right will probably take a century or two of difficult empirical work. Still, there is every reason to believe that the methods of cognitive science and neuroscience will succeed.” (2)

At present, a lot of progress has been made on the easy problems of consciousness. Although we still do not know all the details, we now have a good general idea of what the neural correlates of consciousness (7-9) are like. Complete knowledge of the neural correlates of consciousness would completely solve the easy problems of consciousness.

References:

  1. Chalmers DJ. Consciousness and its place in nature. In: Chalmers DJ, editor. Philosophy of mind: Classical and contemporary readings. Oxford: Oxford University Press; 2002. ISBN-13: 978-0195145816 ISBN-10: 019514581X.

  2. Chalmers DJ. Facing up to the problem of consciousness. J Conscious Stud. 1995;2(3):200-219.

  3. Chalmers DJ. Moving forward on the problem of consciousness. J Conscious Stud. 1997;4(1):3-46.

  4. Weisberg J. The hard problem of consciousness. The Internet Encyclopedia of Philosophy.

  5. Van Gulick R. Consciousness. In: Zalta EN, editor. The Stanford Encyclopedia of Philosophy.

  6. Ukachoke C. The Basic Theory of the Mind. 1st ed. Bangkok, Thailand; Charansanitwong Printing Co. 2018.

  7. Chalmers DJ. What is a neural correlate of consciousness? In: Metzinger T, editor. Neural Correlates of Consciousness: Empirical and Conceptual Questions. MIT Press, Cambridge, MA. 2000

  8. Koch C, Massimini M, Boly M, Tononi G. Neural correlates of consciousness: Progress and problems. Nature Reviews Neuroscience. 2016;17: 307-321. https://puredhamma.net/wp-content/uploads/Neural-correlates-of-consciousness-Koch-et-al-2016.pdf

  9. Tononi G, Koch C. The neural correlates of consciousness: An update. Annals of the New York Academy of Sciences. 2008;1124:239-61. 10.1196/annals.1440.004. https://authors.library.caltech.edu/40650/1/Tononi-Koch-08.pdf

13
  • Thank you so much. My issue with the “hard problem” is about our understanding of “experience”. The typical neuroscience perspective is that “experience” is no more than patterns of neurons firing, and “experience” is simply an intuitive way human brains perceive such patterns. Thus, it seems that the “hard” problem boils down to us saying “I find it hard to imagine that my experience is just neurons firing”. To be a little dramatic (purely for exposition), this objection isn’t that different from “I cannot imagine humans evolving from animals”, and thus we have a “hard problem of evolution”.
    – J Li
    Commented Nov 18, 2020 at 17:08
  • 2
    I agree with you that the answer must lie in its somehow being an effect of neurons firing. But you are completely wrong if you think that this "somehow" is currently understood. If you started simulating a bunch of neurons from scratch, not knowing about human experience, you would definitely not have predicted that those neurons would develop consciousness.
    – Kvothe
    Commented Nov 18, 2020 at 18:20
  • @JLi what exactly do you mean by "is" in "I find it hard to imagine that my experience is just neurons firing"? Therein perhaps lies the difficulty. Firing neurons "are" experience only in a similar sense to music "being" soundwaves or notes, or soccer "being" 22 people running around and kicking a ball. Commented Nov 18, 2020 at 20:43
  • 1
    @Yuri, I don't see how that goes against what I was saying. First of all, this is far off from AlphaGo starting to output thoughts on how it perceives things, and outputting awareness of itself as an entity and of its thought process. Secondly, even if it did, we would not yet understand (although we would probably get closer). An important step, I think, would be being able to predict from a microscopic model that the neurons will become self-aware/conscious; so not just knowing it has to happen because you saw it happen, but actually being able to predict it from the building blocks of neurons.
    – Kvothe
    Commented Nov 19, 2020 at 10:22
3

The difficulty is in explaining consciousness in terms of the kind of things that are in the physical world. No one has a clue how to do that.

Many people believe we will one day explain mental contents in physical terms. For example, we might one day be able to explain human deductive logic in terms of the physical characteristics of neurons like we can explain the logic of a computer in terms of its hardware. We might also one day be able to predict the behaviour of a human being from a brain scan like we predict the weather by looking at the Earth's atmosphere. Yet, nobody has a clue as to how the quality of our subjective experience could possibly ever be explained in terms of subatomic particles, quantum events or some such. We don't even know where we would have to begin.

Then again, I fail to see what would be the use of doing that. We don't seem to need to explain consciousness.

This isn't the only problem that seems impossible to solve, either. Any fundamental constituent of reality could not possibly be explained in terms of the physical world. Maybe subjective experience is just such a constituent.

Funnily, consciousness would then be the only such fundamental constituent of reality we actually know and will ever know. So not only do we probably not need to explain consciousness, but we seem to know all there is to know about it.

It is also likely that our qualia are the only things we will ever really know of the real world. So the real problem is not to explain our qualia and subjective experience, but to make sure our beliefs about the physical world are reliable enough for us to survive in it and prosper.

2
  • Your philosophy is more similar to Leibniz's idealistic monism than to Descartes' dualism; just curious why, instead, you put Descartes' picture as your avatar? Commented Mar 10, 2021 at 18:04
  • @DoubleKnot 1. There is nothing idealistic in my position, on the contrary: "We don't seem to need to explain consciousness" - 2. Descartes because of the Cogito, which explains why we cannot explain consciousness. Commented Apr 8, 2021 at 9:33
2

The Question

David Chalmers did not express it clearly in that quote (which is a loaded question, btw). What he meant to ask is the question of the "Mary's room" thought experiment (originally due to Frank Jackson): "What did Mary learn when she saw the color red for the first time?"

As the story goes, Mary is a brilliant scientist and a leading expert in everything color -- what colors are (bands in the EM spectrum), how they're sensed by the eyes, and how they're reconstructed by the brain. Amazingly, she accomplished all that without actually seeing a color. She is not colorblind, but she has been living in a black and white environment. Her lab, home, furniture, and screens are all monochrome, shades of gray... until one day she went out and saw red leaves on the trees (it was a beautiful day in the fall).

And that was the question -- what had Mary learned in that moment? She already knew everything there is to know about colors. Yet seeing red was not just a novel experience; it enriched her life in the most profound way -- which would not be possible unless she had learned something just from seeing the color... but what exactly did she learn? <== and that, again, is the so-called "hard problem".

A (very short) Answer

Now if you think about it, the "hard problem" question is essentially about the nature of fundamental concepts -- also platonic forms, also John Locke's "simple ideas", also Immanuel Kant's "intuitions", etc... like your concept of a "chair", or a "jump", or, indeed, of what counts as "red".

It is a knowledge of sorts -- like, you know what a chair is, don't you? But try and give a precise definition of what is -- and what isn't! -- a chair in rational terms, and you will soon find yourself grasping for words and only becoming more frustrated, realizing... wait, you don't know what a freaking chair is!?..

Well, strictly speaking, you don't, for it is not a rational knowledge.1 What you do have, instead, is a pretty good idea of what constitutes a chair. And, unlike knowledge, ideas/concepts are not products of your rational Self. They are created by your neural network AI, commonly referred to as your "subconsciousness".2

In fact, "getting ideas" of things is what neutral networks do as their way of processing experiences. Being, at its core, an image recognition system, a neural net treats everything as a picture,3 looking for similar patterns and anti-patterns in different depictions of the same class/type of things.

A concept of a chair, therefore, is but a collection of numerous patterns found in things classified as chairs by some trusted authority. Plus the anti-patterns, their presence strongly suggesting the thing is not a chair.
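As a toy illustration of this "patterns plus anti-patterns" idea (the features and weights are entirely made up, and this is not a claim about how brains actually encode concepts):

    # Toy illustration of a concept as a bag of weighted patterns and anti-patterns.
    # All features and weights are invented for the example.

    CHAIR_PATTERNS = {"has_seat": 2.0, "has_legs": 1.0, "supports_sitting": 2.0}
    CHAIR_ANTIPATTERNS = {"has_engine": 3.0, "is_alive": 3.0}

    def chair_score(features):
        """Sum the weights of matching patterns, subtract matching anti-patterns."""
        score = sum(w for f, w in CHAIR_PATTERNS.items() if f in features)
        score -= sum(w for f, w in CHAIR_ANTIPATTERNS.items() if f in features)
        return score

    print(chair_score({"has_seat", "has_legs", "supports_sitting"}))  # 5.0: very chair-like
    print(chair_score({"has_seat", "has_legs", "has_engine"}))        # 0.0: probably not a chair

No precise definition of "chair" ever appears; the concept is just the accumulated weights.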

And that's your qualia, hopelessly subjective, as it should be, a sea of simple concepts. The rational Self then uses them as lego pieces to assemble three-dimensional mental models, each simulating a certain aspect of reality. If simulation correctly describes the real thing -- if it's true -- then it is promoted to the rank of knowledge. The individual models, in turn, become pieces of the ultimate jigsaw puzzle, the Big Picture -- a complete simulation of the world. Modeling ourselves, as a part of it, makes us self-aware and, thus, capable of conscious choice.

And... that's all there is to it. The real hard problem is not consciousness -- it's us, creating obstacle upon obstacle, putting something that everyone should have pretty much out of reach.

 
1 We can call it "irrational knowledge", but I'm afraid that would breed a lot of confusion.

2 In some way, it functions very similarly to a flight computer, first adopted in modern fighter jets (the F-16 was the first to take full advantage). At the time, they wanted to make them extremely agile, but that would also make them aerodynamically unstable, impossible for a human to control. Enter the flight computer. Capable of making minute adjustments to individual control surfaces every split second, it could fly a brick with winglets (and so it did with the Space Shuttle). The human pilot is still there, of course, but they can only act through the FC. A good FC then makes the pilot feel like they are in control, by doing its best to interpret and accommodate the pilot's intentions. Or not, if the FC knows better, as happened with US Airways Flight 1549 (the "Miracle on the Hudson"), when, for the last minute of the flight, the FC diligently ignored the pilot's attempts to lift the plane's nose up, which would have ended in a stall like this...

3 the actual meaning of "being superficial"

9
  • Thank you Yuri. From a neuroscience perspective, it seems that the answer is clear? While Mary lived in the colorless environment, she learned such knowledge rationally (mostly in her frontal cortex). When she saw the color for the first time, a different set of neurons fired. Thus, these are two very different sets of neurons firing. For simplicity, we call the former "knowledge" and the latter "experience".
    – J Li
    Commented Nov 18, 2020 at 17:13
  • That is correct! Though as far as the answer goes, it barely scratches the surface. The most substantial part is the "two minds" concept -- humans having two independent centers of cognition: 1) the rational (though seldom conscious) Self, and 2) the irrational (subconscious) neural net. The two are nothing alike -- both in terms of what they do and how they do it. Even their availability differs -- while the irrational mind is a given, the rational mind becoming at all operational is very much an option (achieving its nominal performance is next to impossible, grace of, umm... "civilization"). Commented Nov 18, 2020 at 21:57
  • Isn't it obvious that she's learnt how her organism reacts to the color? There's a simple mathematical argument that shows that sometimes you cannot predict things, no matter how much you know. Commented Nov 19, 2020 at 10:01
  • @Dmitri > "Isn't it obvious that she's learnt how her organism reacts to the color?" -- she didn't "react to the color", she reacted to seeing it. She reacted because she learned something the moment she saw the first time. Commented Nov 19, 2020 at 22:10
  • @YuriAlexandrovich I didn’t say “she reacted”. I said her body did, which is in no way contradictory with saying that her self reacted to her seeing. My point is that you can invoke the same argument about unexpected experience even if “Mary” is just a Turing Machine. Commented Nov 20, 2020 at 4:50
1

Not only do we have something we call experiences, we are also aware of having experiences and we can reflect about them, have feelings about them etc.

This meta-stuff is not (yet?) within the realm of what computers can do or are expected to do. So it's a hard problem both philosophically and scientifically.

3
  • The real hard problem is not about consciousness itself. It's about having it in the first place, which most of us don't. We are supposed to be our conscious, rational Selves, but most of us are forced to commit a virtual suicide early in childhood. Alone and betrayed, their Selves effectively give up on thinking, on making conscious choices, on their agency. Once their neural net AI, their subconsciousness, takes over their thought process ('cause someone has to drive!), their Selves become their helpless, nagging Ego -- sometimes observing.... Commented Nov 19, 2020 at 23:19
  • .... from the backseat, or fast asleep in there, leaving their neural net AI chatbot/autopilot to try and make it "look" like they are still conscious, still awake at the wheel... Commented Nov 19, 2020 at 23:20
  • .... I don't know what else I can do to wake them up. Commented Nov 19, 2020 at 23:21
1

When you are talking about a green apple, your experience is that green apple. When you are talking about neurons switched on in your brain while one is talking about the apple, your experience is those neuron cells. You see - the objects of your consciousness are different.

You say, "but they correspond perfectly, the apple qualities and the cell's parameters", and you describe how in details. But then the object is the correspondence, yet a third and another object.

If you hope to replace the perception of the apple with the equivalent neurochemical state, you will have to train yourself to visualize the latter whenever somebody says "apple". There will be a substitution of objects, and that is all.

Moreover, the experience of an object is immersed in one's current project (expectations, wishes, mood, etc.). My impression of the green apple is very specific if my tooth is sore. Likewise, your attempt to link this experience of mine with brain cells is unique in another way - say, because you are preparing your thesis and are motivated by the prospect. But both "contexts", yours and mine, are not clearly apprehended most of the time and so, scientifically speaking, are hard to control for.

You might protest against this whirl of objects: "I believe the apple and the neurons reside in the world even when I don't think about them". You're right. Still, when you are thinking about their correspondence, you are keeping them apart. To relate or compare two things (even as equivalent) means to deny their identity. So your effort to map the green-apple image onto the field of neuronal firing strengthens and sharpens the distinction between the counterparts. Thus a physical explanation of an experience is self-defeating.

You may resist, "I'm talking about a correspondence between the intrapsychic image and the brain, not about the apple out there and the brain". Then you are a mystic. For, there is no anything inside consciousness. Consciousness is void - it is just activity about things (material or imaginary) of the outer world (yes, imagination is an outer experience). By putting a spook proxy of the apple in place of the apple and drawing links between neurons and the proxy you are playing a forgery (called modelling) because you can move the spooky instance as close to the brain as possible while (groundedlessly) claiming it "represents the experience/qualia".


Those were remarks contra scientific reductionists. Now about the hard problem of consciousness itself. Wikipedia describes it as follows: "The hard problem ... is the problem of explaining why ... we have ... phenomenal experiences".

Since I tend to be a phenomenologist, that is not a problem for me: everything in the world is just phenomena (which exist as experienced, i.e. as apparent), and there is nothing besides phenomena.

So I would reformulate the hard problem by shifting the accent: "The hard problem ... is the problem of explaining why ... we have ... phenomenal experiences (rather than we are they)".

To have something is, in other words, not to be it - or, put differently, to be it in the mode of a lack or via a clearance. That is the hard problem, which science-based reductions cannot help with, I suspect.

2
  • My only argument against phenomenology is that it seems to lack a guiding metaphysical principle, since, as you mentioned, it denies both the material and the rational/ideal mind as a real ontological substance (something that really exists ontologically) which, reflected through our sense organs, becomes our experienced phenomena - as materialism and idealism try to leverage. So your phenomenology is basically back to a layman just using one phenomenon to explain another phenomenon, entirely within this perceived world... I'd like to see if you have anything to say about this lack-of-ontological-substance issue in phenomenology? Commented Mar 10, 2021 at 18:29
  • I have nothing to add because you said it yourself, and accurately. Yes, phenomenology sees no need for matter or for spirit (idea). We live in a "layman" (your word; immediate or unprejudiced, my words) world where entities are just series of phenomena replacing each other. "Let's go back to the things themselves", as they appear, is the motto. The kernel is in the obvious, not under it.
    – ttnphns
    Commented Mar 10, 2021 at 19:31
1

You are on the right track here, and we can use Daniel Dennett's "What RoboMary Knows" thought experiment to continue this approach.  While this was developed in response to a well-known thought experiment from Frank Jackson's "Knowledge Argument", known by various names such as “Mary’s Room” or "Mary the Color Scientist", the Hard Problem claim is built on the same notions.

Here, Dennett posits a conscious, self-aware, qualia-experiencing type of robot, which knows all the relevant details of its own circuitry and programming, and also has the ability to make specific, targeted changes to its own internal state. In his reply to Jackson, Dennett tells a story in which one such robot, RoboMary, has been equipped with monochrome cameras instead of the usual color ones. Using her* extensive knowledge of color in the environment and of color vision, she is able to calculate how color cameras would record the scene before her, deduce what changes this would cause to the state of her neural circuitry, and, using her fine-grained control of that circuitry, put it into the state it would have reached if she had color cameras. Given that her physical state is identical to the one which would have resulted from seeing in color, physicalists see no reason to suppose that this would be experienced by RoboMary any differently than actually seeing the scene in color, or that it would have different consequences.

To some, this may look like begging the question, by asserting that consciousness can arise in a purely physical entity. Dennett anticipates this objection:

Hold everything. Before turning to the interesting bits, I must consider what many will view as a pressing objection:

"Robots don’t have color experiences!  Robots don’t have qualia. This scenario isn’t remotely on the same topic as the story of Mary the color scientist."

I suspect that many will want to endorse this objection, but they really must restrain themselves, on pain of begging the question most blatantly. Contemporary materialism–at least in my version of it–cheerfully endorses the assertion that we are robots of a sort–made of robots made of robots. Thinking in terms of robots is a useful exercise, since it removes the excuse that we don’t yet know enough about brains to say just what is going on that might be relevant, permitting a sort of woolly romanticism about the mysterious powers of brains to cloud our judgment. If materialism is true, it should be possible (“in principle!”) to build a material thing–call it a robot brain–that does what a brain does, and hence instantiates the same theory of experience that we do. Those who rule out my scenario as irrelevant from the outset are not arguing for the falsity of materialism; they are assuming it, and just illustrating that assumption in their version of the Mary story.  That might be interesting as social anthropology, but is unlikely to shed any light on the science of consciousness.

To fit this story to the hard problem, let us first see what its proponents claim, which is that, while figuring out the physics of how the brain works is a hard problem in the ordinary sense, there is a much harder problem lurking behind it: explaining how the physics of the brain gives rise to qualia. With an intuition pumped up by the Knowledge Argument, they propose qualia are intrinsic, ineffable and private, and assume this creates an unbridgeable "explanatory gap" between physical knowledge and knowing what it is like to have experiences.

To respond, we can suppose that, instead of deducing a state corresponding to seeing something in color, RoboMary is simply told what that state is by another robot of the same type, which has functioning color vision. Just as in Dennett's original story, RoboMary, after setting her internal state accordingly, now knows what it is like to see a scene in color, without having done so. For these conscious entities, their qualia are neither intrinsically private nor ineffable, as demonstrated by their transfer from one to the other.
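Purely as a schematic of the structure of that argument (this is obviously not Dennett's own formulation, and every name in it is hypothetical), the state-transfer scenario can be sketched like this:

    # Hypothetical schematic of the RoboMary "state transfer" scenario.
    # It only illustrates the shape of the argument; it is not a model of consciousness,
    # and the premise that experience is fully captured by copyable state is exactly
    # what the physicalist assumes and the anti-materialist denies.

    class Robot:
        def __init__(self, has_color_cameras):
            self.has_color_cameras = has_color_cameras
            self.internal_state = {}  # full read/write access to itself, by hypothesis

        def look_at(self, scene):
            mode = "color" if self.has_color_cameras else "monochrome"
            self.internal_state["last_percept"] = (scene, mode)

        def report_state(self):
            return dict(self.internal_state)  # qualia as transferable state

        def set_state(self, state):
            self.internal_state = dict(state)  # RoboMary's fine-grained self-control

    color_robot = Robot(has_color_cameras=True)
    color_robot.look_at("red leaves")

    robomary = Robot(has_color_cameras=False)
    robomary.set_state(color_robot.report_state())
    # By the physicalist premise, RoboMary is now in the very state that actually
    # seeing the scene in color would have produced.

The design choice doing all the work is the premise encoded in set_state(): whatever "what it is like" consists in is, by hypothesis, fully captured by a readable and writable physical state.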

Of course, none of this solves the hard problem, as it depends on it being possible to make, at least in principle, a RoboMary - an artificial conscious machine - which is something not yet established. What it does show is that, contrary to what many anti-materialists believe (and perhaps hope is true), the hard problem need not be unsolvable regardless of what progress is made in neuroscience. RoboMary is plausible under the usual physicalist assumptions, which makes it equally plausible that the apparently ineffable nature of qualia is merely a consequence of our inability to examine and modify, at the neuron and synapse level of detail, the processes going on in our own brains. RoboMary has that ability, and if RoboMary is possible, so too is the communication of qualia by language.

Anti-materialists could simply assert that RoboMary is not possible, but, as Dennett says, that would not be arguing for the falsity of materialism, it would be assuming it. They might claim that RoboMary would still leave something unexplained, but to be plausible, they would have to be more specific than they have so far about what that is, and do so without tacitly begging the question by assuming qualia are not the result of physical processes (Dennett himself has made that point in various places, such as "Explaining the 'Magic' of Consciousness.")

Some anti-materialists would doubtless argue that RoboMary would be, at best, a p-zombie (something physically identical, at least neurologically and functionally, to a human, but lacking qualia.) Responding to that claim in detail (and all other claims that the definitive anti-materialist argument is to be found elsewhere than in the one we are discussing) is beyond the scope of this question; here, it is sufficient to note that not all of the many physicalism-inclined philosophers accept the argument's leap from p-zombies' conceivability to their modal possibility, despite Chalmers' closely-argued attempt to persuade them that it is not a claim that needs further justification.

One useful feature of this approach is that it avoids issues of what sort of event learning "what it's like" is. Whether it is learning a fact, or gaining an ability or phenomenal concept, the physicalist premise holds that all mental events involve, and are (in principle) causally explicable by, physical changes in the brain, and so they are communicable in the form of a sequence of physical changes to be made at specific locations (again, in principle, and only for conscious agents having the level of control of their physical state being proposed for RoboMary. Dennett's story is not, as some have mistakenly taken it to be, an argument that Mary herself would be able to do this.)

It has been said that the hard problem is only a problem for physicalists, but there is something of a double standard in so saying. Anti-materialists have been no more successful than physicalists in completing an explanation of how minds work; saying "well, it cannot be via physical processes alone" does not explain anything, and it does not mean that the question of how minds work goes away, even if it turns out that the anti-materialists are correct.


*I am following Dennett's lead in using gendered pronouns here.

17
  • Chalmers himself would not deny that something like RoboMary is possible, or say she'd be a p-zombie--he takes for granted that the physical world is causally closed (no interactive dualism), and he thinks there are likely to be "psychophysical laws" relating physical patterns to phenomenal experience, and that these laws would respect a principle of "organizational invariance" meaning an accurate simulation of a brain would have the same type of phenomenal experience as the original. So he'd presumably agree RoboMary's self-alterations would give her the same experience as color cameras.
    – Hypnosifl
    Commented Jul 15, 2021 at 13:35
  • @Hypnosifl Indeed, and Chalmers's minimalistic dualism might be considered unsatisfactory by many anti-materialists. For any hard-problem proponent accepting that RoboMary's self-alterations would give her the same experience as color cameras, the question - what, specifically, is beyond science's ability to explain - becomes more pointed.
    – A Raybould
    Commented Jul 15, 2021 at 16:48
  • 1
    Chalmers's argument, like Nagel's, is that there are facts about first-person consciousness that go beyond all possible third-person physical facts. And on pp. 144-145 of The Conscious Mind he argues that while a physicalist may say that Mary gains a new ability when seeing color for the first time, it doesn't make sense for them to say that Mary has learned any new facts (i.e. the fact of what it is like to experience color). So he might argue that, similarly, in the RoboMary thought experiment, RoboMary's rewiring does not allow her to learn any new facts on the physicalist picture.
    – Hypnosifl
    Commented Jul 15, 2021 at 17:16
  • @Hypnosifl The challenge for anyone claiming that there is a fact of what it is like to experience color, and that one must learn it in order to know what it is like to see color, is that no-one who knows what it is like has been able to articulate this fact that they are supposed to know.
    – A Raybould
    Commented Jul 15, 2021 at 18:47
  • @Hypnosifl In Dennett's original version, RoboMary is deducing new facts, and not just gaining abilities, from what she already knows, and in the scenario in my reply, she is learning them discursively from another conscious agent.
    – A Raybould
    Commented Jul 15, 2021 at 19:08
0

What's hard about it is that no one can see an ontological distinction in the causal chain of sensory-cognitive processing that changes this objective relationship into the purely subjective one that we agree is "consciousness" (even if MRI scanners can correlate the two).

The solution is that there is such an ontological distinction, akin to that between anti-particles and particles, which allows a separate chain of processing, since each, in this example, would be in a separate dimension of time, and causality is tied to a single dimension. The relationship required is one between the neurons and the skin or membrane of the neuron. This is the ontological separation that allows subjective experience to be a different medium from the objective, even though they are distinctly and necessarily related.

No one has shown, scientifically, that consciousness could be riding on the surface of the neuron, but this is probably the case, since the need for separation must exist, if we are not robots.

0

@ChristianDumitrescu has precisely the right answer, in the form of a comment. I'll just expand on it, hoping I have got the point right.

As synthesized by @ChristianDumitrescu, it's a problem of degree, not kind.

To start, complexity is essentially the quality of a system by which it is difficult to understand. A complex system is not a system that has "more than 100 subsystems" (something I once read on the web, which would be equivalent to saying that a circle is a figure formed by at least 100 arcs), nor one which exhibits a large number of intricate relationships. A complex system is a system which is difficult to understand, a model which is difficult to grasp as a simple concept (yes, I know that is much the same as the definition of a standard system, a group of interrelated parts, but there is no formal definition of a complex system; classical systems theory was developed precisely to address complexity).

A mathematical "simple" problem would usually be a problem that features a limited set of unknown variables. A "hard" problem would feature an unknown set of unknown variables. That is, precisely a complex problem, as defined above. Consciousness is a hard problem because the product exceeds by far a linear (or non-linear**!) product of its constituent functions, which in turn, are far from being comprehensible (i.e. ...are complex)

** Non-linear: essentially, one which features emergent behaviors; as is commonly said, where the whole is more than the sum of the parts.

0

The hard problem of consciousness is the "explanatory gap" between, on the one hand, the language of physics — which apparently governs everything that happens in the universe — and on the other hand the inner experiences that all sentient human beings have.

There seems to be no way to start with the laws of physics (as we know them) and the objects that they apply to (assemblages of particles and waves) and end up with a conclusion that any experience whatsoever is being experienced.

That is what the hard problem of consciousness is.

(The word "hard" is used to distinguish it from the so-called "easy" problems of consciousness, which are not really easy, but perhaps easier: These are the problems of describing the types of consciousness that occur and under what circumstances.)

0

I think you don't understand the problem.

There is definitely some correlation between parts of the brain and conscious activity. When you do mathematics or study IT, you use the left half of your brain more than the right, and when you paint it's the other way around. Alcohol activates part A of the brain, smoking dope activates part B, when we are in love, it's part C etc. Understanding those correlations is the easy problem, and it's a matter of time before we fully solve it (most likely).

Your example of software and hardware is dealing with the easy problem; it already presupposes consciousness and does not deal with the most important question of how consciousness arose in the first place. Also, this example is bad because the mind is not at all like a computer; those analogies come from cognitive science, which is full of wrong assumptions that go back to Husserl. Modern computers are just combinatorics machines.

The hard problem is very different. First of all, it has to do with spontaneity. In this context, spontaneous is used in the sense of chemistry. Sometimes you mix 2 chemicals and nothing happens (like oil and water), and in other cases you mix 2 things and it blows up or bubbles spontaneously. Basically, you mix A and B (or add C to it, or add 1000 other chemicals), leave them on their own, and without any interference from your side a vigorous reaction occurs by itself.

What are you fundamentally? Just a bunch of atoms. Your brain is also a bunch of atoms. Add 2 atoms together; what will happen? Nothing. Now add a 3rd one? Still nothing. We go on billions of times, nothing. We keep going in this way, and it just so happens that when we mix 6,543,523,432,234 atoms of carbon, hydrogen, etc., spontaneously consciousness arises. Atom + atom + atom + a billion more atoms -> consciousness. How? That's the hard problem.

This problem is hard for several reasons: It will probably never be solved. It is not even clear if we will ever solve 0.001% of this problem. Our modern science has no tools to deal with it and no conceptual framework to even approach it.

Consciousness, subjective experiences and other phenomena of the mind have nothing to do with atoms, even though the brain is nothing but atoms. No matter how many atoms you mix in, no matter in what sequence, shape or form, you will NEVER get consciousness, and the fact that it exists is literally a miracle.

Once again, if we already have consciousness, we can find some physical correspondence between some feeling and some firing neurons, and that is the easy problem.

-2

Referring to your computer example, I see this as the difference between a 1960s electronic system and a modern computer. In the 60s, switch A turned on light A. Now there are many layers, and when you press the button in your app there is far more happening before the light comes on.

The easy problem is the hardware, the physical parts of the brain, how an event triggers a sequence of neurons to fire and generate a response. The hard problem is understanding the high level programs that are running. i.e. the sequences, timings, patterns and feedback loops running in that system.

Imagine trying to work out what specific action a user is taking on their smartphone by only looking at a fuzzy memory dump, then reverse-engineering a web browser from that information.

1
  • martin -- you have not understood the hard problem, which has nothing to do with the complexity of calculation algorithms. Our higher level programs are no more or less conscious than our simple ones.
    – Dcleve
    Commented Feb 25, 2021 at 23:36
