
I asked a similar/related question a couple of days ago about how language and/or culture affect an individual's cognitive capacity.

I'm looking for materials, if any, on the relationship between a language and its speakers' ability to comprehend consciousness.

For example, if a specific language lacks the words to describe the biological aspects of visual perception, will an individual using that language be able to comprehend all aspects of consciousness as well as an individual whose language does have those words?

I've read The Rediscovery of the Mind by John Searle, which discusses how our current understanding of consciousness is a function of society's scientific progress. He talks about how we may be misunderstanding or misinterpreting the fundamentals of consciousness when we use outdated and/or inaccurate definitions and concepts as a scaffold for current theories of consciousness.

Are there any books, studies, theories, or relevant materials on this topic?

  • This question is attracting flags and downvotes because it is somewhat vague, and it has been suggested in flags that it's not on topic for our site. Any thoughts on how to improve the question, Cheese? You may want to ask on Meta for the community's advice.
    – Josh
    Commented Feb 26, 2012 at 15:07
  • I don't understand why it is too vague. To restate the question as simply as I can: does language directly impose a limit on how well we can understand the mechanics of cognition? If so, are there materials that discuss this relationship? Commented Mar 11, 2012 at 2:21

1 Answer


One thing that comes to mind is the discussion of why English speakers say that airplanes can fly but that submarines cannot swim. In Russian, supposedly, submarines are indeed said to "swim."

Meanwhile, we ask whether computers can think, without realizing that this is really just a question of semantics, like the question of whether a submarine can swim.

(Edsger Dijkstra made this point memorably: the question of whether a machine can think, he remarked, is about as interesting as the question of whether a submarine can swim.)

The thing is, some of the words we use are taken, by definition, to apply only to humans or animals. We seem to be more concerned with the pedigree of the thing than with its actual behavior or function.

We use the word "conscious" much as we use the word "swim": almost by definition, it applies only to living things. On the other hand, the word "aware" is much more commonly used for something a machine or object might be capable of. For instance, a thermostat is "aware" of the current temperature in the room, or my computer is "aware" of the other computers on the network. There is a very real thing we are describing about the thermostat and the computer, and we aren't being metaphorical. (Note that the computer is even aware that one of those computers on the network is actually itself. Is it therefore "self-aware"?)
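To make that "aware" usage concrete, here is a minimal sketch in Python (purely illustrative; the names `Thermostat`, `sense`, and `decide` are my own invention, not any real device's API). Notice how hard it is to describe even this toy without reaching for mental vocabulary:

```python
# A hypothetical toy thermostat. Every comment below naturally uses
# mental vocabulary, even though the behavior is trivially mechanical.

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint   # the temperature it "wants"
        self.reading = None        # what it is currently "aware" of

    def sense(self, room_temperature):
        # The thermostat becomes "aware" of the room's temperature.
        self.reading = room_temperature

    def decide(self):
        # It "decides" what to do based on what it "knows".
        if self.reading is None:
            return "wait"          # it isn't "aware" of anything yet
        return "heat" if self.reading < self.setpoint else "idle"

stat = Thermostat(setpoint=21.0)
stat.sense(room_temperature=18.5)
print(stat.decide())  # -> "heat": it "tries" to close the gap it "noticed"
```

Even for twenty lines of code, "is aware of," "wants," and "decides" are the most natural words available, which is exactly the drift in usage described here.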

People use lots of words in everyday speech that fall into this grey area of metaphor or anthropomorphism. We say a magnet is "attracted" to metal without feeling that we are anthropomorphizing or speaking metaphorically; we just define "attract" in a way that applies to objects, animals, and humans alike.

Likewise, people instantly understand what we mean if we say the magnet "wants" to stick to the metal, or that a magnet "doesn't like" having its north pole pressed against the north pole of another magnet... but, if you ask them, they might say we are being poetic in our speech by applying those terms to a simple object like a magnet.

As time goes on, more and more words that formerly applied only to people or animals will start being used to describe things going on in machines. A camera "sees." A security system "notices" a person moving in the room. A computer "makes a decision." A thermostat "tries" to get the temperature in the room to match the one set on its dial. Siri "understands" what I said. My TiVo "thinks" I will like a show. Obviously these aren't just changes in the definitions of words; they go hand in hand with advances in technology that give us occasions to meaningfully apply the words to non-living things.

So, to get to the point: yes, I think the language we use absolutely affects our comprehension of consciousness. The more comfortable we get using the word "conscious" (and all these other words: "think," "aware," "try," "like," "want," "decide," "detect") to refer to things a machine is capable of, the less we will see the concept of consciousness as mysterious and magical. I think the "hard problem" of consciousness will not be "solved" by science; instead, science, technology, and the evolving way we use language to describe machines will gradually make the problem seem nonexistent.

  • I think I understand your argument, and I like how you summarized it at the end. I'm looking for related materials on the subject; do you know of any? Also, I think the evolution of language (words, concepts, ideas, etc.) has the potential to push us even further from defining matters of consciousness concisely, much like what happened when we asked what neutrons and protons are assembled from: the discoveries there pushed more language and concepts into existence while casting more shadows on what matter is and why. Commented Mar 11, 2012 at 2:40
