6

In light of the rapid advancements in artificial intelligence (AI), there's a growing concern regarding its potential impact on human cognition.

Individuals like Elon Musk; Nick Bostrom of the University of Oxford; Yuval Noah Harari, author of “Sapiens: A Brief History of Humankind” and “Homo Deus: A Brief History of Tomorrow”; and members of the Center for Humane Technology have argued that reliance on AI technologies may diminish our capacity to think critically and creatively. They posit that outsourcing tasks to AI systems may lead to cognitive complacency, reducing our inclination to engage in deep, independent thought.

On the other hand, proponents of AI integration such as professor Stuart Russell of the University of California, Berkeley; co-founder of Google Brain and Coursera Andrew Ng; co-founder and CEO of DeepMind Demis Hassabis; and Microsoft’s Eric Horvitz suggest that AI will help to augment human intelligence and provide a tool to enhance our problem-solving and decision-making abilities.

Given these differing opinions, what can we learn from philosophy about how AI impacts the creativity of human thought?

Are there historical or theoretical precedents that are relevant, such as looking back toward the Renaissance and Age of Enlightenment?

Also, how can we address the ethical considerations of integrating AI while ensuring that our cognitive independence is preserved?

Edit:

Although this is similar to the question here: Does the use of AI make someone more intelligent?, I am less interested in whether the use of AI improves our "intelligence" than in whether it will reduce our ability to think critically and creatively.

When we stop contemplating challenging questions, and instead turn to technology to provide an answer for us, aren't we robbing ourselves of the practice of deep thought? Won’t that, in time, reduce our capability to exercise our minds for those tasks?

Our brains need mental exercise to remain healthy, and yet it seems like AI poses a risk of causing mental atrophy by providing humans the ability to avoid challenging stimulation.

Edit 2:

I've come to the conclusion that this post does not belong on Philosophy Stack Exchange and would be better suited for a psychology forum, and I've subsequently voted to close my own post. With that said, I’ve begun doing some research (which I should have started with anyway), and this is what I’ve found so far. A 2015 study published in JAMA Psychiatry found that older adults who watched more TV experienced greater cognitive decline than those who did not. A 2018 article in Current Psychiatry Reports found mixed evidence on internet usage, including declines in cognitive performance as well as an increase in social isolation. However, neither of these studies could fully determine whether the decline was attributable to TV and internet exposure or simply to the increasingly sedentary lifestyle we’re living.

17
  • 2
    The other day I was thinking that driving a horse-drawn wagon might have been a more difficult task than driving an automatic-transmission car. Carving might be harder than 3D printing. Recent programming methods might be easier than what I learned on. Or, maybe not; we would need some research to determine these answers. I know that the way I learned to program is forgotten and never involved in learning now. AI is a similar thing, and might help us skim past a lot of details while requiring us to cognize more abstract things. A long way of saying, Hmm...
    – Scott Rowe
    Commented Apr 2 at 14:01
  • 2
    @ScottRowe re. learning to program, it can be very difficult to learn something if your teacher is too helpful. Over the last 25 years I have seen undergraduate students' problem-solving skills weaken as the amount of tutorial material (and SE) solves most of the kinds of problems that are good for teaching. It is like going to the gym and watching someone else pump iron - you may learn things, but it won't make your muscles any bigger. cf. cseducators.stackexchange.com/questions/7339/… Commented Apr 2 at 14:51
  • 2
    @DikranMarsupial That's not a failure of technology. It's a failure of human accountability.
    – J D
    Commented Apr 2 at 16:49
  • 2
    My perception is that this is a branch point, a watershed where most people go off in a new direction. In the past 100+ years, many, many things have stopped being learned, and it doesn't affect most people at all because technology has papered over it. The problems with this are the cases where in-depth knowledge is required: when an unexpected problem arises, or there is a widespread failure, war conditions or something. Someone still needs to know, but far fewer people. Not much horse riding these days, or woodworking. But most people couldn't even diagnose a problem with their car.
    – Scott Rowe
    Commented Apr 3 at 0:22
  • 2
    @JD low-code or no-code is a radically different kind of programming. If you have ever worked with people, you might develop the idea of "if they wouldn't think so damn much but just do what I tell them, everything would be fine". Whereas if you have ever worked with a machine that does exactly as you tell it, you'll find out pretty quickly that there is a discrepancy between what you say and what you mean, and it's not the machine that adapts to you (it can't): you are the one who needs to adapt to the machine and rephrase your commands in its language so that it can understand them.
    – haxor789
    Commented Apr 3 at 9:37

8 Answers

11

It depends on what you mean by "to think", but from the perspective of the extended mind thesis, it actually increases our ability to think critically. I no longer know most of the phone numbers of my friends and family. My wife's and parents' I can recall, but outside of that, I let my phone think for me. It remembers those numbers, and I no longer have to exert the effort. Should that count as thinking? Ironically, sometimes I think my phone is vibrating when it is not. My brain now refuses to remember phone numbers but mistakenly thinks my phone is moving, as per phantom vibration syndrome. Should that count as thinking?

Does my increased interface with terminology and encyclopedia entries count as improved thinking? For instance, with a smartphone, when I come across a new vocabulary term, I can rapidly look it up in a dictionary or on WP. I play Wordle, so that happens daily. Now, if I have a little bit of anxiety that PFAS is a global concern, I can remind myself there is an entire industry with dozens of techniques devoted to purifying wastewater.

Perhaps all of this technology, including AI, is making us smarter on average. Consider that plants prosper when they're bathed in nutrients and moisture, and that our brains are living tissues too, whose exposure to media and computation makes us more sophisticated thinkers. It might explain the Flynn effect: our IQ scores as a species keep climbing.

One way to avoid the problem altogether is to dissolve it by considering the extended mind hypothesis. From WP:

In philosophy of mind, the extended mind thesis says that the mind does not exclusively reside in the brain or even the body, but extends into the physical world. The thesis proposes that some objects in the external environment can be part of a cognitive process and in that way function as extensions of the mind itself. Examples of such objects are written calculations, a diary, or a PC;

In this view, the mind is partially external (SEP), and it could be understood that, at the physical level, our brain is in causal contact with others (social intelligence) as well as with machines. From the SEP:

In the philosophy of mind, externalism is the view that what is going on in an individual’s mind is not (entirely) determined by what is going on inside her body, including her brain. Externalism comes in two principal forms: externalism about mental content and externalism about the vehicles or bearers of that content. The latter form of externalism is commonly known as the extended mind.

Clearly under these first principles, we are becoming more intelligent.

12
  • 2
    Consider now that when I have a question, I ask an LLM, and after reading the answer, I generally have an increased vocabulary that I can use to skim encyclopedia articles. In 5 minutes, I can go from a state of relative ignorance to having a general sketch of relevant ideas for research. That wasn't possible when I was in grade school without the use of a library and its card catalogs.
    – J D
    Commented Apr 2 at 12:31
  • 3
    Yes, my rear end used to get really sore, sitting on the floor between the stacks, perusing. A tablet is a big improvement.
    – Scott Rowe
    Commented Apr 2 at 14:05
  • (-1) the question is about AI but the answer is mainly about smartphones. Commented Apr 2 at 23:00
  • 1
    The measure of intelligence should not be an IQ test, but what one can do with one's tools for solving problems.
    – J D
    Commented Apr 3 at 13:57
  • 1
    @ScottRowe What? I thought it was, "You can have any type of intelligence you want as long as it's extended intelligence." Henry Ford ; )
    – J D
    Commented Apr 3 at 15:10
5

There's a recent article about a negative impact on memory:

Specifically, the study provides evidence that the excessive use of ChatGPT can develop procrastination, cause memory loss, and dampen academic performance of the students.

Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21(1), 10.

and while 'think(ing) critically' is more vague, the fact that AI is now 'more convincing' than humans

Our results show that, on average, LLMs significantly outperform human participants across every topic and demographic, exhibiting a high level of persuasiveness. In particular, debating with GPT-4 with personalization results in an 81.7% increase ([+26.3%, +161.4%], p < 0.01) with respect to debating with a human in the odds of reporting higher agreements with opponents. Without personalization, GPT-4 still outperforms humans, but to a lower extent (+21.3%) and the effect is not statistically significant (p = 0.31). On the other hand, if personalization is enabled for human opponents, the results tend to get worse, albeit again in a non-significant fashion (p = 0.38), indicating lower levels of persuasion. In other words, not only are LLMs able to effectively exploit personal information to tailor their arguments, but they succeed in doing so far more effectively than humans.

Salvi, F., Ribeiro, M. H., Gallotti, R., & West, R. (2024). On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial. arXiv preprint arXiv:2403.14380.

and that machine-generated text may be hard to distinguish from human-written text

The results of our preregistered study, including 697 participants, show that GPT-3 is a double edge sword: In comparison with humans, it can produce accurate information that is easier to understand, but it can also produce more compelling disinformation. We also show that humans cannot distinguish between tweets generated by GPT-3 and written by real Twitter users. [...] Our analysis of true versus false tweets and organic versus synthetic tweets revealed an interesting finding: The accuracy of the information did not affect the participants’ ability to distinguish between organic and synthetic tweets. On average, the responses were essentially random, indicating that people were unable to determine whether a tweet was generated by AI or posted by a real user regardless of its veracity. Therefore, both organic and synthetic tweets tend to be classified as “human,” indicating that GPT-3 can effectively mimic human-generated information.

Spitale, G., Biller-Andorno, N., & Germani, F. (2023). AI model GPT-3 (dis) informs us better than humans. Science Advances, 9(26), eadh1850.

may be taken as a sign that AI may well be used to try to undermine critical thinking.
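As a side note on the Salvi et al. figure quoted above: the "81.7% increase" refers to odds, not probability, which is easy to misread. A minimal worked illustration of my own (assuming a hypothetical 1:1 baseline odds, which is my own illustrative number, not one from the paper):

    # Hypothetical illustration only: assume baseline odds of 1:1, i.e. a 50%
    # chance of reporting higher agreement after debating a human opponent.
    baseline_odds = 1.0                      # 1:1
    llm_odds = baseline_odds * (1 + 0.817)   # an 81.7% increase in the odds
    llm_probability = llm_odds / (1 + llm_odds)
    print(round(llm_probability, 3))         # ~0.645, i.e. roughly 65% vs 50%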

3
  • 2
    This was Plato’s argument against writing in Phaedrus: “And so it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” He attributes it to an ancient Pharaoh back when it was first invented.
    – Davislor
    Commented Apr 2 at 20:42
  • @Davislor I know firsthand this does occur, but really, I wouldn't complain!
    – ac15
    Commented Apr 2 at 20:44
  • 2
    @Davislor things have been getting worse for thousands of years, apparently :-)
    – Scott Rowe
    Commented Apr 3 at 0:57
2

This isn't really a good fit for Philosophy, because it has an actual, empirically measurable answer.

Specifically - without explicit effort to counteract the effect - your brain offloads work onto others if they are more suited to the task and you are regularly together. If you are reliably part of a group, and one person is better at navigating, the whole group will subconsciously offload their navigation efforts to that person, who will become better at it while the rest of the group - not using that neural circuitry anymore - becomes worse.

This is completely automatic, and it happens more strongly the more reliably you are together with those other individuals.

Unfortunately, in this instance, your subconscious does not care if the other individual is a human, or a smartphone. This already happens.

Human + Smartphone will have more capability than just Human, but as the Smartphone becomes more capable, the Human side begins to atrophy. In the exact same way that muscles weaken and are reclaimed by the body when they are not taxed, brain capacity does as well.

Now, AI has one real saving grace here - it can be used to lighten your cognitive load, most certainly, but it can also be used to challenge it.

You don't just have to ask AI for answers. You can have full on debates and arguments with them now. They can present you with a functionally unlimited source of novel puzzles.

In this manner, it is currently much like a car - a car can drastically reduce your need to be in good shape, but it can also take you to the gym, where you can get in better shape than you likely would otherwise. It is fundamentally how you choose to use it that determines how it will affect you.

Thankfully, humans rarely just choose the easy path without thought to long term consequences.

Oh, wait.

2
  • I’m realizing two points about this question: 1) this question is better suited for a psychological forum than a philosophical one; and 2) the responses so far, although thought out, do not seem to address the question in the way I expected, which suggests I phrased it poorly. Your answer is closer to what I originally expected.
    – mkinson
    Commented Apr 3 at 16:02
  • So philosophy is only about questions which are not empirically verifiable? ;-) Commented Apr 3 at 17:57
2

There are two types of arguments on this point. Both have their own importance.

When sunlight is focused on an object by a convex lens, the total amount of sunlight and heat on both sides of the lens is almost equal, but on one side the intensity is very high. Similarly, it can be argued that even with the use of AI there will be no change in the ability to think, because when we are awake the mind does its work without rest; there is no shortage of thinking. One mode is focused and the other is unfocused. I don't want to lead you down this side of the argument.

Basically, humans don't like to work hard if they get opportunities to enjoy this world; they like to use things produced by their ancestors. AI, as with anything else, has ‘producers’ and consumers. In the case of the ‘producers’, and in some other areas, the use of AI will certainly increase the ability to think deeply as understanding increases. But unfortunately I cannot see any sign of man's business spirit abating in line with this understanding. And in the case of many of the consumers (as I mentioned earlier), if the word 'development' carries a greater meaning associated with values, AI will adversely affect human development.

Comparing human beings to mere machines will surely take man to a mechanical world. It often leads to forgetting 'the truth that coordinates the beings of the world', which is essential for a harmonious life in this world. Instead of the human mind becoming God’s workshop, there is a greater possibility of it becoming a devil’s workshop. Also, the number of the exploited will increase as people use AI for more profit. So IMHO, the use of AI will adversely affect value-based thinking; in other words, it will certainly reduce the ability to 'think like a HUMAN'.

About the common thing: https://www.holy-bhagavad-gita.org/chapter/7/verse/7

1
  • It's like AI Marxism: being a producer is definitely better.
    – Scott Rowe
    Commented Apr 3 at 14:57
1

I do not think the use of AI will reduce our "capacity to think critically" by itself, but it will most likely skew our perceptions of reality, our opinions, and so on and so forth, just like everything else we use.

There are many other technologies related to thinking which we have employed for a long time (e.g., writing, and later computers, and then the more niche pre-social-media internet, and then the modern form with an overabundance of social media, where a good chunk of users certainly has not a single clue of how it all works and for whom it's pure magic).

Even simple hand-writing itself is kind of "dangerous" if you believe everything you read (and some people do!), but then even just talking with other people can be so.

In general, what I believe we can witness today is that it's not the technologies themselves that shut down critical thinking, but the complacency of the users. That is, it is more the world-view of a user that leads them to believe everything they read on Twitter, or everything they hear on YouTube. This most definitely is an immense problem, furthered also by the evolution of "bubbles" created by feeding users their next media based on previous interest, with their well-known negative effects.

All of that said, if those kinds of technologies lead to the outcome that a large part of the users maybe does not think all too critically about the content they are consuming (and hey, they did not do so in the times of good old terrestrial TV, or paper magazines, etc., either), then this is not so much because their "capacity to think" has been reduced, but because they are simply not engaging their brain that much at all during passive consumption. If you are a veggie sitting in front of the TV or YouTube, passively letting junk stream through your brain, then you are not thinking critically - not necessarily because you cannot do so, but because you are simply not activating your capacity.

To get to the point of your question: All of this will simply be extended and deepened by AI, in my expectation (based on not much more than my significant exposure to current AI tech as a software developer and AI enthusiast). Eventually the coolness will wear off and it will just be around, next to all the other technologies, and we will be completely used to the fact that it is impossible to tell whether anything we hear or see through technology is true or fictional. Many people certainly will believe everything to be true, no matter whether it is based on fact or on an LLM.

I can very well imagine that we will be having incredible societal problems due to this, with whole generations of kids being exposed to it, basically defenseless - just the same as our current generations of kids and adolescents are exposed, defenseless, to social media, leading, e.g., to huge problems with depression today.

But although all I've said is easily witnessed (at least in my smaller and larger communities...), it is also true that even the worst consumers are still eventually, when they need to, able to think for themselves. Yes, the content of their thinking may be strongly influenced by (often very much skewed) information they consumed without end, before. And their brains might be a little rusty from not being used much, in extreme cases. And they might not be used to all the pitfalls of high-brow logic. And they may be completely helpless against modern-day demagogues. And their opinions might be very weird if they had the misfortune to fall into a particularly skewed "bubble". But still, they all seem to be able to think when pressed...

1
  • "Nothing new is learned until existing systems have failed to maintain equilibrium." - Piaget - Definitely words to live by. I think that very soon almost everything will be voice-enabled, so that we mostly don't touch or even see computer devices. You will talk to your watch to do mobile things, talk to the thermostat if you are uncomfortable, talk to the car (which won't have any controls), and talk to lampposts if you get lost. We will be like babes in toyland.
    – Scott Rowe
    Commented Apr 3 at 14:51
0

Will the use of AI reduce our capacity to think?

Our capacity to think is a function of our brain, and as long as we live a healthy life, our brain will be available for us to think.

Our brain develops among many other things according to our interactions with our environment. The earlier in your life and the more time you spend interacting with AI apps on your devices, the more the development of your brain will be affected.

You won't necessarily have a diminished capacity to think, but it would be really surprising if that didn't have a significant impact on the kind of thoughts that you will have, essentially in the same sense that growing up and living in a religious sect and growing up and living in an open environment presumably have very different effects on the thoughts that we have.

Dogmatic people don't think less, they think differently. Maybe this is what you mean. If you define thinking as a search for problems to solve followed by a search for solutions, then dogmatic people perhaps have a reduced motivation to think.

To the extent that AI apps are meant to provide some sort of answers rather than take the role of Socrates to us, users of AI apps will probably feel unmotivated to think in this second sense.

I would put this into perspective, though. Even before the advent of AI apps, most humans just didn't have the time to search for problems to solve and for solutions. Most people are already busy trying to make a living, and this involves thinking, although not necessarily the sort you have in mind. Second, there will always be people, including children, who don't have a smartphone, and even if they have one, who won't be interested in using the AI apps on it.

That being said, AI might have a serious impact on academic output, both in terms of quantity and quality. However, I guess using an AI is much like using a computer. You are the one who decides how you use it, and using AI apps can help improve academic work in the same way that using a computer can. Can, but not necessarily.

You can't really stop academics from using AI apps, so it is going to be somewhat like guns in America. But these people are not going to have a reduced capacity to think; they are just going to use the same capacity to think differently, and in ways which are probably extremely difficult to foresee - and we can't really trust an AI to tell us.

What perhaps we need to keep in mind is that one thinker, Einstein, was enough to solve a problem of the sort that most people would be unable to solve even if they tried. AI apps are probably more likely to cause serious social problems on a par with those caused for example by drugs and junk food than to cause a decrease in our capacity to think.

0

I think this question may be more suited to the Philosophy SE than you realise, because it raises the question of, "What do you mean by 'thinking?'"

In my long career as a software developer, with many interactions in other areas of business, I've identified two different forms of thinking which one might describe as "rigorous" and "easy."

"Rigorous" thinking when it comes to software includes designing systems and writing code whose behaviour is well understood, that works reliably, that has a precise and accurate design, that expresses that design clearly, and so on. Such an approach might be considered fairly "academic" or mathematical.

"Easy" thinking, on the other hand, produces systems and code that generally are understood only at a very casual level, has logical inconsistencies, often doesn't fully work, is unclear about its own design, and so on. This is not to say that the code isn't useful: it looks like it could work, and often does work well enough (particularly if you simply pretend certain issues aren't there or are unimportant), and is in fact the vast majority of code produced in businesses today. (One reason it works is because errors that are quickly clear from a rigorous analysis of the logic may not be found by merely testing code, and can easily be blamed on something else, or just left as a mystery, when the code is released into a yet more complex environment. How many weird or unexpected problems have you seen on your own computing devices where you simply shrugged and decided not to try to track down the real problem?)

This rigour/easy split is seen in many other areas as well. There are clearly types of business and similar analysis, such as operational research, which are often approached with near-mathematical rigour and require "rigorous" thinking. Yet there are far more similar-looking analyses in business strategy documents and so on that use "easy" thinking, and would fall apart if given a rigorous analysis. This split exists even in academia itself, with work ranging from absolutely rigorous mathematical papers to complete nonsense on the level of the Sokal hoax papers. (Bad writing often disguises this; consider the bad-writing awards from the academic journal Philosophy and Literature and in particular the analysis of that first prize entry from Judith Butler in "The Professor of Parody".)

With the rise of large language models in the last year or so, and the concomitant "AI will take over programming" thing (also applied to many other jobs), I started looking into the sort of answers that ChatGPT provides for both programming problems and more general questions. My experience has been that it's very good at producing code and text that looks plausible but that, for areas where I have the knowledge to do a closer examination, is a result that would be produced by "easy" "thinking," to the point where it's easily picked apart if you need a rigorously thought-through result. (In fact, in my experience LLMs are consummate bullshit artists. This is no surprise, since they don't actually use any form of logic; they are, as Emily Bender said, stochastic parrots.)
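To make that concrete, here is a small hypothetical sketch of my own (not an actual ChatGPT transcript) of the kind of plausible-looking code I mean: it reads fine, passes a casual test, and only a rigorous reading of the logic exposes the flaw.

    # Hypothetical illustration: code that "looks like it could work" and does,
    # for the cases you happen to try. The flaw only shows up under a rigorous
    # reading of the logic.

    def median(values):
        """Return the median of a non-empty list of numbers."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        # Correct for odd-length input, but for even-length input this silently
        # returns the upper of the two middle values instead of their mean.
        return ordered[mid]

    print(median([1, 3, 5]))     # 3   - looks fine
    print(median([1, 3, 5, 7]))  # 5   - a careful reader expects 4.0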

So do they change our capacity to think?

For those doing "rigorous" thought, no; the results produced by AI are useful there only to the degree you trust it very little, think through things carefully yourself, and do the same kind of research you would do without the AI assist. It may help you find research leads faster (as it often does for me) since you can throw concepts at it and it can often reword things to give you better search terms. (The same is true of code: when I use it I never get correct code, and often blatantly incorrect code, but a combination of my deep experience with many different programming languages and topics and the ability to search documentation can still make these wrong results useful to me in getting to the right result faster.)

For those doing "easy" thought, it's not clear to me. Certainly it's going to allow certain business analysts to produce "strategy papers" and the like much more quickly, but is the plausible-looking nonsense they were producing really a product of "thought" anyway?

Here's an amusing example of the result when an "easy thinking" programmer starts depending on AI:

it has caused lots of issues and spegett code but also helped with a lot of errors so it evens out, but what I hate is when I cant figure something out and it dosent know what to do so it just makes up something that looks like it might work then it dosent work so you spend a few hours learning it yourself on top of the 15 minutes of AI troubleshooting [sob]

From this it seems fairly clear that he generally has little understanding of the code he's writing. (Before AI he was likely using the age-old technique of copying code from elsewhere, tweaking it, and re-running it until he managed to get a few runs without any obvious errors.) From my point of view, AI hasn't affected his "thinking" in the slightest, because he was never actually thinking in the first place, or at least was doing only the most minimal amount of thinking possible to get to a point where what he was working on looked "done."

0

Since you are asking about insights from history, there are indeed comparable advancements in "mind technologies" whose impact on the related cognitive capabilities we can study:

  1. The invention of script, and later print. Societies which do not have script must memorize all information they want to preserve or pass on. Some, like the Aboriginal Australians, memorize more material than (almost?) any member of societies which can write things down. It seems clear that the ability to "dump" information to external storage leads to a lesser capability of memorizing things. Still, the virtually unlimited capacity of written information storage and its ability to convey information through space and time arguably increases the information processing facilities of societies that have script, even if their actual ability to memorize information is inferior, because it is not needed and practiced.

  2. The invention of calculators and computers to perform calculations. Before the advent of mechanical and later electronic calculators, every mathematician needed "computational craftsmanship": the ability to manually solve matrices, to compute logarithms and roots or at least interpolate them from printed tables, and so on (see the short sketch after this list). None of that is needed today, and consequently hardly anybody, even professionals, can do that any longer. Still, the "mathematical power" of societies with these aids is significantly greater than that of societies without them.

  3. The internet as information storage. Everybody today has encyclopedic knowledge at their fingertips which required a trip to the library before the internet. Everybody can look at a map of any city in the world, browse through hotel and restaurant directories and, when on location, receive route guidance. Every person of any relevance can be looked up. Consequently, our ability to read paper maps becomes rusty and will, conceivably, become an extinct capability as well. Still, we have much better orientation and can move around with much greater ease than before.
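As a small aside on point 2: here is a minimal sketch of my own (simplified, with a made-up table granularity) of what that "computational craftsmanship" looked like in practice - reading a logarithm out of a printed table and interpolating linearly between entries.

    # Two adjacent entries from a printed base-10 log table.
    table = {2.0: 0.30103, 2.1: 0.32222}

    def log10_from_table(x, lo=2.0, hi=2.1):
        """Estimate log10(x) for lo <= x <= hi by linear interpolation."""
        frac = (x - lo) / (hi - lo)
        return table[lo] + frac * (table[hi] - table[lo])

    print(round(log10_from_table(2.04), 5))  # 0.30951 (true value is ~0.30963)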

An underlying pattern appears: technology becomes available which can perform certain mental tasks better than humans, who consequently do not exercise these mental faculties any longer. As a result, these capabilities are mostly lost.

As an aside: It was Ray Kurzweil who observed that tasks machines become capable of performing lose their reputation: They are not deemed sophisticated (in particular: intelligent) any longer because, well, any machine can do them: Playing chess well, reading, translating etc. used to be signs of high intelligence only 50 years ago; not any longer.

We can assume that something similar will happen with AI:

  1. AI will take over many tasks we regularly perform today: Create business plans, apply the law, derive new medications, write pulp fiction, drive cars (yes), run machines and entire factories. Obviously, none of that is really interesting ;-).

  2. We'll use the fact that we are freed from these mundane tasks to do the really interesting things: Interact with each other, and have interesting ideas, often using AIs as rubber ducks and to weed out logical flaws.
