3

It seems possible that sooner or later AIs will make all human work and labor obsolete. This will happen, at the latest, when AIs superior to the human brain in every respect become generally available.

If or when this happens, what will humans be able to do? As far as I'm aware, putting a person in an empty room with nothing to do constitutes torture.

Of course, many assert that full automation will lead to nothing of the sort. On the contrary, they say, full automation will be beneficial, since it will finally enable humans to do what they most like: freed from the need to work for a living, they will be able to pursue their passions without restraint. No longer forced to grind for 8 hours every day, they will be able to spend all their time on painting, building DIY electric train toys, programming, or whatever they like to do. Granted, humans will not be paid for this, but that will be insignificant, since in a post-scarcity world money will no longer be necessary.

However, I have a strong intuition that, while full automation may enable humans to spend all their time on whatever they like to do, it will also, ironically, mean that pursuing one's passions will no longer make much sense.

I must apologize in advance for not remembering exact quotations or being able to back them up with a source. Frustratingly, this happens to me from time to time: I read something, I remember some of the general ideas of the text, but I forget both where I read it and the precise wording. Then I suddenly need to present these ideas to someone else and I cannot back them up. This is another situation of that sort.

If I remember correctly, John Eldredge wrote something along these lines: a man can only be happy if he serves a cause greater than himself. Now, Eldredge may not be a widely accepted authority in psychology or philosophy, BUT, if I recall, Steven Pinker (or perhaps Peter Singer; I'm sorry if I got the name wrong) also said something with a similar meaning, though he used different wording. For this reason (correct me if I'm wrong) it is widely accepted that egoism, hedonism and decadence, while alluring, are in fact traps that ruin the lives of those who behave in these ways.

Full work automation will, in my mind, make the above condition impossible to meet. Yes, people will have plenty of time and resources to do whatever they like; however, their activities will have no meaning. They will be unable to help anyone else with their work; they will also be unable to impact the world around them in any way.

The positions of a shop assistant or a janitor are often considered among the most dull, uninteresting and unsatisfying ways a person may make a living. As of now, however, it is clear that this work is necessary, fulfills real needs, helps many other people and has an impact on the world around those who do it. Yet this may not be the case for long. Because of this, I'm not sure whether doing nothing at all can, for many people, be considered a viable alternative to working as a janitor.

Let's consider another example, such as painting, which is quite often pursued as an unpaid hobby. At present, artists may post their work on sites like DeviantArt. A certain percentage of artists may even be moderately successful and attract a following of fans who eagerly await every new piece of art. Therefore this creative work, even if unpaid, has a meaning that reaches beyond the person who does it.

However, the work of an artist is in the process of being automated right now. While drawing AIs are still imperfect, it is not impossible to imagine a world in which AIs, fed vast amounts of data about every person in the world (consider today's ubiquitous profiling!), will be able to mass-produce pieces of art tailored and optimized to best serve each particular viewer. Sites like DeviantArt will become obsolete; human artists will be unable to present their work to other people (as no one will be interested in art that is both clearly inferior and produced in smaller quantities than art produced by AIs) and make other people appreciate it. Humans will still be able to paint; however, doing so will serve no purpose other than to amuse the painter.

To make it clear: I do not claim that there is no place in a human's life for such activities that serve nothing but to amuse the one who does them. However, I do say that they cannot, on their own, give one happiness or a sense of fulfillment.

Because of this, I genuinely fear that full work automation, rather than freedom and happiness, will bring an epidemic of severe depression, a widespread sense that life has no meaning, and, consequently, suicides.

Yet I note that very few people share my sentiment. I have never yet met anyone who would agree with me on this matter. Therefore it is highly likely that there are some obvious errors in my thinking I fail to notice. Could you please point them out to me?

One such possible error, I think, may be that I mistook instrumental values for intrinsic values. Janusz Korwin-Mikke may be a ridiculous man who holds multiple untenable and harmful views, but even he may have said something wise at least once. This time I believe I quote fairly accurately. He said: "A car does not exist so that someone can build it. A car exists so that I can drive it." (Although he said this to support his paleolibertarian views, that doesn't matter here.) Well, my mistake may be precisely that I think a car exists so that I can build it.

Or, to put it in other words, let me try a reductio ad absurdum of my own thinking. I remember that once I couldn't sleep the entire night, so I was very tired the following day. Still, I tried my best to remain productive. Clearly there was some value in my struggle; yet it would be absurd to jump to the conclusion that it is best that all people remain sleep-deprived so that they may strive to overcome this difficulty.

The (lack of) work may be an analogous situation. Sleep deprivation is a difficulty obstructing work, while the need to work is a difficulty obstructing the enjoyment of the fruits of that work. In general, overcoming difficulties is an instrumental value, the end being the achievement of a goal obstructed by those difficulties. Still, if the same goal can be achieved without having to overcome these difficulties, that will always be preferable to having to overcome them. I, unfortunately, mistook overcoming difficulties for an intrinsic value and jumped to the conclusion that, therefore, the difficulties must remain.

Side note 1: I suppose this may be a common pattern in my thinking: (a) I (correctly) see a value in something; (b) but I (incorrectly) see it as an intrinsic value when, in fact, it is just an instrumental value; (c) I therefore condemn all circumstances that, in my mind, threaten to invalidate the applicability of that value; (d) leading to wildly wrong conclusions.

Side note 2: This is just one of the reasons I fear technology may be a blight upon mankind, rather than a great boon many believe it will be. Probably this question is just the beginning of a series of related questions...

  • Voting to close because it seems difficult to have a "correct" answer, and this site is not for extended discussion. A valid way to phrase this question would be to ask for writings on the topic, or for an explanation of a given piece of writing on that subject. – tkruse (Feb 6, 2023 at 15:20)
  • This question is highly speculative and seems ill-suited for the Q&A format. – J D (Feb 6, 2023 at 15:36)
  • Some useful reflections: Luciano Floridi, What the Near Future of Artificial Intelligence, PhiTec (2019). (Feb 6, 2023 at 15:39)
  • @MauroALLEGRANZA Yeah, much is made of ChatGPT for work. I'm still trying to figure out how it could help me, and I'm trying pretty hard. – Frank (Feb 6, 2023 at 16:04)
  • I am a professional in the field of AI/ML. So far, I have been unable to use ChatGPT for coding: all the programs it returned to me were full of bugs and included nonsensical statements that made the result unusable. ChatGPT cobbles together bits and pieces in a statistical and "impressionistic" way, but I could not trust it, to the point that it was faster to do the work myself. – Frank (Feb 7, 2023 at 18:43)

5 Answers

6

Your "question" is actually a very lengthy musing, so suggest you return with a revised question.

Just briefly: I am one of a number of people with doubts about the whole concept of "technological unemployment." My own view is based on a generally Marxist conception of economics as the circulation of "value" with the goal of an abstract accumulation of "surplus value." In some ways, it scarcely matters what workers do, as long as they are forced to produce something they can then "realize" or legitimate by buying it back, though for more value than the labor value they put into it. I won't go into this now, and hasten to add that most Marxists do agree with the technological unemployment thesis.

By way of practical illustration, we can point out that after two centuries of new "labor saving" technology we now have billions more laborers working billions more hours per year. Any "labor saving" technology, from robotics to AI, will require even more labor in a kind of pyramid scheme. Someone has to program the programs that feed the programmers, and so on.

Machines themselves cannot produce living "value" because they cannot "desire" things that do not exist, then produce and legitimate them by purchasing them with a portion of the time invested in producing them. They do not possess "freedom" and "desire" as a kind of second-level causality that intervenes in mechanical causality. While machines may possibly generate some sort of "absolute savings" in labor, generally they only displace, disperse, and redistribute it. The robot displacing an auto worker is only the global dispersion and materialization of cheaper labor somewhere out of sight and out of mind.

Machines will not achieve an absolute displacement of labor until they can literally perform the "labor" of mothers or "matrices" who produce laborers. We speak casually of machines producing machines, but that is not really the case. As a relevant side note, see von Neumann replicators. I apologize that this is a rushed and poorly expressed version of the argument, but perhaps you should first reduce and clarify your question. My main point is that your assumption of "technological unemployment" is by no means universally accepted.

  • When robots are bitten by consumerism, watch out! – Scott Rowe (Feb 6, 2023 at 17:22)
  • @ScottRowe. All they want is jeans that fit. (Feb 7, 2023 at 0:02)
  • If the robots are in Russia, they might pay a lot for Levi's. Communist robots still want to look cool. – Scott Rowe (Feb 7, 2023 at 0:19)
  • "we now have billions more laborers working billions more hours per year" <- and they are all doing more bullshit jobs and receiving less in return, as a greater share flows to the ones whose names are on the pieces of paper saying we owe them stuff. (Feb 7, 2023 at 9:40)
3

It seems possible that sooner or later AIs will make all human work and labor obsolete. This will happen, at the latest, when AIs superior to the human brain in every respect become generally available.

This particular problem does not seem to be specific to automation.

The same issue already arises from other forms of non-employment:

  • retiring from the workforce at an advanced age
  • long-term unemployment
  • being held in captivity (prison)
  • periods of economic recession
  • being financially secure and not interested in taking any available job

I do not claim that there is no place in a human's life for such activities that serve nothing but to amuse the one who does them. However, I do say that they cannot, on their own, give one happiness or a sense of fulfillment.

I believe the list of existing examples I gave above points to enough literature showing that employment is not an irreplaceable human need. This would be considered a psychological problem these days rather than a philosophical one, though some philosophical writers may well have written on the benefits of employment to human life.

"A car does not exist so that someone can build it. A car exists so that I can drive it".

The search for "the meaning of life" is perhaps part of the human condition, and perhaps many people consider their employment a sufficient answer to it. However, there are also plenty of "dull" jobs being done by people who find other satisfying answers, which indicates that employment is not the only possible solution to that part of the human condition.

Of course, a person who is immovably convinced that only employment can provide meaning to their life could easily suffer psychological distress in situations of unemployment. However, it would seem that in that case it is not the unemployment causing the distress, but the immovable conviction itself, which is a fallacy.

1

As this is a question and answer site, I will ignore most of your text except for the actual questions (i.e. the sentences with a "?" at the end):

If all work is automated, what will humans be able to do?

  • There will likely always be people who, on principle, prefer that actual humans provide services to them. Even if we are able to make bots that are indistinguishable from humans, as long as humans know that the other party is not a biological human, some will dislike that. So basically all jobs in the interpersonal service industry should be relatively safe for a long time.
  • While we may come to a point where we can automate all work, it would be a stretch to expect that we would then forbid humans to work as well. For example, it will always be possible for a human to till the earth and produce food on a small scale, even if a humongously huge company of 100% automated bots next door creates tons and tons of the same food. There will always be humans who prefer human-made food. Compare this to some specialty foods (e.g., high-end sushi, wagyu beef, high-end wines or whiskeys): those are by all accounts unnecessary even today, but people value them highly because of the (particularly human) time invested in them.
  • Finally, our current experience is that humans and technology already form a hybrid or cyborg, at least in the developed world. A high percentage of people would be utterly helpless if all technology were removed from them (and technology would be equally helpless if humans were removed). This is not limited to work but applies to all aspects of daily life. No matter how advanced the technological half might be in the future, nothing tells us that humanity will disappear from the equation. In other words, we are already in the future you're painting, and there is no indication of a large-scale loss of work.

[Humans will be obsolete, but others don't agree...] Therefore it is highly likely that there are some obvious errors in my thinking I fail to notice. Could you please point them out to me?

  • As elaborated above, even if we can automate everything, it does not automatically follow that everything will be automated; and even if it is, that does not at all mean that humans will be "obsolete" in the sense that they will die off and only machines remain.
  • We have plenty of experience with individual human jobs becoming obsolete: a person from the 1900s who time-traveled to today would probably no longer find 90% of the jobs that were commonplace then. Yes, this process is often painful for the individual humans who lose their jobs, but overall, things always adjust. Considering how utterly magical today's automation would seem to people from the not so distant past, there is no reason to assume that even an AI that would seem magical to us would be any different in that regard.
0

If all work is automated, what will humans be able to do?

It will lead to a greater division between classes, because that type of automation will be intellectual property owned by some person or group. Humans will be able to do what they can afford, and it is unlikely that humans will be paid to do nothing.

0

One activity that comes to mind is politics. Assuming we still have democratic institutions (having AI replace government is, after all, just another form of autocracy), it is necessary for people to speak up and voice their desires and grievances until a consensus can be reached and policies established that address the general will.

AI can help us design efficient policies, but we still have to set the goal. It can also help write more convincing speeches or correct grammar errors, but the main points to convey have to come from us. Like the ancient Athenians, who had free time thanks to slave labor, we will gather on (probably virtual) agoras to discuss whether we should have some guy drink the hemlock for being impious and corrupting the youth.

One thing the OP does not take into account is that leisure activities can be done for their own sake. For example, I learned to play guitar not to make money or to enjoy my own sounds (I have no hope of ever composing or playing better than any artist I can find online), but because I like it. Occasionally I play for my wife on her birthday; it doesn't sound so great, but it obviously has value, because she knows I put in the effort to learn the song and practice. If I showed up with "look what I got an AI to compose and play for you", it wouldn't work the same. People will still play to have fun, notably with each other, to experience the feeling of mastery, or to brag about how good they can be without AI support.

As a side note, although I admit to having no knowledge of Eldredge or Pinker, I'd advise caution about thinkers who affirm that one needs to serve a purpose bigger than oneself in order to be happy. More often than not they have some kind of ulterior motive (come die in my war, join my church, etc.).
