
Modern education teaches citizens that the point of an individual in society is to contribute labor. In a world with limited natural resources, we use this idea to measure an individual's worth in society and award them resources (via money) proportional to the amount and value of their work.

However, in the next N years (100, 500, 1000, whatever you believe), computers will likely come to outperform humans in all disciplines (manual, intellectual, creative, administrative, and all others). Let us postulate, though, that this level of artificial intelligence arrives before we find a way to access virtually unlimited stores of natural resources.

In this theoretical world, what will be the value of an individual in society that we use to distribute these limited natural resources?


Note: My question is similar to this one, but it differs in a few key ways. Firstly, that question has a section which specifically references "a few thousand engineers, scientists, and world leaders" still being necessary to run society; I am asking about a situation where there are between 0 and 50 of these people, and if any humans hold these positions, it's only to make the populace less scared of a robot revolution. I am also asking specifically about the situation with respect to limited resources, whereas that question asks about the timeline of the transition.

  • "award them resources (via money) proportional to the amount and difficulty of their work" - citation needed.
    – Innovine
    Commented Feb 11, 2017 at 20:54
  • Anything you can do that provides value to anyone is labor by definition. By extension, the need for labor only disappears if everyone already has everything they want (utopia) or if there's no way for anyone to get anything they want (doomsday).
    – hobbs
    Commented Feb 11, 2017 at 22:47
  • @Innovine I don't think he can find one, since most modern economic systems work by added value and not by merit: the more value you can produce, the more you gain. The only question is how products/services are valued (and thus how you can add value), but this does not change the core problem if AI can surpass humans.
    – gaborous
    Commented Feb 12, 2017 at 0:08
  • My garbage man works very hard, and performs a very useful function. The middle managers at my office, getting 4x his salary, not so much.
    – Innovine
    Commented Feb 12, 2017 at 7:33
  • I've also heard it said (sorry, no source) that for every farmer producing rice in Japan, there are 5 accountants or economists managing the money generated.
    – Innovine
    Commented Feb 12, 2017 at 7:35

5 Answers


They could contribute labor which derives value from being human labor, as explained in this answer to a slightly different question. Consider how many people in the real world are willing to pay more for a fair-trade product, even if it is no better in itself than one produced unfairly.

Or perhaps they do not contribute enough to pay for their upkeep, but are lucky enough to own assets which pay their rent. Consider who owns real estate right now: humans, not computers. If enough humans are far-sighted, they'll keep the title and only rent it to the computers.

Last but not least, all humans might become welfare recipients. Even if they have make-work jobs, those won't pay for their living. Let's hope the computers are more humane than the humans.

  • +1 for "all humans might become welfare recipients". And AI does not have to be humane; all it takes is a good programmer to teach them manners. After all, we are their creators. Commented Feb 12, 2017 at 18:12

Warning: I provide a series of ideas below, yet I can see how none of them would work. This leads me to think that humans would take the place of lesser animals compared to the AI.

If the robots find us interesting we could be pets. Otherwise they could exterminate us or exile us. Our hope may be in AI thinking that we are not that different.


Evidently the first thing that can be contributed is resources, yet we are moving past that because in the given scenario those are considered limited.

The next thing people can contribute is productivity; "labor" is shorthand for that. If two people are considered to contribute the same whenever they produce the same, regardless of who put in more effort, then you are rewarding productivity instead of labor.

Note: I don't know what the government is, yet I'll go on the assumption that it can be corrupted (even if it is AI).

I'll be using umbrella terms, stretching the concepts a bit when we can conceive scenarios where they converge.

Ok, now we move past that... we have:

  • Time and Attention: You can exchange your money to get things faster (for example, by buying something pre-made instead of buying the materials, or by using the transport system instead of walking). You can also spend your time to get things; that is how you pay for many free websites, which make you spend time looking at advertisements. Consider also that finance can be understood as converting time into money, and that in a world where the banking system can handle micro-transactions on everything, you could pay for exactly the time you use a service (e.g. how many seconds you watch TV). In this scenario you could be paid "time" by doing work or by watching ads, use it to access services, and even invest your "time" via the banking system. So time and money converge into a single concept (which we could call "credits"). See In Time (2011) and time-based currency. Note: the attention of computers and robots could be considered worthless, and AI may not be prone to propinquity or repetition effects; there may no longer be a need to advertise to humans at all.

  • Information (Access to): Each person is a source of information, as a data point in statistics. For market research or scientific studies, people with special conditions could be more valuable. The next step after sharing your information is to serve in experiments... and evidently it is expected that this would be paid. There are other sources of information, such as solving math problems for prizes, or even cryptocurrency. And you may earn money by creating new algorithms to process interesting data, or by doing scientific research. Note: there may be scientific breakthroughs about the human body or otherwise that are beyond AI (depending on the laws of your universe and how much scientific knowledge went into creating the AI to start with).
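The time-and-money convergence described above can be sketched as a toy ledger where balances are denominated in seconds and services bill per second of use. Everything here (the `TimeAccount` class, the rates, the amounts) is a hypothetical illustration, not a real system.

```python
# Toy sketch of a time-based currency: balances are seconds of credit,
# earned by working or watching ads, and services bill per second of use.
# All names and rates here are invented for illustration.

class TimeAccount:
    def __init__(self, seconds: float):
        self.balance = seconds  # credit measured in seconds

    def earn(self, seconds: float) -> None:
        """Credit time, e.g. for work done or advertisements watched."""
        self.balance += seconds

    def use_service(self, rate_per_second: float, duration: float) -> bool:
        """Micro-transaction: pay for exactly the time a service is used."""
        cost = rate_per_second * duration
        if cost > self.balance:
            return False  # cannot afford the session
        self.balance -= cost
        return True

viewer = TimeAccount(seconds=3600.0)                      # one hour of credit
viewer.earn(120.0)                                        # watch a 2-minute ad
viewer.use_service(rate_per_second=2.0, duration=600.0)   # 10 min of TV at a 2x rate
print(viewer.balance)                                     # 3600 + 120 - 1200 = 2520.0
```

The point of the sketch is that earning, spending, and (with interest-bearing accounts) investing all operate on the same unit, so "time" and "money" really do collapse into one concept.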

There are other things individuals contribute that are not necessarily paid. Don't think about what people may do for their society, but about how they could break it... then whatever they can do to prevent that is of value. Here are some ideas:

  • News and Narrative (as accounts of real events): "Misinformation" and "propaganda" may lead people to go against the system, so we infer there is value in keeping people somewhat informed. In fact, people consume narratives. They may be used for entertainment; it is also usually the narrative (or should I say "history"?) around unique items that gives them value (an example of this is "made by humans", but will the AI buy this?). Fame ties well to this concept, as people appreciate the narratives of the famous more, and people become famous because of their narrative. Note: perhaps computers are better at creating new fictional tales, yet what actually happens to people is what actually happens to people, particularly if there is a speck of unpredictability in humans. Although AI fiction could be a supernormal stimulus, and people may prefer that. After all, the government can use misinformation and propaganda for its own benefit.

  • Emotional and psychological support: Desperate people do desperate things! Consider that people have social behavior, and belonging to a group can be rewarding. This means that providing psychological support to others (or, you could say, "being their friends") can be considered a service, and thus something people contribute to society... well, what society is it if there are no relationships? Note: love, belonging, and esteem may not be fully replaced by robots as long as we think of them as "other", and even if that is achieved, there could be value in the old ways.

  • Security: To keep people from destroying everything, we need security. In our world, guards sell security for money; you may also buy security in the form of protection systems, either software protections (antivirus, firewalls, intrusion detection systems, etc.) or physical ones (fences, locks, security cameras, crazy drones, etc.). We usually consider security to mean defending against robbery, espionage, and damage. The idea of defending against damage could be extended to include maintenance as a form of security, and we can extend that further to consider the maintenance of people (a.k.a. health care). All of these are services people contribute to society. Note: as long as humans are taking the initiative, this would be an arms race. I agree that at some point AI could make it not worthwhile for humans.

Here are other things we could use as a proxy for productivity or labor (since these are proxies of labor, they may not be interesting in your universe):

  • Karma: The government may pay people based on what it considers "good behavior". In this scenario, following the rules means money and breaking the law means less money. This may also be extended to different campaigns, for example: keep compost in your backyard, help the sick, provide shelter for the homeless, join the military, lose weight, use approved products, etc.

  • Health, Comfort and Well-being: A job deteriorates your health; you are paid more if you are healthy. That is using health as proxy for effort, and effort as a proxy for labor.
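The "Karma" idea above is, mechanically, just a rule-based payout: a table of government-approved behaviors with credit weights, minus fines for infractions. The behaviors and weights below are invented for illustration; a real scheme would, of course, depend entirely on who writes the table.

```python
# Toy "Karma" payout: the government assigns credit weights to behaviors
# it wants to encourage, and fines for broken rules. All behaviors and
# weights here are hypothetical.

REWARDS = {
    "compost": 5,
    "help_sick": 20,
    "shelter_homeless": 30,
    "join_military": 50,
}
FINE_PER_INFRACTION = 25

def karma_payout(behaviors: list, infractions: int) -> int:
    """Sum rewards for approved behaviors, subtract fines for infractions."""
    earned = sum(REWARDS.get(b, 0) for b in behaviors)  # unlisted acts pay nothing
    return earned - FINE_PER_INFRACTION * infractions

print(karma_payout(["compost", "help_sick"], infractions=1))  # 5 + 20 - 25 = 0
```

Note that the scheme is only as fair as the `REWARDS` table itself, which is exactly the objection raised in the comments below.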

  • The big issue with ideas like "Karma" is who, exactly, is deciding what is "good behaviour". Keeping slaves was "good behaviour" for much of human history, as was overt discrimination against identifiable "out" groups. Robbery and murder were also approved, provided you killed and robbed the "right" sort of people.
    – Thucydides
    Commented Feb 11, 2017 at 19:55
  • An interesting point relating to the "Karma" system is the social credit score recently debuted in China. Not that it's being used to the extent you mention here, AFAIK, but still related.
    – KFox
    Commented Feb 11, 2017 at 20:06
  • @Thucydides Whatever the government says. Yes, there could be governments that use this system to reward keeping slaves, robbery, killing, etc... perhaps you would prefer another name instead of "Karma"? Edit: I'm considering it a proxy for labor anyway, as in "do X and get a reward".
    – Theraot
    Commented Feb 11, 2017 at 20:07

Well, there are short-term and long-term answers to this. In the short run, there are some jobs that even the type of AI you're describing would have a hard time doing (or at least, doing better than humans to the point where human work is no longer needed). Examples of this are novelist (or really any kind of artist), motivational speaker, prostitute, entrepreneur, etc. Also, as has been mentioned, some humans may prefer human-made products to machine-made ones.

In the longer run, we will eventually reach the point where AI is so good at writing novels that Game of Thrones will read like a six-year-old wrote it. Once we reach that point, humans really have nothing of value they can contribute to society beyond basic human needs like giving emotional support, having children, or just generally enjoying themselves (exercising, playing video games, reading, etc.). Personal enjoyment is very much of value to "society", because all societal value should, in the end, translate into human utility. As for rewarding some humans more than others, those that own property might be able to hold onto it for some time and charge rents, but in the long run we'll basically be a bunch of very wealthy and lazy Communists. Keep in mind that while resources may not be "infinite", AI this powerful would inevitably be able to generate far more resource productivity than humans currently do, so most folks wouldn't see their standards of living go down at all (if they did, they would have an incentive to go out and work to bring them back up, which violates the premise of the question).


Resource allocation will be different, based on the value an intended use has for society, and distributed as proportional welfare (e.g. a spacecraft, or food supplies). It also depends on whether or not people still want to capitalize on the supply and demand of resources in the future: if scarcity disappeared, nobody would even desire to attempt to capitalize. I think that in the future resources will be allocated proportionately and rationally, according to their supply on Earth, not our ability to purchase them.

Humans will give themselves purpose by continuously pursuing their worthy ideal: probably exploring gaps in knowledge by adhering to the scientific method of inquiry. That is what humans should contribute to, either on a personal level through the exploration of less complex hypotheses, or through efforts at the frontiers of all humankind's knowledge.

Who knows? All we can do is say whether an outcome is sensible to predict, and I'd say, based on my limited knowledge of the nature of humankind, that everything everyone is writing here is sensible.


First off, to be clear, the "modern education" you speak of is "capitalism," specifically "neoliberal capitalism;" I mention this only because other socioeconomic structures are more than possible given the conditions you're asking about (some form of "hi-tech feudalism" perhaps.)

To your question: I would suggest that "Very Hard AI" (replicating the processing "types" in a human brain as opposed to sheer digital processing power, which I can readily imagine) will probably never come to be, because any and all efforts to model (let alone replicate) the human mind are laughable:

  1. Whether it's a toy airplane or a macroeconomic simulation, a "model" is a set of internally coherent suppositions about some aspect of reality - and, as nobody knows what the human mind actually is, what is being modeled?

  2. Computers are digital (binary; even quantum computers yield discrete results when measured), whereas the human mind is singularly "analog."

Given that last, there are roughly 100 billion neurons in the human brain, each with the possibility of up to 40 connections per neuron. Given that a few molecules present or absent in a synapse can result in a change of state, one would have to not only calculate how to simulate the resulting trillions of analog connections of widely disparate nature, but, as the level of "granularity" required to translate analog to digital is unknown, this might theoretically need to be done to a level of granularity equal to the Planck limit.

I mention all this in an attempt to better frame the question ...

... SO, if there is any value to be found in passing digital data into a brain and having the brain output digital data in turn, I can readily imagine an interface between brain and machine wherein digital data is passed back and forth to pose and answer problems of an immense degree of complexity.

In short, a person could rent out their mental power/processing capabilities either "standalone" or as part of a much more complex system.

  • Note that the question does not suppose an AI based on the human brain, but one better than it. The world in which the question is posed presupposes that there is no value left in the human brain as a computational machine, since computers outperform it in every task.
    – KFox
    Commented Feb 12, 2017 at 17:12
