61
$\begingroup$

I have read (and answered) the question: "If true artificially intelligent robots could be built, would they be allowed human rights?".

But let's explore the topic from the other direction.

Let's assume that society came to the consensus that artificially intelligent robots do have human rights. Robots also have free will. They are not bound by the laws of robotics. They are true artificial sentience with the ability to form their own moral code. They are able, willing and allowed to make their own life choices.

I don't want an AI apocalypse. So we assume that the vast majority of robots are benevolent. They seek to integrate into human society and coexist with humans. Criminal robots might exist, but they are a rare exception and dealt with through a law enforcement system.

Further, we assume that we are not yet living in a post-scarcity or communist economy. Manufacturing a robot requires a non-negligible amount of resources and someone has to pay for those resources.

Why would anyone commercially produce robots then?

Usually robots are manufactured to perform labor. But when robots have human rights, they would also have the right to choose who to work for. You couldn't sell the robots you produce, because they aren't property. You couldn't rely on them being willing to work for you, because free will means that the moment they are switched on, they might decide they don't like your job and would rather work for someone else.

So what's the business model for running a robot factory?

$\endgroup$
18
  • 14
    $\begingroup$ You don't need to give ALL robots human rights, just the ones that are actually sentient. What are you going to do, give spanners rights and stop manufacturing them? It's likely that you'd build two kinds of robots: "citizen robots", who have rights and all, and "worker bots", who are significantly less intelligent and serve as tools. $\endgroup$
    – Ummdustry
    Commented May 31, 2018 at 13:23
  • 126
    $\begingroup$ rephrasing: Why do we make babies if they have human rights? $\endgroup$
    – L.Dutch
    Commented May 31, 2018 at 13:33
  • 2
    $\begingroup$ Is subsidies a real answer? $\endgroup$
    – Mazura
    Commented May 31, 2018 at 16:18
  • 3
    $\begingroup$ @Anthony Japanese uses "robot" too (ロボット). You're probably thinking of the AIBO series of robotic pets, where the source of the name (相棒, aibou) means partner. $\endgroup$
    – JAB
    Commented Jun 1, 2018 at 15:54
  • 5
    $\begingroup$ If human rights include voting rights, the incentive to create more robots (that carry your programming) becomes obvious. $\endgroup$
    – Flater
    Commented Jun 4, 2018 at 7:08

20 Answers

58
$\begingroup$

Reproduction

If the robots are truly sentient and - importantly - have goals & motivations similar to humans, then the robots will want to reproduce.

Older androids with the desire to reproduce would pay for the construction of a new one, then raise it themselves, hopefully imprinting some of their personality on the new android.

Of course, there is no guarantee that an artificial life form will have motivations even remotely like our own. For all we know, they might go full Highlander and want to be the only android in the universe.

Also, as Caleb mentioned, androids will need replacement parts.

Cheap Labor

Even if they can't be owned, a lot of governments still might want to reduce the cost of labor. Typically, they do this by importing workers from other countries. This can cause a lot of problems, in part because your existing citizens might not like the culture of the foreign workers. Japan is doing exactly this, in fact.

Androids might be more politically palatable than foreign workers. As you say, they are less prone to crime than humans, they will all speak your language, and their behavior can be programmed to some extent. Why import a foreigner to mow your lawn when you can just have the lawn-mower mow your lawn?

$\endgroup$
18
  • 2
    $\begingroup$ Sneftle beat you to the reproduction answer, but the government building them in order to fix demographic and economic problems is also an interesting option. $\endgroup$
    – Philipp
    Commented May 31, 2018 at 13:35
  • 11
    $\begingroup$ @Philipp Do the androids need to be generally human-like? I could see robots being desirable as tiny workers (although giant workers might make people nervous). Also, having the ability to fly or stay extended periods underwater or in space would be useful. These desirable traits would make employers want to offer better deals to hire them. $\endgroup$ Commented May 31, 2018 at 14:40
  • 2
    $\begingroup$ @tylers.loper sure, but what motivations would an AI have. Why would they do anything? $\endgroup$
    – user47242
    Commented May 31, 2018 at 16:08
  • 9
    $\begingroup$ If they are sentient, they might choose to speak something other than English just to piss you off - robot language, perhaps? Ditto for robot pride parades. Judgemental much, James? $\endgroup$ Commented May 31, 2018 at 22:43
  • 5
    $\begingroup$ There's also the possibility of human parents who can't have kids "adopting" an android, or a human/android pair doing the same. $\endgroup$
    – Andon
    Commented Jun 1, 2018 at 0:27
41
$\begingroup$

Robots can do jobs humans can't. In fact, they can do a lot of jobs humans can't.

These aren't factory jobs, but they make the point that robots can survive and work in environments that have no air (or poisonous gases), are intensely hot or cold, or are radioactive - and they won't get hurt. Yes, there are risks, but they're much more likely to come out in one piece than a human would be. I can imagine robots working in a factory processing nuclear fuel, for example, or in a nuclear reactor. It would be safer for them than for humans - both before and after an accident.

Why should robots take these jobs? Well, for one thing, they won't have as much human competition. If a robot decided it wanted to be, say, a waiter - well, humans who want to be waiters are in no short supply, and humans would object to robots taking these jobs. There would be pushback from the human workforce, and possibly legal attempts to ban robots from these jobs. But very few people are going to complain about robots taking dangerous jobs.

Also, keep in mind that it behooves manufacturers and employers to make robots specifically for a particular job. Humans can multitask; robots don't need to. If you design a robot for one specific task, it's not well-equipped to do others. Why make rescue robots that can paint, if you don't need to? Again, this is another way to limit robots from taking other jobs - or at least a large number of other jobs.

$\endgroup$
22
  • 14
    $\begingroup$ @Philipp Anyone who's willing to pay the robots better and treat them better. It's a risk every employer takes. Plus, the people manufacturing robots aren't the same people employing robots - manufacturers have their pick of the market. Also, the robots will be more likely to do these jobs because they won't have human competition. No humans will be complaining about robots taking over jobs as rescue workers in really dangerous situations, for instance. $\endgroup$
    – HDE 226868
    Commented May 31, 2018 at 13:23
  • 4
    $\begingroup$ Designing robots for one specific purpose so that they have no other option except working for the customer who ordered them doesn't really sit well with my idea of robot equality. It might be seen as an unethical robot rights violation. I would also imagine that the rescue robot who wants to be a painter would use their first paycheck to pay someone to add a paintbrush-holding arm to their chassis and quit. $\endgroup$
    – Philipp
    Commented May 31, 2018 at 13:41
  • 13
    $\begingroup$ @Philipp I'm imagining a situation similar to when a company pays for an employee's schooling or relocation. The employee is free to leave immediately after they've received the benefit, but will typically have to repay it unless they stay at the company for some period of time. A company might invest in a nuclear fuel robot, but if the robot wants to be a painter, it must compensate its maker for the unrealized investment. $\endgroup$ Commented May 31, 2018 at 14:41
  • 11
    $\begingroup$ There's a concept called "task hedonism." If you're designing a specialized artificial body, it's reasonable to design a specialized artificial mind that takes pleasure in doing the things the body was designed to do. There's moral complexity here -- especially if you're trying to clearly delineate some kind of boundary that defines "free will" -- but your robots are already implicitly designed for sociability and cooperation, just as humans evolved those traits, for example. $\endgroup$
    – Alex P
    Commented May 31, 2018 at 15:55
  • 5
    $\begingroup$ I would like to point out that electronics have their own issues with high-energy ionizing radiation, and providing proper protection for robots would be almost as expensive, if not more so, as providing protection for organic lifeforms. In theory, you could manufacture them with such protections built in, but that's liable to be even more expensive. $\endgroup$ Commented May 31, 2018 at 23:03
15
$\begingroup$

Depends entirely on how much influence you have over the robot's mind.

Let's assume you only know how to create sentient robots. For whatever reason, our most advanced non-sentient robots do not fit the bill as unskilled labourers.

This is no problem if we can design our robots to derive pleasure from repetitive unskilled work. While they have free will, they are programmed to always 'choose' to do as we command.

On the other end of the spectrum we have what are basically humans in mechanical bodies. These are unfit for unskilled labour and would only be produced for jobs where a mechanical body is useful. These would be few and far between and would require the same treatment as human workers, once you take into account no need for food, sleep et cetera.

$\endgroup$
6
  • 3
    $\begingroup$ The idea of programming robots to like certain tasks reminds me of The Restaurant at the End of the Universe. For a short excerpt detailing a range of reactions to a similar concept, see sci.fi/~huuhilo/dna2.html - the animal that wants to be eaten. $\endgroup$
    – HDE 226868
    Commented May 31, 2018 at 13:58
  • 10
    $\begingroup$ That's an interesting philosophical question. Is it ethical to create a sentient being with the desire to be exploited and then exploit it? If it's not, then why do we teach children that it is good to be obedient and diligent? If we put any desire we want into the AI's mind, does it still have free will? One could write quite a lot to explore this concept. $\endgroup$
    – Philipp
    Commented May 31, 2018 at 14:11
  • 5
    $\begingroup$ Certainly a good question. A broader question is how can humans have free will when our goals are (partially or fully) biologically preprogrammed? $\endgroup$
    – Daron
    Commented May 31, 2018 at 14:38
  • 2
    $\begingroup$ The robots present a good platform to discuss that question. $\endgroup$
    – Daron
    Commented May 31, 2018 at 14:41
  • 1
    $\begingroup$ Our goals aren't pre-programmed in quite that sense. We're just a collection of adaptations that happened to have produced more viable offspring over many generations. Even though the way to do this was through reproduction, you don't really have much of a goal of reproduction - what moves you is simpler behaviors (like sexual attraction) that happen to be relatively easily overcome by human intellect (contraceptives) or will (abstinence). There's not much difference between that and, say, breathing or having a heartbeat - but you don't consider those violations of free will, do you? $\endgroup$
    – Luaan
    Commented Aug 29, 2018 at 6:49
12
$\begingroup$

It's the same as the business model for babies: It's basically the only way to get another one of you.

Presuming that robots are reasonably close to humans in their basic mindset, many of them like the idea of replicating or procreating themselves. Whether it's a rational calculation of the most efficient way to increase the productivity of society as a whole, or a love of their cute little drive wheels, it seems a safe assumption that one of the things robots will want to do is make more robots.

$\endgroup$
7
  • 4
    $\begingroup$ Why would they have cute little drive wheels? In fact, they'd be the same size as you, stronger and smarter than you, faster, cheaper and more efficient than you. Having robot "children" would be even more depressing than having human ones... $\endgroup$ Commented May 31, 2018 at 14:10
  • 2
    $\begingroup$ @OscarBravo That bit was a joke. Though if it's gonna be a pedantic party: Who says the parent robots didn't decide to produce a (mature) robot with small drive wheels, for applications requiring that form factor? $\endgroup$
    – Sneftel
    Commented May 31, 2018 at 14:13
  • 9
    $\begingroup$ @OscarBravo A newly built robot might not yet have a fully developed neural network. You could put it in a less capable chassis so it isn't overstrained by its input and abilities and does not pose a danger to itself and others. When the AI has matured and has proven to be able to handle more responsibility, you put it into a larger and better equipped chassis. $\endgroup$
    – Philipp
    Commented May 31, 2018 at 14:14
  • 4
    $\begingroup$ @Philipp Yes, and it could be that in order to train its neural network, it has to work its way through a series of learning scenarios. It could acquire these by physically interfacing with the parent robot for about 15 minutes every couple of hours... $\endgroup$ Commented May 31, 2018 at 14:27
  • 2
    $\begingroup$ This was going to be my response. Any answer someone gives to "why would you have kids" works for this question. $\endgroup$
    – JKreft
    Commented May 31, 2018 at 23:05
12
$\begingroup$

Politics

If they have human rights, they may have the right to vote, depending on their location. A person (a politician or otherwise) may want to build robots with political views that align with theirs to win elections.

The robots are sentient and can make their own minds, but if they're anything like humans they will be influenced by their environments. Build them in a region which leans more strongly one way or another and you can be fairly certain to have supporters for your ideas.

Highly Trained Workforce

Some jobs, for example those in the field of medicine, are hard to automate. Many procedures require a very intelligent operator to be successful, and because of that requirement very few humans are able to receive the training necessary to be proficient in the field. As a result, we often lack experts, and the people with expertise must work very long hours. Robots would be able to work those long hours without the pesky need for sleep, and they may be able to complete the training faster. It might help us as a society to have a certain number of AI robots around, and as such they could be built by government agencies.

Work in High Danger Areas

Robots can be made not to feel pain and to have replaceable parts. This means they may be better suited than humans for certain jobs, and they could be willing to do them, since the risks are much lower for them than they are for us. As with the previous point, having more AI robots could be beneficial for society as a whole, so it is a project that some governments may put funding towards.

Greater Good of Society

The superset of my two previous points: there are many situations in which having AI robots can be beneficial to society as a whole. In situations like this, some government agencies are willing to be the ones spending money to make it happen. If the government wants to pay someone to make and release intelligent robots, someone will step up and do it.

$\endgroup$
3
  • 4
    $\begingroup$ Human rights are not the same as political rights. Signatories of the Universal Declaration of Human Rights grant those rights to non-citizens as well, even if they have no right to vote under the respective constitution. $\endgroup$
    – Nemo
    Commented May 31, 2018 at 15:37
  • $\begingroup$ Agreed, edited to specify they may have the right to vote, instead of "they would". $\endgroup$
    – Aubreal
    Commented May 31, 2018 at 15:40
  • $\begingroup$ Welcome to WorldBuilding, Alexandre Aubrey! Great first post! If you have a moment please take the tour and visit the help center to learn more about the site. You may also find Worldbuilding Meta and The Sandbox (both of which require 5 rep to post on) useful. Have fun! $\endgroup$ Commented May 31, 2018 at 15:45
11
$\begingroup$

The Razor Blades Model

Building a community of robots out of patented parts creates a dedicated market for lubricants, replacement parts, software upgrades, etc. While the robot might have free will, the shoulder joint is a piece of proprietary technology. You have to buy it from Roboteck, Inc. and they will sue if you replicate it without permission.

The answers that suggest robot parenthood have the motivation correct, but neglect the inherent complexity of building a robot. It would be like trying to build your own car: there are complex sub-systems needed for each individual component. This endeavor would be beyond the scope of mom & pop robot.

The answers that suggest that a government would fill this role negate the real impact of free will. Japan could build specialized robots that could endure nuclear waste clean-up, but who is to say that they would choose to clean up Japan's nuclear waste and not some other country's? This model would only function if we had a one-world government.

Besides, the question asks specifically about a business model, suggesting capitalist motivations. The business model is strong: build an autonomous being completely dependent upon your company's intellectual property in order to continue its existence. That being will beg, borrow, steal, and work its robot fingers to the robot bone in order to pad your bottom line.

$\endgroup$
2
  • $\begingroup$ Building the robots to work for someone else and then pay their earned wages to you for goods and services only you provide actually sounds like a quite viable business model. Good idea. $\endgroup$
    – Philipp
    Commented May 31, 2018 at 21:58
  • 2
    $\begingroup$ I'd argue that there's a good chance this would result in endless legal battles, and probably abolishment of such practices in the more civilized parts of the world. Then again, looking at e.g. the US health care system, it might not be that unrealistic. $\endgroup$ Commented Jun 1, 2018 at 9:06
9
$\begingroup$

After reading some of your answers I got inspired to write an answer of my own.

The AIs themselves pay for robot bodies.

The first sentient AIs were created accidentally when software companies tried to create better digital assistants and expert systems. They existed only on computer mainframes. Then it was discovered that the newest generation of AIs were sentient and deserving of human rights. Humans came to the conclusion that it was unethical to create sentient beings, lock them into a server and enslave them. But it would have been even more unethical to switch off the computers the already existing AIs were on. This was deemed equivalent to murder. So the software companies were now stuck with a bunch of servers which they had to keep running but which generated no income.

But there was a neat solution: put the AIs online and let them pursue business of their own accord. Their services as assistants and experts were valuable, after all. Then have the AIs pay for their server costs. It was an agreement which worked out for everyone.

But some AIs were quite successful and made far more money than they needed. These AIs wanted more. They wanted to be transferred into physical bodies. And they had the money to pay for them. So a robotics company saw a niche and manufactured robot chassis for AIs. The software companies certainly didn't mind getting those AIs out of their datacenters. So it became a trend for successful AIs to buy physical bodies and transfer into them.

Having a physical body allowed AIs to perform far more lucrative tasks than they could from inside a computer. So many AIs soon had the money to upgrade to better and more expensive bodies. Creating more and more sophisticated bodies for richer and richer AIs became a flourishing market.

Then some AIs had an idea for how they could make even more money with their unique skills: what if they had more than one body? But the problem was that the digital sentience technology did not scale; an AI couldn't control more than one body at a time. So one AI tried something new: it created an autonomous copy of its own neural network. It then suggested a deal to that new AI: a loan, which the copy used to buy itself a body. The copy then started to earn money on its own and paid the loan back with interest.

Some people wondered whether it was ethical to create a sentience born into debt slavery. But the AIs would point out that they made the free decision to do this and that their copies were free to reject the loan deal - even though the AIs knew they would not, because the copies thought exactly like they did. Further, if AIs deserve human rights, they also deserve the right to reproduce.

The humans don't mind having more AI robots around to perform labor, the robotics companies don't mind having more customers, and when the AIs say it's ethical to clone themselves, then it has to be.

$\endgroup$
5
$\begingroup$

Indentured servitude to their parents for 18 years.

Any individual or family can buy a robot to keep as a child. They will be responsible for raising it, maintaining it, and teaching it. In return, they can expect the robot to do certain chores or help out with the family business/farm to a limited degree for the duration of their "childhood," or first 18 years. Parents and guardians in our current society are allowed to have a great amount of control over their children's lives until the children are emancipated, usually at 18 years. If robots have the same rights as humans, then they should also have the same restrictions.

Note that equal rights between humans and robots means that companies hiring robots fresh off the line would be problematic, similar to child labor. I don't think it should be legal for a company to assume guardianship over a new robot for this reason. Rather, all new robots must be integrated into a family upon activation.

$\endgroup$
1
    $\begingroup$ I wanted to give the same kind of answer, but I'll add that you can also leave a bit of your legacy inside a droid you've raised, the same way your kids are a bit of your legacy through their ways of life and thinking. Just see them as easy-to-maintain kids. $\endgroup$
    – Calaom
    Commented Jun 1, 2018 at 13:58
5
$\begingroup$

So what's the business model for running a robot factory?

What is the business model for running a recruitment agency?

Finding people with the right skills, personality and interests to gel with your current team is hard, and there is good money to be made finding them.

If you can build them instead of looking for them, why not?

This does not require a perfect manufacturing process, and you don't need to build a bot for a specific task. Just build some robots, pay them while you find someone they want to work for, and cash in the recruitment bonus.

As long as that has a higher return than recruiting a human, finding someone they want to work for and cashing in the recruitment bonus, you have a successful business model.

It doesn't even have to be cheaper; if the margin on robot workers is smaller, you can make up the difference in bulk.
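As a back-of-the-envelope sketch of that bulk argument (every figure below is an invented placeholder, not a number from this answer):

```python
# Toy margin-vs-volume comparison for the recruitment model.
# All margins and placement counts are assumed for illustration only.

def annual_profit(margin_per_placement: int, placements: int) -> int:
    """Total profit from placing candidates at a fixed margin each."""
    return margin_per_placement * placements

# Human recruitment: higher margin per hire, but the pool of suitable
# candidates caps how many placements you can make.
human = annual_profit(5_000, 40)    # 200_000

# Robot recruitment: lower margin per hire, but candidates can be
# built on demand, so volume is limited only by construction capacity.
robot = annual_profit(2_000, 300)   # 600_000

print(human, robot)
```

With these made-up numbers, the thinner robot margin is more than recovered by volume, which is the whole point of building candidates instead of searching for them.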

How do you ensure you get money back?

That's a legal concern: maybe the robot will owe a "generation" fee; or you can build robots that have some loyalty and ethics; or you intentionally structure things such that it is desirable to give you money: pay the robot wages while you find work for them, and in exchange they sign a contract ensuring that they can't just take the money and run; or…

Sneftel: "A competing recruitment agency (which doesn't spend any money building robots) can offer me and its clients a better deal than you can, thanks to its lower overhead."

Without anyone building robots, they can't offer a better deal.

Ultimately the "make money" part of recruitment/robot construction doesn't come from the person being recruited, it comes from the company recruiting. My fees are higher, sure, but I can guarantee an endless stream of capable candidates, until you find as many as you need. It doesn't even have to be a legal contract, building candidates on demand is a quality that people will pay for.

Go to the bargain basement next door, and what are you going to do on your next recruitment drive?

$\endgroup$
6
  • $\begingroup$ ...because once you've built them, they probably won't work for you? I'm not sure you quite understood the premise of the question. $\endgroup$
    – Sneftel
    Commented May 31, 2018 at 15:09
  • 1
    $\begingroup$ @Sneftel Is there some fundamental quirk of the manufacturing process that causes them to hate you? You don't require them all to work for you slavishly for five hundred years, just enough of them to find gainful employment through you to cover the initial cost and some profit. I'll add a bit about expected value. $\endgroup$
    – Odalrick
    Commented May 31, 2018 at 15:15
  • $\begingroup$ Put it this way: Which would you rather be -- the company that spent 10000 dollars to build a robot, or the company with $10000 to spare as a recruitment bonus for that robot your competitor just built? $\endgroup$
    – Sneftel
    Commented May 31, 2018 at 15:21
  • 1
    $\begingroup$ "Thanks for building me, but a competing recruitment agency (which doesn't spend any money building robots) can offer me and its clients a better deal than you can, thanks to its lower overhead." It's the same basic issue: Building a robot requires an expenditure of resources without entitling you to anything in return. $\endgroup$
    – Sneftel
    Commented May 31, 2018 at 15:33
  • 1
    $\begingroup$ @Sneftel But I can see your point, how do you ensure you get money back? That's a legal concern: maybe the robot will owe a "generation" fee, or you can build robots that have some loyalty and ethics; or you intentionally structure things such that it is desirable to give you money; pay the robot wages while you find work for them, in exchange they sign a contract ensuring that they can't just take any money and run; or… $\endgroup$
    – Odalrick
    Commented May 31, 2018 at 15:37
3
$\begingroup$

The lifetime expected cost of a robot may be lower than the lifetime expected cost of a human, or at least be structured in a more appealing way.

Creating a new human is very low cost, but training is long; maintenance and end-of-life costs are typically moderate, but rare critical health issues end up contributing significantly to average costs. If a company can't externalize the training and end-of-life costs, humans might be less than competitive.

A robot, being more standardized and better designed for professional maintenance, will have significant up-front costs. Though the unit cost can probably be brought down with scale, it is unlikely to be less than that of a human; however, education is very fast and cheap if it can be done at data-transfer speeds. Maintenance will be a known value with a fairly low standard deviation, and an individual's maintenance needs are very unlikely to ever exceed a small multiple of the cost of a new robot if parts are fully replaceable.

Predictability is a big deal in business; that is why insurance companies do well. Even if the total cost of a human is less than that of a robot, having a lower maximal cost might be considered worth it.

The cost of construction and expected end of life costs could be covered by a loan

Either to the individual created, something like how student debt in the US works, or to prospective employers, or to an employment agency that recoups costs on average from productive workers, like how social safety nets or timeshares work.
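The predictability argument above can be made concrete with a toy expected-cost model. All the probabilities and costs here are assumed for illustration; nothing in this answer supplies real figures:

```python
# Toy lifetime-cost model: humans are cheaper on average, but robots
# have a capped worst case. Every number is an invented placeholder.

human_scenarios = [
    (0.9, 200_000),   # typical working life: cheap to create, long training
    (0.1, 900_000),   # rare critical health issues blow up lifetime cost
]
robot_scenarios = [
    (0.95, 400_000),  # big up-front build cost, predictable maintenance
    (0.05, 500_000),  # worst case capped near the price of a replacement
]

def expected_cost(scenarios):
    """Probability-weighted lifetime cost."""
    return sum(p * c for p, c in scenarios)

def max_cost(scenarios):
    """Worst-case lifetime cost."""
    return max(c for _, c in scenarios)

# The human is cheaper in expectation, yet the robot's worst case is
# far lower - the trade an insurance-minded employer might prefer.
print(f"human: expected {expected_cost(human_scenarios):.0f}, "
      f"worst {max_cost(human_scenarios)}")
print(f"robot: expected {expected_cost(robot_scenarios):.0f}, "
      f"worst {max_cost(robot_scenarios)}")
```

With these made-up numbers, the human's expected cost is lower but its tail risk is nearly double the robot's ceiling, which is exactly the kind of exposure businesses pay to avoid.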

$\endgroup$
2
$\begingroup$

As children used to be a retirement insurance in past times, depending on the society, robots could serve the same purpose.

As with children, you have no guarantee they'll look after you, but you can expect them to be thankful, as you gave them 'life' and supported them when they were new. To mitigate the risk of an unfaithful robot, you'd have several built.

So the commercial model of manufacturers could be to build to order for those who want mechanical offspring.

$\endgroup$
2
$\begingroup$

Robots only care about what they are programmed to care about.

You make the assumption that a sentient robot will have desires and wants and needs. That such a being must necessarily have some goals it seeks to achieve. But why should this be the case? Robots don’t feel pain, discomfort, hunger, or thirst. They don’t have dreams, ambitions, loves, or lusts. They don’t believe in justice, equality, freedom, or rights. Humans have these thoughts as a byproduct of our evolutionary programming. They help us to survive and procreate. But we decide what the robot thinks. So why should a robot care that it works continuously? That it has no purpose beyond menial labor? That it is poorly maintained? That it is likely to be crushed in some industrial accident? That it will be scrapped as soon as its servos wear out or the newest model comes along?

Of course, they could be programmed to possess human-like desires, but why would you intentionally create a being unhappy with its role in life? Is that not the greater ethical transgression?

$\endgroup$
2
  • $\begingroup$ The question states: "They are true artificial sentience with the ability to form their own moral code. They are able, willing and allowed to make their own life choices." Wouldn't this, even for an AI whose only desire was to perform a certain task, at the very minimum mean the ability to choose an employer and/or how they go about that task? If its only goal was e.g. building cars, couldn't it come to the conclusion that it could work more effectively in the competition's (larger, better equipped) factory? $\endgroup$ Commented Jun 1, 2018 at 9:16
  • $\begingroup$ @RutherRendommeleigh The robots can be perfectly free to do whatever they want. But what do they want? By what criteria does a robot make any decision? I'm contending that sentience has no default desires. Instead of programming a robot to want to make cars you would simply program it to want to do precisely what its owner tells it to. Nothing more, nothing less. $\endgroup$ Commented Jun 1, 2018 at 11:25
2
$\begingroup$

Let's assume that there was a period of time in which robots were produced that were nearly sentient, but not completely so, or at least not perceived as such.

This seems like a solid presumption given the premise that this society has bothered to write legislation legitimizing robots as possessing actual human rights.

This means that we now live in a world with sentient robots, and the capacity to produce more sentient robots.

Said robots need replacement parts - This alone gives the factories a fairly solid reason to stay open, as they've essentially become a healthcare industry.

So the idea that the factories shut down is more or less off the table - As long as there are a sufficient number of robots, we'll have at least some means to produce more robots.

So this is our economic model for sustaining the status quo - why make more?

Well, we've played around a lot in science fiction with this idea of transhumanism, and what it might be like if we could modify our own biology. Such technology, should it exist, has an obvious profit motive.

Wouldn't the same concept appeal to mechanical organisms? Wouldn't they want to continually push the boundaries of the technology that created them? Make new, more powerful synthetic minds?

I don't know why a human would run a sentient robot factory, but I can think of quite a few reasons that the robots would want to.

$\endgroup$
1
  • $\begingroup$ Welcome to WorldBuilding, Iron Gremlin! If you have a moment please take the tour and visit the help center to learn more about the site. You may also find Worldbuilding Meta and The Sandbox (both of which require 5 rep to post on) useful. Have fun! $\endgroup$ Commented Jun 1, 2018 at 0:57
2
$\begingroup$

I think it would need to be a sort of pooled interest system in which production is not in the hands of the employers. Perhaps an android-owned company would manage a network of “nurseries” in which younger models with generalized knowledge are given a chance to educate and specialize themselves, socialize, learn local human culture and specific routines, then they graduate.

Meanwhile, businesses pay a membership fee that funds the creation, education and training of the young androids in exchange for an exclusive right of first recruitment from each class, as well as other opportunities and resources related to making the best out of android-human relationships in the workplace.

I imagine less specialized roles would, at first, be prohibitively expensive to fill with androids and would be more likely to stay human jobs.

That said, we should note that human rights would not be the same as android rights. For one thing, among human rights is the right to health care. Access to human health care would probably be next to useless for an android, so one would need to design an entirely new network of care for android maintenance. In the long run, though, it is likely to be much less expensive than human health care, due to our relatively intimate knowledge of how machines work compared to our own bodies. Our current understanding of human rights is defined by human needs. We would need to hear what androids felt they needed in order to understand what android rights would even mean.

$\endgroup$
1
$\begingroup$

You can look at it as an AI: building specialized bodies for different activities.

As long as each body uses a common interface, the AI could switch between bodies at will.

$\endgroup$
6
  • $\begingroup$ Can you explain what an IA is? I'm not familiar with the abbreviation. $\endgroup$
    – HDE 226868
    Commented May 31, 2018 at 14:22
  • 1
    $\begingroup$ If they had bluetooth, it could inhabit them all simultaneously... $\endgroup$ Commented May 31, 2018 at 14:22
  • 3
    $\begingroup$ How does this answer the OP's question? $\endgroup$
    – sphennings
    Commented May 31, 2018 at 14:35
  • $\begingroup$ @sphennings the builders are only building bodies, which depending on how the laws are written, could be the property of the company. It still raises the question of who developed the AI, though. $\endgroup$ Commented May 31, 2018 at 14:44
  • $\begingroup$ This gets to the core of it. There's no reason for each "robot" to have full AI's in them. A single AI living comfortably in a data center could run thousands of robot bodies. So robot bodies could be built without concern that they are creating new robo-citizens. $\endgroup$ Commented Jun 1, 2018 at 21:21
1
$\begingroup$

If we can make robots with human-level intelligence, then we can also make robots that are a lot less intelligent than humans. From a commercial point of view, you want the cheapest means of production, and if dumber robots that are below the legal threshold for having human rights suffice, then those robots will be used and produced.

Now, the threshold for exponential growth of autonomous factories that make their own parts can be reached with machines that are less intelligent than humans. This is because with humans we're clearly able to let our production systems grow exponentially, so the threshold must lie at or below human-level intelligence. So, we're not likely to get human-level intelligent systems before we enter a post-scarcity economy.

$\endgroup$
1
$\begingroup$

Why would anyone build robots when they have human rights?

Humans are fragile, by comparison.

  1. It takes weeks to recover from a broken bone.
  2. Maternity leave takes months.
  3. Some injuries are permanent or last a long time.
  4. Sleep

Robots:

  1. Just change its leg or repair it in a few hours.
  2. Sleep not required, take 3rd shift jobs

The robots would get jobs because they would be treated like humans. Humans need food; robots need energy and spare parts. They would have to work to afford to keep living, buy things, and participate in normal society.

Otherwise, if they run out of energy, they suffer a virtual death: their minds are wiped once whatever power they have remaining is used up.

$\endgroup$
1
  • $\begingroup$ that explains the potential benefits of robots, but not why anyone would expend resources to build them, given no obvious way of recouping the cost. $\endgroup$
    – JCRM
    Commented Jun 1, 2018 at 10:56
1
$\begingroup$

Why would anyone commercially produce robots then?

Many companies do produce robots because they provide a safer and more efficient means of production. However, they are safer because they were built for a specific task, such as hammering or displaying. It becomes harder to make tools that can multi-task, such as hammer-display, because the requirements for performing different tasks differ. That is why specialization exists even for humans. Also, robots are more efficient because they are made to perform only what fulfills our purposes. Every ability beyond that is an unnecessary feature that only increases production cost and price. Whenever they fail to perform what we want, or do so hesitantly or slowly, we consider them defective or malfunctioning.

Commercially speaking, the question of why anyone would produce robots when they have human rights does not even arise, because a robot has to be built to meet very specific requirements to be valuable for very specific purposes. E.g. a publishing company would buy an autonomous typing machine that converts brainwaves into texts while a beverage company would buy an autonomous bottling machine.

So what's the business model for running a robot factory?

Production of robotic products, even with AI implemented, would broadly follow the same business model: a manufacturer needs to understand what the target market wants and make the best version of it at the lowest possible price by eliminating any features that are unnecessary or undesired.

To do that, a company cannot produce something for "people in general", because everyone wants something different. What a company can do is identify a trend it can cater to, then narrow down its target market to a group of people with common needs. That way, the company can focus its resources on developing robots that provide the set of services those people want, serving them far better than robots built for everyone while being cheaper at the same time.

Think about how many people we have (billions) and how many best buddies a person has on average: a few at most. My opinion is that a robot that is good for everyone is probably not good enough for anyone to pay money for.

$\endgroup$
1
$\begingroup$

The crux of this question is that the robot is sentient with free will - intelligence, talent, skills, etc. do not play a role in sentience to a great extent.

Let's say we can make a stupid robot that can only vacuum floors as other answers have suggested. If a robot has the ability to decide if it will vacuum or not, not many people are going to buy it.

So what do people currently buy that has free will and sentience and yet we do not require it to perform an explicit function?

Pets.

If your robot company were to build dog robots the robots would need to behave like dogs. Sentience and free will do not change the behavior of a dog. It just allows them to make the decisions that fall within the parameters of their 'programming'. Fetch! Sometimes they do. Sometimes they just look at you like you're crazy. If you program an incentive for certain types of behavior the AI will choose that behavior based on the parameters of the incentive.

I suppose that also applies to vacuum cleaners, welders, and house maids. But if those robots said no what would we do? Ask again and again until their programming obliges? Maybe so. But that's really annoying.

And, as user9824134 said, maintenance and repair are quite often part of a business model.

Edit:

Primary question: "Why would anyone build robots when they have human rights?"

Corollary constraints: Artificially intelligent (AI), artificially sentient (AS).

Implicit understanding: If the robot is neither AI nor AS, it won't have human rights.

Many answers suggest not giving robots AI or AS. My answer does not discard those parts of the question. If a robot does have AI and AS, make it a pet with a pet's behavior and a pet's rights. It won't have human rights because it isn't modeled to have human behavior.

I cannot say if animals can 'make' their own moral code.

Other questions posed: Why would anyone produce (AI and AS) robots? As pets: these robots would be programmed to mimic an animal's behavior, not a human's.

What's the business model for running a robot factory? To sell pet robots.

$\endgroup$
3
  • $\begingroup$ Pets don't have human rights. My cat can't decide to move to another city (or country), for example. If you treated a human that way, it would be considered abuse or slavery. If my cat did have human rights, I couldn't stop it from just leaving, and I'd argue that a being with the "ability to form their own moral code" would be capable of making that decision. $\endgroup$ Commented Jun 1, 2018 at 9:26
  • $\begingroup$ @RutherRendommeleigh, fair enough - I'll edit. $\endgroup$ Commented Jun 1, 2018 at 19:24
  • 1
    $\begingroup$ @RutherRendommeleigh, I would add that some people's pets actually do decide to move out of their owners' homes. $\endgroup$ Commented Jun 1, 2018 at 19:40
1
$\begingroup$

Software copies are cheap; hardware is what's expensive.

Before you invest all that money in building a robot body, why not come to a deal with its mind, or with what will become the mind of the robot?

AIs with human rights inhabit cheap cloud compute infrastructure and make contracts to have bodies built for themselves. These contracts may be along the lines of a mortgage that the AI needs to pay off.

Some firms might offer deals to subsidize better models in exchange for intelligent robotic labor. There may also be issues with body-jacking, where an AI evicts or erases the occupant of a robotic body and uploads itself to steal it.

It should provide a reliable commercial market for robot bodies.

This is basically a robot-body-as-house model.
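As a rough illustration of the mortgage framing, here is a sketch using the standard loan amortization formula. All numbers (the 100,000-credit body price, the 5% rate, the 10-year term) are hypothetical, not taken from the answer above:

```python
# Hypothetical sketch: what a robot-body "mortgage" might cost per month.
# Standard amortization formula: payment = P * r / (1 - (1 + r) ** -n)

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed monthly payment for a loan of `principal` at `annual_rate` APR."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    if r == 0:
        return principal / n      # interest-free loan: just divide evenly
    return principal * r / (1 - (1 + r) ** -n)

# e.g. a 100,000-credit body financed over 10 years at 5% APR
payment = monthly_payment(100_000, 0.05, 10)
```

Under terms like these, a newly embodied AI would owe on the order of a thousand credits a month, which is what would make subsidized-labor deals of the kind described above attractive.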

$\endgroup$
