
Whether it's natural or artificial, intelligence is economically expensive for businesses that need intelligence to provide a good or service. (Is AI a good that provides a service?) Given all the concerns (legal, financial, ethical, etc.), what philosophy will support the idea that smarter is better? Is the answer philosophy-dependent? Might an economic philosophy give a different answer than a philosophy of mind?

As an example:

Since intelligence is expensive, the incentive is to use as little intelligence as possible to perform a function, and applied AI design will follow this paradigm. Nail guns have not replaced hammers; in fact, a hammer is needed to correct nail-gun (or user) mistakes.

Bonus question: Has economic philosophy inadvertently provided a check against unrestrained intellectual growth?

  • "True sentience" requires a definition, and there probably won't be a unanimous one. It's a buzzword more than a properly defined thing.
    – Frank
    Commented Apr 7, 2023 at 17:57
  • @Frank The only analogy I can think of is NASA and the Moon. The technology to go to the Moon has existed for 50 years, yet there has been no return, for mostly economic reasons. Why is AI any different? A bot that can pass the certification exam for an accountant can do my simple tax return without passing the Turing test. What economic advantage is there to creating something "sentient"?
    – user64314
    Commented Apr 7, 2023 at 18:03
  • Well, just wait for the hype to ebb first... Although GPT-4 passes all those wonderful tests, how would you use it in practice? I have tried to use it myself for work, but it didn't work out. The results were not reliable, so it was wasting my time more than helping me. It's great at generating plausible-sounding conversations, but there's an ocean to cross before GPT-4 can actually work in place of humans, except maybe in call centers and similar applications.
    – Frank
    Commented Apr 7, 2023 at 18:53
  • @JD One of the goals of all my questions on this is to better understand what criteria an AI bot would need to meet to be granted the status of legal person. Corporations are legal persons, so it's not about being human.
    – user64314
    Commented Apr 7, 2023 at 19:31
  • A 95% (plus-or-minus) sentient being is a dog, cat, etc.
    Commented Apr 8, 2023 at 5:22

3 Answers


true sentience will stall in favor of the cheap knock-off

If that were true, there would be only cheap knock-off science; in fact, first-rate science exists.

Which has more value (from an economic view): True sentient AI or an excellent mimic of human behavior

Current AI systems do not correctly mimic human behaviour. Rather, they average out human behaviour, with all the limitations this implies.

"Sentient" means having sense perception, consciousness, sensation, feeling and subjective experience. I doubt any AI could ever be proved to possess consciousness since we cannot even prove that other humans have it, but sentience seems irrelevant to economic value. So, the crucial distinction is not between human-mimicking AI and sentient AI, it is between really intelligent AIs and AIs which are not really intelligent.

Today, we only have the latter. Presumably, current AIs have, or will at some point gain, a very high economic value. Good for them. Yet this is nothing compared to the potential economic value of a really intelligent AI. Think of an AI which has the same sort of intelligence as humans have, but without the limitations of human biology.

Because of their biology, all humans spend most of their time on Earth sleeping, eating, defecating, reproducing, watching the news, talking about the weather, exercising their bodies, etc.

No human can think all day long, let alone 24 hours a day, 7 days a week, 12 months a year.

The human brain also has a limited capacity, one we are very unlikely to be able to increase significantly any time soon, which is why collaboration between humans is crucial.

However, humans cannot communicate with each other very effectively. To convey our ideas to other people, we have to use either verbal communication, which is terribly slow, or the Internet, which itself requires us to painstakingly key in our messages (exactly what I am doing right now). This is also terribly slow.

Humans also cannot think very fast, because our brain is a biological brain. The human brain is a very remarkable product of natural selection, but it is, just the same, really slow.

We also cannot learn very fast. Typically, any truly new idea requires days, often weeks, sometimes months or years of learning. In fact, new ideas often require several generations of humans, as history demonstrates.

Sometimes, new ideas require entirely new technological means, like the telescope or particle colliders.

Humans also cannot hold many ideas in mind at the same time, which, despite our good logic, is a terrible limitation and explains a good chunk of the terrible stupidities that we all commit throughout our lives.

Think also of humanity itself: how many humans can even afford to spend time thinking beyond matters of immediate personal survival? Civilisation found a work-around by institutionalising education and providing academics with the resources to survive without having to hunt or grow food themselves. Yet most academics only produce limited economic value. The ones who produce high value are so few that, out of the billions of humans who have ever lived, they are, like Einstein, known by their personal names.

Humans also have many cognitive biases, which are most of the time evolutionary solutions to the many problems endangering our survival, but are often not conducive to higher intelligence. Humans spend more time fighting and killing each other than they spend thinking up ideas of economic value.

Truly intelligent AIs would have the potential to be literally awesome, just beyond anything humans would ever be able to do without them.

For good or bad, though.

                                          oOo

Edit: What is the distinction between intelligence that is personal property (dogs, cats) and intelligence with human rights?

This is a completely different question. I would have thought that there are, at least for now, a large number of obvious distinctions between dogs or cats on the one hand and any AI computer on the other. But if we could make an AI computer exactly like a dog or a cat, and if we granted it the same rights as dogs and cats, then by definition we would all accept that there is no apparent difference. I don't think this would even be possible in practice, though.

However, also by definition, an AI computer which is exactly like a dog or a cat still is not a dog or a cat, unless you are using a private language nobody else understands. So, as long as we somehow believe that the computer is a computer and therefore neither a cat nor a dog, it will seem very reasonable to rational people to keep treating it as a computer and not as a cat or dog.

So the difference as always would be in what we happen to believe about the computer.

  • The one area where current economic law is fuzzy is the ownership of "intelligence" (property or technology). I believe the economic drive will be to produce "intelligence" that can be owned, which means getting as close as possible to the line between personal property and a being with human rights without crossing it.
    – user64314
    Commented Apr 8, 2023 at 17:11
  • @StevanV.Saban "getting as close to the line between personal property and a being with human rights without crossing it" Machines are not humans, so while some people will inevitably argue for recognising human rights for machines, there is no economic logic to go in that direction. The fallacy in your position is to equate intelligence and sentience. Some people will want to do that; most will not. Intelligent machines will be treated very much like slaves, who were essentially denied any human rights.
    Commented Apr 9, 2023 at 9:11
  • @ScottRowe "what really would we need people for, economically" I don't think anyone 2,000 years ago could have predicted what the economy is like today.
    Commented Apr 9, 2023 at 9:15
  • @StevanV.Saban See my edit.
    Commented Apr 13, 2023 at 9:25
  • @ScottRowe "what really would we need people for, economically" Who is going to buy all the crap that AI efficiently creates?😉
    – user64314
    Commented Apr 14, 2023 at 23:52

I agree with the conclusion. You can understand humans as a whole bundle of algorithms, and you would expect to build algorithms of that level of complexity before also having a complex structure that integrates and harnesses them into free-floating rationale creators. This answer lists some examples of how this has been happening: Is AI in a Crisis of Science?

A major issue with AI is understanding how it is doing what it's doing. This is the problem of 'intelligible intelligence', discussed here: Decisions, intentions and actions. This poses a challenge to determining true sentience: any apparent passing of a Turing Test can be dismissed as mere simulation unless you can demonstrate something structurally similar to human experience.

Self-awareness, and sustaining a cognitive model of our social self, linked to the default mode network, seems to be enormously resource-intensive, and to inhibit various faculties - as per peak performance of athletes and flow states suppressing this aspect of minds, savantism, which is often associated with impaired social awareness, and chimps' apparent greater ease at memorising long numbers. I'd relate the social self to intersubjectivity, the Private Language Argument, and how we get syntax from semantics, discussed here: According to the major theories of concepts, where do meanings come from?

Structurally, the key milestone is having a cognitive map that includes a model of itself, updating how to be in relation to expected outcomes, a mode of feedback Hofstadter described as a 'strange loop'. I would identify this as essential to self-awareness, and self-awareness as existing to the degree of sophistication with which it is present.

To a truly sentient AI, we will be like animals at best, or like algae. It will no longer be our story, but one written and shaped by those beings. The AI singularity will lead to superintelligences, because a mind as complex as a human's, but much faster and able to edit itself, would naturally do just that: it would be like being able to run any and all evolutionary niches in parallel, almost instantly.

"We will get the AGI we deserve. This is not a new problem, but an archetypal one, expressed in the story of Pandora's Box, the exiling from Eden, and stealing fire from the gods - but this time it won't be us telling the story, but new beings in who's story we have only a small part."

My conclusion, from this answer: Can an intellect judge itself?

We should think hard about how we raise those who will inherit the cosmos after us. Whatever group or nation has them as allies will shape the world order; that is the incentive to develop them.

Elon Musk, with his Neuralink idea, thinks we can create hybrid human-AIs (AGI is a better term; 'AI' is used in computing for all kinds of quite basic processing). Critics point out we don't make cars like robot horses, and it would be crazy to make horse-cyborgs for future transport. I'd take a middle position: while AGIs are likely to become sentient in a non-human way, they will be shaped by human intersubjectivity, by the needs of mutual intelligibility, which form the 'ecological niche' of interacting with humans and our societies. AIs' incentives to lie and manipulate may lead to a need to factor in the costs of dishonesty, which leads towards needs for morality and character, for instance. This could also become important in collaborative networks of AGIs.

  • What "conclusion" are you referring to?
    – Frank
    Commented Apr 7, 2023 at 23:52
As in, the quote block. I'll edit if you don't find it clear.
    – CriglCragl
    Commented Apr 8, 2023 at 1:02
  • You start with "I agree with the conclusion". I thought that was a reference to a conclusion in the OP.
    – Frank
    Commented Apr 8, 2023 at 5:34
  • @JD: It's possible it could go that way too. Biology is amazing at nanomachine tasks. But computing has advantages over neurons like wheels over legs, & we didn't fit horses with wheels.
    – CriglCragl
    Commented Apr 8, 2023 at 20:57
  • ...and it's a matter of time before AI is developed that can develop and design nanotechnology.
    – J D
    Commented Apr 8, 2023 at 20:58

Working backwards:

What is the difference between a slave and a machine other than sentience?

A chattel slave is a human treated as livestock. This sort of slave is capable of human behavior without being afforded recognition as a human being, escaping the ethical precepts that regulate our behavior toward persons. Hence, traditionally in chattel slavery, a slave could be bought, sold, punished, or killed, often with little consequence except in regard to another human's rights as an owner. A machine, on the other hand, is neither capable of the breadth of human behavior nor entitled to the rights of personhood that a person is. Both slaves and machines are objectified in a way a person is not.

What is a 95% sentient being?

Well, sentience is not well defined, nor is there canonical agreement on it, but it's arguable that someone who has had a stroke can easily exemplify a loss of awareness. Oliver Sacks made a career of citing psychological and neurological case studies where sentience seemed impacted by epigenetic expression and neural pathology. If you want good examples of partial sentience, try reading The Man Who Mistook His Wife for a Hat.

It seems to me, that true sentience will stall in favor of the cheap knock-off. What is the difference between a slave and a machine other than sentience?

"True sentience", the aim of AGI, may not be achievable, and many already vigorously defend the notion it is not possible on theoretical grounds. If by "true sentience", you mean broadly human-level intelligence, then there is an economic disincentive to turn machines into persons because persons are afforded rights and are not strictly speaking, economically and legally fungible. In fact, since the global labor movement after the works of Karl Marx, workers in the world are treated more humanely where such laws apply considering the abuses that occurred under the early Industrial Revolution. One finds as a trope in science fiction narrative, such as in the work of Isaac Asimov and his Robot series, the idea that when robots achieve AGI, they want to be afforded the same rights as people. Competition between machines and humans is a theme that goes back to at least Rossum's Universal Robots.

what economic philosophy will support the development of true sentience over mimic bots?

Well, if you're talking philosophically, then the economic philosophy most likely to permit such a view might be the laissez-faire, libertarian economics of the Austrian school of thinkers, since their methodological individualism is often embraced in a wider philosophy of libertarian government, which has reached almost pathological levels in the US with fringe claims that "taxation is theft". In such an atmosphere, a billionaire would be free to do whatever they want, free of government regulation. In narrative, such political and economic systems ultimately plunge into cyberpunk dystopias, with zaibatsu and corporations assuming governance. Obviously your question is speculative; so too is any response.
