Intelligence: Artificial and otherwise
Never mind the quantity, THINK FOR YOURSELF

“With all technology there is a good side, a bad side, and a stupid side that you weren’t expecting. Look at an axe—you can cut a tree down with it, and you can murder your neighbour with it. And the stupid side you hadn’t considered is that you can accidentally cut your foot off with it.” Margaret Atwood

An algorithm walks into a bar. The bartender asks what it wants to drink. The algorithm looks around and says: “I’ll have what most people drink.”

“If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second,” says A.I. scientist Sam Bowman. “And we just have no idea what any of it means.”

As we proceed, there might be little room left for the human being. But then again, I wonder how creative we will be in incorporating it into our lives. There is still time. Best to find out, today, how you can get involved in the applications, their humane use, and privacy.

Computer processing power doubles every two years, much faster than we humans have evolved! [See Exponential Growth] Many believe that before the end of this century, computer software will learn how to build its own infrastructure. It will set its own political agenda and develop self-replicating nanotechnology, all outside the understanding and control of its human programmers.
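
To get a feel for what “doubling every two years” adds up to, here is a minimal sketch, assuming purely for illustration that the doubling trend simply continues unchanged:

```python
# Illustrative assumption only: if processing power really doubled every
# two years, the relative power after t years would be 2 ** (t / 2).
for years in (10, 20, 50, 100):
    factor = 2 ** (years / 2)
    print(f"After {years:>3} years: about {factor:,.0f} times today's power")
```

A decade gives roughly a 32-fold increase; a century, under this assumption, a factor of about 10^15. Nothing in human evolution moves on that timescale.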

Stealth also seems to be the way ‘deep learning’ invades our lives. Deep learning, a technique built on layered neural networks, can discern complex patterns in huge quantities of data. Shoshana Zuboff (Big other: surveillance capitalism and the prospects of an information civilization, Journal of Information Technology, 4 April 2015) describes how surveillance capitalists prey on dependent populations who are neither their consumers nor their employees, who are largely ignorant of their procedures and of what all these data may lead to. She notes that surveillance capitalism reaches beyond the conventional institutional terrain of the private firm: it accumulates not only surveillance assets and capital but also rights, and it operates without meaningful mechanisms of consent. Surveillance has been changing power structures in the information economy. This might mean a further power shift beyond the nation-state and towards a form of corporatocracy. (A new term that does not bode well.) So the ‘magnificent seven’ may well prove to be the ‘frightful seven’.

"We are healthy only to the extent that our ideas are humane." Vonnegut

Creativity Explored, the series from which this piece is adapted, is based on the satirical travels of ship’s surgeon Gulliver. In part 4 he finds himself in Laputa, a name with which Swift satirised the way science was starting to become a goal in itself. For all of us it is good to wonder how far we control the machine and where the machine controls our lives.

Laputa, an island on a cloud! Perhaps you have sometimes wondered about all this scientific work; well, Swift saw (1726) that many had lost all contact with reality. His idea was that inventions only have quality if they contribute to the general good (i.e. are made with the heart of a human). [The book came about as the result of an assignment! The Scriblerus Club proposed to satirise the follies and vices of learned, scientific, and ‘modern’ man. Swift’s topic was to satirise the popular voyages to faraway lands.]

The field of artificial intelligence was born in 1955 when a small group of researchers drew up a proposal for a project at Dartmouth. “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

Q to A.I.: Devise computer programs that use the processor as little as possible. A.I. came up with programs that are permanently asleep.

An important sign of the coming of the new intelligence was I.J. Good’s 1965 paper “Speculations Concerning the First Ultraintelligent Machine”. Good, who worked alongside Alan Turing and helped build and program one of the first electronic computers, laid out a simple and elegant argument that is rarely left out of discussions of artificial intelligence and the Singularity:

‘Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make . . .’

Q to A.I.: Make a list of numbers that are all in sequence. A.I. came up with an empty list; it had discovered that an empty list never contains a mistake.

A.I. and super-intelligence [a term popularised by N. Bostrom, Future of Humanity Institute, Oxford University] are being born. Some programmers worry that, once out of the cradle, A.I. will create a world in its own image: a world governed by numbers. That may mean no more thinking in terms of body, mind, soul and spirit.

Well, luckily there is a group of scientists who call for special attention to the ethical and existential aspects. One of them is Nick Bostrom. He believes that super-intelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", is a potential outcome of advances in Artificial Intelligence. Although A.I. super-intelligence is potentially dangerous to humans, he feels that we are not powerless to counter its negative effects.

In his book “Superintelligence: Paths, Dangers, Strategies”, he argues that true A.I. might pose a threat to humanity. He opens with an illuminating tale:

A flock of sparrows decide to raise an owl to protect and advise them. They go looking for an egg to steal and bring back to their tree but, because they expect the search to be so difficult, they postpone studying how to domesticate owls until they have succeeded. … It is not known how the story ends.

The open end points to the book’s core question: will an A.I., if realised, use its vast capability in a way that is beyond human control?

In people, intelligence is inseparable from consciousness, emotional and social awareness, and the complex interaction of mind and body. An A.I. need not have any such attributes. Bostrom believes that machine intelligences, no matter how flexible in their tactics, will likely remain rigidly fixated on their ultimate goals. How will we succeed in creating a machine that respects the nuances of social cues? That adheres to ethical norms, even at the expense of its goals?

Q to A.I.: What is the easiest way for a youngster to get rich? A.I. answered: ‘Kill your parents.’

So what Bostrom and others such as Tegmark are saying is: ‘If we achieve super-intelligence, certain problems might arise; what are we going to do about that?’ Nobody knows the answers for certain, and people tend to fall into two camps. On the one hand, there are those who think it is probably hopeless. The other camp thinks it is easy enough and will be solved automatically.

The history of science is no real help here. [In 1933 Einstein was sceptical about an idea of his former student Leo Szilard for splitting atoms at will (nuclear energy).]

There is Clarke’s First Law: ‘When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.’

A.I. is active in your apps: it suggests the next word; your smartphone finds Wi-Fi without trouble; algorithms combine your favourite songs by period or style. However, when something goes wrong, algorithms are not easy to convince that a correction is needed! No problem with a playlist, I guess, but it certainly is a problem when your request concerns something more serious. There seems to be little you can do when someone is mistakenly labelled a ‘bad payer’.

With people at the controls, harm might be prevented or corrected; with algorithms the harm is done, and woe to you on your crusade for correction. Something similar seems to be the case with landings on the moon. In the 1960s and 70s quite a number of moon landings were successful; nowadays that seems not so easy, even though science has moved on! Perhaps the difference lies in the corrections made by human astronauts.

Of course bureaucracy created the opportunity: humans were told to do only as prescribed, to follow procedure rather than think. Still, the chance of repairing a mistake was there; in every bureaucracy there is always someone who dares to think. However, when there is no skin in the game [and algorithms do not think for themselves], the misery caused will be more than any robot can imagine.

There will be no more erring on the safe side.

The trouble with technology is that once it has been invented, it will be used, without any regard for usefulness and consequences. The important thing is to keep in mind what is impossible for artificial intelligence. It can 'play' but has no playful mind. It can estimate the future but not worry about it. It can record the past without being happy or sad about it. An artificial intelligence cannot cry or laugh, cannot judge beauty or ugliness, cannot experience friendship or compassion. But most importantly, how could an artificial intelligence wonder anything about itself, about what happens after death, or rather, after its batteries have run out? Who am I? What is the nature of my mind? An artificial intelligence or living robot wouldn't be able to ask itself that question and certainly wouldn't know the answer to it.

Matthieu Ricard (The Monk and the Philosopher)

Computers, even intelligent ones, lack humour, real interest and empathy. Exactly the things you need for a humane world.

Adapted from: Creativity Explored, Voyage to Laputa

Wishing you healthy thinking



