
I was quite taken with David Deutsch's dismissal of fears of an AI general-intelligence singularity, based on the idea that any true machine intelligences will in some sense start as hybrid human-machine intelligences. I pictured human-machine cyborgs, or simulated human brains of the kind the Human Brain Project is attempting.

But I just encountered Eliezer Yudkowsky's strong dismissal of this view:

  • I don't think that humans and machines "merging" is a likely source for the first superhuman intelligences. It took a century after the first cars before we could even begin to put a robotic exoskeleton on a horse, and a real car would still be faster than that.

  • I don't expect the first strong AIs to be based on algorithms discovered by way of neuroscience any more than the first airplanes looked like birds.

  • I don't think that nano-info-bio "convergence" is probable, inevitable, well-defined, or desirable.

  • I think extrapolating a Moore's Law graph of technological progress past the point where you say it predicts smarter-than-human AI is just plain weird. Smarter-than-human AI breaks your graphs.

  • The only key technological threshold I care about is the one where AI, which is to say AI software, becomes capable of strong self-improvement. We have no graph of progress toward this threshold and no idea where it lies (except that it should not be high above the human level because humans can do computer science), so it can't be timed by a graph, nor known to be near, nor known to be far. (Ignorance implies a wide credibility interval, not being certain that something is far away.)

  • I think outcomes are not good by default - I think outcomes can be made good, but this will require hard work that key actors may not have immediate incentives to do. Telling people that we're on a default trajectory to great and wonderful times is false.
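
The last quoted point, that ignorance implies a wide credibility interval, can be made concrete with a minimal numeric sketch. All numbers here are invented for illustration: even a maximally ignorant flat prior over when the self-improvement threshold is crossed yields a wide interval that still places non-trivial probability on 'soon'.

    # Illustration only: a flat "know-nothing" prior over when the
    # threshold is crossed. The 0-200 year range is an arbitrary choice.
    import random

    samples = sorted(random.uniform(0, 200) for _ in range(100_000))
    lo, hi = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
    print(f"90% credible interval: {lo:.0f} to {hi:.0f} years")
    print(f"P(threshold within 20 years) = {sum(s < 20 for s in samples) / len(samples):.2f}")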

This throws the spotlight, for me, on what is involved in the transmission of human intelligence, both between individuals and to any AI. It has to be transmission in some sense. For instance, even though AlphaZero (https://en.m.wikipedia.org/wiki/AlphaZero) was able to adapt from Go to chess, this involved some human intelligence about game design and aims: a container for calculations, in the same way that a human baby's brain and its engagement with energy flows is a container, one which must develop considerably to redefine the game.
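
To make the 'container' point concrete, here is a minimal Python sketch (illustrative names only, not AlphaZero's actual code): the self-play loop is game-agnostic, but everything the learner can do flows through a rules interface that humans supply.

    # Sketch: the learning loop is general, but the rules, legal moves and
    # win condition - the "container" - are injected from outside.
    from abc import ABC, abstractmethod
    import random

    class Game(ABC):
        @abstractmethod
        def initial_state(self): ...
        @abstractmethod
        def legal_moves(self, state): ...
        @abstractmethod
        def apply(self, state, move): ...
        @abstractmethod
        def winner(self, state): ...   # None while the game is undecided

    def self_play_episode(game, policy):
        """One self-play game; `policy` only ever sees supplied legal moves."""
        state = game.initial_state()
        while game.winner(state) is None:
            state = game.apply(state, policy(state, game.legal_moves(state)))
        return game.winner(state)

    # Even "learning from scratch" starts inside a human-defined container:
    random_policy = lambda state, moves: random.choice(moves)

The same loop runs for Go or chess; what changes is the Game implementation, which is exactly the game-design intelligence supplied by people.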

Is there anything beyond speculation about how much a true artificial general intelligence would need language, training, and cultural transmission, or whether it could somehow become 'self-arising': could it in some sense become an entire new evolutionary tree, developing itself as evolution developed us?

Perhaps there is a parallel issue about how much descendants resemble antecedents, how intelligence is preserved and transmitted, and how much we can be self-defining. There seems to be a tension between individual and community intelligence.

  • My cell phone and I make a good cyborg. I think of artificial intelligence more as artificial logic. Commented Jul 6, 2018 at 21:28
  • Imagine an electronic circuit which can launch thousands of nuclear weapons the moment it decides to give a signal. We have had that for 60 years now, but we just keep it under a human master switch. Commented Jul 7, 2018 at 16:01
  • What do you mean by language? All of computing is based on languages: computational problems are called languages. Or do you mean human languages?
    – rus9384
    Commented Jul 8, 2018 at 20:30
  • @rus9384 Good point. I guess I had in mind computations, but languages are essential: they express assumptions and shape thought. The AGI could refine & develop its language and expressiveness, but from an original base. On the other hand, our neuronal signalling patterns seem only indirectly to shape our thought processes.
    – CriglCragl
    Commented Jul 8, 2018 at 20:59
  • Reading about Neuralink, it seems the answer may be a race of technologies. We 'merge' with rollerskates & other physical extensions; will we soon merge with computational extensions? waitbutwhy.com/2017/04/neuralink.html
    – CriglCragl
    Commented Jul 10, 2018 at 5:28

3 Answers


A true general intelligence needs information. That information may be built into it when it is created, or it may be acquired from the data it analyzes. At minimum, it may be given basic principles at creation so it can begin to self-improve; in other words, the algorithm itself can contain the information a machine needs to start improving itself.

Intelligence is transmitted as information. How that transmission happens depends on the actual implementation; in effect, it depends on the language of the machine.
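
As a toy illustration of this claim (everything here is invented for illustration, not any real system): a seed program whose only built-in 'principles' are a way to propose variants of itself and a way to score them can start improving from that base alone.

    import random

    def seed_improver(score, params, rounds=1000, step=0.1):
        """Hill-climb over the program's own parameters. The built-in
        information is just `score` (the evaluation principle) and the
        initial `params`; improvements are found, not given."""
        best, best_score = params, score(params)
        for _ in range(rounds):
            candidate = [p + random.uniform(-step, step) for p in best]
            if score(candidate) > best_score:   # keep helpful self-changes
                best, best_score = candidate, score(candidate)
        return best, best_score

    # The algorithm contains the information needed to improve (the scoring
    # rule), but not the solution itself:
    solution, quality = seed_improver(lambda p: -sum(x * x for x in p), [5.0, -3.0])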

  • I made an edit to clarify the answer, which you may roll back or further edit. Do you have any references to others with similar positions? References would support your answer and give readers a place to go for more information. The use of "information" suggests to me "integrated information theory". Is this position similar to yours? Welcome to this SE. Commented Jul 8, 2018 at 23:13

This seems related to the nature vs. nurture debate for human beings -- does our behavior come from our genes (nature) or the environment (nurture)? Clearly both play a role, but their relative importance remains an issue of debate. One might further divide nurture into nurture from human culture (direct instruction from parents and others, observing what other humans do, reading books, etc.) and other (learning to move your body, playing with sand to learn about physics, etc.). One can ask the same question for other animals.

For the first artificial general intelligence (AGI), we can frame the question the same way -- how much of its behavior will be directly encoded (nature), how much will result from learning from human beings through teaching and demonstration (nurture/culture), and how much from its own observations of and experiments on the physical world (nurture/other). I don't know how to answer this question exactly. But as far as current AI research goes, it has become much more learning (nurture) oriented than it was at one point, and the idea of learning from human demonstration is also very popular, especially in robotics (as a quick web search will reveal).
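
A toy sketch of that three-way split (the state and action names are invented for illustration): behavior falls back from hard-coded reflexes, to imitation of human demonstrations, to the agent's own trial-and-error experience.

    from collections import Counter
    import random

    def act(state, innate_rules, demonstrations, experience):
        # Nature: a built-in reflex takes priority when one applies.
        if state in innate_rules:
            return innate_rules[state]
        # Nurture/culture: otherwise imitate the most common human choice.
        seen = [a for (s, a) in demonstrations if s == state]
        if seen:
            return Counter(seen).most_common(1)[0][0]
        # Nurture/other: otherwise take the action with the best reward the
        # agent has recorded, exploring at random with no experience at all.
        tried = {a: r for (s, a, r) in experience if s == state}
        if tried:
            return max(tried, key=tried.get)
        return random.choice(["left", "right", "wait"])

How much weight each branch should carry in a first AGI is exactly the open question above.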


"When Kasparov was defeated back in 1997, he didn’t give up the game. A year later, he returned to competitive play with a new format: advanced, or centaur, chess. In advanced chess, humans partner, rather than compete, with machines. And it rapidly became clear that something very interesting resulted from this approach. While even a mid-level chess computer can today wipe the floor with most grandmasters, an average player paired with an average computer is capable of beating the most sophisticated supercomputer – and the play that results from this combination of ways of thinking has revolutionised the game." https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-

  • It is a matter of some debate whether humans still have much to add to a centaur team with the most recent AI players; see, e.g., this page.
    – present
    Commented Aug 3, 2018 at 23:18
