
For the scope of this question, let's consider an intelligent system to be something that accomplishes some kind of goal or goals. By this definition, one could say that all people are technically intelligent systems in some sense (though what defines a "goal", and to what extent it is actually "accomplished", is a hard question).

More typically, though, people associate "intelligent system" with something like a machine: something non-biological and without "consciousness". The word "system" may lead people to think of a "systematic approach" as "non-human", because something "power-driven" suggests the mechanical, even though people are power-driven as well (by energy).

Consider the machine case. A tool is usually built for a purpose, or a set of purposes. Take the smartphone as an example. Smartphones are believed to be "smarter" because they let you do more computing tasks, much as you would on a desktop: a multi-purpose machine with multi-purpose software and systems, and a wide range of available and buildable tasks. This implies, however, that a desktop computer of 'X' architecture and 'Y' design is "smarter" than a previous-generation cellphone.

The whole scope of computers, hardware and software included, is too wide to explain here in detail, so I'll avoid that. In effect, the "common computer" is generally considered intelligent today because it was made to be more efficient, multifaceted, and user-friendly, with a massive array of possibilities.

A can opener is "stupid" because it can only open cans. Then again, is a computer really smart, or is it just set up to make you think it's smarter from your interaction and interface with it today?

Those "cool-looking" smartphones seem smarter every day, but relatively little changes at the lowest level of their operation: logic gates, a power source, and the engineering of electrical flow, among other components, which together provide interaction, memory, processing, and display.

A smartphone isn't necessarily getting smarter; it is more that your level of interaction with it tricks you into thinking it is. At the lowest level, the system, regardless of design, still does what any computing machine does: it processes data and instructions, producing output from input.

If we consider computation at the level of its foundational design, the Turing-complete set of rules underlying it, a smartphone is actually no smarter than a 1960s mega-box punch-card machine.
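To make the lowest level concrete, here is a minimal sketch (in Python, purely for illustration; the function names are my own) of how richer operations reduce to compositions of a single primitive gate, NAND. The same primitive underlies a punch-card machine and a smartphone alike:

```python
# One primitive gate; everything below is composed from it.
def nand(a, b):
    return not (a and b)

# "Smarter" operations are just compositions of the same primitive.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

# A 1-bit half adder: the seed of all the arithmetic a CPU performs.
def half_adder(a, b):
    return xor(a, b), and_(a, b)  # (sum bit, carry bit)

print(half_adder(True, True))  # -> (False, True): 1 + 1 = binary 10
```

Stacking ever more of these compositions gives you a processor; it does not change what the primitive underneath is capable of.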

Just because you can run so much "cooler" stuff on your smartphone doesn't necessarily make it smarter. And people, like machines, can be considered intelligent machines.

What is the relation between an intelligent system in a machine and an intelligent system in a human?

Can we simply say that machines are smarter than us? Are we biased and selfish by assuming we are smarter and more capable than a modern computing device of any kind? Are we both smart?

  • "relatively little to nothing changes on the lowest-scale of its operation" - that's pretty dismissive of the software advances that went into making it appear "intelligent." Commented Jan 8, 2015 at 16:51
  • The software is just a higher-level representation of what is finalized at the operable scale of logic gates and electronic energy. Changing the software never changes the hardware per se, but it can optimize for better performance. In other words, better software doesn't mean different hardware. As I said, appearing more intelligent doesn't mean operating more intelligently. Think of people who pretend to be smarter than they are by throwing around words they memorized for a specific occasion. Of course, there is major trouble in comparing humans and their minds to computer hardware. Commented Jan 8, 2015 at 18:29
  • The term intelligent/smart is a relative term, and using it as an absolute term is the cause of all such philosophical questions. If there are two people, one a very skillful sportsman and the other a very skillful physicist, who do you think is more intelligent/smart?
    – Ankur
    Commented Jan 9, 2015 at 9:20

2 Answers


My short answer ("What is the relation between an intelligent system in a machine and an intelligent system in a human?"): They are quite distinct things (or properties).

You can "isolate" the intelligent system in a machine logically (and, in some sense, physically): it is the exact way (configuration) in which the machine is organized at a particular time. We may simply call it "software". Whether or not the running software changes the physical machine in a predictable, deterministic way, the software can always be isolated from the hardware, independent both of the stuff the software runs on and of any particular physical configuration of it.

In biological machines like humans (I call them "machines" in the context of your question to be more specific), on the other hand, it is not so clear that the intelligence comes from a software-like logical/physical organization. You may never be able to isolate the "software" of a human as in the machine case. A hint comes, possibly, from quantum-level limitations, where you may not be able to speak of a "particular" state of matter being the "actual" reality. Human intelligence may have similar roots (as proposed by, say, Penrose). Note that this distinction does not prohibit machines with human-like (or even superior) intelligence. It only claims that you may never isolate the software (the "actual" physical/logical configuration) of a machine whose intelligence is equivalent to a human's. Can we then say that we should not call it an "intelligent system IN the human" but an "intelligent system OF the human"?


I believe that your definition is rational, both for its use on machines and humans.

The difference between machines and humans seems to work its way down to the organization of a machine versus the organization of a human. However, I think the defining difference between organic intelligence and mechanical intelligence, at our current level of technology, is flexibility. A machine is very good at a small set of tasks, while a human is very diverse in his or her talents.

Consider the edge cases. A person who seems to "only be good for one thing" often picks up wordings usually associated with machines. A machine which seems to be highly adaptable often picks up wordings associated with humans.

I believe the implementation reason for machines being good at a small set of tasks stems from the human need to be able to program them. To a non-developer or non-engineer, the capabilities of a smartphone are magic, and perhaps worthy of being given human traits, such as a name. However, they live in a society with plenty of developers and engineers who tell them, "It's only a machine. I can see how it is built." The developers and engineers can see the CPUs and the memory and the caches and so on. They can see that, while the smartphone appears tremendously flexible, it is actually quite set in its ways.

They then proceed to demonstrate their ability to force the smartphone to do exactly what they want, demonstrating that they can predict how the machine will respond to stimulus.

With organics, it is much more difficult to draw clean lines to divide an entity into blocks. Every time we do (such as dividing a body into muscles, brains, hearts, etc), we find the inter-connectivity between these is so complicated that we can't demonstrate an ability to predict how the body will respond to stimulus.

Consider how many times we have been told "doing X is healthy," only to find out a decade later that it was actually unhealthy; we simply didn't understand the human body well enough. Consider that we can model the brain and say "our sense of hearing is here," and then find a person who underwent a stroke whose brain completely remapped the sense of hearing elsewhere. We really have a hard time predicting what an organic living creature will do.

On the border between these is the field of Artificial Intelligence. We have tools such as neural networks where even the masters teaching them say "We know why they work, statistically, but we can't look at a neural net and say 'here is why it solves the problem.'"
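That opacity can be illustrated with a toy sketch in plain Python (the weights here are hand-picked rather than trained, so for once they happen to be legible; a trained network's weights are the same kind of object, rows of numbers with no labels attached):

```python
import math

def sigmoid(x):
    # Smooth 0-to-1 squashing function used as the neuron activation.
    return 1.0 / (1.0 + math.exp(-x))

def forward(a, b):
    # Hand-picked weights that happen to compute XOR. Nothing in the
    # numbers themselves announces "this is XOR"; the comments are the
    # only explanation, and a trained net comes without them.
    h1 = sigmoid(20 * a + 20 * b - 10)       # behaves roughly like OR
    h2 = sigmoid(-20 * a - 20 * b + 30)      # behaves roughly like NAND
    return sigmoid(20 * h1 + 20 * h2 - 30)   # roughly AND -> XOR overall

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), round(forward(a, b)))
```

The network "works, statistically": every input lands near 0 or 1 as desired. But asked *why* the weight 20 rather than 19, or what each hidden unit "means" in a net of millions of such numbers, we have no general answer.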

Another major issue for machine intelligence is that machines can be stopped and their internal state examined. We can stop them because we made them that way. We can take an AI and clone it without worrying about any moral implications. However, as we make devices that run faster and faster, we will eventually have to stop making them easy to stop and examine. Consider an AI which runs on one of IBM's new neural chips. Consider a chip where the neurons are made so small that each one is a little imperfect. An AI which runs on a given chip may not be clonable to another chip, because the AI took advantage of those imperfections. Or consider an AI running on a chip that handles signals so blazingly fast that there is no way to stop it mid-process and examine its state.
