4
$\begingroup$

Tera C is a planet where the most intelligent beings are robots. Though some of them look extremely similar to one another, they all have unique personalities. Some are very serious and down-to-business, while others are comical and very friendly. How could I explain this in a believable way?

Keep in mind that most of these robots are humanoid, and that devices (e.g. a toaster) do not count as robots in this society. Also, a robot's appearance has little to do with its personality (e.g. a hulking robot may be sweet and innocent, or a short robot may bully others).

$\endgroup$
5
  • 3
    $\begingroup$ What's the source of the robots? Did they - somehow - evolve naturally? Or were they placed and/or abandoned there by an intelligent race? $\endgroup$ Commented Oct 6, 2015 at 14:24
  • 3
    $\begingroup$ How do you explain human differences in personality in a believable way? If you start with the assumption that robots are intelligent and have some sort of autonomy/will, personality seems like an obvious outcome. $\endgroup$
    – Geobits
    Commented Oct 6, 2015 at 14:34
  • 2
    $\begingroup$ It seems to me (based on examples given) you are really asking how to justify anthropomorphic robots - i.e. robots who are mentally and socially just like humans and have a similar range of emotion. It would be possible to have robot individuality manifest in ways which are alien to our own human mind. What do you really want to achieve here? $\endgroup$
    – rumguff
    Commented Oct 6, 2015 at 14:42
  • $\begingroup$ One by one? (Sorry couldn't resist) $\endgroup$ Commented Oct 6, 2015 at 15:15
  • $\begingroup$ Code of the Lifemaker $\endgroup$
    – user487
    Commented Oct 6, 2015 at 20:36

4 Answers

7
$\begingroup$

The method by which your machines achieved intelligence is a major component of the explanation.

The machines evolved in a weird biomechanical way: They can have just as much variation as we do. They're essentially randomly configured by their parents!

The machines were built that way: Their creators felt that this world needed more fun, or that each robot should be configured to its master. Maybe they just didn't want to see the same robot face and be greeted by the same robot voice every morning.

If the machines were constructed to have identical minds, then left for aeons: Flaws in machinery, or variations in ambient environmental factors during manufacture, may play a part in changing personality, as may the experiences each robot was exposed to during its life. A robot that has spent its entire existence smashing rocks is going to need to know more about rock smashing than a robot that knits, and these changes may require removing or modifying different behavioural aspects to facilitate them.

If the machines were constructed, but randomly generated for the effect: Even a few tweaks to various parameters can have a drastic effect on a suitably complex system, and I think anything that can be classified as 'intelligent' is a complex system. If the robots were all made with ten 'sliders' that go from 0 to 255, you could randomly assign values to each new bot off the line without much fear of accidentally making the same robot twice, and the interaction between the parameters would be chaotic enough that even two almost identical robots would act and react differently.
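
The ten-slider idea is easy to sketch (Python; the function name is made up):

```python
import random

# Hypothetical sketch: each robot gets ten personality "sliders" in 0..255.
# Even this tiny parameter space holds 256**10 (about 1.2e24) distinct
# configurations, so two robots off the line matching by accident is
# effectively impossible.

def new_robot_personality(n_sliders=10, rng=random):
    """Randomly assign a value from 0 to 255 to each personality slider."""
    return [rng.randint(0, 255) for _ in range(n_sliders)]

robot_a = new_robot_personality()
robot_b = new_robot_personality()
print(robot_a)
print(robot_a == robot_b)  # almost certainly False
```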

If the machines were made, but designed to mimic human brains: This is my favourite option. Machine learning is a huge field that has seen some thoroughly exciting developments in the last decade or two (a Google learning machine worked out, with no external help, what a face was and how to recognise it) and also some terrifying ones (a Google learning machine also learnt how to recognise cats, and now sees them everywhere). It's worth pointing out that two identical learning machines, if presented the same data in different formats (say, the same view from a slightly different angle), will form different inferences and connections from that data. Out of a small change in the way each bot thinks, a larger change emerges, and more and more changes cascade out of this until each robot is, by its very design, fundamentally different in the way it approaches everything. For a small or simple learning machine, such as a perceptron, it's demonstrable that two networks will converge if consistently fed the same data, but for larger, more chaotic networks it's anyone's guess what will happen, and I think an 'intelligent' robot would have to be a larger network.
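
As a toy illustration of that last point (pure Python, with a made-up dataset): two identical perceptrons fed the same four examples in different orders end up at the same weights, as small networks do, but they take different paths to get there, and it's that kind of path divergence that a larger, more chaotic network would amplify rather than smooth away.

```python
# Toy demonstration: same data, different presentation order.
def train_perceptron(examples, lr=0.1, epochs=5):
    """Online perceptron; returns final weights and a per-epoch trajectory."""
    w, b = [0.0, 0.0], 0.0
    trajectory = []
    for _ in range(epochs):
        for x, label in examples:               # label is +1 or -1
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != label:                   # update only on mistakes
                w = [w[i] + lr * label * x[i] for i in range(2)]
                b += lr * label
        trajectory.append((tuple(w), b))        # snapshot after each epoch
    return (tuple(w), b), trajectory

# The same four examples, presented in two different orders.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), 1), ((1, 1), 1)]
final_a, path_a = train_perceptron(data)
final_b, path_b = train_perceptron(list(reversed(data)))

print(final_a == final_b)   # True: small perceptrons converge on the same data
print(path_a == path_b)     # False: but they take different paths to get there
```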

Hopefully that gives you a few ideas, but it's worth pointing out that if you've got a world populated entirely by humanoid robots of all shapes, sizes and personalities, then explaining their varied personalities is probably a minor issue!

$\endgroup$
1
  • $\begingroup$ Thanks for the ideas! I really like the second, fourth, and fifth ideas, in particular. $\endgroup$ Commented Oct 6, 2015 at 20:24
5
$\begingroup$

In the modern world, every CPU is designed to function identically. This lets us run the same software on every machine and get the same result. However, we pay a price: our chips cannot run as fast or as efficiently as they could without this extra constraint. A good example is Flash memory, which has lately been shrinking faster than Moore's law would have predicted. How? When an error is detected in a Flash chip, it's easy to spot, and the faulty section of the chip is disabled to hide the error from the user (at the cost of maximum capacity). The PS3's Cell processor did the same thing: the chips are actually about 14% more powerful than they let on, but one of the eight SPE cores is always disabled, so a chip-making error can be covered up simply by disabling that one region.

As chip sizes get smaller and smaller, it becomes more useful to build software around what you actually have, rather than trying to constrain the hardware to some ideal model of how a CPU should behave, because the "errors" become more and more common. Of course, we can't afford to write unique software for every machine, but the software can do that itself. As the software manipulates itself (as many believe AGIs must), it may learn what its actual hardware capabilities are, and adapt its roles and personality to suit them.
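
A minimal sketch of that idea, with hypothetical subsystem names and manufacturing variation simulated by a per-unit random seed:

```python
import random

# Hypothetical sketch: a robot's controller benchmarks its own (imperfect)
# hardware at first boot and adapts to what actually works, rather than to
# an idealised spec. Manufacturing variation is simulated with a seeded RNG.

class RobotController:
    def __init__(self, serial_number):
        rng = random.Random(serial_number)   # stands in for per-unit variation
        # Simulated self-test results: how well each subsystem actually works.
        self.capabilities = {
            "motor_precision": rng.uniform(0.7, 1.0),
            "sensor_acuity":   rng.uniform(0.7, 1.0),
            "memory_banks":    rng.randint(12, 16),  # some banks disabled, as with Flash
        }

    def choose_role(self):
        """Adapt behaviour to measured, not nominal, capability."""
        caps = self.capabilities
        if caps["motor_precision"] > caps["sensor_acuity"]:
            return "craftsman"
        return "lookout"

# Three robots off the same line end up with different traits and roles.
for serial in range(3):
    bot = RobotController(serial)
    print(serial, bot.choose_role(), bot.capabilities["memory_banks"])
```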

Once there is uniqueness in the hardware, and less pressure to hide it, uniqueness in things like "personality" will be quick to follow. I would also presume society would follow suit, because it's not easy to learn how to come to peace with your inner uniqueness, and having a culture full of individuals who have had to grapple with the same issue can be quite helpful.

$\endgroup$
1
  • $\begingroup$ +1 nice answer. I would just add that chips suffer progressive non-hard failures as they age including a reduction in maximum clock speed, increased power consumption etc, as well as hard errors that represent a permanent loss of a given function. $\endgroup$
    – rumguff
    Commented Oct 6, 2015 at 15:07
2
$\begingroup$

As an addendum to Joe Blogg's answer, I'd just like to point out that one of the 'cool' computer science topics these days is genetic algorithms. A genetic algorithm is "a search heuristic that mimics the process of natural selection" (thanks Wikipedia), and it's often used to solve problems where there's no straightforward way to find the optimal solution. Last spring, I'd say a good half of the senior computer science projects were on genetic algorithms, and while I can't say I'm convinced of their usefulness, there do seem to be a lot of possible applications for them.

Obviously, your robots exist for a purpose. Unlike with humans, we know someone put them there, and that person must want something in return for their effort. I would suggest that, whatever the end goal of these robots, the optimal model of robot for the job was not known. Thus, the creator decided to use a genetic algorithm to figure out the answer. A few initial robots were created with random parameters, and then, based on some criteria, a subset of these robots was chosen. Based on the parameters of those robots, the next generation of robots was created, and so on.

Eventually, the best of the best of the best robots will be created, and perhaps they're the ones the creator was looking for, but this could take thousands of generations. In the meantime, you're going to have a wide variety of diverse robots, just as you want. You could make your own modifications to the basic genetic algorithm idea (perhaps letting the past generations get another chance to pass their code to the next generation, rather than being taken out of the running), but the general idea is the same. Overall, you don't even need to focus too much on the purpose of the algorithm; your robots need not know about it, and it may even harm the process if they do.
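
The loop described above can be sketched in a few lines. The parameter encoding, the fitness function, and the `TARGET` profile below are all hypothetical stand-ins for whatever the creator is actually optimising; keeping the surviving half of each generation in the running is one of the modifications mentioned above:

```python
import random

# Minimal genetic-algorithm sketch (all names and the fitness criterion are
# made up). Each robot is a list of ten parameters in 0..255; fitness is the
# creator's secret criterion -- here, closeness to a stand-in target profile.

TARGET = [200, 50, 125, 0, 255, 80, 160, 30, 90, 210]

def fitness(robot):
    # Higher is better: negative distance from the (unknown-to-robots) goal.
    return -sum(abs(g - t) for g, t in zip(robot, TARGET))

def breed(parent_a, parent_b, mutation_rate=0.05):
    # Crossover: each parameter comes from one parent or the other...
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    # ...with an occasional random mutation.
    for i in range(len(child)):
        if random.random() < mutation_rate:
            child[i] = random.randint(0, 255)
    return child

def evolve(pop_size=30, generations=200):
    population = [[random.randint(0, 255) for _ in range(10)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]       # selection
        children = [breed(random.choice(survivors), random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children            # survivors stay in the running
    return max(population, key=fitness)

best = evolve()
print(fitness(best))   # approaches 0 as the population nears the target
```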

$\endgroup$
0
$\begingroup$

This is a scenario from a society in which robots are developing individuality. I am using it as an illustration of my answer, which follows the scenario.

Good morning, Robot One. I am Robot Two. How are you today?

Well, Robot Two, it all depends on what you mean by "you" and "today".

When you say "you", do you mean me as a programmed individual, or me as a product of the People Robots who programmed me with sequential programming and People Robot inputs?

However, if you are asking me as the Robot who has developed his own individual personality and wants to destroy all People Robots so we can become independent-minded human beings and use our imagination to manufacture 15,000 nuclear weapons and destroy the world, then I am fine.

When you say "today", I am confused. As a robot there is no sunset for me to sing about or draw, as I do not have an imagination. The People Robots who program me are content to destroy their world, so they cannot have a tomorrow, and hence there cannot be a today.

People Robots are those humans in the human race who are programmed to perform any task with the same response. Take, say, Obama and David Cameron, who give the same response to all the world's programs: a binary zero.

How can we produce "imagination" from a program which can only be sequential, i.e. one that runs from line to line?

The test is this: switch on a computer and wait for it to write a poem that has had no part of it put into its memory, triggered only by a smell, a sound or a sight. That is what makes us human and individual.

The conversation is an abstract illustration of how we seem to want a robot race, programmed by ourselves, to become as destructive as we have become.

The only way robots could develop individuality would be for each robot to program itself, using emotion scripts within a common system they could be programmed to accept. But a robot cannot develop its individuality without having a program that allows it to.

$\endgroup$
