How your machines achieved intelligence in the first place goes a long way toward explaining this.
If the machines evolved in some strange biomechanical way: they can vary just as much as we do, because each one is essentially randomly configured by its parents!
If the machines were deliberately built that way: their creators may have felt this world needed more fun, or that each robot should be configured to suit its master. Maybe they just didn't want to see the same robot face and be greeted by the same robot voice every morning.
If the machines were constructed with identical minds, then left for aeons: flaws in the machinery, or variations in ambient conditions during manufacture, may account for some differences in personality, as may everything each robot was necessarily exposed to during its life. A robot that has spent its entire existence smashing rocks needs to know more about rock smashing than a robot that knits, and accommodating that knowledge may mean removing or modifying different behavioural traits in each.
If the machines were constructed, but randomly configured for effect: even a few tweaks to various parameters can have a drastic effect on a suitably complex system, and anything that qualifies as 'intelligent' is surely a complex system. If every robot were made with ten 'sliders' ranging from 0 to 255, you could randomly assign values to each new bot off the line with little fear of accidentally making the same robot twice, and the interactions between all those parameters would be chaotic enough that even two nearly identical robots would act and react differently.
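To give a feel for the numbers, here's a minimal sketch of that slider idea (the robot, its slider count, and the production-run size are all made up for illustration). Ten sliders of 0-255 give 256^10, roughly 1.2 × 10^24, possible personalities, so even a very large production run is vanishingly unlikely to produce a duplicate:

```python
import random

# Illustrative setup: each robot gets ten personality "sliders" in 0..255.
NUM_SLIDERS = 10
SLIDER_MAX = 255

def new_robot_personality(rng: random.Random) -> tuple:
    """Randomly configure a robot as it comes off the production line."""
    return tuple(rng.randint(0, SLIDER_MAX) for _ in range(NUM_SLIDERS))

rng = random.Random(42)  # seeded only so the run is repeatable

# Total possible configurations: 256 options per slider, ten sliders.
total_configs = (SLIDER_MAX + 1) ** NUM_SLIDERS

# Roll 100,000 robots off the line and count distinct personalities.
production_run = {new_robot_personality(rng) for _ in range(100_000)}

print(total_configs)        # ~1.2e24 possible personalities
print(len(production_run))  # distinct robots in this run
```

By the birthday-problem estimate, the chance of any collision in a run of 100,000 is on the order of n²/2N ≈ 10^-15, so in practice every robot is unique.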
If the machines were made, but designed to mimic human brains: this is my favourite option. Machine learning is a huge field that has seen some thoroughly exciting developments in the last decade or two (a Google learning machine worked out, with no external help, what a face was and how to recognise one) and some terrifying ones (a Google learning machine also learnt how to recognise cats, and now sees them everywhere). It's worth pointing out that two identical learning machines, presented with the same data in different formats (say, the same view from a slightly different angle), will form different inferences and connections from that data. Out of a small change in the way each bot thinks, a larger change emerges, and more and more changes cascade from that until each robot is, by its very design, fundamentally different in the way it approaches everything. For a small, simple learning machine such as a perceptron, it's provable that training converges when the machine is consistently fed the same (linearly separable) data, but for larger, more chaotic networks it's anyone's guess what will happen, and I think an 'intelligent' robot would have to be a larger network.
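The "same scene, slightly different angle" effect shows up even in a toy perceptron. Below is a minimal sketch (the training function and the nudge values are my own illustration, not any particular library): two identical learners are taught the same AND-like rule, but one sees every input shifted by a small fixed offset. Both learn the rule, yet they end up with different internal weights:

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Classic perceptron rule: nudge weights by the prediction error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The same logical-AND scene, viewed "straight on" and from a tiny angle.
clean = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
nudged = [((x1 + 0.05, x2 - 0.05), t) for (x1, x2), t in clean]

w_a, b_a = train_perceptron(clean)
w_b, b_b = train_perceptron(nudged)

print(w_a, b_a)
print(w_b, b_b)  # same rule learned, different internal weights
```

Scale that divergence up from three numbers to millions of connections and you get robots that genuinely think differently despite rolling off the same line.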
Hopefully that gives you a few ideas, but it's worth pointing out that if you've got a world populated entirely by humanoid robots of all shapes, sizes and personalities, then this is probably a minor issue!