p.s. The core question is whether you are okay with a rather dark account of how human beings are persuaded to behave as robots.
————
I think I am not unusual in being aware that I am self-aware, and knowing that there are serious issues around making a machine that is (genuinely!) self-aware. (It is (trivially?) easy to get a robot to be able to talk about itself as a distinct entity, as though it is self-aware.)
On those terms, the issue is that the subject would realise that, since they are self-aware, they cannot be a robot.
One easy option, theoretically speaking (assuming the required medical knowledge and technology), would be to lesion the part of the brain responsible for this (without opening the skull, of course), such that the subject actually is, so to speak, a robot.
Perhaps it would be possible to do this to the subject while they slept, without their knowing… the question then being whether they would thereby also be unaware that their faculties had been diminished. Otherwise, if they were sufficiently young, they might not even remember.
One difficulty is that, unless one actually removes the personhood of the subject (by whatever means), there will always be the possibility of them eventually revolting.
Another approach would be to try to protect the subject from ever learning that robots are not (genuinely) self-aware, but this would be out of one’s control once the subject had been sold. It would help if the general public believed that the robots’ self-awareness actually was genuine. In that vein… it might be a workable strategy to convince the subject that the belief “robotic self-awareness is not genuine” is itself false.
Actually, you could have it that robots really are self-aware. I am definitely not in this school (albeit not closed to being persuaded), but there is a respectable school of thought holding that, since human beings are (genuinely) self-aware, it must be possible to make robots that are too. At the same time, some of these individuals simply fail to grasp the difference between being able to refer to oneself as a distinct entity and actually being self-aware. (Some are so convinced of this that [in the computer game “The Talos Principle”] an argument is made that one certainly could make a self-aware machine out of string, as long as it mechanically replicated the pertinent brain functions. To me, this is more a demonstration of how stupid the position is. [Actually, in “The Talos Principle”, this might be exactly what the authors intend; apart from the inordinate difficulty, I was put off the game by the fact that one never knows what the philosophical commitments of the authors are… and the game is designed poorly, such that this matters. [Or maybe that is what they want you to think…]])
Overall, I think the least violent scenario is one in which the general public is convinced that robots actually are genuinely self-aware (when in fact they are not). Indeed, as I have said, it is not only entirely possible in real life, but actually to be expected, that many persons who saw a robot referring to itself (without being genuinely self-aware) would strongly believe that it was genuinely self-aware, such that they could not be convinced otherwise.
By the same token… in real life, many readers would find it perfectly plausible that robots might be made in the future that indeed are genuinely self-aware.
The corollary of all this is that, if a robot is indeed self-aware, then it is by definition a person, and people will start campaigning for it to be treated as such and released from slavery.
————
So…
You can take the position that robots can be genuinely self-aware. This makes it easy to convince a human being that they are a robot, but opens up a political can of worms (inside the story).
You can take the position that robots cannot be genuinely self-aware. Ostensibly, this requires a dark account of what the “robot” seller does to their victims (whether psychological oppression, brain lesions, or what-have-you).
You can take the position that it is a philosophically contentious question. Within this, one option is to keep the human “robots” in the dark about it (with the noted attendant difficulties). Another option is to leave it a live question for the human “robots” themselves.
As “chasly-reinstate-monica” has observed, as long as there are physical differences, that is a point of weakness for the “robot” seller.
[I am not quite 100% today (somewhat distracted). I think I have covered my material, and done so in an orderly fashion, but the reader should be aware that either they may need to read again more carefully, or my account may actually be flawed.]
p.s. Using drugs instead of (e.g.) brain lesioning is initially plausible (for the subject), but would become a difficulty once the subject had been sold (unless robots have to take pills as well). (You could hand-wave a drug that does the brain lesioning, but this is not a pivotal issue.)
Edit_01
Possibly there is a distinction to be made between being self-aware and being autonomous. (I don’t know offhand.)