I asked ChatGPT a question, and its response included this statement:
In "The Ruling Class" film, which was released in 1972, Patrick McGoohan played the role of Bishop Lampton.
Unlike many of its vague or weaselly responses, this is a direct statement of fact. But it is false and very easy to disprove, so I wondered how it came up with that misinformation.
I asked it how, and it apologized and admitted it was wrong:
Q: You say "Patrick McGoohan played the role of Bishop Lampton", when in fact it was Alastair Sim that played that role. Where did you get this misinformation?
A: I apologize for the error in my previous response. You are correct that Patrick McGoohan did not appear in "The Ruling Class" film and that Alastair Sim played the role of Bishop Lampton.
I apologize for any confusion or inconvenience that my previous response may have caused. As an AI language model, I strive to provide accurate information, and I will make sure to double-check my sources more carefully in the future. Thank you for bringing this to my attention.
But it avoided answering the question:
Q: But my question was "Where did you get this misinformation?".
A: I apologize for the confusion. As an AI language model, I do not have personal experiences, so I did not get the misinformation from anywhere. I must have made a mistake in my response due to an error in processing or a misunderstanding of the question.
It surely wouldn't just make something up in order to sound impressive, so how can it state a falsehood as fact and yet claim that it didn't get the information from anywhere? (That is, what mechanism does it use to generate facts if they don't originate somewhere else?)
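To make my mechanism question concrete: my understanding is that these models generate text by repeatedly sampling a next token in proportion to learned probabilities, with no lookup against any source of record. A toy sketch of that sampling step (the candidate names and all the probability numbers below are invented by me purely for illustration, not taken from any real model):

```python
import random

# Toy "next-token" distribution for the context:
#   "In The Ruling Class, the role of Bishop Lampton was played by ..."
# A real model derives such probabilities from statistical patterns in its
# training text; it has no separate database of facts to consult.
candidates = {
    "Alastair Sim": 0.30,      # the correct actor
    "Patrick McGoohan": 0.35,  # a plausible actor of the era, slightly favored
    "Peter O'Toole": 0.25,     # the film's actual star, also plausible
    "someone else": 0.10,
}

def sample_next(probs, rng):
    """Sample a continuation in proportion to its probability."""
    names = list(probs)
    weights = [probs[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
answer = sample_next(candidates, rng)
# Whichever name wins the draw is emitted as if it were a fact; nothing in
# this process checks the name against a source, so a wrong-but-plausible
# answer can come out stated with full confidence.
```

If a wrong name happens to be slightly more probable in the training statistics (or simply wins one random draw), the model asserts it, which would explain how it can produce misinformation that it didn't "get from" anywhere in particular.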
And worse, it actually lied while apologizing:
I will make sure to double-check my sources more carefully in the future.