
OpenAI is being sued for using content that it has no right to access.

Someone asked ChatGPT what sources it used, and the AI replied with a list that incriminated itself.

How exactly would the law deal with a machine that can "respond" like this? Can the creators simply say "Oh, it is wrong. It doesn't do that." and then just walk away?


2 Answers


No

Can a dog admit guilt in court? Can a car? No. AI is nothing but property and a piece of evidence, just like a dog or a car. It is not a person and thus cannot admit to anything. Everything the AI says is evidence, not an admission. Very damning evidence, perhaps, but not the makers saying "I did it."


No

The suit is against "Open AI" which is an organization, not a program. Open AI is a legal entity and could be found liable in a tort case, or guilty of a crime. A program, under current law, can be neither.

OpenAI, or individuals connected with it, could be found to have infringed copyright in training an AI engine or neural network using published content without permission. Or the court might find that such use is not an infringement. So far as I know, that issue is not yet settled. But if anyone is found to have infringed, it will not be the AI itself, but the people who did the training, or the organization with which they are affiliated, or perhaps both.

Also, the term "guilt" is not normally used in such cases. A person accused of copyright infringement is found "liable" or not. The rem "guilt" (in a legal sense, as opposed to moral or psychological) is normally used only in connection with crime, and copyright infringement is criminal only in very limited circumstances which do not apply in this case. (Even when it could, by law, be treaetd as a crime, it rarely is.)
