
This question is inspired by recent news about some of the strange, out-of-control behavior from Microsoft's new Bing chat AI, but I am asking hypothetically here.

If an AI chatbot such as Bing Chat or ChatGPT said factually untrue things that did measurable harm to a real person's reputation, would that person have a case against the company that owns the chatbot for defamation? If not defamation, maybe something else? My understanding is that a key part of defamation is malicious intent, which does not really apply to a non-sentient piece of software. However, if the AI says something that does real harm to a person's reputation, couldn't the company be held responsible for this? What if the company was aware of the harm being done but chose not to take action? This seems similar to the situation where a company is held responsible for the words or actions of an employee.

  • Related LawSE question: Can an AI admit to guilt?
    – user35069
    Commented Feb 20, 2023 at 19:43
  • @Rick You mean that the company should not be held responsible for the actions of its product? I want to say: whatever, the product represents the company... Am I right?
    – Velma
    Commented Feb 20, 2023 at 19:47
  • @Rick - Interesting, but a key difference here is that I'm asking about the legal liability of the company, not the AI itself. Commented Feb 20, 2023 at 19:57
  • @AitzazImtiaz I've no idea, I was just pointing out a related post that may be of interest to the OP.
    – user35069
    Commented Feb 20, 2023 at 20:04
  • @Rick It's alright :D I was just curious to know if that meant something
    – Velma
    Commented Feb 20, 2023 at 20:04

2 Answers


AI-generated text can easily be defamatory: that is simply a matter of content. Let's say the text is "John Q Smith murdered my parents", and that the statement is untrue. The scenario, as I understand it, is that Jones is chatting with a bot maintained by Omnicorp, and the bot utters the defamatory statement. A possible defense is that the literally false statement also cannot be taken to be believable – to be defamation, the statement also has to be at least somewhat believable and not just random hyperbolic insults. Since these bots are supposed to be fact-based (not ungrounded random text generators like Alex), I think this defense would fail.

Depending on the state, it may be necessary to prove some degree of fault, viz. that there was negligence. For example, a person who writes a defamatory statement in their personal locked-away diary is not automatically liable if a thief breaks in and distributes the diary to others. It is very likely that the court would find the bot-provider to be negligent in unleashing this defamation-machine on the public. It is utterly foreseeable that these programs will do all sorts of bad things, seemingly at random. Perhaps the foreseeability argument would have been slightly weaker a couple of months ago; at this point it is an obvious problem.

There is some chance that the bot-provider is not liable, in light of "Section 230", which says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". If the bot is an information content provider, the platform operator is not liable. The bot is one if that entity "is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service". Needless to say, claims of "responsibility" are legally ill-defined in this context. It is not decided law what "responsibility" programs have for their actions. If the court finds that the program is not responsible, then the platform is not relieved of liability as publisher.

The software creators are not liable for creating a machine with the capacity to create defamatory text, but the software creators could be the same as the platform-operators in a particular case.

Malicious intent is relevant for a subclass of defamation cases: defamation of public figures. A false statement about a public figure is not actionable unless it was made with malice, so you can say untrue things about famous people without liability as long as you don't do so with malice. This is a special rule about public figures.

  • The special rule about public figures is US specific.
    – Dale M
    Commented Feb 20, 2023 at 22:22
  • So is Section 230. Given the US tag, it seemed appropriate.
    – user6726
    Commented Feb 20, 2023 at 23:42
  • "For example, a person who writes a defamatory statement in their personal locked-away diary is not automatically liable if a thief breaks in and distributes the diary to others." That's not because of lack of negligence, but because one of the elements of defamation is presenting the statement to a third party. As for Section 230, the bot is an agent of the company that created it, so it is not "another" information provider. Commented Feb 21, 2023 at 4:19
  • "Since these bots are supposed to be fact-based" is doing a lot of heavy lifting here. Bing Chat has terms of service that state that "the Online Services are not error-free." Even if these agreements don't completely eliminate the possibility of liability, I think the general understanding of what AIs are supposed to be capable of would make it very difficult for an AI to produce a justiciable level of defamation without the intention of its authors.
    – Will
    Commented Feb 21, 2023 at 11:19
  • I don't know any court cases that go even close to this (are there any?), but everything screams that there cannot even be the slightest chance that a bot like ChatGPT, which makes it abundantly clear that its answers have no ingrained semantics but are simply statistics based on crawling the net, can in any way, shape or form be considered "guilty" of anything at all, least of all defamation. Are there any precedents? I mean, there were big wars (at least in Germany) in the 2000s about providers being liable for content of public platforms (they generally aren't)...
    – AnoE
    Commented Feb 21, 2023 at 15:05

If an AI chatbot such as Bing Chat or ChatGPT said factually untrue things that did measurable harm to a real person's reputation, would that person have a case against the company that owns the chatbot for defamation?

There can be liability for defamation, although the circumstances would determine who the liable party is.

For instance, an owner's warning to the user about a risk of inaccuracies may have the effect of shifting to the user the issue of requisite degree of fault. See In re Lipsky, 460 S.W.3d 579, 593 (2015). The user ought to be judicious as to whether to publish the chatbot's output. Ordinarily, negligence suffices for liability in a scenario that involves special damages, i.e., concrete, ascertainable harm.

My understanding is that a key part of defamation is malicious intent, which does not really apply to a non-sentient piece of software.

Under defamation law, malice is not about feelings or emotional state. The term refers to reckless disregard for the truth or falsity of the statement, or to publication despite the publisher's awareness of the falsity of the statement. Id. at 593.

Regardless, malice needs to be proved only if the plaintiff is a public figure or in claims of defamation per se, where damage to a person's reputation is presumed (and hence the damage does not need to be proved).

What if the company was aware of the harm being done but chose not to take action?

The terms of use might protect the company against liability. Absent any such protections, the company might be liable because its awareness and inaction are tantamount to the aforementioned reckless disregard for the truth of its product's publications.

  • Although even publishing a "defamatory statement" might not itself be defamatory if surrounded by other context (e.g., "Look at this outrageous and untrue statement the bot generated"). Commented Feb 22, 2023 at 0:57
  • You don’t appear to have considered that the chatbot might be the one distributing the information - if I ask it about you and it responds with a defamatory statement, that statement has just been distributed to a third-party - me.
    – Dale M
    Commented Jun 17, 2023 at 12:36
  • @DaleM "if I ask it about you and it responds with a defamatory statement, that statement has just been distributed to a third-party - me." That scenario relates to the last paragraph of the answer. The owner of the chatbot is liable by default, but liability can be preempted by means of a clear and conspicuous disclosure to the third-party (i.e., the consumer) about the unreliability of the information being provided. Information that the publisher readily qualifies (for example, in the terms of use) as "likely inaccurate" tends to lose its defamatory nature. Commented Jun 17, 2023 at 14:12
