
(I am an IT student writing a report proposing using blockchain technology to attribute "art" files, used in AI models, to their authors. I know next to nothing about copyright laws for creative works)

Suppose metadata in the files used for AI models to generate new derivative art can be attributed to their original authors. Would it be enough for authors to file for plagiarism under lack of attribution and compensation for art generated this way? If not, what must be changed in the legal system so that artists get justice for supposed art theft?
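For concreteness, one way to implement the kind of attribution record this proposal envisions is to key it on a content hash of the file rather than on embedded metadata, since a hash survives verbatim copying while embedded tags are easily stripped. This is only a minimal Python sketch of the idea; the record format and names are hypothetical, and, as the answers below explain, such a record establishes provenance, not legal permission to copy:

```python
import hashlib
import json

def attribution_record(file_bytes: bytes, author: str) -> dict:
    """Build a minimal attribution record keyed by content hash."""
    # SHA-256 identifies the exact bytes: the digest is unchanged by
    # copying the file, but any re-encoding or edit produces a new one.
    digest = hashlib.sha256(file_bytes).hexdigest()
    return {"sha256": digest, "author": author}

# Placeholder data standing in for the bytes of an artwork file.
record = attribution_record(b"fake image bytes", "Jane Artist")
print(json.dumps(record))
```

A record like this could be published to a blockchain or any other append-only registry; anyone holding the file can recompute the hash and look up the claimed author.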

  • There is no such thing as "filing for plagiarism": plagiarism is an academic concept rather than a legal one. The law is concerned with copyright infringement (using works in which someone has a copyright without permission), not with plagiarism (referring to works or concepts without attribution of the source).
    – ohwilleke
    Commented May 19, 2023 at 17:09

2 Answers


"Plagiarism" is an academic concept, not a legal one

Plagiarising the work of another without attribution is academic misconduct in every reputable academic institution and can lead to disciplinary action. But it is not, in itself, against the law, and you can't be sued for doing it.

Copyright violation is against the law

You violate copyright when you copy, or make a derivative work from, the copyrighted work of another without permission and without an exemption under the law.

In some jurisdictions, authors and artists also have moral rights, which operate alongside the economic (proprietary) copyright and include, among other things, the right of attribution and the right to have their work treated with integrity. In those jurisdictions, even if you have the copyright holder's permission, you must still respect the moral rights.

Let's make some things explicit by considering a particular artwork. Say, this one:

Leonardo's Mona Lisa

This particular piece is not subject to copyright because a) it was created before copyright existed as a concept, and b) da Vinci died in 1519, so even if there had been a copyright law, the copyright in this work would have long since expired. So, you can make as many copies of this as you like.

Now, let's consider what the situation would be if Leonardo's alchemical pursuits had been more successful and instead of dying in 1519, he died last Tuesday.

If you want to make a copy of this image, you must have the permission of Leonardo's heir(s) or be operating under an exemption under copyright law in your jurisdiction.

When you train your AI, you will need to make a copy of the image. Do you have permission? Do you have a relevant exemption?

If you obtained your images by scraping websites then the answers are no and (probably) no.

Whether the image has metadata identifying the author is irrelevant to answering the questions. Whether there is any way of identifying the artist is also irrelevant - you still need their permission even if you don't know who to ask.

If your AI, when prompted, generates an image that is strikingly similar to a copyrighted image it was trained on, that is a derivative work and you need permission for it. Under current law, the programmer(s) are likely the copyright violators rather than the users of the AI.


When constructing the training collection

If copyrighted works are copied without licence in order to construct the training data for an AI, that is prima facie infringement, and the person constructing or copying that training collection is infringing (subject to a fair dealing or fair use exception). See Mark Lemley and Bryan Casey, "Fair Learning", Vol. 99, No. 4 Texas Law Review (2021); Canada, House of Commons Standing Committee on Industry, Science and Technology, "Statutory Review of the Copyright Act" (2019).

If the AI produces an infringing work

If the AI produces work that infringes, there are several entities who might be liable, including the person who trained the AI and the person who distributed the trained AI.

The programmer, without doing more, is not likely liable for infringement

Contrary to the position of the other answer, the consensus is that the mere programmer of the untrained AI model, unless they also trained or distributed the AI, is not infringing. They are not liable in copyright for what people train the AI to do or for what they use it for afterwards.


Note: The other answer suggests that "striking similarity" is the test for what makes a derivative work. But "striking" or "substantial" similarity is part of the test for prima facie infringement.

