
Are AI outputs protected speech? No, and it’s a dangerous proposition, legal expert says

Image credit: VentureBeat/Ideogram



Generative AI is undeniably speechy, producing content that seems to be informed, often persuasive and highly expressive. 

Given that freedom of expression is a fundamental human right, some legal experts in the U.S. provocatively say that large language model (LLM) outputs are protected under the First Amendment — meaning that even potentially very dangerous generations would be beyond censure and government control. 

But Peter Salib, assistant professor of law at the University of Houston Law Center, hopes to reverse this position — he warns that AI must be properly regulated to prevent potentially catastrophic consequences. His work in this area is set to appear in the Washington University Law Review later this year. 

“Protected speech is a sacrosanct constitutional category,” Salib told VentureBeat, citing the hypothetical example of a new more advanced OpenAI LLM. “If indeed outputs of GPT-5 [or other models] are protected speech, it would be quite dire for our ability to regulate these systems.”

Arguments in favor of protected AI speech

Almost a year ago, legal journalist Benjamin Wittes wrote that “[w]e have created the first machines with First Amendment rights.”

ChatGPT and similar systems are “undeniably expressive” and create outputs that are “undeniably speech,” he argued. They generate content, images and text, have dialogue with humans and assert opinions. 

“When generated by people, the First Amendment applies to all of this material,” he contends. Yes, these outputs are “derivative of other content” and not original, but “many humans have never had an original thought either.” 

And, he notes, “the First Amendment doesn’t protect originality. It protects expression.” 

Other scholars are beginning to agree, Salib points out, as generative AI’s outputs are “so remarkably speech-like that they must be someone’s protected speech.” 

This leads some to argue that the material they generate is the protected speech of their human programmers. On the other hand, others consider AI outputs the protected speech of their corporate owners (such as OpenAI, the maker of ChatGPT), which have First Amendment rights. 

However, Salib asserts, “AI outputs are not communications from any speaker with First Amendment rights. AI outputs are not any human’s expression.”

Outputs becoming increasingly dangerous

AI is evolving rapidly, becoming orders of magnitude more capable, better at a wider range of tasks and used in more agent-like, autonomous and open-ended ways. 

“The capability of the most capable AI systems is progressing very rapidly — there are risks and challenges that that poses,” said Salib, who also serves as law and policy advisor to the Center for AI Safety. 

He pointed out that gen AI can already invent new chemical weapons more deadly than VX (one of the most toxic nerve agents) and help malicious humans synthesize them; aid non-programmers in hacking vital infrastructure; and play “complex games of manipulation.” 

The fact that ChatGPT and other systems can already, for instance, help a human user synthesize cyanide indicates they could be induced to do something even more dangerous, he pointed out. 

“There is strong empirical evidence that near-future generative AI systems will pose serious risks to human life, limb and freedom,” Salib writes in his 77-page paper. 

This could include bioterrorism, the manufacture of “novel pandemic viruses” and attacks on critical infrastructure — AI could even execute fully automated, drone-based political assassinations, Salib asserts.

AI is speechy — but it’s not human speech

World leaders are recognizing these dangers and are moving to enact regulations around safe and ethical AI. The idea is that these laws would require systems to refuse to do dangerous things or forbid humans from releasing their outputs, ultimately “punishing” models or the companies making them. 

From the outside, this can look like laws that censor speech, Salib pointed out, as ChatGPT and other models are generating content that is undoubtedly “speechy.” 

If AI speech is protected and the U.S. government tries to regulate it, those laws would have to clear extremely high hurdles backed by the most compelling national interest. 

For instance, Salib said, someone can freely assert, “to usher in a dictatorship of the proletariat, the government must be overthrown by force.” But they can’t be punished unless they are inciting lawless action that is both “imminent” and “likely” (the imminent lawless action test). 

This would mean that regulators couldn’t regulate ChatGPT or OpenAI unless their outputs would result in an “imminent large-scale disaster.”

“If AI outputs are best understood as protected speech, then laws regulating them directly, even to promote safety, will have to satisfy the strictest constitutional tests,” Salib writes. 

AI outputs are different from other software outputs

Clearly, outputs from some software are their creators’ expressions. A video game designer, for instance, has specific ideas in mind that they want to incorporate through software. Or, a user typing something into Twitter is looking to communicate in a way that’s in their voice. 

But gen AI is quite different both conceptually and technically, said Salib. 

“People who make GPT-5 aren’t trying to make software that says something; they’re making software that says anything,” said Salib. They’re seeking to “communicate all the messages, including millions and millions and millions of ideas that they never thought about.”

Users ask open-ended questions to get models to produce answers or content they didn’t already know. 

“That’s why it’s not human speech,” said Salib. Therefore, AI isn’t in “the most sacred category that gets the highest amount of constitutional protection.”

Probing more into artificial general intelligence (AGI) territory, some are beginning to argue that AI outputs belong to the systems themselves. 

“Maybe that’s right — these things are very autonomous,” Salib conceded. 

But even while they’re doing “speechy stuff independent of humans,” that’s not sufficient to give them First Amendment rights under the U.S. Constitution. 

“There are many sentient beings in the world who don’t have First Amendment rights,” Salib pointed out — say, Belgians, or chipmunks. 

“Inhuman AIs may someday join the community of First Amendment rights holders,” Salib writes. “But for now, they, like most of the world’s human speakers, remain outside it.”

Is it corporate speech?

Corporations aren’t humans either, yet they have speech rights. This is because those rights are “derivative of the rights of the humans that constitute them.” They extend only as far as necessary to prevent otherwise protected speech from losing that protection upon contact with corporations. 

“My argument is that corporate speech rights are parasitic on the rights of the humans who make up the corporation,” said Salib. 

For instance, humans with First Amendment rights sometimes have to use a corporation to speak — an author needs Random House to publish their book, for instance. 

“But if an LLM doesn’t produce protected speech in the first place, it doesn’t make sense that that becomes protected speech when it is bought by, or transmitted through, a corporation,” said Salib. 

Regulating the outputs, not the process

The best way to mitigate risks going forward is to regulate AI outputs themselves, Salib argues.

While some would say the solution would be to prevent systems from generating bad outputs in the first place, this simply isn’t feasible. LLMs cannot be reliably stopped from producing dangerous outputs because of their self-programming, “uninterpretability” and generality — meaning they are largely unpredictable to humans, even with techniques such as reinforcement learning from human feedback (RLHF). 

“There is thus no way, currently, to write legal rules mandating safe code,” Salib writes. 

Instead, successful AI safety regulations must include rules about what the models are allowed to “say.” Rules could be varied — for instance, if an AI’s outputs were often highly dangerous, laws could require a model to remain unreleased “or even be destroyed.” Or, if outputs were only mildly dangerous and occasional, a per-output liability rule could apply. 

All of this, in turn, would give AI companies stronger incentives to invest in safety research and stringent protocols. 

However it ultimately takes shape, “laws have to be designed to prevent people from being deceived or harmed or killed,” Salib emphasized.