Should AI Be Regulated? Some Experts Say It's the Only Way to Protect Ourselves

Even the White House is considering reining in AI

  • The White House is considering limits on how AI can be used.
  • The European Union is also debating regulating the development of AI.
  • Experts say that AI should be regulated to protect users.

The rapidly expanding abilities of generative artificial intelligence (AI) have many experts scared, but not everyone agrees on whether the field should be regulated. 

The Biden administration is seeking public comments on potential accountability measures for AI systems. It's part of a small but growing movement to put the brakes on AI development.

"Regulation of the tech though might not be the best way to do it, rather regulation of the use and users of such technology," Lindsey Cormack, the director of the Diplomacy Lab at Stevens Institute of Technology, told Lifewire in an email interview. "These could involve transparency requirements, liability for misuse, etc. The regulation of who uses these technologies might be politically feasible, such as keeping young children away from chatbots that might promote harm or self-esteem issues, but these—like all online style regulations—are decently hard to craft and enforce."

AI Regulations

The White House is considering placing rules on the development of AI systems such as ChatGPT. The Commerce Department issued a request for public comment on accountability measures that would help ensure AI tools don't cause harm and work as their creators claim.

"Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them," Alan Davidson, Assistant Secretary of Commerce for Communications and Information, said in the National Telecommunications and Information Administrations news release about seeking input on AI accountability. 

But even if the Commerce Department moves forward with new rules, regulating AI will take a lot of work, Cormack said. She added that technical systems are always some of the hardest to control because of the fast pace and changing nature of the entire field.

"Having the US regulate while other nations don't might kneecap the U.S. in tech development," Cormack said. 

The European Union (EU) is also considering laws to bolster regulations on the development and use of AI. The proposed legislation, the Artificial Intelligence Act, focuses primarily on strengthening rules around data quality, transparency, human oversight, and accountability.

Regulating AI has its pros and cons, Cormack said. It might be necessary to enact protective regulations to ensure that AI does not take over large swaths of workplace responsibilities in a way that puts people out of work.

"There may be a need for user regulations to protect children from potentially problematic AI uses," Cormack added. "There are many other ways to consider why AI regulations might be good to put in place, but the how is truly the tricky part."

Protecting Privacy Through AI Regulation

AI regulation could protect against the misuse of sensitive data, tech analyst Iu Ayala Portella, the CEO of Gradient Insight, said in an email. Such rules could prevent AI systems from being used to discriminate against certain groups of people, infringe on privacy rights, and cause harm to individuals and the environment.

"Regulation can help to foster innovation and competition by ensuring a level playing field for all businesses," Portella added. "This can help to prevent dominant companies from monopolizing the market and stifling innovation, and can promote fair competition that benefits consumers."

Ray Walsh, a digital privacy expert at ProPrivacy, said that by regulating how companies that develop AI can harvest, process, and leverage user data, legislators could protect consumer privacy rights.

"Regulations for AI are needed to ensure that there are strict limitations on how data can be collected, processed, and used to create profit streams and infringe on users' privacy rights and intellectual property ownership rights," he added. 

But Walsh said policymakers should carefully consider how best to implement AI regulation so it doesn't stifle innovation. Lawmakers must balance competing interests and find ways to protect user privacy rights to ensure that personal data and inputs aren't misappropriated to engage in "surveillance capitalism."

"AI can lead to optimization benefits that help to solve some of humanity's biggest problems, including healthcare, climate change, supply chain management, waste reduction, agriculture, and transportation," he added. 

Regulating AI could stifle innovation, acknowledged Richard Gardner, the CEO of the tech company Modulus, in an email.

"However, in this case, regulation is absolutely necessary," he added. "The advancement of AI is something that won't be able to be put back into Pandora's Box. It is important for regulators to anticipate the concerns now so that the industry can be built up responsibly."
