How Google's Protections For Responsible AI May Not Go Far Enough

Fears are growing that the tech could be dangerous

  • Tech executives say regulations are needed to ensure humans are safe from AI.
  • Google is building safeguards into its AI.
  • But some experts say that current AI safety measures aren’t sufficient.

Tech companies promise to build safety measures into artificial intelligence (AI), but some experts say their efforts may not be enough. 

During testimony on Capitol Hill this week, OpenAI CEO Sam Altman called on Congress to implement regulations to address the risks posed by AI. Meanwhile, pioneering researcher Geoffrey Hinton recently quit his role at Google to speak about the dangers of the technology. The developments are signs of growing concern around AI.

"While Google's recent efforts to increase transparency and security around AI-generated content and bolster its safe browsing features are commendable, they are the first few steps on a long journey," Ani Chaudhuri, the CEO of data governance company Dasera told Lifewire in an email interview. "There is still a significant gap to fill regarding user protection from AI."

AI Safety Concerns

Altman told lawmakers that his worst fear was that advanced AI technology could "cause significant harm to the world" without proper guardrails.

"If this technology goes wrong, it can go quite wrong, and we want to be vocal about that," Altman said at a Senate subcommittee hearing on privacy, technology, and the law. "We want to work with the government to prevent that from happening."


Altman isn't the only high-profile figure in the AI world to express concerns about the technology. Hinton said he retired from Google to speak openly about the potential risks as someone who no longer works for the tech giant.

"If you parse Geoff Hinton's comments on why he left Google, it seems like they have now thrown caution to the wind and gone full steam ahead to launch these AI-enabled products as soon as possible to catch up against the competition," Vinod Iyengar, the AI expert and Head of Product at ThirdAI, told Lifewire via email. 

Lifewire reached out to Google for comment. A company spokesperson pointed to a blog post describing Google's efforts to create responsible AI. 

With its AI-powered chatbot Bard, Google has put out disclaimers to ensure users know the product is still experimental. 

"There's a couple of product design choices that are useful in this respect—by showing alternate drafts," Iyengar said. "This is a good way to signal to the users that the model is still a probabilistic one and no one answer is perfect."


Bard also seems to have clamped down on its 'personality,' so its responses are more prosaic or academic and less human-sounding, Iyengar noted. Whereas ChatGPT comes across as an enthusiastic and helpful assistant, Bing (at least in its early versions) was more spunky.

"Having said that, Bard still hallucinates quite significantly contrary to its own claims as compared to ChatGPT," Iyengar said. "This is, of course, anecdotal, but seems to be the experience of many in the community. This might indicate that the model has not gone through enough human feedback-led fine-tuning." 

Alysia Silberg, the CEO of venture capital firm Street Global, which invests in AI, said in an email interview that Google is taking "significant" safety measures by building guardrails into its AI products.

"Despite moving in the right direction in assuring security around self-learning machines, however, one must note that the technology is evolving rapidly," she added. "So there will always be persistent challenges. Nevertheless, Google's approach of maintaining continued vigilance, transparency, and collaborating with experts can help enhance user safety in the AI domain."

Time to Regulate AI?

There's a tug-of-war between those who say the AI industry can police itself and others, like Altman, who claim that government regulations are needed. Silberg noted that, to keep users safe, Google and the industry should establish robust privacy policies and prioritize ethical AI development.

"These measures can help protect users and boost trust in AI technologies," she said. 

Google and other tech giants need to share how AI models are trained and have a standard set of protocols for evaluating these models and ensuring that they pass them, Iyengar said.

"Similar to how everyone expects most websites to be SOC compliant or healthcare technology to be HIPAA compliant, AI and LLMs should also have their set of safeguards in place," he added.
