California proposes government cloud cluster to sift out nasty AI models

Big Tech's home turf set for law to ward against 'unsafe behavior'

The State of California is proposing legislation to regulate AI, including building a computing cluster to check the safety of AI models.

Introducing the bill yesterday, Senator Scott Wiener, a San Francisco Democrat, said it aimed to ensure the safe development of large-scale artificial intelligence systems by "establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems," according to a statement.

Wiener said the California legislature had an opportunity to apply the lessons of the last decade, when new technology was allowed "unchecked growth" without its risks being evaluated, understood, or mitigated.

He said the new law would do that "by developing responsible, appropriate guardrails around development of the biggest, most high-impact AI systems to ensure they are used to improve Californians' lives, without compromising safety or security."

The bill also commits the State of California to build CalCompute, a public AI research cluster intended to help startups, researchers, and community groups "align large-scale AI systems with the values and needs of California communities."

The introductory text of the bill – formally SB 1047 – notes that existing law requires the government to evaluate the impact of the proliferation of deepfakes: AI-generated or AI-manipulated content that falsely appears authentic and depicts people, without their consent, appearing to say or do things they never said or did.

"This bill would enact the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act to, among other things, require a developer of a covered model, as defined, to determine whether it can make a positive safety determination with respect to a covered model before initiating training of that covered model, as specified," the text says.

The bill would define "positive safety determination," setting out that a "developer can reasonably exclude the possibility that the covered model has a hazardous capability … when accounting for a reasonable margin for safety and the possibility of post-training modifications."

It says the State's Department of Technology would commission consultants to create CalCompute, a public cloud computing cluster designed to conduct research into the safe and secure deployment of large-scale artificial intelligence models and to foster "equitable innovation." The cluster would include, among other things, a fully owned and hosted cloud platform.

Federal legislators have yet to pass AI regulation. In October last year, President Joe Biden issued an executive order to create safeguards intended to mitigate societal risks stemming from increasingly powerful AI technology.

The EU is in the process of introducing legislation to govern AI. Some uses will be banned altogether, while providers of general-purpose AI – which includes but is not limited to GenAI – will be required to conduct model evaluations, assess and mitigate systemic risks, carry out adversarial testing, and report serious incidents to the European Commission. Organizations that fail to comply could face fines of up to €40 million or 7 percent of annual worldwide turnover, whichever is higher.

The UK has proposed a light-touch approach to regulating AI. Its consultation, held in preparation for legislation addressing the risks of deploying AI in society, outlines a "pro-innovation approach to AI regulation" that relies on existing regulators. Guidance for those regulators was published earlier this week. ®
