Psst, hey. It's the NSA. You want some AI security advice?

You can trust us, we're the good guys

The NSA has released guidance to help organizations protect their AI systems and better defend the defense industry.

The Cybersecurity Information Sheet (CSI), titled "Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems," represents the first salvo from the Artificial Intelligence Security Center (AISC). The surveillance super-agency established the AISC last fall as part of its Cybersecurity Collaboration Center (CCC), a government-industry effort to protect organizations in the Defense Industrial Base.

This CSI [PDF] was developed in consultation with other US agencies, including CISA and the FBI, as well as counterparts in Australia, Canada, New Zealand, and the United Kingdom.

The rationale for having distinct security guidance for AI systems is that threat actors may employ different tactics to subvert machine-learning models and applications.

"Malicious actors targeting AI systems may use attack vectors unique to AI systems, as well as standard techniques used against traditional IT," the CSI reads. "Due to the large variety of attack vectors, defenses need to be diverse and comprehensive."

There appears to be a need for better AI security, which is perhaps not surprising since there's a need for better cybersecurity in general. In its 2024 AI Threat Landscape Report, security vendor HiddenLayer claimed, "77 percent of companies reported identifying breaches to their AI in the past year. The remaining were uncertain whether their AI models had seen an attack."

The HiddenLayer report identifies three primary types of attacks on AI systems: adversarial machine learning attacks that try to alter a model's behavior; generative AI attacks that try to bypass safety mechanisms and solicit private or harmful content; and supply chain attacks, which, while similar to general software supply chain attacks, have characteristics unique to AI.
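To give a flavor of that first category, here's a minimal sketch of the classic Fast Gradient Sign Method, in which an attacker nudges an input along the sign of the model's loss gradient until the prediction flips. This example is ours, not HiddenLayer's or the NSA's; it assumes PyTorch, and the toy linear model stands in for a real classifier:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return a copy of input x nudged in the direction that
    increases the model's loss (Fast Gradient Sign Method)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # A single signed-gradient step is often enough to flip a prediction.
    return (x + epsilon * x.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Linear(10, 2)   # toy stand-in for a real classifier
    x = torch.randn(1, 10)
    label = torch.tensor([0])
    adv = fgsm_perturb(model, x, label)
    print(model(x).argmax(dim=1), model(adv).argmax(dim=1))
```

The point isn't the toy model; it's that the perturbation is tiny, automated, and invisible to conventional input validation — which is why the CSI treats AI-specific attack vectors as a distinct problem.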

Much of the NSA guidance applies to general IT security, such as understanding the environments in which AI gets deployed and making sure appropriate governance and controls are in place.

But there's also quite a bit about continuous monitoring of AI models. Those implementing AI systems shouldn't expect to sign off and be done with AI security. The NSA advises not only validating AI systems before and during use, but also securing exposed APIs, actively monitoring model behavior, safeguarding model weights, enforcing access controls, training users, and conducting audits and penetration tests.
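To make one item on that list concrete — safeguarding model weights — here's a minimal sketch of an integrity check that refuses to load a weight file whose SHA-256 digest doesn't match a value recorded when the model was approved. This is our illustration rather than anything prescribed in the CSI, and the filename and digest are placeholders:

```python
import hashlib
from pathlib import Path

# Hypothetical digest recorded when the weights were approved for release.
KNOWN_GOOD_SHA256 = "replace-with-digest-recorded-at-release"

def verify_weights(path: str, expected: str) -> None:
    """Refuse to proceed if the weight file's SHA-256 doesn't match."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"model weights at {path} failed integrity check")

# Usage at load time (the path is illustrative):
# verify_weights("model.safetensors", KNOWN_GOOD_SHA256)
```

Pinning weights to a known digest won't stop every supply chain trick, but it does catch silent tampering between training and deployment — the sort of ongoing check the guidance has in mind.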

"In the end, securing an AI system involves an ongoing process of identifying risks, implementing appropriate mitigations, and monitoring for issues," the CSI concluded. "By taking the steps outlined in this report to secure the deployment and operation of AI systems, an organization can significantly reduce the risks involved."

And as with general IT security recommendations, organizations that see their AI systems compromised will wonder why they weren't more careful when they had the chance. ®
