The Prompt: What to think about when you’re thinking about securing AI

August 23, 2023
Anton Chuvakin

Security Advisor, Office of the CISO, Google Cloud

John Stone

Director, Office of the CISO, Google Cloud

It's time to start thinking about how to secure your organization's AI. Here's how to get started.

Business leaders are buzzing about generative AI. To help you keep up with this fast-moving, transformative topic, “The Prompt” brings you our ongoing observations from our work with customers and partners, as well as the newest AI happenings at Google. In this edition, Google Cloud Office of the CISO’s Anton Chuvakin, security advisor, and John Stone, director, explore how securing AI systems is similar to securing traditional enterprise systems — and how it’s different.  

What does it mean to secure AI? As artificial intelligence rapidly becomes a ubiquitous part of our lives, we must consider its security implications. Is securing AI equivalent to securing other technology systems? Or is it wholly unique? 

When asked on a recent episode of our Cloud Security Podcast how to define securing AI, Phil Venables, vice president and CISO, Google Cloud, said that securing AI includes elements of software security as well as data security and data governance.

“AI security is the practice of protecting AI systems from unauthorized access, use, modification, or disclosure. It involves securing the software, data, and environment that AI systems rely on. Security and risk teams need to understand the risks associated with AI systems in order to protect them,” he said. “It represents an interesting combination of software security elements, such as code provenance, data security elements, such as data governance, and other controls and safeguards, such as API security.”

Google launched the Secure AI Framework (SAIF) to provide a high-level, conceptual framework for thinking about how to secure AI. As a next step, we want to highlight where the approaches for securing AI and for securing traditional systems are similar, and where they differ.

While AI does represent a new security world, it’s not the end of the old security world, either. Securing AI does not magically upend security best practices, and much of the wisdom that security teams have learned is still correct and applicable. As a new Google Cloud report on securing AI emphasizes, many of the same security principles and practices apply to traditional systems and AI systems.

One of the most important differences is that AI systems can introduce new security risks that are not present in traditional systems. For example, some AI systems can be more easily fooled by adversarial examples, which are carefully crafted inputs that can cause an AI system to make incorrect predictions. AI systems can also be used to generate synthetic data that can be used to attack other systems.
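
To make this concrete, here is a minimal sketch of one well-known adversarial-example technique, the fast gradient sign method (FGSM). The PyTorch model, input tensor, and epsilon value are illustrative assumptions, not part of this article or of SAIF.

```python
# Minimal FGSM sketch; PyTorch is assumed, and the model and inputs are illustrative.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of x that the model is more likely to misclassify."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # Nudge the input in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The point is not this specific technique, but that the model itself, and not just the surrounding software, becomes part of the attack surface.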

However, many of the same security principles that apply to traditional systems also apply to AI systems. For example, it is still important to implement security controls on AI systems, including network access control, threat detection, and data encryption. Additionally, it’s important to train AI systems on data that is representative of the targeted use cases, and to test them for vulnerabilities, misconfigurations, and misuse. 
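
As a rough illustration of how those familiar controls carry over, the sketch below puts authentication and basic input validation in front of a model-serving endpoint, just as you would for any web service. The Flask app, token check, and model_predict stub are illustrative assumptions, not a reference implementation.

```python
# Illustrative sketch: traditional service controls in front of a model endpoint.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
MAX_PROMPT_BYTES = 4096
ALLOWED_TOKENS = {"example-token"}  # in practice, use an identity provider or secrets manager

def model_predict(text: str) -> str:
    # Stub for the AI system being protected; a real service would call the model here.
    return "ok"

@app.route("/predict", methods=["POST"])
def predict():
    # Access control: reject callers without a known credential.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in ALLOWED_TOKENS:
        abort(401)
    # Input validation: bound the size and type of data before it reaches the model.
    body = request.get_json(silent=True) or {}
    text = str(body.get("text", ""))
    if not text or len(text.encode("utf-8")) > MAX_PROMPT_BYTES:
        abort(400)
    return jsonify({"prediction": model_predict(text)})
```

Encryption in transit, network access controls, and threat detection then apply to this service the same way they apply to any non-AI workload.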

The approach we recommend is to map out the similarities (where your existing security controls and approaches largely work) and the differences (where new tools and methods need to be invented). Following the first principle of SAIF, building on secure infrastructure is a solid starting point for any AI project.

Our new research identifies sets of similarities and differences so that organizations can get started in better securing AI systems today. Here are four key differences between securing AI systems and securing non-AI systems:

AI systems are more complex. AI systems are often composed of multiple components, including machine learning models, data pipelines, and software applications. This complexity makes them more difficult to secure than traditional systems.

AI systems are more data-driven. AI systems rely on data to train and operate. This data can be a source of vulnerabilities, as attackers can manipulate it to cause the system to malfunction (see the sketch after this list).

AI systems are more adaptive. AI systems can learn and adapt over time. This makes them more difficult to defend against attacks, as attackers can continuously update their techniques to exploit new vulnerabilities.

AI systems are more interconnected. AI systems are often connected to other systems, inside and outside of an organization. This interconnectedness can create new attack vectors, as attackers can exploit vulnerabilities in one system to attack another.
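
To illustrate the data-driven difference above, here is a minimal sketch of label-flipping data poisoning. The toy dataset format, flip rate, and function name are hypothetical.

```python
# Illustrative label-flipping poisoning sketch; dataset and parameters are hypothetical.
import random

def poison_labels(dataset, flip_fraction=0.05, target_label=0, seed=7):
    """Flip a small fraction of labels toward a target class.

    `dataset` is a list of (features, label) pairs. An attacker who can write
    to even a slice of the training data can quietly bias the trained model.
    """
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if rng.random() < flip_fraction:
            label = target_label
        poisoned.append((features, label))
    return poisoned
```

Provenance tracking and integrity checks on training data are the corresponding defenses, which is one reason data governance features so prominently in the definition of AI security quoted earlier.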

We have also identified four similarities between securing AI systems and securing non-AI systems:

Many threats are the same. Both systems need to be protected from unauthorized access, modification, and destruction of data — as well as other common threats.

Many vulnerabilities are also the same. Traditional enterprise software and AI systems are susceptible to common application security vulnerabilities such as input injection and overflows. Security misconfigurations are also a serious problem.

Processed data needs to be secured. Both systems store and process sensitive and regulated data, sometimes in large volumes. The data types may include personal information, financial data, and intellectual property.

Supply chain attacks matter. Supply chain attacks can cause severe harm to both systems.

It’s true that there are new and unique security risks that come with AI systems. Organizations investing in AI should take care to reduce the risks of adversarial examples and prompt injections. These risks need to be considered during the development and use of AI systems. Boards of directors and the C-suite should be aware of them, and the CISO and security teams should work with their colleagues on the AI teams that use and deploy these systems.
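
As one hedged example of what an AI-specific safeguard might look like, the sketch below screens untrusted input for obvious instruction-override phrases and keeps system instructions separate from user text. The phrase list and function names are hypothetical, and pattern matching alone will not stop a determined attacker; the sketch only shows where such a control would sit in the request path.

```python
# Illustrative prompt-injection screen; phrases, names, and structure are hypothetical.
SUSPICIOUS_PHRASES = (
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your instructions",
)

def screen_user_input(text: str) -> str:
    """Reject input containing obvious instruction-override attempts."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
        raise ValueError("possible prompt injection detected")
    return text

def build_prompt(system_instructions: str, user_text: str) -> dict:
    # Keep trusted instructions and untrusted user content in separate fields,
    # so downstream handling can treat them with different levels of trust.
    return {"system": system_instructions, "user": screen_user_input(user_text)}
```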

To succeed, security teams should build on the foundation of traditional application security, data security, and system security, and add to the mix their new knowledge of AI use cases, AI threats, and AI-specific safeguards.

Next steps

Check out the full research paper, “Securing AI: Similar or Different,” to learn more about the similarities and differences between securing traditional enterprise software systems and AI systems. We also recommend reviewing the SAIF visual, the SAIF guide, and our AI red team report.
