AWS CISO tells The Reg: In the AI gold rush, folks are forgetting application security

'Everybody's learning as they go. But there's a rush to get these apps out'


RSAC As corporations rush full tilt to capitalize on the AI craze and bring machine-learning-based apps to market, they aren't paying enough attention to application security, says AWS Chief Information Security Officer Chris Betz.

"Companies forget about the security of the application in their rush to use generative AI," Betz told The Register during an interview at the RSA Conference in San Francisco last week.

There need to be safeguards and other protections around these advanced neural networks, from training to inference, to avoid them being exploited or used in unexpected and unwanted ways, we're told: "A model doesn't stand on its own. A model exists in the context of an application."

Betz described securing the AI stack as a cake with three layers. The bottom layer is the training environment, where the large language models (LLMs) upon which generative AI applications are built get trained. That training process needs to be robust to ensure you're not, among other things, putting garbage in and getting garbage out.

"How do you make sure you're getting the right data, that that data is protected, that you're training the model correctly, and that you have the model working the way that you want?" Betz said.

The middle layer provides access to the tools needed to run and scale generative AI applications. 

"You spend all this time training and fine tuning the model. Where do you run the model? How do you protect the model? These models are really interesting because they get handed some of the most sensitive data that a company has," Betz said.

So it's imperative that the right data makes it into and out of the LLM, and that the data is protected throughout this process, he explained.

Securing the top layer — the applications using LLMs or those built on top of AI platforms — sometimes gets lost in the push to market.

"The first two layers are new and novel for customers," Betz added. "Everybody's learning as they go. But there's a rush to get these applications out." That rush leaves the top layer vulnerable.

During the annual cybersecurity event, AWS and IBM released a study based on a survey of 200 C-level executives conducted in September 2023. It found 81 percent of respondents said generative AI requires a new security governance model. Similarly, 82 percent said secure and trustworthy AI is essential to the success of their businesses.

However, only 24 percent of today's gen-AI projects have a security component, according to that survey, meaning the C-suite isn't prioritizing security.

"That disparity, I think, is part of that race to the market," Betz said. "And as I've talked with customers, and as I've seen public data, the places where we're seeing the security gaps first are actually at the application layer. It's the traditional technology where we've got people racing to get solutions out, and they are making mistakes." ®
