
Trend Micro, Nvidia Partner to Secure AI Data Centers

With companies pouring billions into AI software and hardware, these installations need to be protected from cybersecurity threats and other security lapses.

Source: Andriy_Popov via Alamy Stock Photo

Trend Micro and Nvidia are partnering on cybersecurity tools to protect private artificial intelligence (AI) clouds, with a focus on data privacy, real-time analysis, and rapid threat mitigation.

Trend Micro's AI-powered security tools, including Vision One-SPC (Sovereign and Private Cloud) and Cyber Security LLM, will now run on Nvidia GPUs, which are already widely used in AI data centers. Many of today's widely known large language model (LLM) services, including OpenAI's ChatGPT and Microsoft's Copilot (formerly Bing Chat), run on Nvidia GPUs.

"The top cloud and AI vendors use Nvidia technology, and the top server vendors offer Nvidia-certified hardware. We aim to secure all of it," says a Trend Micro spokesperson.

This offering could appeal to banks and pharmaceutical companies running private clouds, or to companies that have invested heavily in protecting the proprietary data used in their AI models, says Frank Dickson, group vice president of IDC's security and trust research practice.

"You're making a huge investment in a resource to drive a return for your company. You want to make sure it's used appropriately," he says. "If I put a fast engine in it, I want really good brakes."

GPUs for Security

GPUs are faster at analyzing telemetry and recommending security measures than conventional CPUs, analysts say.

Trend Micro's security tools running on Nvidia GPUs will do a more sophisticated job of detecting and protecting against intrusions and threats, because securing AI workloads requires specialized AI hardware, Dickson says.

"The Nvidia processor is meant to go really superfast on a very selective group of tasks that are compute-intensive and highly parallel," Dickson says.

AI data poisoning, in which attackers tamper with training data to degrade a model, is a growing concern that security tools running on GPUs are better equipped to defend against, says Technalysis analyst Bob O'Donnell.

"The way I interpret it, some of these workloads that traditionally focused on CPUs are being done on GPUs and quicker," he says.

Beyond data tampering, Trend Micro's tools will also shut down unauthorized access and identify abnormal patterns in logs and network activity, the company said.

Role of NIM

Nvidia provides a proprietary interface called NIM (Nvidia Inference Microservices) that lets Trend Micro serve its AI-based security services on GPUs. NIM is a relatively new offering, introduced by Nvidia at its GTC conference in March.

NIM allows companies to run their software on GPU accelerators at a significantly faster pace than on CPUs. SAP, ServiceNow, and Snowflake are among 200 companies using NIMs to deliver software and services to their customers.

The GPU container includes the operating system elements needed to serve applications, while the NIM provides access to the right data, dependencies, LLMs, and programming tools.
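
To make that concrete, the sketch below shows roughly what calling a NIM-hosted model can look like from client code. NIM containers expose an OpenAI-compatible HTTP API; the port, model name, and log excerpt here are hypothetical placeholders for illustration, not details confirmed by Trend Micro or Nvidia.

# Minimal sketch: querying a locally deployed NIM over its OpenAI-compatible
# HTTP API to triage a suspicious log line. The port, model name, and log
# excerpt are assumptions made for this example.
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "example-security-llm",  # hypothetical model identifier
    "messages": [
        {"role": "system", "content": "You are a security analyst assistant."},
        {"role": "user", "content": "Classify this log line as benign or suspicious: "
                                    "'admin login from 203.0.113.7 at 03:12 UTC, 14 failed attempts prior'"},
    ],
    "temperature": 0.2,
}

req = urllib.request.Request(
    NIM_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)
    # Print the model's verdict on the log line
    print(result["choices"][0]["message"]["content"])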

Trend Micro is also taking advantage of the flexibility NIM provides to plug its cybersecurity tools into Nvidia's pretrained AI offerings. One of those, Nvidia's Morpheus security AI framework, warns system administrators of impending threats by identifying abnormal activity in network logs and logins and by cross-checking that activity against security bulletins.
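
The snippet below is an illustrative sketch of that general pattern only, not the Morpheus API: it flags logins that look anomalous for a given user and cross-checks the source address against a hypothetical list of indicators drawn from security bulletins.

# Illustrative sketch (not the Morpheus API): flag logins that are rare for a
# user or occur off-hours, then escalate if the source IP matches a
# hypothetical indicator list taken from security bulletins.
from collections import Counter

login_events = [            # hypothetical parsed log records: (user, source_ip, hour_utc)
    ("alice", "10.0.0.5", 9),
    ("alice", "10.0.0.5", 10),
    ("alice", "203.0.113.7", 3),   # unusual source and hour
    ("bob", "10.0.0.8", 11),
]
bulletin_iocs = {"203.0.113.7"}    # hypothetical indicators from bulletins

# Count how often each user logs in from each IP to form a crude baseline
ip_counts = Counter((user, ip) for user, ip, _ in login_events)

for user, ip, hour in login_events:
    rare_source = ip_counts[(user, ip)] == 1   # user almost never logs in from this IP
    off_hours = hour < 6                       # outside typical working hours
    if rare_source and off_hours:
        severity = "ALERT" if ip in bulletin_iocs else "WARN"
        print(f"{severity}: {user} login from {ip} at {hour:02d}:00 UTC")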

"Every company, I believe, in the future will have a large collection of NIM, and you would bring down the experts that you want. You connect them into a team, and you don't even have to figure out exactly how to connect them," said Nvidia CEO Jensen Huang during a keynote address at the Computex trade show in Taipei.

Technalysis' O'Donnell, who attended Computex, saw show-floor demonstrations of Trend Micro security products taking advantage of AI accelerators in PCs and servers.

The concept of deploying applications using NIM to secure AI data centers is new, he says.

"People are figuring them out. There are lots of questions on what kind of security issues there are or would be," O'Donnell says.

IDC's Dickson concurs, saying there may be challenges in implementing the security software stack, and that it will involve a lot of learning and training.

"As you apply this, especially since every server farm is going to be unique, you're going to have to discover that uniqueness and apply these tools to that unique environment," Dickson says.

About the Author(s)

Agam Shah, Contributing Writer

Agam Shah has covered enterprise IT for more than a decade. Outside of machine learning, hardware, and chips, he's also interested in martial arts and Russia.
