Neural networks can hide malware, and scientists are worried

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

With their millions or billions of numerical parameters, deep learning models can do many things: detect objects in photos, recognize speech, generate text, and, it turns out, hide malware. Neural networks can embed malicious payloads without triggering anti-malware software, researchers at the University of California, San Diego, and the University of Illinois have found.
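
To make the core idea concrete, here is a minimal sketch of how arbitrary bytes could be stashed in the low-order bytes of a model's 32-bit floating-point parameters. The parameter array and payload below are invented for illustration, and this is not necessarily the researchers' exact procedure:

```python
import numpy as np

# Hypothetical trained weights, stored as 32-bit floats like a real model file.
params = np.random.randn(8).astype(np.float32)
before = params.copy()

# Stand-in payload; a real attack would hide actual malware bytes.
payload = b"EVIL"

# View each float as its 4 raw bytes (little-endian assumed) and overwrite
# the least significant byte of one parameter per payload byte.
raw = params.view(np.uint8).reshape(-1, 4)
for i, b in enumerate(payload):
    raw[i, 0] = b

# The values barely move, so the model's behavior is largely preserved...
print(np.abs(params - before).max())  # tiny perturbation

# ...yet the payload can be read back out byte for byte.
assert bytes(raw[: len(payload), 0]) == payload
```

Because only the least significant byte of each value changes, the perturbation to the model's outputs is negligible, which is why a scanner that treats the model file as opaque data finds nothing suspicious.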

Their malware-hiding technique, EvilModel, sheds light on the security concerns surrounding deep learning, which has become a hot topic of discussion at machine learning and cybersecurity conferences. As deep learning becomes ingrained in the applications we use every day, the security community needs to think about new ways to protect users against the emerging threats it poses.

Hiding malware in deep learning models

Every deep learning model is composed of multiple layers of artificial neurons. Depending on the type of layer, each neuron is connected to all or some of the neurons in the layers before and after it. The strength of these connections is defined by numerical parameters that are tuned during training, as the DL model learns the task it has been designed for. Large neural networks can comprise hundreds of millions or even billions of parameters.
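
For a concrete sense of where those parameters come from, here is a minimal PyTorch sketch of a toy fully connected network; the layer sizes are arbitrary, and the model is hypothetical rather than one from the paper:

```python
import torch.nn as nn

# A toy fully connected network: each Linear layer connects every neuron
# to every neuron in the previous layer.
model = nn.Sequential(
    nn.Linear(784, 256),  # 784*256 weights + 256 biases
    nn.ReLU(),
    nn.Linear(256, 10),   # 256*10 weights + 10 biases
)

# The connection strengths are the numerical parameters that gradient
# descent adjusts during training.
total = sum(p.numel() for p in model.parameters())
print(total)  # 203,530 parameters for this tiny model
```

Each of those values is typically stored as a 32-bit float, and it is exactly these bytes that a technique like EvilModel can repurpose as a hiding place.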