
Deep neural networks are a class of machine learning algorithms used for a wide variety of classification problems. These include image recognition and machine vision (used by autonomous vehicles and other robots), natural language processing, language translation and fraud detection. However, a nefarious person or group can exploit these networks through what is known as an adversarial attack: by slightly altering the input, often in ways a human would barely notice, an attacker can send the algorithm down the wrong train of thought, so to speak. To protect algorithms against such attacks, the Michigan team developed the Robust Adversarial Immune-inspired Learning System (RAILS).
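To make the threat concrete, here is a minimal sketch, in the spirit of gradient-sign style attacks, of how a slight but carefully chosen perturbation can flip a confident prediction on a toy linear classifier. The model, dimensions, and perturbation budget below are illustrative assumptions, not the systems or attacks studied by the Michigan team.

```python
import numpy as np

# Toy demonstration of an adversarial perturbation (illustrative only).
rng = np.random.default_rng(0)
d = 784                                     # e.g., a flattened 28x28 image

w = rng.normal(size=d)                      # assumed classifier weights
x_clean = 3.0 * w / (w @ w)                 # an input scored confidently: logit = 3

def score(x):
    """Probability the model assigns to the true class."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# For a linear model, the input gradient of the logit is w itself, so the most
# damaging small perturbation nudges every feature slightly against sign(w).
epsilon = 0.01                              # a "slight" per-feature change
x_adv = x_clean - epsilon * np.sign(w)

print(f"clean score:       {score(x_clean):.3f}")   # about 0.95
print(f"adversarial score: {score(x_adv):.3f}")     # about 0.04, confidently wrong
```

The per-feature change is only 0.01, yet because every tiny nudge aligns against the model's gradient, the effect compounds across hundreds of features and overturns the prediction.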
“RAILS represents the very first approach to adversarial learning that is modeled after the adaptive immune system, which operates differently than the innate immune system,” said Alfred Hero, the John H. Holland Distinguished University Professor and CCMB Affiliate faculty member, who co-led the work published in IEEE Access.
While the innate immune system mounts a general attack on pathogens, the mammalian adaptive immune system can generate new cells designed to defend against specific pathogens. It turns out that deep neural networks, already inspired by the brain’s system of information processing, can take advantage of this biological process, too. “The immune system is built for surprises,” said Indika Rajapakse, associate professor of computational medicine and bioinformatics and co-leader of the study. “It has an amazing design and will always find a solution.”
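As a loose illustration of that adaptive idea, the sketch below applies a generic clonal-selection heuristic: stored training exemplars are cloned, mutated, and selected for how strongly they "bind" to a new input, and the best survivor's label becomes the prediction. Every name and parameter here (affinity, clones_per_parent, mutation_scale) is a hypothetical stand-in; this is not the published RAILS algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def affinity(a, b):
    """Similarity between an input and an exemplar (negative distance)."""
    return -np.linalg.norm(a - b)

def clonal_selection_label(x, exemplars, labels, n_generations=5,
                           clones_per_parent=10, mutation_scale=0.1):
    """Label x by evolving a population of training exemplars toward it.

    Loosely mimics affinity maturation: clone the best-matching exemplars,
    mutate the clones, and keep the fittest. Purely illustrative.
    """
    population = list(zip(exemplars, labels))
    for _ in range(n_generations):
        offspring = []
        for vec, lab in population:
            for _ in range(clones_per_parent):
                mutant = vec + rng.normal(scale=mutation_scale, size=vec.shape)
                offspring.append((mutant, lab))
        # Select the highest-affinity survivors for the next generation.
        offspring.sort(key=lambda p: affinity(x, p[0]), reverse=True)
        population = offspring[:len(exemplars)]
    return population[0][1]

# Tiny usage example with two well-separated 2-D classes.
class_a = rng.normal(loc=[0, 0], scale=0.3, size=(5, 2))
class_b = rng.normal(loc=[3, 3], scale=0.3, size=(5, 2))
exemplars = np.vstack([class_a, class_b])
labels = ["a"] * 5 + ["b"] * 5

print(clonal_selection_label(np.array([0.4, 0.2]), exemplars, labels))  # "a"
```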