Intel and the Georgia Institute of Technology (Georgia Tech) have been selected to lead a Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD) program team for the Defense Advanced Research Projects Agency (DARPA).
Intel is the prime contractor in this four-year, multimillion-dollar joint effort to improve cybersecurity defenses against deception attacks on machine learning (ML) models.
While rare, adversarial attacks attempt to deceive, alter or corrupt an ML algorithm's interpretation of data. As AI and ML models are increasingly incorporated into semi-autonomous and autonomous systems, it is critical to continuously improve their stability, safety and security in the face of unexpected or deceptive inputs. For example, subtle pixel-level manipulations can cause an AI system to misinterpret and mislabel an image, and subtle modifications to real-world objects can confuse AI perception systems. GARD will help AI and ML technologies become better equipped to defend against potential future attacks.
In another commonly cited example, the ML model used by a self-driving car was tricked by visual alterations to a stop sign. While a human viewing the altered sign would have no difficulty interpreting its meaning, the ML model erroneously interpreted the stop sign as a 45 mph speed limit posting. In a real-world attack like this, the self-driving car would accelerate through the stop sign, potentially causing a disastrous outcome. This is just one of many recently discovered attacks applicable to virtually any ML application.
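The mechanics behind attacks like the one above can be illustrated with a minimal sketch. The code below is purely hypothetical and not any system from the GARD program: it applies the well-known fast gradient sign method (FGSM) to a toy linear classifier, showing how a tiny, bounded nudge to every "pixel" can flip the predicted label even though the input barely changes.

```python
import numpy as np

# Toy linear "classifier" (hypothetical): score = w . x; positive => "stop".
rng = np.random.default_rng(0)
w = rng.normal(size=100)           # stand-in for learned model weights
x = w / np.linalg.norm(w)          # an input the model confidently calls "stop"

def classify(v):
    return "stop" if w @ v > 0 else "speed limit"

# Fast gradient sign method: for a linear score, the gradient w.r.t. the
# input is just w, so each pixel is pushed by at most eps in the direction
# that most decreases the correct-class score.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(classify(x))                 # the clean input is read correctly
print(classify(x_adv))             # the perturbed input is misread
print(np.max(np.abs(x_adv - x)))   # yet no pixel changed by more than eps
```

The perturbation is imperceptibly small per pixel, mirroring how the altered stop sign still reads as a stop sign to a human observer.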
To get ahead of this acute safety challenge, DARPA created the GARD program. GARD aims to develop a new generation of defenses against adversarial deception attacks on ML models.
Current defense efforts are designed to protect against specific, pre-defined adversarial attacks but remain vulnerable when tested outside their design parameters. GARD intends to approach ML defense differently – by developing broad-based defenses that address the many possible attacks in a given scenario that could cause an ML model to misclassify or misinterpret data.
In the first phase of GARD, Intel and Georgia Tech are enhancing object detection technologies through spatial, temporal and semantic coherence for both still images and videos.
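One way to picture temporal coherence is that a real object should persist stably across adjacent video frames, so a detection whose label flickers frame-to-frame is suspect. The sketch below is a hypothetical illustration of that idea, not Intel's or Georgia Tech's actual method: it flags tracked objects whose per-frame labels agree less than a chosen threshold of the time.

```python
from collections import Counter

def flag_incoherent_tracks(tracks, min_agreement=0.8):
    """tracks maps a track id to its per-frame labels; return the ids
    whose most common label covers less than min_agreement of frames."""
    flagged = []
    for track_id, labels in tracks.items():
        top_count = Counter(labels).most_common(1)[0][1]
        if top_count / len(labels) < min_agreement:
            flagged.append(track_id)
    return flagged

# Hypothetical detections over five consecutive frames.
tracks = {
    "car_1":  ["car", "car", "car", "car", "car"],                     # stable
    "sign_7": ["stop", "speed_limit", "stop", "speed_limit", "stop"],  # flickers
}
print(flag_incoherent_tracks(tracks))  # ['sign_7']
```

A flickering label is exactly the kind of inconsistency a single-frame detector cannot notice, which is why coherence across space, time and semantics offers a broader signal than per-image defenses alone.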