Malicious and Adversarial Uses of AI
Adversaries will always do what they can to stay ahead of the game, which can mean designing their own AI and machine-learning techniques to penetrate systems. So while AI can relieve some of the monotony on the defensive side, it can also improve the efficiency of determined attackers. James Sherer, partner at BakerHostetler, says attacks that prey on people and their perceived shortcomings are not going away; rather, perpetrators have begun to deliver those same approaches through more creative vectors.
“We understand that any advances made in these technologies, even if cordoned off and initially directed toward ‘good,’ have the potential for misuse and weaponization,” Sherer explains. “Further, if AI systems work correctly, they can become part of the scenery within a security profile. [This] puts them at risk for compromise by hackers focused on undermining the AI approach itself and underscores the need for organizations to continue to monitor the operations of those AI systems and not rely upon their operation without question.”
AI holds great potential to support and augment human experts through vulnerability analysis, but that same capability is also worrisome, according to Daniel Riek, senior director, AI Center of Excellence, and Mike Bursell, chief security architect, Office of the CTO, at Red Hat.