Artificial intelligence (AI) systems are expanding and advancing at a rapid pace. They are commonly divided into two main categories: Predictive AI and Generative AI. Large Language Models (LLMs), which have recently attracted massive attention, are the best-known examples of Generative AI. While Generative AI creates original content, Predictive AI focuses on making predictions from data.
It is important for AI systems to operate safely, reliably, and resiliently, as they are becoming integral components of almost every significant industry. The NIST AI Risk Management Framework and its AI Trustworthiness taxonomy identify these operational characteristics as necessary for trustworthy AI.
In a recent study, a team of researchers from NIST's Trustworthy and Responsible AI program set out to advance the field of Adversarial Machine Learning (AML) by creating a thorough taxonomy of attacks and definitions for the pertinent terms. The taxonomy is structured as a conceptual hierarchy and was built by carefully analyzing the existing AML literature.
The hierarchy spans the main categories of Machine Learning (ML) techniques, the phases of the attack lifecycle, the attacker's goals and objectives, and the attacker's capabilities and knowledge of the learning process. Along with outlining the taxonomy, the study offers strategies for managing and mitigating the effects of AML attacks.
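To make the shape of such a hierarchy concrete, the sketch below encodes its main dimensions as plain Python enums and a dataclass. The specific class and label names (`SystemType`, `AttackerObjective`, `WHITE_BOX`, and so on) are illustrative assumptions based on the categories described in this article, not an official schema from the NIST paper.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical encoding of the taxonomy's main dimensions; the labels mirror
# the categories described in this article, not the paper's official schema.

class SystemType(Enum):
    PREDICTIVE_AI = auto()
    GENERATIVE_AI = auto()

class LifecycleStage(Enum):
    TRAINING = auto()     # attacks mounted while the model is being built
    DEPLOYMENT = auto()   # attacks mounted against the fielded model

class AttackerObjective(Enum):
    INTEGRITY = auto()    # cause targeted or indiscriminate mispredictions
    AVAILABILITY = auto() # degrade or deny correct model behavior
    PRIVACY = auto()      # extract training data or model details
    ABUSE = auto()        # repurpose a generative model for harmful output

class AttackerKnowledge(Enum):
    WHITE_BOX = auto()    # full access to architecture and parameters
    GRAY_BOX = auto()     # partial knowledge (e.g., architecture only)
    BLACK_BOX = auto()    # query access only

@dataclass
class AMLAttack:
    """One node in the taxonomy: an attack characterized along each dimension."""
    name: str
    system: SystemType
    stage: LifecycleStage
    objective: AttackerObjective
    knowledge: AttackerKnowledge

# Example: a black-box evasion attack against a deployed predictive model.
evasion = AMLAttack(
    name="evasion",
    system=SystemType.PREDICTIVE_AI,
    stage=LifecycleStage.DEPLOYMENT,
    objective=AttackerObjective.INTEGRITY,
    knowledge=AttackerKnowledge.BLACK_BOX,
)
print(evasion)
```

Organizing attacks along independent dimensions like this is what lets a taxonomy cover both Predictive and Generative AI with one vocabulary: the same lifecycle stages and knowledge levels apply, and only the set of objectives and attack classes differs between the two.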
The team notes that AML problems are dynamic and identifies unresolved issues that need to be taken into account at every stage of AI system development. The goal is to provide a thorough resource that helps shape future practice guides and standards for evaluating and managing the security of AI systems.
The terminology in the paper is consistent with the existing AML literature, and a glossary of key topics in AI system security is also provided. The team states that the ultimate purpose of the integrated taxonomy and nomenclature is to establish a common language and understanding within the AML domain. In doing so, the study supports the development of future norms and standards, promoting a coordinated and informed approach to the security challenges posed by the rapidly evolving AML landscape.
The primary contributions of the research can be summarized as follows.
- A common vocabulary for discussing Adversarial Machine Learning (AML) concepts, establishing standardized terminology for the ML and cybersecurity communities.
- A comprehensive taxonomy of AML attacks covering systems that use both Generative AI and Predictive AI.
- Generative AI attacks divided into categories for evasion, poisoning, abuse, and privacy, and Predictive AI attacks divided into categories for evasion, poisoning, and confidentiality.
- Attacks across multiple data modalities and learning approaches, including supervised, unsupervised, semi-supervised, federated, and reinforcement learning.
- Possible AML mitigations and ways to handle particular attack classes (a minimal code illustration of one attack and its mitigation follows this list).
- An analysis of the shortcomings of current mitigation strategies, offering a critical view of their effectiveness.
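As one concrete instance of the evasion category and the adversarial-training family of mitigations mentioned above, the sketch below shows the classic Fast Gradient Sign Method (FGSM) in PyTorch together with a training step on the resulting adversarial examples. The function names, the `epsilon` value, and the use of PyTorch are illustrative choices for this article, not details taken from the NIST paper.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: a classic white-box evasion attack.

    Perturbs input `x` in the direction that increases the loss, bounded by
    `epsilon`, so that a correctly classified sample may be misclassified.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the gradient and clamp to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One mitigation from the adversarial-training family: train on the
    adversarial examples themselves so the model learns to resist them."""
    x_adv = fgsm_evasion(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The paper's broader point applies even to this small example: adversarial training against one attack (here, FGSM) does not guarantee robustness against stronger or different attacks, which is why the study pairs its catalog of mitigations with a critical analysis of their limitations.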