Monday, March 24, 2025

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

https://csrc.nist.gov/pubs/ai/100/2/e2025/final

Adversaries can attack artificial intelligence (AI) systems to make them malfunction. In January 2024, the National Institute of Standards and Technology (NIST) published voluntary guidelines on how to identify and mitigate these attacks. The guidelines are primarily intended for those who design, develop, deploy, evaluate, and govern AI systems.

Now, NIST has finalized the guidelines. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI.100-2e2025), created with input from industry and academia, contains a number of revisions that may interest AI developers and users. These include:

- The section on GenAI attacks and mitigation methods has been updated and restructured to reflect the most recent developments in these technologies and how businesses are using them.
- A new section, an index of attacks and mitigations, has been added to allow for fine-grained definition and navigation of attacks. This improves the usability of the guidelines and promotes efficient and consistent communication between practitioners and other stakeholders.

For more information, visit the publication page linked above.

Media contact: Chad Boutin, boutin@nist.gov
