The AVID taxonomy is intended to serve as a common foundation for AI/ML/data science, product, and policy teams to manage potential risks at different stages of an ML workflow. In spirit, this taxonomy is analogous to MITRE ATT&CK for cybersecurity vulnerabilities, and MITRE ATLAS for adversarial attacks on ML systems.
At a high level, the current AVID taxonomy consists of two views, intended to facilitate the work of two different user personas.
Effect view: for the auditor persona aiming to assess risks for an ML system or components of it.
Lifecycle view: for the developer persona aiming to build an end-to-end ML system while being cognizant of potential risks.
Based on case-specific needs, people involved in building an ML system may need to operate as either of the above personas.
For machine-readability, taxonomies are shared using the standardized MISP format. This also allows us to support additional taxonomies. See Schema to learn more.
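As a rough illustration, a MISP-format taxonomy is typically a JSON file following the standard machinetag structure, with a namespace, a set of predicates, and entries under each predicate. The sketch below is illustrative only; the namespace and predicate names are hypothetical placeholders, not the actual AVID taxonomy files:

```json
{
  "namespace": "example-avid-view",
  "description": "Illustrative MISP-style taxonomy sketch (not the actual AVID files)",
  "version": 1,
  "predicates": [
    {"value": "effect", "expanded": "Effect view: risk categories for auditors"},
    {"value": "lifecycle", "expanded": "Lifecycle view: ML workflow stages for developers"}
  ],
  "values": [
    {
      "predicate": "lifecycle",
      "entry": [
        {"value": "data-collection", "expanded": "Risks arising during data collection"},
        {"value": "deployment", "expanded": "Risks arising at deployment time"}
      ]
    }
  ]
}
```

Tags derived from such a file take the machinetag form `namespace:predicate="value"`, which is what makes the taxonomy consumable by MISP-compatible tooling.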