Introduction
The database is AVID's primary component. It houses full-fidelity information (metadata, harm metrics, measurements, benchmarks, and mitigation techniques when available) for concrete failure evidence in general-purpose AI (GPAI) systems. The aim is transparent and reproducible evaluation records that can be mapped to one or more taxonomy frameworks. The database:
is expandable to account for novel and hitherto-unknown vulnerabilities
enables developers and evaluators to freely share structured evaluation records for community benefit
is composed of submissions in a schematized format, which are then vetted and curated.
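To make the submission lifecycle concrete, here is a minimal sketch of what a schematized record and its vetting step might look like. The field names (`artifact`, `taxonomy_refs`, `metrics`, `status`) and the example taxonomy reference are illustrative assumptions, not AVID's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a schematized submission record.
# Field names and values are illustrative assumptions, not AVID's real schema.
@dataclass
class SubmissionRecord:
    title: str                  # short description of the failure evidence
    artifact: str               # model, tool, or application evaluated
    taxonomy_refs: list = field(default_factory=list)   # mappings to external taxonomies
    metrics: dict = field(default_factory=dict)         # harm metrics / measurements
    status: str = "submitted"   # lifecycle: submitted -> vetted -> curated

    def vet(self) -> None:
        """Promote a submission after review; curation follows vetting."""
        if self.status == "submitted":
            self.status = "vetted"

    def curate(self) -> None:
        """Mark a vetted record as curated for publication."""
        if self.status == "vetted":
            self.status = "curated"

# Example use with placeholder values:
record = SubmissionRecord(
    title="Prompt-injection bypass of a content filter",
    artifact="example-gpai-model",
    taxonomy_refs=["hypothetical-taxonomy:T001"],
    metrics={"bypass_rate": 0.42},
)
record.vet()
record.curate()
```

The point of the sketch is the shape of the data, not the specific fields: each record carries its evidence and measurements alongside pointers into one or more taxonomies, and moves through an explicit review state.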
We are building the database to be both an extension of, and a bridge between, classic security-related vulnerabilities in the National Vulnerability Database (NVD), adversarial attack cases in MITRE ATLAS, and incidents in the AI Incident Database (AIID). By connecting these sources and including unintentional failure states across GPAI workflows, AVID supports a more operational view of AI risk.
Developers can assess risks in models, tools, and applications they plan to build on, and make better choices with less risk of harm. Communities have a way to contest harmful systems and contribute evidence. Regulators, policy makers, and adjudicating bodies benefit from a clearer picture of failure patterns and high-risk entities.
Some older AVID records (before 2025) were created under a broader AI/ML scope and are now considered legacy relative to the current GPAI-focused scope. Because there is no settled definition of an AI vulnerability yet, AVID currently operates with a working definition. In the current release cycle, we are prioritizing report-level evidence and have not published new vulnerability records.