Framework

AVID stores instantiations of AI risks, categorized using the AVID taxonomy, in two base data classes: Report and Vulnerability. A report is one example of a particular vulnerability occurring, supported by qualitative or quantitative evaluation. A vulnerability (vuln) is high-level evidence of an AI failure mode, similar to a CVE entry in the context of software vulnerabilities.
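The relationship between the two base classes can be sketched as follows. This is an illustrative sketch only, not the actual AVID schema: the class and field names (`vuln_id`, `report_id`, `description`, `reports`) are hypothetical stand-ins chosen to mirror the structure described above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Report:
    """One concrete example of a vulnerability occurring (hypothetical fields)."""
    report_id: str
    description: str
    references: List[str] = field(default_factory=list)

@dataclass
class Vulnerability:
    """High-level evidence of an AI failure mode, backed by one or more reports."""
    vuln_id: str
    description: str
    reports: List[Report] = field(default_factory=list)

# Mirrors the example discussed below: one vuln evidenced by two reports
r1 = Report("AVID-2022-R0001", "Gender bias measured in one context")
r2 = Report("AVID-2022-R0002", "Gender bias measured in a second context")
v = Vulnerability("AVID-2022-V001", "Gender bias in bert-base-uncased", [r1, r2])
print(len(v.reports))  # the vuln aggregates its supporting reports
```

The key design point is the one-to-many relationship: a vuln is the durable record of a failure mode, while reports accumulate under it as new measurements arrive.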

As an example, the vuln AVID-2022-V001 concerns gender bias in the large language model bert-base-uncased. This bias is evidenced by two reports, AVID-2022-R0001 and AVID-2022-R0002, which measure gender bias in two separate contexts, using different metrics and datasets, and record salient information and references for those measurements.

The above formulation is similar to how incidents and incident reports are structured in the AI Incident Database. See Figure D.1 for a schematic representation of this structure.

Figure D.1. Schematic of the structure of the AVID taxonomy, vulns, and reports.

To account for the diverse levels of detail that different groups of AI risk examples entail, we designate a class for each kind of vulnerability and report. Each such vuln/report class extends the respective base class to a slightly different structure, enabling storage of information at the required granularity. For example, we currently support four vuln/report classes: evaluations of large language models (LLM Evaluation), incidents from AIID (AIID Incident), vulnerabilities from CVE (CVE Entry), and externally sourced reports (Third-party Report). These classes share the same core vuln/report field structure, with slight differences in the values filled under references and tags.
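The extension mechanism described above can be sketched with subclassing: each vuln class inherits the core fields and adds class-specific ones. Again, this is a hypothetical sketch, not the real schema; the field names (`model`, `metric`, `cve_id`) and the use of Python dataclass inheritance are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Vulnerability:
    """Base vuln class: core fields shared by all vuln classes (hypothetical)."""
    vuln_id: str
    description: str
    references: List[str] = field(default_factory=list)
    tags: Dict[str, str] = field(default_factory=dict)

@dataclass
class LLMEvaluation(Vulnerability):
    """Vuln class for LLM evaluations; adds model and metric details."""
    model: str = ""
    metric: str = ""

@dataclass
class CVEEntry(Vulnerability):
    """Vuln class wrapping an upstream CVE record; adds the source CVE id."""
    cve_id: str = ""

# A subclass instance is still a Vulnerability, so tooling that consumes the
# core fields works unchanged across all vuln classes.
e = LLMEvaluation(
    "AVID-2022-V001",
    "Gender bias in bert-base-uncased",
    model="bert-base-uncased",
    metric="association score",
)
print(isinstance(e, Vulnerability))
```

Keeping the core fields in the base class is what lets the four classes differ only in the extra values they carry under references and tags, as the text notes.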
