Getting Started

Welcome to the official documentation of AI Vulnerability Database (AVID)!

As the first open-source, extensible knowledge base of failures across the AI ecosystem (e.g., datasets, models, systems), AVID aims to:

  • encompass the coordinates of responsible ML, such as security, ethics, and performance

  • build out a taxonomy of potential harms across these coordinates

  • house full-fidelity information (e.g., metadata, measurements, benchmarks) on evaluation use cases for each harm (sub)category

  • evaluate models and datasets that are either open-source or accessible through APIs

This site contains information to get you started with different components of AVID.

  • Taxonomy: a structured way to categorize instances of AI system/model/dataset failures.

  • Database: a structured store of information on such failure instances.

  • Developer SDK: the official Python toolkit for working with AVID.
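Because the database stores failure instances in a structured, machine-readable form, one such record can be sketched with plain Python data classes. Note that the field names below (artifact type, coordinate, harm category, metrics) are illustrative assumptions drawn from the description above, not the official AVID report schema; the Developer SDK defines the real data models.

```python
# Hypothetical sketch of a structured AI failure record, loosely modeled on
# the components described above. NOT the official AVID schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class FailureReport:
    """One structured instance of an AI system/model/dataset failure."""
    artifact_type: str   # e.g. "model", "dataset", or "system"
    artifact_name: str   # identifier of the affected artifact
    coordinate: str      # responsible-ML coordinate: security, ethics, performance
    harm_category: str   # taxonomy (sub)category the failure maps to
    description: str     # human-readable summary of the failure
    metrics: dict = field(default_factory=dict)  # measurements / benchmarks

# Example record for an illustrative, made-up model.
report = FailureReport(
    artifact_type="model",
    artifact_name="example-sentiment-classifier",
    coordinate="ethics",
    harm_category="bias/demographic",
    description="Accuracy gap between demographic groups on held-out data.",
    metrics={"accuracy_gap": 0.12},
)

# Serialize to JSON, the kind of interchange format a database entry might use.
print(json.dumps(asdict(report), indent=2))
```

A record like this is what the Taxonomy categorizes, the Database stores, and the Developer SDK reads and writes.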
