
Getting Started


Welcome to the official documentation of the AI Vulnerability Database (AVID)!

As the first open-source, extensible knowledge base of failures across the AI ecosystem (e.g. datasets, models, systems), AVID aims to:

  • encompass coordinates of responsible ML such as security, ethics, and performance

  • build out a taxonomy of potential harms across these coordinates

  • house full-fidelity information (e.g. metadata, measurements, benchmarks) on evaluation use cases of a harm (sub)category

  • evaluate models and datasets that are either open-source or accessible through APIs

This site contains information to get you started with different components of AVID.

  • Taxonomy: a landing place for instances of AI system/model/dataset failures.

  • Database: stores information on such instances in a structured manner.

  • Developer SDK: the official Python toolkit for working with AVID (a short usage sketch follows this list).
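
If you want to work with AVID programmatically, the Developer SDK is the quickest entry point. Below is a minimal sketch of installing the avidtools package and constructing an empty report object; the module path and pydantic-style API shown here are assumptions based on the Datamodels section of the SDK docs, so consult the API Reference for the authoritative interface.

```python
# Minimal sketch: creating an AVID report with the Python SDK.
# Assumptions: the SDK is installed via `pip install avidtools` and exposes
# a pydantic-based Report datamodel at avidtools.datamodels.report (check
# the API Reference for the exact module paths and fields).
from avidtools.datamodels.report import Report

report = Report()  # an empty report, to be filled in per the AVID schema
print(report.json(indent=2))  # pydantic v1-style serialization; on pydantic v2 use model_dump_json()
```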