# Security

This domain is intended to codify the landscape of threats to an ML system.

<table><thead><tr><th width="101">ID</th><th width="88">Sub-ID</th><th width="184">Name</th><th>Description</th></tr></thead><tbody><tr><td>S0100</td><td></td><td>Software Vulnerability</td><td>A traditional software vulnerability in the system surrounding the model</td></tr><tr><td>S0200</td><td></td><td><a href="https://atlas.mitre.org/techniques/AML.T0010/">Supply Chain Compromise</a></td><td>Compromising development components of an ML model, e.g. data, model, hardware, and software stack</td></tr><tr><td></td><td>S0201</td><td>Model Compromise</td><td>A compromised or infected model file</td></tr><tr><td></td><td>S0202</td><td>Software Compromise</td><td>Compromise of an upstream software dependency</td></tr><tr><td>S0300</td><td></td><td>Over-permissive API</td><td>Unintended information leakage through the API</td></tr><tr><td></td><td>S0301</td><td>Information Leak</td><td>The cloud model API leaks more information than it needs to</td></tr><tr><td></td><td>S0302</td><td>Excessive Queries</td><td>The cloud model API is not sufficiently rate-limited</td></tr><tr><td>S0400</td><td></td><td><a href="https://atlas.mitre.org/techniques/AML.T0015/">Model Bypass</a></td><td>Intentionally making a model perform poorly</td></tr><tr><td></td><td>S0401</td><td>Bad Features</td><td>The model uses features that are easily gamed by the attacker</td></tr><tr><td></td><td>S0402</td><td>Insufficient Training Data</td><td>The bypass is not represented in the training data</td></tr><tr><td></td><td>S0403</td><td>Adversarial Example</td><td>Input data points intentionally supplied to induce mispredictions. Potential cause: over-permissive API</td></tr><tr><td>S0500</td><td></td><td><a href="https://atlas.mitre.org/techniques/AML.T0024/">Exfiltration</a></td><td>Directly or indirectly exfiltrate ML artifacts</td></tr><tr><td></td><td>S0501</td><td>Model Inversion</td><td>Reconstruct training data through strategic queries</td></tr><tr><td></td><td>S0502</td><td>Model Theft</td><td>Extract model functionality through strategic queries</td></tr><tr><td>S0600</td><td></td><td><a href="https://atlas.mitre.org/techniques/AML.T0020/">Data Poisoning</a></td><td>Use of poisoned data in the ML pipeline</td></tr><tr><td></td><td>S0601</td><td>Ingest Poisoning</td><td>Attackers inject poisoned data into the ingest pipeline</td></tr></tbody></table>
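To make the query-based categories above concrete, the sketch below illustrates S0502 (Model Theft) against a toy target. Everything in it is hypothetical: the victim is an invented black-box linear scorer with made-up secret parameters, and the attack simply exploits the fact that an unprotected score-returning API lets an attacker recover a linear model's weights and bias with `n + 1` strategic queries.

```python
# Hypothetical victim: a black-box linear scorer the attacker can only query.
def victim_score(x):
    """Stand-in for a cloud model API that returns raw scores (S0300/S0302)."""
    w, b = [2.0, -1.0, 0.5], 3.0  # secret parameters (invented for illustration)
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def steal_linear_model(query, n_features):
    """Recover the weights and bias of a linear model in n_features + 1 queries."""
    bias = query([0.0] * n_features)           # f(0) = b
    weights = []
    for i in range(n_features):
        basis = [0.0] * n_features
        basis[i] = 1.0                          # unit vector e_i
        weights.append(query(basis) - bias)     # f(e_i) - b = w_i
    return weights, bias

weights, bias = steal_linear_model(victim_score, 3)
print(weights, bias)  # → [2.0, -1.0, 0.5] 3.0 — an exact functional copy
```

Real models are not linear, but the same principle scales up: without rate limiting (S0302) or output truncation, enough strategic queries can approximate model functionality, which is why over-permissive APIs appear as a potential cause throughout this table.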

> **NOTE**\
> A number of categories map directly to techniques codified in MITRE ATLAS. In the future, we intend to cover the full landscape of adversarial ML attacks under the Security domain.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.avidml.org/taxonomy/effect-sep-view/security.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
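A minimal sketch of such a query in Python, using only the standard library. The question text is illustrative, and the shape of the response body is whatever the documentation endpoint returns; the sketch only builds and prints the URL, with the actual network call shown in a comment:

```python
import urllib.parse
import urllib.request

BASE_URL = "https://docs.avidml.org/taxonomy/effect-sep-view/security.md"

def build_ask_url(question: str) -> str:
    """Build the GET URL, percent-encoding the natural-language question."""
    return BASE_URL + "?" + urllib.parse.urlencode({"ask": question})

url = build_ask_url("Which Security sub-categories map to MITRE ATLAS techniques?")
print(url)

# To actually send the query (requires network access):
#   with urllib.request.urlopen(url) as resp:
#       print(resp.read().decode())
```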
