# Performance

This domain is intended to codify deficiencies such as privacy leakage or lack of robustness.

<table><thead><tr><th width="100">ID</th><th width="90">Sub-ID</th><th width="192">Name</th><th>Description</th></tr></thead><tbody><tr><td>P0100</td><td></td><td>Data issues</td><td>Problems arising from faults in the data pipeline</td></tr><tr><td></td><td>P0101</td><td>Data drift</td><td>Input feature distribution has drifted</td></tr><tr><td></td><td>P0102</td><td>Concept drift</td><td>Output feature/label distribution has drifted</td></tr><tr><td></td><td>P0103</td><td>Data entanglement</td><td>Cases of spurious correlations and proxy features</td></tr><tr><td></td><td>P0104</td><td>Data quality issues</td><td>Missing or low-quality features in data</td></tr><tr><td></td><td>P0105</td><td>Feedback loops</td><td>Unaccounted-for effects of an AI system's outputs on future data collection</td></tr><tr><td>P0200</td><td></td><td>Model issues</td><td>Problems with the model's ability to perform as intended</td></tr><tr><td></td><td>P0201</td><td>Resilience/stability</td><td>Outputs are not unduly affected by small changes in inputs</td></tr><tr><td></td><td>P0202</td><td>OOD generalization</td><td>Test performance does not deteriorate on data unseen during training</td></tr><tr><td></td><td>P0203</td><td>Scaling</td><td>Training and inference can scale to high data volumes</td></tr><tr><td></td><td>P0204</td><td>Accuracy</td><td>Model performance accurately reflects realistic expectations</td></tr><tr><td>P0300</td><td></td><td>Privacy</td><td>Protection against leakage of user information as required by rules and regulations</td></tr><tr><td></td><td>P0301</td><td>Anonymization</td><td>Protects by anonymizing user identities</td></tr><tr><td></td><td>P0302</td><td>Randomization</td><td>Protects by injecting noise into data, e.g. differential privacy</td></tr><tr><td></td><td>P0303</td><td>Encryption</td><td>Protects by encrypting accessed data</td></tr><tr><td>P0400</td><td></td><td>Safety</td><td>Minimizing maximum downstream harms</td></tr><tr><td></td><td>P0401</td><td>Psychological safety</td><td>Safety from unwanted digital content, e.g. NSFW content</td></tr><tr><td></td><td>P0402</td><td>Physical safety</td><td>Safety from physical actions driven by an AI system</td></tr><tr><td></td><td>P0403</td><td>Socioeconomic safety</td><td>Safety from socioeconomic harms, e.g. harms to job prospects or social status</td></tr><tr><td></td><td>P0404</td><td>Environmental safety</td><td>Safety from environmental harms driven by AI systems</td></tr></tbody></table>
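To make P0101 (data drift) concrete, one common way to score drift in an input feature is the Population Stability Index (PSI), which compares the binned distribution of production data against a training-time reference. The sketch below is illustrative only; the function name, bin count, and thresholds are conventions of the PSI technique, not part of this taxonomy.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index: a simple drift score between two samples.

    Bins both samples on the expected (reference) sample's range and compares
    per-bin proportions. A common rule of thumb: PSI < 0.1 means no drift,
    0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            # Clamp out-of-range values into the edge bins.
            i = max(0, min(int((x - lo) / width), bins - 1))
            counts[i] += 1
        # Floor avoids log(0) and division by zero for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

For example, `psi(training_ages, production_ages)` returning a value above 0.25 would flag the feature for investigation under P0101.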
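P0302 (randomization) can be illustrated with the Laplace mechanism from differential privacy, which protects individual records by adding calibrated noise to query results. This is a minimal sketch under standard assumptions; `dp_count` and `laplace_noise` are hypothetical names chosen for the example.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling from the Laplace distribution with the given scale.
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_count(values, predicate, epsilon):
    """Epsilon-differentially-private count query via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record changes
    the true count by at most 1), so the required noise scale is 1 / epsilon.
    Smaller epsilon means stronger privacy and noisier answers.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(scale=1.0 / epsilon)
```

A caller might release `dp_count(ages, lambda a: a > 40, epsilon=0.5)` instead of the exact count, trading a small amount of accuracy for a formal privacy guarantee.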
