Red teaming
Adversarial testing by dedicated teams attempting to find system vulnerabilities and failure modes. Red teaming is referenced in the AI Executive Order and the EU AI Act; it supports safety claims and regulatory compliance when properly documented.
See: Adversarial attack; Evaluation (evals); Safety evaluation