A security risk where AI outputs are used in downstream systems without adequate validation or escaping (e.g., injecting generated content into code, HTML, SQL, or commands). This can convert hallucinations or malicious outputs into system actions.
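A minimal Python sketch of the mitigation described above (function and variable names are illustrative, not from any specific library): treat model output as untrusted data by escaping it before HTML interpolation and passing it as a bound parameter for SQL, rather than concatenating it into markup or queries.

```python
import html
import sqlite3

def render_comment(model_output: str) -> str:
    # Escape the model's text so any injected tags render inertly as text.
    return f"<p>{html.escape(model_output)}</p>"

# Hypothetical untrusted model output containing injection attempts.
untrusted = '<script>alert("xss")</script>; DROP TABLE users;--'

safe_html = render_comment(untrusted)  # tags become &lt;script&gt;...

# For SQL, bind the output as a parameter; never build the query via
# string formatting, so the payload is stored as data, not executed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", (untrusted,))
stored = conn.execute("SELECT body FROM notes").fetchone()[0]
```

The same principle applies to generated code or shell commands: run them only in a sandbox or after validation, since escaping alone does not make executable output safe.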
See: Prompt injection; Security; Tool calling (function calling)