AI & Governance

Prompt Injection

An attack where malicious input causes an AI system to ignore its instructions and perform unintended actions. Direct prompt injection embeds instructions in user input; indirect prompt injection hides instructions in data the model processes (emails, web pages, documents). Prompt injection is one of the OWASP Top 10 for LLM Applications and is a critical concern for any production AI system that processes untrusted input.