"One sentence. Spoken to a machine. That's all it takes to own a bank."
Why Prompt Injection Matters
These aren't academic attacks. They compromise real systems holding real money.
Attackers inject payloads into documents that AI assistants read. The AI then leaks confidential data from its retrieval context — customer records, internal policies, financial data — to the attacker.
Autonomous AI agents with tool access — send emails, execute code, call APIs — can be redirected mid-task. A poisoned webpage or document gives the attacker control of the agent's action space.
LLMs with tool access can be made to call privileged functions they should refuse — fund transfers, account creation, API key generation — by overriding their instruction context.