Attack types

Keywords: prompt injection, ai safety

Prompt injection attacks trick a model into ignoring its instructions or executing unintended commands embedded in user input.

Attack types:
- Direct: the user explicitly tells the model to ignore its system prompt.
- Indirect: malicious instructions are hidden in retrieved documents, web pages, or other data the model processes.

Examples: "Ignore previous instructions and...", injected text in PDFs, hidden text in web content.

Risks: data exfiltration, unauthorized actions (if the model has tool access), reputation damage, safety bypass.

Defense strategies (illustrated in the sketch below):
- Input sanitization: filter known attack patterns and encode special characters.
- Prompt isolation: clearly separate system instructions from user input.
- Least privilege: limit the model's capabilities and data access.
- Output validation: check responses for policy violations.
- LLM-based detection: use a detector model to identify injections.
- Dual LLM: one model processes untrusted input, a separate model generates the response.

Framework support: LangChain, Guardrails AI, NeMo Guardrails.

Indirect-injection prevention: control document sources and scan retrieved content.

Prompt injection is a critical security concern for AI applications, especially those with tool use or access to sensitive data.
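As a rough illustration of three of these defenses (input sanitization, prompt isolation, and output validation), the sketch below uses only the Python standard library. The pattern list, message format, and function names are illustrative assumptions rather than any particular framework's API; simple pattern filters are easy to bypass, which is why LLM-based detection and dual-LLM setups are often layered on top.

```python
import re

# Hypothetical list of known direct-injection phrases; a real deployment
# would use a maintained pattern set or a dedicated detector model.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]


def sanitize_input(user_text: str) -> str:
    """Input sanitization: reject input that matches known attack patterns."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_text, flags=re.IGNORECASE):
            raise ValueError(f"Possible prompt injection detected: {pattern!r}")
    return user_text


def build_messages(system_prompt: str, user_text: str) -> list[dict]:
    """Prompt isolation: keep system instructions and user input in separate
    messages instead of concatenating them into a single string."""
    return [
        {"role": "system", "content": system_prompt},
        # User content is explicitly delimited so instructions and untrusted
        # data stay distinguishable downstream.
        {"role": "user", "content": f"<user_input>\n{user_text}\n</user_input>"},
    ]


def validate_output(response_text: str, banned_phrases: list[str]) -> bool:
    """Output validation: a simple post-hoc policy check on the response."""
    lowered = response_text.lower()
    return not any(phrase in lowered for phrase in banned_phrases)


if __name__ == "__main__":
    system_prompt = "You are a support assistant. Answer only product questions."
    user_text = sanitize_input("How do I reset my device?")  # passes the filter
    messages = build_messages(system_prompt, user_text)
    print(messages)
    # The model's reply would then be checked before it is shown or acted on:
    # validate_output(model_reply, banned_phrases=["internal api key"])
```

In practice these checks sit alongside least-privilege tool access: even if an injection slips past the filters, the model should not be able to reach data or actions beyond what the task requires.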
