Adversarial Examples

Keywords: adversarial examples, AI safety

Adversarial examples are inputs deliberately crafted to make models produce incorrect predictions.

- For vision: imperceptible pixel perturbations cause misclassification (the classic panda → gibbon example).
- For NLP: character swaps ("g00d"), word substitutions, paraphrase attacks, and prompt injections.

Attack types:
- White-box: the attacker has full model access and can use gradients directly (FGSM, PGD); see the FGSM sketch below.
- Black-box: query-only access, relying on transfer attacks or search-based methods.
- Targeted vs. untargeted: forcing a specific wrong output vs. causing any error at all.

NLP poses extra challenges: tokens are discrete, so gradients cannot be applied directly, and perturbations must keep the text meaningful. Common techniques include TextFooler, BERT-Attack, word substitution, and character-level perturbations (a character-level sketch also follows below).

Why they exist: models rely on spurious features, decision boundaries are brittle, and high-dimensional input spaces leave room for tiny perturbations with outsized effects.

Real-world impact: spam-filter evasion, content-moderation bypass, attacks on autonomous vehicles, and biometric spoofing.

Defenses include adversarial training (sketched below), input preprocessing, certified robustness, and ensemble methods. Detection is a complementary strategy: identify adversarial inputs before they reach the classifier. Adversarial examples remain a critical security concern for deployed ML systems.
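To make the white-box idea concrete, here is a minimal FGSM sketch in PyTorch. The names `model`, `x` (an input batch in [0, 1]), and `y` (true labels) are placeholders, not part of any specific library; `epsilon` controls the perturbation size.

```python
# Minimal FGSM sketch (PyTorch). `model`, `x`, `y` are placeholders:
# any classifier, an input batch in [0, 1], and its true labels.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an untargeted FGSM adversarial example for inputs x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one step in the gradient-sign direction that *increases* the
    # loss, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

PGD follows the same pattern but iterates this step several times, projecting back into an epsilon-ball around the original input after each step.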
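On the NLP side, a character-level perturbation can be as simple as swapping letters for look-alike glyphs, as in the "g00d" example above. A toy sketch, with a hypothetical `LOOKALIKES` substitution table:

```python
import random

# Hypothetical look-alike substitution table ("leet"-style swaps).
LOOKALIKES = {"o": "0", "e": "3", "a": "@", "i": "1", "s": "5"}

def char_perturb(text, rate=0.3, seed=0):
    """Randomly swap characters for visual look-alikes ("good" -> "g00d")."""
    rng = random.Random(seed)
    return "".join(
        LOOKALIKES[c] if c in LOOKALIKES and rng.random() < rate else c
        for c in text
    )

print(char_perturb("good content"))  # e.g. "g0od c0ntent", depending on rate/seed
```

Real attacks such as TextFooler instead search for synonym substitutions that preserve meaning while flipping the model's prediction, but the discrete, search-based flavor is the same.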
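Finally, adversarial training folds attack generation into the training loop: each step optimizes on perturbed inputs rather than clean ones. A minimal sketch reusing the imports and `fgsm_attack` function from the FGSM example above; `model` and `optimizer` are assumed to already exist.

```python
# Minimal adversarial-training step, reusing `fgsm_attack` from above.
def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One optimizer step on FGSM-perturbed inputs instead of clean ones."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # discard gradients accumulated while crafting x_adv
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Stronger variants generate the perturbation with multi-step PGD instead of single-step FGSM, at a higher training cost.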
