Model Interpretability and Explainability covers the techniques for understanding why neural networks make specific predictions, from gradient-based saliency maps that show which input features drive a decision, to Shapley value-based feature attribution that quantifies each feature's contribution. These techniques enable trust, debugging, and regulatory compliance for AI systems deployed in high-stakes applications.
Gradient-Based Methods:
- Vanilla Gradients: compute ∂output/∂input to identify which input features most affect the prediction; produces noisy saliency maps but is fast and architecture-agnostic; the gradient magnitude at each pixel indicates local sensitivity
- Grad-CAM: produces class-discriminative localization maps by weighting activation maps of a convolutional layer by the gradient-averaged importance of each channel; highlights which spatial regions the model focuses on for each class; widely used for visual explanations
- Integrated Gradients: accumulates gradients along a path from a baseline (black image, zero embedding) to the actual input; satisfies axiomatic requirements (sensitivity, implementation invariance) that vanilla gradients violate; widely treated as a rigorous default for feature attribution
- SmoothGrad: averages gradients over multiple noise-perturbed copies of the input, smoothing out the high-frequency fluctuations that make vanilla saliency maps noisy; a simple enhancement applicable to any gradient-based method (a combined sketch with vanilla gradients follows this list)
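A minimal PyTorch sketch combining vanilla gradients and SmoothGrad, assuming an image classifier `model` that maps a (1, C, H, W) tensor to class logits; the function name and arguments are illustrative, not a library API.

```python
import torch

def gradient_saliency(model, x, target_class, smooth_samples=0, noise_std=0.1):
    """Vanilla gradient saliency; set smooth_samples > 0 for SmoothGrad."""
    model.eval()
    n = max(1, smooth_samples)
    total_grad = torch.zeros_like(x)
    for _ in range(n):
        # Gaussian noise for SmoothGrad; with smooth_samples == 0 this reduces to vanilla gradients.
        noise = noise_std * torch.randn_like(x) if smooth_samples > 0 else torch.zeros_like(x)
        xi = (x + noise).detach().requires_grad_(True)
        score = model(xi)[0, target_class]   # scalar logit for the class being explained
        score.backward()
        total_grad += xi.grad
    # Average over samples, take the gradient magnitude, and collapse the channel dimension.
    return (total_grad / n).abs().amax(dim=1)   # saliency map of shape (1, H, W)
```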
Shapley Value Methods:
- SHAP (SHapley Additive exPlanations): computes each feature's Shapley value — the average marginal contribution across all possible feature coalitions; provides theoretically grounded, locally accurate, and consistent feature importance scores
- KernelSHAP: model-agnostic approximation of SHAP values using weighted linear regression over sampled feature coalitions; applicable to any model (neural networks, tree ensembles, black-box APIs), but exact Shapley values require evaluating all 2^M coalitions of M features, so even the sampled approximation becomes expensive for many features or slow models
- TreeSHAP: exact Shapley value computation for tree-based models (XGBoost, Random Forest) in polynomial time O(TLD²), where T = trees, L = leaves, D = depth; enables fast exact attribution for the most widely deployed ML model family (see the usage sketch after this list)
- DeepSHAP: combines SHAP with DeepLIFT propagation rules for efficient approximate Shapley values in deep neural networks; faster than KernelSHAP for neural networks but less accurate due to approximation assumptions
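A usage sketch with the open-source `shap` package, assuming a tabular regression task; the dataset and model here are arbitrary stand-ins.

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Any tree ensemble works; TreeSHAP then computes exact attributions in polynomial time.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)      # exact TreeSHAP for tree-based models
shap_values = explainer.shap_values(X)     # (n_samples, n_features): per-feature attributions

# KernelSHAP is the model-agnostic fallback: it samples feature coalitions against
# a background dataset and fits a weighted linear regression (much slower).
background = shap.sample(X, 100)
kernel_explainer = shap.KernelExplainer(model.predict, background)
kernel_values = kernel_explainer.shap_values(X.iloc[:10])
```

KernelSHAP's runtime grows with the number of sampled coalitions and background points, which is the practical reason to prefer the exact TreeSHAP path whenever the model supports it.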
Attention-Based Interpretation:
- Attention Visualization: plotting attention weight matrices reveals which tokens/patches the model "attends to" for each prediction; informative for understanding model behavior but attention weights do not necessarily reflect causal contribution to the output
- Attention Rollout: recursively multiplies attention matrices across layers to approximate the information flow from input tokens to the output; accounts for residual connections by averaging attention with the identity matrix (sketched after this list)
- Probing Classifiers: simple classifiers trained on intermediate representations test what information (syntax, semantics, factual knowledge) each layer encodes; they reveal the representational hierarchy learned by transformers
- Mechanistic Interpretability: reverse-engineering specific circuits (compositions of attention heads and MLP neurons) that implement identifiable algorithms within the network; identifies "induction heads," "fact retrieval circuits," and "inhibition heads" in language models
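A NumPy sketch of attention rollout under the usual simplifications (head-averaged attention, equal mixing with the identity matrix to model the residual connection); the function name and input layout are assumptions rather than a library API.

```python
import numpy as np

def attention_rollout(attentions):
    """Approximate token-to-token information flow across transformer layers.

    attentions: list of per-layer attention matrices, each of shape
    (num_heads, seq_len, seq_len), already softmax-normalized.
    """
    seq_len = attentions[0].shape[-1]
    rollout = np.eye(seq_len)
    for layer_attn in attentions:
        a = layer_attn.mean(axis=0)                  # average over heads
        a = 0.5 * a + 0.5 * np.eye(seq_len)          # add identity for the residual path
        a = a / a.sum(axis=-1, keepdims=True)        # re-normalize rows
        rollout = a @ rollout                        # compose with earlier layers
    return rollout   # rollout[i, j]: estimated influence of input token j on position i
```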
Practical Applications:
- Model Debugging: saliency maps reveal when models rely on spurious correlations (watermarks, background artifacts) rather than relevant features; enables targeted data augmentation or architectural changes to correct biases
- Regulatory Compliance: EU AI Act, GDPR's right to explanation, and financial regulations (SR 11-7) require explainability for automated decisions; SHAP values provide quantitative, legally defensible feature attributions
- Clinical AI: medical imaging models must explain which regions indicate disease; Grad-CAM overlays on chest X-rays, histopathology slides, and retinal scans provide visual evidence supporting AI diagnostic recommendations
- Fairness Auditing: feature attribution reveals whether protected attributes (race, gender, age) disproportionately influence predictions; detecting and mitigating unfair feature dependence is critical for responsible AI deployment (one auditing pattern is sketched after this list)
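One concrete auditing pattern, sketched with the `shap` package: compute global SHAP importances and check what fraction of the total attribution mass lands on a protected attribute. The helper name, DataFrame input, and single-output assumption are mine; a high share is a signal for closer review, not proof of unfairness.

```python
import numpy as np
import shap

def protected_attribution_share(model, X, protected_col):
    """Fraction of total mean |SHAP| attribution assigned to one protected attribute.

    X: pandas DataFrame of model inputs; protected_col: column name, e.g. "age".
    Assumes a single-output model, so shap_values has shape (n_samples, n_features).
    """
    explainer = shap.Explainer(model, X)         # dispatches to a suitable SHAP algorithm
    shap_values = explainer(X).values
    mean_abs = np.abs(shap_values).mean(axis=0)  # global importance per feature
    return mean_abs[list(X.columns).index(protected_col)] / mean_abs.sum()
```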
Model interpretability is the essential bridge between AI capability and trustworthy deployment — without understanding why models make predictions, practitioners cannot debug failures, regulators cannot verify compliance, and users cannot calibrate their trust in AI-assisted decisions.