Weight Initialization Strategies (Xavier, He, μP)

Keywords: weight initialization strategies, Xavier initialization, He initialization, μP (maximal update parametrization)

Weight Initialization Strategies (Xavier, He, μP) are methods for setting initial neural network weights to enable stable training with appropriate gradient flow — Xavier initialization targets unit-variance signal propagation, He initialization accounts for the ReLU non-linearity, and μP lets hyperparameters tuned at one model width transfer to other widths, allowing scaling without retuning.

Xavier (Glorot) Initialization:
- Principle: maintaining unit variance for signals throughout network forward and backward passes — enables training of deep networks without gradient vanishing/explosion
- Formula: W ~ Uniform[-a, a] where a = √(6 / (n_in + n_out)) — variance Var[W] = a²/3 = (2a)²/12 = 2/(n_in + n_out)
- Effect: weight variance inversely proportional to layer fan-in/fan-out — prevents gradients from shrinking in deep networks
- Derivation: if each input component has unit variance and each weight has variance 1/n_in, then every component of z = Wx (a sum of n_in independent terms) has variance n_in · (1/n_in) · 1 = 1 (forward signal)
- Symmetric Property: the forward pass wants variance 1/n_in and the backward pass wants 1/n_out; Xavier splits the difference with 2/(n_in + n_out) — a balanced compromise (see the sketch below)
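
A quick numeric check of these formulas (PyTorch; the layer sizes are arbitrary). Because Xavier averages fan-in and fan-out, the forward variance is exactly 1 only when n_in = n_out:

```python
import torch

n_in, n_out = 512, 256
x = torch.randn(4096, n_in)                 # unit-variance inputs

# Xavier/Glorot uniform: a = sqrt(6 / (n_in + n_out)), Var[W] = 2 / (n_in + n_out)
W = torch.empty(n_out, n_in)
torch.nn.init.xavier_uniform_(W)

z = x @ W.T                                 # pre-activations
print(W.var().item())   # ~2 / (512 + 256) ≈ 0.0026
print(z.var().item())   # ~n_in * Var[W] ≈ 1.33 here; exactly 1 only when n_in == n_out
```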

He Initialization for ReLU Networks:
- Motivation: ReLU activation zeros out ~50% of activations during forward pass — must account for this when initializing weights
- Formula: W ~ Normal(0, σ²) where σ = √(2/n_in) — variance 2/n_in vs Xavier's 1/n_in
- Effect: doubling the weight variance relative to the 1/n_in baseline (standard deviation larger by √2 ≈ 1.41) to compensate for ReLU zeroing roughly half of the pre-activations
- Empirical Result: He et al. showed that very deep plain ReLU networks (e.g., 30 layers) converge with this scheme while Xavier initialization stalls; it remains the standard choice for deep ResNets such as ResNet-152
- Mathematical Justification: with ReLU, E[x²] = ½·Var[z_prev] since half the pre-activations are zeroed, so Var[z] = n_in · Var[W] · E[x²] = n_in · (2/n_in) · ½ · Var[z_prev] = Var[z_prev] — unit variance is preserved layer to layer
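
The effect is easy to see empirically. A rough sketch (PyTorch; depth, width, and batch size are arbitrary) propagating unit-variance inputs through a deep plain ReLU stack under both schemes:

```python
import torch

def forward_variance(init_fn, depth=30, width=512, batch=2048):
    """Propagate unit-variance inputs through `depth` ReLU layers, return final variance."""
    x = torch.randn(batch, width)
    for _ in range(depth):
        W = torch.empty(width, width)
        init_fn(W)
        x = torch.relu(x @ W.T)
    return x.var().item()

xavier = forward_variance(torch.nn.init.xavier_uniform_)
he = forward_variance(lambda w: torch.nn.init.kaiming_normal_(w, nonlinearity="relu"))
print(f"Xavier after 30 ReLU layers: {xavier:.2e}")   # shrinks by roughly 1/2 per layer
print(f"He     after 30 ReLU layers: {he:.2e}")       # stays O(1)
```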

Practical Implementation:
- PyTorch: torch.nn.init.xavier_uniform_(tensor) or torch.nn.init.kaiming_normal_(tensor, nonlinearity="relu") (a kaiming_uniform_ variant also exists)
- TensorFlow: tf.keras.initializers.GlorotUniform() or tf.keras.initializers.HeNormal()
- Manual Initialization: weight matrices manually initialized at network creation time before training
- Default Behavior: defaults vary by framework — Keras Dense layers use Glorot (Xavier) uniform, while PyTorch's nn.Linear uses a Kaiming-uniform variant; setting the initializer explicitly is safer when the activation is known
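
A minimal example (PyTorch; the architecture is made up) of overriding the framework defaults at model-creation time, as the list above describes:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),            # classification head
)

def init_weights(module):
    if isinstance(module, nn.Linear):
        # He init suits the ReLU layers here; Xavier would suit tanh/sigmoid layers
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        nn.init.zeros_(module.bias)

model.apply(init_weights)          # runs init_weights on every submodule
```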

Maximal Update Parametrization (μP):
- Goal: enabling transfer of optimal hyperparameters across model widths — train small model, scale to large model without retuning
- Scaling Rules: per-layer initialization variances, output multipliers, and learning rates are rescaled as functions of width (e.g., hidden-layer learning rates shrink as width grows) so that feature updates stay the same size at every width
- μP vs Standard Parametrization: standard parametrization requires different learning rates for different widths; μP maintains constant optimal LR
- Width Transfer: 100M model hyperparameters transfer directly to 1B model — enables efficient scaling experiments
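
These scaling rules can be sketched concretely. The following is a simplified illustration (PyTorch; BASE_WIDTH, the layer sizes, and the exact per-layer factors are assumptions for the sketch, not the full μP prescription — the reference `mup` package implements the complete rules):

```python
import math
import torch
import torch.nn as nn

BASE_WIDTH = 256   # width at which hyperparameters are tuned

def build_mup_mlp(width, d_in=32, d_out=10, base_lr=1e-3):
    """Illustrative MLP with muP-style width scaling relative to BASE_WIDTH."""
    m = width / BASE_WIDTH                      # width multiplier
    f_in = nn.Linear(d_in, width)
    hidden = nn.Linear(width, width)
    f_out = nn.Linear(width, d_out)

    # input/hidden layers: ordinary 1/fan_in variance
    for layer in (f_in, hidden):
        nn.init.normal_(layer.weight, std=1.0 / math.sqrt(layer.in_features))
        nn.init.zeros_(layer.bias)
    # output layer: extra shrink (std ~ 1/width rather than 1/sqrt(width)) so logits stay O(1)
    nn.init.normal_(f_out.weight, std=1.0 / f_out.in_features)
    nn.init.zeros_(f_out.bias)

    model = nn.Sequential(f_in, nn.ReLU(), hidden, nn.ReLU(), f_out)
    # per-layer Adam learning rates: width-dependent layers scaled by 1/m,
    # so the base learning rate tuned at BASE_WIDTH can be reused at any width
    optimizer = torch.optim.Adam([
        {"params": f_in.parameters(),   "lr": base_lr},
        {"params": hidden.parameters(), "lr": base_lr / m},
        {"params": f_out.parameters(),  "lr": base_lr / m},
    ])
    return model, optimizer

# the same base_lr is reused unchanged when widening the model
small_model, small_opt = build_mup_mlp(width=256)
large_model, large_opt = build_mup_mlp(width=4096)
```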

μP Mathematical Foundation:
- Feature Learning Threshold: per-step changes to features and outputs shrink with width (roughly as 1/√width) in standard parametrization but stay O(1) in μP — the critical difference
- Width Scaling: output sensitivity to input perturbations remains O(1) as width → ∞ in μP — enables hyperparameter transfer
- Learning Rate Scaling: optimal learning rate ∝ 1/width in standard parametrization; stays O(1) in μP
- Weight Magnitude: hidden weights keep the usual 1/fan_in initialization variance, but the output layer gets an extra 1/width scaling in μP — wider networks get proportionally smaller output weights so logits stay O(1)

μP Practical Applications:
- Scaling Laws: observing consistent scaling behavior (loss ∝ N^(-α)) across widths without hyperparameter retuning — enables efficient data scaling studies
- Architecture Search: training many small models during search, then scaling winning architecture without retuning learning rates
- Hyperparameter Transfer: hyperparameters tuned on a small proxy model carry over near-optimally to a target model orders of magnitude larger — reduces the tuning cost of large-scale training runs
- Research Efficiency: reducing hyperparameter search cost by 10-100x in width scaling experiments — critical for studying compute-optimal scaling

Comparison of Initialization Methods:
- Uniform vs Normal: uniform W ~ U[-a, a] and normal W ~ N(0, σ²) behave similarly in practice when matched for variance; the choice of distribution matters far less than getting the variance right
- Xavier Benefits: good for tanh, sigmoid activation functions; prevents saturation region initialization
- He Benefits: necessary for ReLU, GELU, SiLU activations; enabling training of deep networks (50+ layers)
- μP Benefits: hyperparameter transfer across scales; reduces large-model training costs; enables scaling law studies
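
A quick check of the uniform-vs-normal point (PyTorch; the 512×512 size is arbitrary): when matched for variance, the two are interchangeable at initialization:

```python
import torch

W_u = torch.nn.init.xavier_uniform_(torch.empty(512, 512))
W_n = torch.nn.init.xavier_normal_(torch.empty(512, 512))
print(W_u.var().item(), W_n.var().item())   # both ≈ 2 / (512 + 512) ≈ 0.00195
```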

Deep Network Initialization Challenges:
- Gradient Vanishing: even with variance-preserving initialization, tanh saturation causes gradients in very deep plain networks to shrink roughly exponentially with depth
- Explosion Risk: He initialization doubles the variance to counteract ReLU's attenuation, but applying it with other activations (e.g., tanh) can push signals toward explosion or saturation
- Batch Normalization Interaction: layer normalization/batch norm decouple initialization quality from training success — enables more flexible init choices
- Skip Connection Impact: residual connections skip layers enabling gradient bypass — reduce initialization sensitivity from "critical" to "important"
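
A small sketch of the skip-connection point (PyTorch; the sizes and the deliberately bad 0.01 weight scale are illustrative): a residual path carries the signal through even when each block's weights are initialized far too small:

```python
import torch

width, depth = 512, 50
x = torch.randn(1024, width)

# deliberately tiny weights: a plain stack collapses, a residual stack does not
plain, resid = x.clone(), x.clone()
for _ in range(depth):
    W = torch.randn(width, width) * 0.01           # far smaller than He/Xavier scale
    plain = torch.relu(plain @ W.T)
    resid = resid + torch.relu(resid @ W.T)        # skip connection carries the signal
print(f"plain    variance: {plain.var().item():.2e}")   # collapses toward 0
print(f"residual variance: {resid.var().item():.2e}")   # stays within a small factor of 1
```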

Advanced Initialization Techniques:
- Layer-wise Adaptive Rate Scaling (LARS): not an initialization scheme per se — it rescales each layer's learning rate by the ratio of weight norm to gradient norm during training, compensating for per-layer imbalances that poor initialization would otherwise amplify
- Spectral Normalization: constraining weight matrices to have spectral norm 1 — improves training stability in adversarial networks (GANs)
- Orthogonal Initialization: initializing weight matrices as random orthogonal matrices — preserves signal norms exactly through the linear part of each layer (nonlinearities still attenuate the signal)
- Fine-tuning from Pre-training: reusing initializations from pre-trained models — typically superior to random initialization for transfer learning
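
A brief check of the orthogonal-initialization claim (PyTorch): a square orthogonal matrix preserves vector norms exactly, so the linear map neither shrinks nor grows the signal:

```python
import torch

W = torch.nn.init.orthogonal_(torch.empty(512, 512))
x = torch.randn(512)
print(x.norm().item(), (W @ x).norm().item())   # identical up to floating-point error
```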

Initialization in Different Architectures:
- Convolutional Networks: He initialization standard for conv layers; Xavier for classification head
- Transformers: Xavier initialization typical for query, key, value projections; special init for embedding layers (scale 1/√d_model)
- RNNs: the recurrent weight matrix is usually initialized orthogonally or with spectral radius just below 1 (e.g., ~0.9) to keep gradients flowing over long sequences — naive Gaussian initialization lets them vanish or explode
- Language Models: embeddings initialized with std 0.02 (empirically determined); attention projections with Xavier
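
A hedged sketch (PyTorch; d_model, the vocabulary size, and the fused-QKV layout are illustrative, loosely following the GPT-2-style convention mentioned above) of combining std-0.02 embeddings with Xavier-initialized attention projections:

```python
import torch.nn as nn

d_model, vocab = 768, 50257

embed = nn.Embedding(vocab, d_model)
nn.init.normal_(embed.weight, mean=0.0, std=0.02)   # empirically chosen embedding std

attn_qkv = nn.Linear(d_model, 3 * d_model)          # fused query/key/value projection
attn_out = nn.Linear(d_model, d_model)
for proj in (attn_qkv, attn_out):
    nn.init.xavier_uniform_(proj.weight)
    nn.init.zeros_(proj.bias)
```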

Weight Initialization Strategies are foundational to deep learning — enabling stable training through careful variance management and providing mechanisms (μP) for efficient scaling across model sizes.
