LoRA for diffusion, generative models
Efficient fine-tuning of diffusion models with low-rank adapters.
169 technical terms and definitions
Combine multiple LoRA adapters, typically via a weighted sum of their low-rank weight updates.
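A minimal sketch of merging LoRA adapters into a base weight matrix, assuming the common formulation where each adapter's update is a product B·A of two low-rank factors; the function name and weighting scheme here are illustrative, not a specific library's API:

```python
import numpy as np

def merge_loras(base_weight, loras, weights):
    """Merge several LoRA adapters into one weight matrix.

    Each adapter contributes a low-rank update scaled by a user weight:
        W' = W + sum_i w_i * (B_i @ A_i)
    `loras` is a list of (B, A) pairs with B: (d, r) and A: (r, k).
    """
    merged = base_weight.copy()
    for (B, A), w in zip(loras, weights):
        merged += w * (B @ A)
    return merged

# Toy example: two rank-2 adapters on a 4x4 base weight.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
adapters = [(rng.normal(size=(4, 2)), rng.normal(size=(2, 4)))
            for _ in range(2)]
W_merged = merge_loras(W, adapters, weights=[0.7, 0.3])
```

In practice each adapter also carries a scaling factor (alpha/rank); that constant can be folded into the per-adapter weight `w_i`.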
Multiply the loss by a constant to prevent gradients from underflowing in FP16; divide the gradients by the same constant before the optimizer step.
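A tiny demonstration of why loss scaling works, using NumPy's FP16 type directly rather than a training framework: a gradient below FP16's smallest subnormal (about 6e-8) flushes to zero, but the same value survives once scaled into range and unscaled in higher precision:

```python
import numpy as np

# A true gradient too small for FP16: it underflows to zero.
true_grad = 1e-8
assert np.float16(true_grad) == 0.0

# Static loss scaling: multiply the loss (and hence every gradient)
# by a constant before the FP16 cast, then unscale in FP32.
loss_scale = 2.0 ** 16
scaled = np.float16(true_grad * loss_scale)   # now representable
recovered = np.float32(scaled) / loss_scale   # unscale in FP32
```

Dynamic loss scaling extends this by growing the scale while gradients stay finite and shrinking it when overflow is detected.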
Loss spikes indicate training instability. Reduce the learning rate, check the data, and add gradient clipping; restarting from an earlier checkpoint may be needed.
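The gradient-clipping remedy mentioned above can be sketched as global-norm clipping, one common variant (the function below is illustrative, not a specific framework's API):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=1.0):
    """Scale gradients so their combined L2 norm is at most `max_norm`.

    A common remedy for loss spikes: one huge gradient step can throw
    training into an unstable region; clipping bounds the step size.
    """
    total = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads

# Global norm of this gradient is 5.0, so it is scaled by 1/5.
clipped = clip_by_global_norm([np.array([3.0, 4.0])], max_norm=1.0)
```

Note the clip preserves the gradient's direction and only shrinks its magnitude, unlike per-element clipping.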
Sudden increases in loss during training.
Lot sizing determines optimal production or order quantities, balancing setup costs against inventory holding costs.
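The simplest lot-sizing model is the classic economic order quantity (EOQ) formula, Q* = sqrt(2DS/H), which balances setup cost S against holding cost H for annual demand D:

```python
import math

def economic_order_quantity(annual_demand, setup_cost, holding_cost):
    """EOQ lot-sizing: Q* = sqrt(2 * D * S / H).

    D: annual demand (units), S: fixed cost per order/setup,
    H: holding cost per unit per year.
    """
    return math.sqrt(2 * annual_demand * setup_cost / holding_cost)

# 1000 units/year demand, $50 per order, $4/unit/year to hold:
q = economic_order_quantity(annual_demand=1000, setup_cost=50,
                            holding_cost=4)
# -> about 158 units per order
```

Dynamic lot-sizing methods (e.g. Wagner-Whitin) generalize this to time-varying demand.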
Lottery ticket hypothesis posits that dense networks contain sparse subnetworks trainable to full accuracy.
Sparse subnetworks ("winning tickets") that can be trained from scratch to match the accuracy of the full network.
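A sketch of the magnitude-pruning step used to find such subnetworks: keep only the largest-magnitude weights and zero the rest (the rewind-to-initialization and retraining steps are omitted; the function name is illustrative):

```python
import numpy as np

def magnitude_mask(weights, sparsity=0.8):
    """Return a binary mask keeping the largest-magnitude weights.

    One iteration of magnitude pruning as used in lottery-ticket
    experiments: prune the `sparsity` fraction of smallest weights.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 10))
mask = magnitude_mask(W, sparsity=0.8)   # keeps 20 of 100 weights
```

In the iterative variant, pruning and retraining alternate, removing a small fraction of weights per round rather than all at once.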
Fast, greedy community-detection method that optimizes modularity.
A grain boundary with only a small misorientation angle between the adjacent crystal grains.
Use FP16 or BF16 for training.
Low-rank factorization decomposes weight matrices into products of smaller matrices.
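A minimal sketch of low-rank factorization via truncated SVD, which gives the optimal rank-r approximation in Frobenius norm (Eckart-Young); the helper name is illustrative:

```python
import numpy as np

def low_rank_factors(W, rank):
    """Factor W (d x k) into A (d x r) and B (r x k) via truncated SVD.

    A @ B is the best rank-r approximation of W in Frobenius norm.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

rng = np.random.default_rng(0)
# A matrix that is exactly rank 2, so the factorization is lossless here.
W = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 6))
A, B = low_rank_factors(W, rank=2)
# Parameter count drops from 8*6 = 48 to 8*2 + 2*6 = 28.
```

For full-rank weight matrices the reconstruction is approximate, and the rank controls the compression/accuracy trade-off.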
Efficient fusion of multimodal features using low-rank tensor factorizations.
Adversarial perturbations constrained to an Lp-norm ball around the input.
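For p = infinity, enforcing the constraint is a per-coordinate clip, as used in PGD-style attacks; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def project_linf(x, x0, eps):
    """Project x into the L-infinity ball of radius eps around x0.

    Every coordinate of the perturbation x - x0 is clipped to
    [-eps, eps], so max |x - x0| <= eps afterwards.
    """
    return x0 + np.clip(x - x0, -eps, eps)

x0 = np.array([0.2, 0.5, 0.9])
# A candidate perturbation that violates the eps=0.1 budget:
x_adv = project_linf(x0 + np.array([0.3, -0.01, 0.2]), x0, eps=0.1)
```

For p = 2 the projection instead rescales the whole perturbation vector onto the sphere when its norm exceeds eps.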
Least Recently Used cache evicts the entry accessed longest ago, keeping recently used items.
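A minimal LRU cache sketch built on `collections.OrderedDict`, which keeps keys in insertion order and lets us move a key to the end on each access:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on overflow, evict the entry whose
    most recent access is oldest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)        # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")           # "a" is now most recently used
cache.put("c", 3)        # evicts "b", not "a"
```

Note the contrast with LFU, which evicts by access *frequency* rather than recency.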
LSTM-based anomaly detection flags time steps with high prediction error or unusual hidden states.
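The scoring stage of this scheme can be sketched without the LSTM itself: given forecasts from a (hypothetical) trained LSTM, flag time steps whose prediction error exceeds a mean-plus-z-sigma threshold (the function and threshold rule here are illustrative conventions, not a fixed standard):

```python
import numpy as np

def flag_anomalies(y_true, y_pred, z=3.0):
    """Flag time steps with unusually large prediction error.

    y_pred stands in for the forecasts of a trained LSTM; steps whose
    absolute error exceeds mean + z * std of all errors are flagged.
    """
    err = np.abs(y_true - y_pred)
    threshold = err.mean() + z * err.std()
    return np.where(err > threshold)[0]

# Toy signal: small forecast errors everywhere except step 50.
rng = np.random.default_rng(0)
y_true = np.sin(np.linspace(0, 6, 100))
y_pred = y_true + rng.normal(scale=0.01, size=100)
y_pred[50] += 1.0                       # injected anomaly
anomalies = flag_anomalies(y_true, y_pred)
```

Production systems typically fit the threshold on a clean validation split rather than on the scored window itself.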
LSTM-VAE combines variational autoencoders with LSTM networks to detect anomalies in sequential data through reconstruction probability thresholds.
Long- and Short-term Time-series Network combines CNNs and RNNs with skip connections for multivariate forecasting.
Laser Voltage Imaging spatially maps voltage distributions across the die surface, revealing shorts or voltage drops.