Logistics optimization (supply chain & logistics)
Logistics optimization determines efficient transportation routing, warehousing, and distribution strategies.
Logit bias adjusts token probabilities to encourage or discourage specific outputs.
Adjust probabilities of specific tokens.
Manually adjust logit scores of specific tokens to encourage or suppress them.
Logit bias adjusts token probabilities directly. Force or prevent specific tokens. Fine-grained control.
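As a concrete illustration, a minimal sketch of applying a logit bias before sampling (the token names and bias values are made up; real APIs typically key biases by token ID rather than token string):

```python
import math

def apply_logit_bias(logits, bias):
    """Add per-token bias values to raw logits, then softmax.

    A large negative bias (e.g. -100) effectively bans a token;
    a large positive bias effectively forces it.
    """
    biased = {tok: score + bias.get(tok, 0.0) for tok, score in logits.items()}
    m = max(biased.values())  # subtract max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in biased.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

# Hypothetical next-token logits; suppress "maybe" entirely
probs = apply_logit_bias({"yes": 2.0, "no": 2.0, "maybe": 1.5},
                         {"maybe": -100.0})
```

After the bias, probability mass is redistributed over the remaining tokens, which is why a ban also raises the probability of everything else.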
Decode intermediate activations.
Alternative failure distribution.
Create brand logos automatically.
Models handling 100K+ tokens.
Long convolutions model extended dependencies through large kernel sizes.
Identify overly long methods.
Deal with prompts exceeding limit.
Benchmark for long sequences.
Long-tail recommendation focuses on effectively suggesting less popular items with few interactions.
Long-term capability includes all sources of variation over extended periods.
Gradual changes over months.
Long-term memory stores experiences and knowledge for retrieval in future tasks.
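A toy sketch of an agent-style long-term memory store; keyword-overlap retrieval stands in for the embedding similarity real systems use, and all names are illustrative:

```python
class LongTermMemory:
    def __init__(self):
        self.entries = []  # list of (text, keyword set) pairs

    def store(self, text):
        # Index each memory by its lowercase word set
        self.entries.append((text, set(text.lower().split())))

    def retrieve(self, query, k=1):
        # Rank stored entries by keyword overlap with the query
        # (real systems rank by embedding similarity instead)
        q = set(query.lower().split())
        ranked = sorted(self.entries, key=lambda e: len(q & e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = LongTermMemory()
mem.store("user prefers dark mode")
mem.store("standup meeting at noon")
recalled = mem.retrieve("which mode does the user prefer")
```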
Capture dependencies across many frames.
Combination of local and global attention.
Longformer combines local sliding-window attention with global attention for efficient long-context processing.
Model with local+global attention for long documents.
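The attention pattern can be sketched as a boolean mask, assuming a symmetric sliding window plus a few designated global tokens (a simplification of Longformer's actual implementation):

```python
def longformer_mask(seq_len, window, global_idx):
    """mask[i][j] is True when token i may attend to token j."""
    # Local band: each token sees neighbors within the sliding window
    mask = [[abs(i - j) <= window for j in range(seq_len)]
            for i in range(seq_len)]
    # Global tokens attend everywhere and are attended to by everyone
    for g in global_idx:
        for j in range(seq_len):
            mask[g][j] = True
            mask[j][g] = True
    return mask

mask = longformer_mask(seq_len=8, window=1, global_idx=[0])
```

The local band keeps cost roughly O(n·w) rather than the O(n²) of full attention, while the handful of global tokens preserve long-range information flow.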
Optimizer using two sets of weights.
Predict future tokens.
Lookahead decoding generates multiple future tokens simultaneously when possible.
Lookahead decoding generates multiple tokens in parallel using n-gram patterns. Speed up inference.
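A toy sketch of the guess-and-verify idea behind lookahead decoding, assuming a greedy model; `verify` is a stand-in for one model forward pass, and real implementations check all guessed positions in a single batched pass:

```python
def speculative_step(prefix, ngram_pool, verify):
    """Propose a continuation from cached n-grams, then keep the
    longest verified prefix of the guess (toy sketch)."""
    key = tuple(prefix[-2:])          # look up by the last 2 tokens
    guess = ngram_pool.get(key, [])
    accepted = []
    for tok in guess:
        # Accept the guessed token only if the model agrees
        if verify(prefix + accepted) == tok:
            accepted.append(tok)
        else:
            break
    if not accepted:
        accepted = [verify(prefix)]   # fall back to one normal decode step
    return prefix + accepted

# Toy "model" that greedily continues a fixed target sequence
target = [1, 2, 3, 4, 5, 6]
verify = lambda p: target[len(p)]
pool = {(2, 3): [4, 5, 6]}
```

When the cached n-gram matches, several tokens are accepted in one step; otherwise decoding degrades gracefully to one token at a time.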
Recognize previously visited places.
Manage wire arc height.
Loop optimization reorders and transforms loops to maximize parallelism and data locality.
Loop unrolling replicates loop bodies, reducing branching overhead and enabling instruction-level parallelism.
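A sketch of 4-way unrolling on a dot product (written in Python for illustration; compilers normally apply this to compiled loops, where fewer branch checks and independent accumulators expose instruction-level parallelism):

```python
def dot_unrolled(a, b):
    """Dot product with the loop body replicated 4x per iteration."""
    n = len(a)
    s0 = s1 = s2 = s3 = 0.0   # independent accumulators break the
    i = 0                      # dependency chain between iterations
    while i + 4 <= n:
        s0 += a[i]     * b[i]
        s1 += a[i + 1] * b[i + 1]
        s2 += a[i + 2] * b[i + 2]
        s3 += a[i + 3] * b[i + 3]
        i += 4
    # Remainder loop handles lengths not divisible by 4
    for j in range(i, n):
        s0 += a[j] * b[j]
    return s0 + s1 + s2 + s3
```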
Fine-tuning method that adds small trainable matrices while freezing the base model.
LoRA and DreamBooth customize diffusion models. Train on few images. Personalized generation.
Low-Rank Adaptation fine-tunes diffusion models efficiently by learning low-rank weight updates.
Efficient fine-tuning with low-rank adaptation.
Efficient fine-tuning of diffusion models with low-rank adapters.
Combine multiple LoRAs.
LoRA = Low-Rank Adaptation. Freeze the base model, train small rank-decomposed layers. Much cheaper fine-tuning; great for domain-specific custom models.
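The low-rank update can be sketched in a few lines: the frozen weight W is augmented with a trainable product B·A of rank r, so only r·(d_in + d_out) parameters are trained (pure-Python matrices for illustration):

```python
def matmul(X, Y):
    # Plain (rows x cols) matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha=1.0):
    """y = x @ W + alpha * (x @ B) @ A.

    W (d_in x d_out) stays frozen; B (d_in x r) and A (r x d_out)
    are the small trainable low-rank matrices.
    """
    base = matmul(x, W)
    update = matmul(matmul(x, B), A)
    return [[b + alpha * u for b, u in zip(br, ur)]
            for br, ur in zip(base, update)]

# d_in = d_out = 2, rank r = 1
y = lora_forward(x=[[1.0, 2.0]],
                 W=[[1.0, 0.0], [0.0, 1.0]],   # frozen identity weight
                 A=[[0.0, 1.0]],
                 B=[[1.0], [0.0]])
```

Because B·A has the same shape as W, the trained update can be merged into W after fine-tuning, leaving inference cost unchanged.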
Loss functions quantify the quality loss incurred when values deviate from their targets.
Quantify deviation from target.
Cross-entropy loss is standard for LLMs. Measures prediction vs actual token distribution. Minimize during training.
Loss function measures prediction error. Training minimizes loss. Cross-entropy for classification, MSE for regression.
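Cross-entropy for a single token can be written directly from the logits, using the log-sum-exp trick for numerical stability (a minimal sketch):

```python
import math

def cross_entropy(logits, target_idx):
    """-log softmax(logits)[target_idx], computed stably."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target_idx]

# Uniform logits over 2 classes -> loss = log(2)
loss = cross_entropy([0.0, 0.0], target_idx=0)
```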
Study geometry of loss function.
How smooth the loss surface is.
Multiply the loss by a constant to prevent gradients from underflowing in FP16.
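A sketch of loss scaling in mixed-precision training: the loss is multiplied by a scale factor before backprop so tiny FP16 gradients don't flush to zero, then gradients are unscaled before the optimizer step (`compute_grads` is a stand-in for the backward pass):

```python
def scaled_backward(loss, scale, compute_grads):
    # Scale up so small gradient values stay representable in FP16
    grads = compute_grads(loss * scale)
    # Unscale before the optimizer step: the gradients are
    # mathematically unchanged, only intermediate numerics differ
    return [g / scale for g in grads]

# Toy linear "backward pass": grad = 0.001 * loss
grads = scaled_backward(2.0, scale=1024.0,
                        compute_grads=lambda L: [0.001 * L])
```

Dynamic variants grow the scale when training is stable and shrink it when scaled gradients overflow.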
Loss spikes indicate training instability. Reduce LR, check data, add gradient clipping. May need to restart.
Sudden increases in loss during training.
Loss tangent quantifies dielectric loss as the ratio of the imaginary to the real part of permittivity.
The lost-in-the-middle phenomenon shows degraded use of information placed in the middle of long contexts.
Models missing info in middle of context.
Lot holds temporarily suspend processing pending quality review or authorization.
Combine lots.