
AI Factory Glossary

442 technical terms and definitions


logistics optimization, supply chain & logistics

Logistics optimization determines efficient transportation routing, warehousing, and distribution strategies.

logit bias, llm optimization

Logit bias adjusts token probabilities to encourage or discourage specific outputs.

logit bias, text generation

Adjust the sampling probabilities of specific tokens by offsetting their logits.

logit bias, inference

Manually adjust logit scores of specific tokens to encourage or suppress them.

logit bias, token control, steering

Logit bias adjusts token probabilities directly. Force or prevent specific tokens. Fine-grained control.
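Across the entries above the mechanism is the same: add an offset to a token's raw logit before softmax. A minimal numpy sketch (the function names are illustrative, not any particular API):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def apply_logit_bias(logits, bias):
    """Add per-token offsets to raw logits before sampling.

    A large negative bias (e.g. -100) effectively bans a token;
    a large positive bias effectively forces it.
    """
    biased = logits.copy()
    for token_id, offset in bias.items():
        biased[token_id] += offset
    return biased

logits = np.array([2.0, 1.5, 0.5])
probs = softmax(apply_logit_bias(logits, {1: -100.0}))  # suppress token 1
```

Because the bias is applied pre-softmax, the remaining tokens' probabilities renormalize automatically.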

logit lens, explainable ai

Decode intermediate transformer activations into vocabulary space to inspect what the model predicts at each layer.

lognormal distribution, reliability

Alternative failure-time distribution, often used alongside the Weibull in reliability analysis.

logo generation, content creation

Create brand logos automatically.

long context models, llm architecture

Models handling 100K+ tokens.

long convolution, llm architecture

Long convolutions model extended dependencies through large kernel sizes.

long method detection, code ai

Identify overly long methods.

long prompt handling, generative models

Handle prompts that exceed the context limit, e.g. by truncation, chunking, or summarization.

long-range arena, evaluation

Benchmark suite for evaluating efficient transformer variants on long-sequence tasks.

long-tail rec, recommendation systems

Long-tail recommendation focuses on effectively suggesting less popular items with few interactions.

long-term capability, quality & reliability

Long-term capability includes all sources of variation over extended periods.

long-term drift, manufacturing

Gradual changes over months.

long-term memory, ai agents

Long-term memory stores experiences and knowledge for retrieval in future tasks.

long-term temporal modeling, video understanding

Capture dependencies across many frames.

longformer attention, llm architecture

Combination of local and global attention.

longformer attention, llm optimization

Longformer combines local sliding window with global attention for efficient long context.

longformer, foundation model

Model with local+global attention for long documents.
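The local-plus-global pattern described above can be made concrete as a boolean attention mask. A sketch assuming a symmetric sliding window and a hand-picked set of global token positions:

```python
import numpy as np

def longformer_mask(seq_len, window, global_idx):
    """Boolean attention mask: True where attention is allowed.

    Each position attends within a sliding window of +/- `window`;
    positions in `global_idx` attend to, and are attended by,
    every position.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True        # local sliding window
    for g in global_idx:
        mask[g, :] = True            # global token attends everywhere
        mask[:, g] = True            # everyone attends to global token
    return mask

m = longformer_mask(8, window=1, global_idx=[0])
```

The number of allowed pairs grows roughly linearly in sequence length (window size times length, plus the global rows/columns), rather than quadratically as in full attention.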

look-ahead optimizer, optimization

Optimizer that keeps fast and slow weight sets, periodically stepping the slow weights toward the fast ones.

lookahead decoding, inference

Speculatively predict several future tokens per step, then verify them, to accelerate decoding.

lookahead decoding, llm optimization

Lookahead decoding generates multiple future tokens simultaneously when possible.

lookahead decoding, ngram, parallel

Lookahead decoding generates multiple tokens in parallel using n-gram patterns. Speed up inference.

loop closure detection, robotics

Recognize previously visited places.

loop height control, packaging

Manage wire arc height.

loop optimization, model optimization

Loop optimization reorders and transforms loops to maximize parallelism and data locality.

loop unrolling, model optimization

Loop unrolling replicates loop bodies, reducing branching overhead and enabling instruction-level parallelism.
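Compilers apply unrolling to machine code, but the transformation itself can be illustrated by hand in Python: replicate the body a fixed number of times, then handle the leftover elements in a tail loop.

```python
def sum_rolled(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_unrolled4(xs):
    """Same sum with the loop body replicated 4x: one loop-condition
    check per four elements, plus a remainder loop for the tail."""
    total = 0
    n = len(xs)
    i = 0
    while i + 4 <= n:
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    while i < n:          # tail: n not divisible by 4
        total += xs[i]
        i += 1
    return total
```

Both functions compute the same result; the unrolled form trades code size for fewer branch checks per element.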

lora (low-rank adaptation), lora, low-rank adaptation, fine-tuning

Fine-tuning method that adds small trainable matrices while freezing the base model.

lora diffusion, dreambooth, customize

LoRA and DreamBooth customize diffusion models. Train on few images. Personalized generation.

lora fine-tuning, multimodal ai

Low-Rank Adaptation fine-tunes diffusion models efficiently by learning low-rank weight updates.

lora for diffusion, generative models

Efficient fine-tuning of diffusion models with low-rank adapters.

lora merging, generative models

Combine multiple LoRA adapters, typically by weighted-summing their low-rank updates.

lora, adapter, peft, qlora

LoRA = Low-Rank Adapters. Freeze base model, train small rank-decomposed layers. Much cheaper fine-tuning; great for domain-specific custom models.
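A numpy sketch of the rank decomposition described above (the dimensions and alpha scaling here are illustrative, not prescribed values):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 16, 4

W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-init
alpha = 8.0                               # scaling hyperparameter

def lora_forward(x):
    """y = W x + (alpha / rank) * B A x; only A and B are trained."""
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=d_in)
y = lora_forward(x)   # with B zero-initialized, the adapter starts as a no-op
```

Zero-initializing B means training starts exactly from the base model's behavior, and the adapter trains far fewer parameters than the full weight (here 128 vs 256). Merging afterward is just adding (alpha / rank) * B @ A into W.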

loss function quality, quality & reliability

Loss functions quantify the quality cost of deviating from target values.

loss function, quality

Quantify deviation from target.

loss function, cross entropy, objective

Cross-entropy loss is standard for LLMs. Measures prediction vs actual token distribution. Minimize during training.

loss function, objective, minimize

Loss function measures prediction error. Training minimizes loss. Cross-entropy for classification, MSE for regression.
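The two losses named above, in a few lines of numpy:

```python
import numpy as np

def cross_entropy(logits, target_idx):
    """Negative log-probability of the target class (classification)."""
    shifted = logits - logits.max()                      # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())  # log-softmax
    return -log_probs[target_idx]

def mse(pred, target):
    """Mean squared error (regression)."""
    return np.mean((np.asarray(pred) - np.asarray(target)) ** 2)

logits = np.array([2.0, 0.5, -1.0])
```

Cross-entropy is lowest when the model puts high probability on the correct class, so ranking a wrong class above the right one yields a larger loss.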

loss landscape analysis, theory

Study geometry of loss function.

loss landscape smoothness, theory

How smooth the loss surface is.

loss scaling, model training

Multiply loss by constant to prevent gradients from underflowing in FP16.
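A tiny numpy demonstration of the underflow problem and the fix (the scale value is illustrative; dynamic schemes grow and shrink it during training):

```python
import numpy as np

SCALE = 1024.0        # static loss scale

grad = 1e-8           # a gradient magnitude too small for fp16
naive = np.float16(grad)                # underflows to 0.0 in fp16
scaled = np.float16(grad * SCALE)       # scaled value survives in fp16
recovered = np.float32(scaled) / SCALE  # unscale before the weight update
```

By the chain rule, multiplying the loss multiplies every gradient by the same factor, so dividing the gradients by that factor before the optimizer step recovers the true update.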

loss spike, instability, training

Loss spikes indicate training instability. Reduce LR, check data, add gradient clipping. May need to restart.

loss spikes, training phenomena

Sudden increases in loss during training.

loss tangent, signal & power integrity

Loss tangent quantifies dielectric loss as ratio of imaginary to real permittivity.

lost in middle, rag

The lost-in-the-middle phenomenon: models use information placed in the middle of a long context less reliably than information at the beginning or end.

lost in the middle, challenges

Models missing info in middle of context.

lot hold, manufacturing operations

Lot holds temporarily suspend processing pending quality review or authorization.

lot merging, operations

Combine multiple lots into one for downstream processing.