AI Factory Glossary

3,145 technical terms and definitions

load balancing (moe), load balancing, moe, model architecture

Ensure experts in a mixture-of-experts model are used roughly equally, avoiding underutilization of any single expert.

load balancing agents, ai agents

Load balancing distributes work evenly across agents, preventing bottlenecks in agent systems.

load balancing loss, llm architecture

Load balancing loss encourages uniform expert utilization.
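
As a concrete illustration, a Switch-Transformer-style auxiliary loss can be sketched in plain Python (toy code, not any particular library's API): with N experts, the loss is N · Σᵢ fᵢ·Pᵢ, where fᵢ is the fraction of tokens hard-routed to expert i and Pᵢ is the mean router probability for expert i.

```python
def load_balancing_loss(router_probs):
    """router_probs: per-token lists of router probabilities, one per expert."""
    num_tokens = len(router_probs)
    num_experts = len(router_probs[0])
    # f_i: fraction of tokens whose argmax (hard-routed) expert is i
    f = [0.0] * num_experts
    for probs in router_probs:
        f[probs.index(max(probs))] += 1.0 / num_tokens
    # P_i: mean router probability assigned to expert i
    P = [sum(probs[i] for probs in router_probs) / num_tokens
         for i in range(num_experts)]
    return num_experts * sum(fi * Pi for fi, Pi in zip(f, P))
```

Perfectly uniform routing minimizes the loss at 1.0; skewed routing pushes it higher, nudging the router toward balance.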

load shedding, llm optimization

Load shedding rejects requests when system capacity is exceeded, preventing overload.
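
A minimal load-shedding sketch (illustrative, capacity counted as in-flight requests): new requests are rejected once the limit is reached, so the caller can fail fast instead of queueing.

```python
class LoadShedder:
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_flight = 0

    def try_admit(self):
        if self.in_flight >= self.capacity:
            return False          # shed: caller should return 503 / retry later
        self.in_flight += 1
        return True

    def release(self):
        self.in_flight -= 1

shedder = LoadShedder(capacity=2)
results = [shedder.try_admit() for _ in range(3)]  # third request is shed
```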

local level model, time series models

The local level model is the simplest structural time series model, consisting of a single stochastically evolving level component.

local sgd, distributed training

Perform multiple local updates before synchronizing.
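
A toy 1-D illustration of one local SGD round (my sketch, not any library's API): each worker takes k local gradient steps on its own data, then workers synchronize by averaging parameters.

```python
def local_sgd_round(w, worker_grads, lr=0.1, local_steps=2):
    """worker_grads: one gradient function per worker, grad(w) -> float."""
    local_ws = []
    for grad in worker_grads:
        w_local = w
        for _ in range(local_steps):          # k local updates, no communication
            w_local -= lr * grad(w_local)
        local_ws.append(w_local)
    return sum(local_ws) / len(local_ws)      # synchronize: average parameters

# Two workers minimizing (w - t)^2 with different targets t=1 and t=3;
# the averaged iterate converges toward the consensus minimum t=2.
w = 0.0
for _ in range(50):
    w = local_sgd_round(w, [lambda w: 2 * (w - 1), lambda w: 2 * (w - 3)])
```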

local trend model, time series models

The local trend model includes both stochastically evolving level and slope components for modeling trending series.

local-global attention, llm architecture

Combine local sliding window with sparse global attention.

locally typical, llm optimization

Locally typical sampling selects tokens whose information content (surprisal) is close to the distribution's conditional entropy, avoiding outputs that are either too predictable or too surprising.
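
A sketch of the filtering step (my illustration): rank tokens by how close their surprisal −log p is to the entropy H, then keep the smallest such set whose total probability mass reaches τ.

```python
import math

def typical_filter(probs, tau=0.9):
    """Return the sorted token ids kept by locally typical filtering."""
    H = -sum(p * math.log(p) for p in probs if p > 0)   # conditional entropy
    # rank token ids by |surprisal - entropy|
    ranked = sorted(range(len(probs)),
                    key=lambda i: abs(-math.log(probs[i]) - H))
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= tau:          # smallest set covering mass tau
            break
    return sorted(kept)
```

Sampling then proceeds from the renormalized kept set, as in top-p sampling.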

lock-in thermography, failure analysis advanced

Lock-in thermography applies periodic electrical stimulation and phase-sensitive thermal imaging to detect weak heat sources from intermittent defects.

lof temporal, lof, time series models

Local Outlier Factor adapted for time series detects anomalies by comparing local densities in feature space.

lof time series, lof, time series models

Local Outlier Factor for time series detects anomalies by comparing densities in windowed feature space.

log quantization, model optimization

Logarithmic quantization represents values in the log domain, improving dynamic range at a given bit width.
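
A minimal power-of-two variant as a sketch (illustrative): store a sign plus a rounded log2 exponent, so representable values are ±2^k and nearby values snap to the closest power of two.

```python
import math

def log_quantize(x):
    """Quantize x to the nearest signed power of two (0 maps to 0)."""
    if x == 0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    exponent = round(math.log2(abs(x)))   # nearest integer exponent
    return sign * 2.0 ** exponent
```

E.g. 0.3 snaps to 0.25 and 6.0 to 8.0 (log2(6) ≈ 2.585 rounds to 3); multiplications by such weights reduce to cheap bit shifts in hardware.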

log-gaussian cox, time series models

Log-Gaussian Cox processes use Gaussian random fields to model spatially or temporally varying intensity functions.

logarithmic quantization, model optimization

Use logarithmic scale for quantization.

logic programming with llms, ai architecture

Use LLMs to translate natural-language problems into formal logic and interact with symbolic reasoning systems.

logistics optimization, supply chain & logistics

Logistics optimization determines efficient transportation, routing, warehousing, and distribution strategies.

logit bias, llm optimization

Logit bias adjusts token probabilities to encourage or discourage specific outputs.
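
A framework-free sketch of the mechanism: add per-token offsets to the raw logits before the softmax. A large negative bias effectively bans a token; a positive bias promotes it.

```python
import math

def biased_softmax(logits, bias):
    """bias: dict mapping token id -> additive logit offset (illustrative)."""
    adjusted = [l + bias.get(i, 0.0) for i, l in enumerate(logits)]
    m = max(adjusted)                         # subtract max for stability
    exps = [math.exp(a - m) for a in adjusted]
    total = sum(exps)
    return [e / total for e in exps]

probs = biased_softmax([1.0, 1.0, 1.0], {2: -100.0})  # effectively ban token 2
```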

logit lens, explainable ai

The logit lens decodes a transformer's intermediate activations through the final unembedding matrix, revealing how token predictions evolve layer by layer.
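
A toy illustration of the idea (all weights and dimensions invented for the example): project an intermediate hidden state through the unembedding matrix and take the argmax to see which token that layer "currently" predicts.

```python
def logit_lens(hidden, unembed):
    """hidden: d-dim list; unembed: vocab x d matrix -> argmax token id."""
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in unembed]
    return logits.index(max(logits))

unembed = [[1.0, 0.0],     # token 0 reads the first hidden dimension
           [0.0, 1.0]]     # token 1 reads the second
early_layer = [0.2, 0.9]   # this layer already leans toward token 1
```

Running the same projection at every layer shows where in depth the final prediction first emerges.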

long context models, llm architecture

Models that handle context windows of 100K+ tokens.

long convolution, llm architecture

Long convolutions model extended dependencies through large kernel sizes.

long method detection, code ai

Identify overly long methods.

long prompt handling, generative models

Handle prompts that exceed the model's context limit, e.g. via truncation, chunking, or summarization.

long-tail rec, recommendation systems

Long-tail recommendation focuses on effectively suggesting less popular items with few interactions.

long-term memory, ai agents

Long-term memory stores experiences and knowledge for retrieval in future tasks.

long-term temporal modeling, video understanding

Capture dependencies across many frames.

longformer attention, llm optimization

Longformer combines local sliding window with global attention for efficient long context.
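
A sketch of the resulting attention mask (my illustration, not the library implementation): each token attends to a local window of w neighbors on each side, while a few designated global tokens attend, and are attended to, everywhere.

```python
def longformer_mask(seq_len, window, global_tokens):
    """Boolean seq_len x seq_len mask: True where attention is allowed."""
    mask = [[False] * seq_len for _ in range(seq_len)]
    for i in range(seq_len):
        for j in range(seq_len):
            local = abs(i - j) <= window               # sliding window
            glob = i in global_tokens or j in global_tokens
            mask[i][j] = local or glob
    return mask

mask = longformer_mask(seq_len=6, window=1, global_tokens={0})
```

Since each row has O(window + globals) True entries, attention cost grows linearly with sequence length instead of quadratically.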

longformer, foundation model

Model with local+global attention for long documents.

lookahead decoding, llm optimization

Lookahead decoding drafts and verifies multiple future tokens per step in parallel, accelerating generation when the drafts are accepted.

loop optimization, model optimization

Loop optimization reorders and transforms loops to maximize parallelism and data locality.

loop unrolling, model optimization

Loop unrolling replicates loop bodies, reducing branching overhead and enabling instruction-level parallelism.

lora diffusion, dreambooth, customize

LoRA and DreamBooth customize diffusion models by training on a few subject images, enabling personalized generation.

lora fine-tuning, multimodal ai

Low-Rank Adaptation fine-tunes diffusion models efficiently by learning low-rank weight updates.
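
A toy numeric sketch of the update (pure Python, sizes invented): the frozen weight W stays fixed, only the low-rank factors B (d_out×r) and A (r×d_in) are trained, and the effective weight is W + (α/r)·BA.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha=1.0):
    """W + (alpha / r) * B @ A, with rank r = number of rows of A."""
    r = len(A)
    delta = matmul(B, A)
    return [[W[i][j] + (alpha / r) * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]            # frozen 2x2 base weight
B = [[1.0], [0.0]]                      # 2x1 trained factor
A = [[0.0, 2.0]]                        # 1x2 trained factor, rank r = 1
W_eff = lora_effective_weight(W, A, B)  # only W[0][1] changes, by 2.0
```

With rank r much smaller than the matrix dimensions, only r·(d_out + d_in) parameters are trained instead of d_out·d_in.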

lora for diffusion, generative models

Efficient fine-tuning of diffusion models with low-rank adapters.

lora merging, generative models

Combine multiple LoRA adapters, e.g. by weighted summation of their low-rank updates.

loss scaling, model training

Multiply the loss by a constant factor to prevent gradients from underflowing in FP16; gradients are divided by the same factor before the optimizer step.
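
A framework-free sketch of the dynamic variant: scale before backprop so tiny FP16 gradients survive, unscale before the optimizer step, and shrink the scale when an overflow (inf/nan) is detected. The function below is a toy stand-in for what a mixed-precision trainer does per step.

```python
def step_with_loss_scaling(raw_grad, scale):
    """Return (unscaled_grad or None on overflow, possibly reduced scale)."""
    scaled_grad = raw_grad * scale                       # backprop on scale*loss
    overflow = scaled_grad != scaled_grad or abs(scaled_grad) == float("inf")
    if overflow:
        return None, scale / 2                           # skip step, shrink scale
    return scaled_grad / scale, scale                    # unscale for optimizer

grad, scale = step_with_loss_scaling(1e-6, scale=1024.0)
```

Real implementations also grow the scale again after a run of overflow-free steps.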

loss spike, instability, training

Loss spikes indicate training instability; mitigations include reducing the learning rate, checking for bad data, and adding gradient clipping. Severe spikes may require restarting from an earlier checkpoint.

loss spikes, training phenomena

Sudden increases in loss during training.

lot sizing, supply chain & logistics

Lot sizing determines optimal production or order quantities, balancing setup costs against inventory holding costs.
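
The classic closed-form answer for the stationary case is the economic order quantity, Q* = √(2DS/H), where D is demand per period, S the fixed setup/order cost, and H the per-unit holding cost. A one-line sketch with made-up numbers:

```python
import math

def eoq(demand, setup_cost, holding_cost):
    """Economic order quantity: Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2 * demand * setup_cost / holding_cost)

q = eoq(demand=1000, setup_cost=50, holding_cost=4)   # sqrt(25000) per lot
```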

lottery ticket hypothesis, model optimization

Lottery ticket hypothesis posits that dense networks contain sparse subnetworks trainable to full accuracy.
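
The pruning procedure used to find such subnetworks can be sketched in a few lines (illustrative, weights treated as a flat list): magnitude-prune a fraction of the trained weights, then rewind the survivors to their original initialization to form the "winning ticket".

```python
def prune_and_rewind(trained, init, prune_frac=0.5):
    """Return (binary mask, ticket = init weights masked by the pruning)."""
    n_prune = int(len(trained) * prune_frac)
    # indices of the smallest-magnitude trained weights get masked out
    order = sorted(range(len(trained)), key=lambda i: abs(trained[i]))
    pruned = set(order[:n_prune])
    mask = [0 if i in pruned else 1 for i in range(len(trained))]
    # winning ticket: surviving weights rewound to their init values
    ticket = [w0 * m for w0, m in zip(init, mask)]
    return mask, ticket

init    = [0.3, -0.1, 0.8, 0.05]
trained = [1.2, 0.01, -2.0, 0.4]
mask, ticket = prune_and_rewind(trained, init)
```

The hypothesis is that retraining `ticket` with `mask` fixed reaches accuracy comparable to the dense network.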

lottery ticket hypothesis, model training

Sparse subnetworks that, when rewound to their original initialization, train from scratch to full accuracy.

louvain algorithm, graph algorithms

A fast community detection method based on greedy modularity maximization.

low-angle grain boundary, defects

A grain boundary with small misorientation (typically below about 15°) between adjacent grains.

low-precision training, optimization

Use reduced-precision formats such as FP16 or BF16 for training, saving memory and increasing throughput.

low-rank factorization, model optimization

Low-rank factorization decomposes weight matrices into products of smaller matrices.
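
The parameter savings are easy to see by counting: replacing a d_out×d_in matrix with U (d_out×r) times V (r×d_in) costs r·(d_out + d_in) parameters instead of d_out·d_in. A quick arithmetic sketch (sizes invented):

```python
def factored_params(d_out, d_in, r):
    """Parameter counts for dense W vs. low-rank U @ V factorization."""
    dense = d_out * d_in
    factored = r * (d_out + d_in)
    return dense, factored

# a 1024 x 1024 layer at rank 64: 1,048,576 dense vs 131,072 factored params
dense, factored = factored_params(d_out=1024, d_in=1024, r=64)
```

The trade-off is an approximation error that grows as r shrinks, so r is typically chosen from the decay of the matrix's singular values.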

low-rank tensor fusion, multimodal ai

Approximate full multimodal tensor fusion with low-rank factors, avoiding its exponential parameter growth.

lp norm constraints, ai safety

Bound adversarial perturbations within an Lp-norm ball when defining or certifying robustness.