Theory of constraints (supply chain & logistics)
The theory of constraints focuses improvement on the system bottleneck to maximize throughput.
3,145 technical terms and definitions
Focus on bottleneck.
Thermal oxidation grows SiO2 by heating silicon in an oxygen ambient.
Thermal oxidizers combust VOCs at high temperature, converting them to carbon dioxide and water.
Thermographic inspection detects abnormal temperatures indicating electrical or mechanical problems.
Thermoreflectance measures temperature via reflectance changes.
AI assists threat modeling by identifying attack vectors.
Merge multiple models by resolving parameter conflicts.
Process image in tiles.
Tiling strategies partition computations into cache-friendly blocks improving memory hierarchy utilization.
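A cache-friendly tiling pass can be sketched as a generator over fixed-size blocks. A minimal illustration; the tile size and edge handling here are assumptions, not a specific library's API:

```python
def iter_tiles(height, width, tile=64):
    """Yield (row, col, h, w) blocks covering a height x width image.

    Edge tiles are clipped so the blocks exactly cover the image.
    """
    for r in range(0, height, tile):
        for c in range(0, width, tile):
            yield r, c, min(tile, height - r), min(tile, width - c)

# A 100x130 image with 64-pixel tiles yields a 2x3 grid of blocks,
# with clipped tiles along the bottom and right edges.
tiles = list(iter_tiles(100, 130, tile=64))
```

Each block is then processed while it fits in cache, before moving to the next.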
Time series decomposition separates signals into trend, seasonal, and residual components for analysis and forecasting.
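A minimal additive decomposition sketch, assuming a centered moving-average trend and per-phase seasonal means (classical decomposition, not STL):

```python
def decompose(series, period):
    """Additive decomposition: series ~= trend + seasonal + residual."""
    n, half = len(series), period // 2
    # Trend: centered moving average, undefined near the edges.
    trend = [sum(series[i - half:i + half + 1]) / (2 * half + 1)
             if half <= i < n - half else None
             for i in range(n)]
    # Seasonal: mean detrended value at each phase of the cycle.
    phases = [[] for _ in range(period)]
    for i, t in enumerate(trend):
        if t is not None:
            phases[i % period].append(series[i] - t)
    seasonal = [sum(p) / len(p) if p else 0.0 for p in phases]
    # Residual: what trend and seasonality leave unexplained.
    residual = [series[i] - trend[i] - seasonal[i % period]
                if trend[i] is not None else None
                for i in range(n)]
    return trend, seasonal, residual
```

For a purely linear series the seasonal component comes out zero, which is a quick sanity check on the decomposition.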
Time-aware attention mechanisms weight edges by recency and temporal patterns in dynamic graphs.
Time-based maintenance services equipment on a fixed schedule.
Model oxide breakdown.
Time-lagged convergent cross mapping identifies optimal delays for detecting causality in nonlinear dynamical systems.
Time-resolved photoemission correlates emission timing with circuit activity isolating intermittent or dynamic failures.
Timeouts halt agent execution after a specified duration, preventing runaway processes.
Timeout handling cancels long-running requests, preventing resource exhaustion.
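One way to sketch timeout handling, using Python's standard concurrent.futures. Note that a Python worker thread cannot be force-killed, so this stops waiting rather than stopping the work itself:

```python
import concurrent.futures

def call_with_timeout(fn, timeout_s, *args):
    """Run fn(*args), returning None if it exceeds timeout_s seconds."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, *args)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            future.cancel()  # stops waiting; a running thread still finishes
            return None
```

For true cancellation, run the work in a subprocess that can be terminated.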
Timestep embedding encodes the diffusion timestep for the denoising network.
timm (PyTorch Image Models) provides hundreds of pretrained vision models for transfer learning.
TinyML runs machine learning on microcontrollers.
Defect localization using laser heating.
Thermally-Induced Voltage Alteration (TIVA) uses localized heating to modulate pn junction characteristics, revealing resistive opens and other defects.
Together AI provides LLM APIs and fine-tuning, with competitive pricing and open-model hosting.
Token bucket algorithm allows bursts while maintaining average rate limits.
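A minimal token-bucket sketch; the injectable clock is there for testability and the refill math assumes a continuous rate:

```python
import time

class TokenBucket:
    """Rate limiter: bursts up to `capacity`, refills at `rate` tokens/sec."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate, self.capacity, self.now = rate, capacity, now
        self.tokens, self.last = capacity, now()

    def allow(self, cost=1.0):
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A full bucket permits a burst of `capacity` requests; thereafter requests are admitted at the average `rate`.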
Token budgets limit context length for cost and latency management.
Maximum number of tokens an LLM can process or generate in a single request or conversation turn.
Token dropping discards excess tokens when expert capacity is exceeded.
Token forcing mandates specific tokens at designated positions.
Assign importance scores to determine computation allocation.
Maximum prompt length.
Token merging combines similar tokens to reduce sequence length.
Token streaming sends individual tokens immediately rather than waiting for completion.
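Streaming can be sketched as a generator that yields each token as soon as it is produced; `generate_step` here is a hypothetical callable standing in for the model's decode step, returning the next token or None at end of sequence:

```python
def stream_tokens(generate_step, max_tokens):
    """Yield tokens one at a time instead of waiting for the full completion."""
    for _ in range(max_tokens):
        tok = generate_step()
        if tok is None:  # end-of-sequence sentinel
            return
        yield tok
```

The consumer can render each yielded token immediately, which is what makes streamed responses feel responsive.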
Training tokens per parameter.
Create tokenizer vocabulary.
Percentage of time tool is ready.
Tool-calling agents invoke external functions, APIs, or resources to accomplish tasks.
Validate tool call arguments before execution.
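A minimal argument-validation sketch; the schema format here (name mapped to a (type, required) pair) is an illustrative simplification, not JSON Schema:

```python
def validate_args(args, schema):
    """Check tool-call arguments against a schema before execution.

    schema: dict mapping parameter name -> (expected_type, required).
    Returns a list of error strings; empty means the call is safe to run.
    """
    errors = []
    for name, (typ, required) in schema.items():
        if name not in args:
            if required:
                errors.append(f"missing required argument: {name}")
        elif not isinstance(args[name], typ):
            errors.append(f"{name}: expected {typ.__name__}, "
                          f"got {type(args[name]).__name__}")
    for name in args:
        if name not in schema:
            errors.append(f"unexpected argument: {name}")
    return errors
```

Rejecting malformed calls before dispatch lets the agent retry with a corrected call instead of surfacing a runtime failure.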
Tool discovery enables agents to find and learn about available functions dynamically.
Tool documentation describes function capabilities, parameters, and expected outputs for agent understanding.
Tool idle management powers down unused equipment components reducing standby energy consumption.
Keep tools similar.
Tool result parsing extracts relevant information from function outputs for agent reasoning.
Tool selection chooses appropriate functions from available repertoire for current needs.
LLM decides when and how to call external APIs, tools, or functions.
Teach models to use external tools.
Equip models with calculators, search APIs, and code execution.
ToolBench evaluates agent ability to use diverse APIs and tools effectively.
Toolformer: a model trained to decide when and how to use tools.
Top-k routing selects k highest-scoring experts for each token.
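For a single token, top-k routing can be sketched as picking the k largest gate logits and softmax-normalizing over just those experts. A per-token sketch only; real MoE layers batch this and add load-balancing losses:

```python
import math

def top_k_route(logits, k):
    """Return [(expert_index, gate_weight), ...] for the top-k experts.

    Gate weights are a softmax over the selected logits only,
    so they sum to 1 across the chosen experts.
    """
    experts = sorted(range(len(logits)),
                     key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in experts]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(experts, exps)]
```

The token's output is then the gate-weighted sum of the selected experts' outputs; the other experts do no work for this token.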