
AI Factory Glossary

653 technical terms and definitions


tinygrad,simple,educational

tinygrad is a simple deep learning framework by George Hotz. Educational and hackable.

tinyllama,small,efficient

TinyLlama is a 1.1B-parameter model trained on 3T tokens. Very efficient.

tinyml, edge ai

Machine learning on microcontrollers.

tip-enhanced raman spectroscopy, ters, metrology

Nanoscale Raman with AFM tip.

tip-to-tip spacing,lithography

Critical spacing between line ends.

titanium silicide (tisi2),titanium silicide,tisi2,feol

Older silicide contact technology, largely superseded by CoSi2 and then NiSi at smaller nodes.

titration, manufacturing equipment

Titration determines chemical concentration by measuring neutralization volume.
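The underlying arithmetic is a simple molar balance at the neutralization point; a minimal sketch (function name and values are illustrative):

```python
def sample_concentration(c_titrant, v_titrant, v_sample):
    """Concentration of the sample at the neutralization point,
    from the molar balance C_sample * V_sample = C_titrant * V_titrant."""
    return c_titrant * v_titrant / v_sample

# 25 mL of 0.1 M NaOH neutralizes a 50 mL acid sample:
c = sample_concentration(0.1, 25.0, 50.0)  # -> 0.05 M
```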

tiva (thermally induced voltage alteration),tiva,thermally induced voltage alteration,failure analysis

Thermally-Induced Voltage Alteration localizes defects by scanning a laser over the device: the localized heating modulates pn-junction and resistive characteristics, revealing resistive opens and other defects.

tlp (transmission line pulse),tlp,transmission line pulse,reliability

Fast ESD test method.

tmah etch,etch

Tetramethylammonium hydroxide, an anisotropic silicon etchant; a CMOS-compatible alternative to KOH.

toc (total organic carbon),toc,total organic carbon,facility

Total Organic Carbon is a measure of organic contamination in water; TOC analysis quantifies it for water-quality monitoring.

tof-sims imaging, metrology

Chemical imaging with SIMS.

together ai,inference,api

Together AI provides LLM APIs and fine-tuning. Competitive pricing. Open model hosting.

token bucket, llm optimization

Token bucket algorithm allows bursts while maintaining average rate limits.
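A minimal sketch of the token bucket idea (class and parameters are illustrative): the bucket refills at a fixed rate and each request spends tokens, so short bursts up to the bucket capacity pass while the long-run average rate is capped.

```python
import time

class TokenBucket:
    """Allows bursts up to `capacity` while enforcing an average
    rate of `rate` tokens per second."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)   # 10 tokens/s, bursts of 5
burst = [bucket.allow() for _ in range(6)]  # first 5 pass, 6th is throttled
```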

token budget, llm optimization

A token budget is the maximum number of tokens an LLM may process or generate in a single request or conversation turn; budgets limit context length for cost and latency management.

token combine, moe

Gather expert outputs.

token deletion, nlp

Remove tokens and predict deletions.

token dispatch, moe

Send tokens to appropriate experts.
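Dispatch and combine are the two halves of MoE routing; a toy top-1 sketch in plain Python (the router and expert functions are illustrative stand-ins for learned networks):

```python
def moe_layer(tokens, router, experts):
    """Top-1 MoE: dispatch each token to its highest-scoring expert,
    then combine (gather) the expert outputs back into token order."""
    # Dispatch: pick an expert index per token via the router score.
    assignments = [max(range(len(experts)), key=lambda e: router(tok, e))
                   for tok in tokens]
    # Combine: apply each token's expert and restore original order.
    return [experts[e](tok) for tok, e in zip(tokens, assignments)]

# Toy router: even tokens go to expert 0, odd tokens to expert 1.
router = lambda tok, e: 1.0 if tok % 2 == e else 0.0
experts = [lambda x: x * 10, lambda x: x + 100]
out = moe_layer([1, 2, 3, 4], router, experts)  # -> [101, 20, 103, 40]
```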

token dropping in moe, moe

Token dropping discards excess tokens when an expert's capacity is exceeded, handling overflow during routing.

token dropping,optimization

Skip processing some tokens to save compute.

token forcing, llm optimization

Token forcing mandates specific tokens at designated positions.
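One common way to implement this is logit masking: set every logit except the forced token's to negative infinity, so any decoding strategy must select it. A minimal sketch (names are illustrative):

```python
import math

def force_token(logits, token_id):
    """Force a specific token by masking all other logits to -inf;
    greedy or sampled decoding must then pick `token_id`."""
    return [l if i == token_id else -math.inf for i, l in enumerate(logits)]

forced = force_token([0.1, 2.3, -1.0, 0.7], token_id=2)
choice = max(range(len(forced)), key=forced.__getitem__)  # -> 2
```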

token healing, text generation

Back up and re-tokenize at the prompt boundary so generation is not biased by an awkward token split, fixing tokenization artifacts at the prompt/completion seam.

token importance scoring, llm architecture

Assign importance scores to determine computation allocation.

token labeling in vit, computer vision

Assign labels to patches.

token labeling, computer vision

Assign pseudo-labels to tokens.

token limit in prompts, generative models

Maximum prompt length.

token merging rules, nlp

How to combine tokens.

token merging, transformer

Combine similar tokens to reduce sequence length.
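One simple variant, in the spirit of ToMe-style merging, averages the most similar pair of adjacent tokens; a toy sketch over 1-D token embeddings (real systems use cosine similarity over high-dimensional embeddings):

```python
def merge_most_similar(tokens):
    """Merge the most similar adjacent token pair (here: smallest
    absolute difference of 1-D embeddings) by averaging,
    shortening the sequence by one."""
    i = min(range(len(tokens) - 1), key=lambda j: abs(tokens[j] - tokens[j + 1]))
    return tokens[:i] + [(tokens[i] + tokens[i + 1]) / 2] + tokens[i + 2:]

merged = merge_most_similar([1.0, 1.1, 5.0, 9.0])  # -> [1.05, 5.0, 9.0]
```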

token pruning, optimization

Remove unimportant tokens.

token streaming, llm optimization

Token streaming sends individual tokens immediately rather than waiting for completion.
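A minimal generator sketch of the idea (the toy "model" below is illustrative): each token is yielded to the caller as soon as it is produced instead of buffering the whole completion.

```python
def stream_tokens(generate_next, max_tokens=50):
    """Yield each token as soon as it is produced instead of
    waiting for the full completion."""
    for _ in range(max_tokens):
        tok = generate_next()
        if tok is None:          # end-of-sequence
            return
        yield tok

# Toy "model" that emits a fixed reply one token at a time.
reply = iter(["Hello", ",", " world", None])
tokens = list(stream_tokens(lambda: next(reply)))  # -> ['Hello', ',', ' world']
```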

token tree search, inference

Search over possible continuations.

token-to-parameter ratio, training

Training tokens per parameter.
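Chinchilla-style scaling suggests roughly 20 training tokens per parameter as compute-optimal; a quick back-of-the-envelope calculation (function name is illustrative):

```python
def compute_optimal_tokens(params, tokens_per_param=20):
    """Compute-optimal training token count for a given parameter
    count, using the ~20 tokens/parameter Chinchilla rule of thumb."""
    return params * tokens_per_param

tokens = compute_optimal_tokens(70e9)  # 70B params -> 1.4e12 tokens (1.4T)
```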

token,tokenize,tokenization,bpe

Token: subword unit of text. GPT uses BPE tokenization; roughly 1 token ≈ 4 characters of English text. Models predict the probability of the next token.
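A toy illustration of the core BPE operation, which repeatedly merges the most frequent adjacent pair into a single token (a sketch of one merge step, not a production tokenizer):

```python
from collections import Counter

def bpe_merge_step(tokens):
    """One BPE step: find the most frequent adjacent pair and
    merge every occurrence of it into a single token."""
    pairs = Counter(zip(tokens, tokens[1:]))
    (a, b), _ = pairs.most_common(1)[0]
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
            out.append(a + b)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

merged = bpe_merge_step(list("low lower lowest"))
# a most frequent pair (e.g. 'lo', 3 occurrences) is merged,
# shortening the 16-character sequence to 13 tokens
```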

tokenization artifacts, challenges

Problems from subword tokenization.

tokenization consistency, nlp

Ensure consistent tokenization.

tokenization normalization, nlp

Preprocessing (e.g., Unicode normalization, lowercasing) applied before tokenization.

tokenization security,security

Protect tokens from injection or manipulation.

tokenization,bpe,sentencepiece,nlp

Tokenization splits text into tokens (subwords or characters) for model input. Common schemes: BPE, SentencePiece, tiktoken. Vocabulary size trades off against sequence length (32K-100K typical).

tokenizer training, nlp

Create tokenizer vocabulary.

tokenizer,bpe,wordpiece,spm

Tokenizer splits text into tokens. BPE/WordPiece/SentencePiece trade off vocabulary size vs. sequence length and handle multilingual text differently.

tokenizers,fast,rust

Tokenizers library provides fast tokenization. Rust implementation. Hugging Face.

tokens per second, optimization

Throughput metric for language model training or inference.

tolerance design, quality & reliability

Tolerance design balances specification tightness with cost and capability.

tolerance, spc

Allowed variation from target.

tombstoning, quality

Surface-mount component lifted onto one end during reflow soldering.