Definition

Keywords: perplexity, evaluation

Perplexity measures how well a language model predicts text, quantifying model uncertainty (lower is better).

Definition: the exponential of the average negative log-likelihood per token: PPL = exp(-(1/N) * sum_i log P(token_i)).

Interpretation: the effective vocabulary size the model is choosing from. A perplexity of 10 means the model is, on average, as uncertain as if it were choosing uniformly among 10 options.

Use cases: comparing language models on the same test set, tracking training progress, and evaluating overall model quality.

Calculation: run the model on a held-out test set, compute the average cross-entropy loss per token, and exponentiate it.

Good values: modern LLMs achieve single-digit perplexity on standard datasets; lower is better.

Limitations: perplexity only measures next-token prediction, not task performance. A low-perplexity model may still fail at downstream tasks.

Tokenization sensitivity: perplexity depends on the tokenizer, so scores are not directly comparable across models with different tokenizers. Bits per character (BPC) is an alternative metric that normalizes away tokenization differences.

Dataset matters: models have lower perplexity on in-distribution data, so the test set should match the intended use case.

Perplexity is the standard intrinsic evaluation metric for language models.
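The calculation above can be sketched in a few lines. This is a minimal illustration, assuming we already have the natural-log probabilities the model assigned to each observed token (the `perplexity` helper name is ours, not from any particular library):

```python
import math

def perplexity(token_logprobs):
    """Exponential of the average negative log-likelihood per token.

    token_logprobs: natural-log probabilities the model assigned to each
    observed token of the held-out text (one value per token).
    """
    n = len(token_logprobs)
    avg_nll = -sum(token_logprobs) / n  # average cross-entropy in nats/token
    return math.exp(avg_nll)

# Sanity check of the interpretation: a model choosing uniformly among
# 10 options assigns P = 0.1 to every token, so PPL should be exactly 10.
uniform_10 = [math.log(0.1)] * 50
print(perplexity(uniform_10))
```

In practice the per-token log-probabilities come from the model's cross-entropy loss on the test set, so perplexity is simply exp(loss) when the loss is averaged per token in nats.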
