LLM Pretraining Data Curation and Scaling is the strategic selection, filtering, and combination of diverse training data sources to optimize model quality, generalization, and downstream task performance; it is the foundation that determines LLM capabilities, and data quality increasingly trumps raw scale.

Data Diversity and Distribution: balanced representation across domains such as web text, books, code, academic writing, and multilingual content. Imbalanced data leads to capability gaps. Domain importance depends on the application: reasoning models benefit from math and code, multilingual models need language balance.

Web Crawling and Filtering: internet text is the primary pretraining source. Filtering removes low-quality content through duplicate/near-duplicate removal, language identification, and toxicity/adult-content filtering. Expensive but essential preprocessing.

Document Quality Scoring: develop quality metrics that predict downstream performance. Options include perplexity under a reference language model (high perplexity signals unusual or low-quality text), heuristics such as document length, punctuation density, and capitalization patterns, and machine-learning classifiers trained on manual quality labels. A heuristic filter is sketched below.

Deduplication at Multiple Granularities: exact duplicates are removed via hashing; near-duplicates are caught via MinHash, similarity hashing, or sequence matching, which also handles paraphrases and boilerplate. Most pretraining data contains significant duplication, and removing it improves training efficiency. A MinHash sketch appears below.

Code Data Integration: code datasets such as CodeSearchNet, GitHub, and StackOverflow improve reasoning and factual grounding. Code is typically a smaller fraction than natural language (e.g., 5-15%) yet yields a disproportionate benefit.

Multilingual and Low-Resource Coverage: intentional inclusion of non-English languages ensures broader capability, but requires careful filtering and quality assessment for lower-resource languages.

Knowledge Base Integration: curated knowledge sources (Wikipedia, Wikidata, specialized databases) provide grounded, structured information, typically a few percent of the training data.

Instruction Tuning Data: labeled task examples (instruction-output pairs) used for supervised finetuning after pretraining. Curating high-quality instruction data takes substantial effort; both human-annotated and model-generated instructions are used.

Data Contamination Assessment: evaluate whether evaluation benchmarks appear in the training data, since leakage inflates evaluation metrics. Contamination is detected via substring matching or embedding similarity; retraining without the contaminated data estimates unbiased performance. An n-gram overlap check is sketched below.

Scaling Laws and Compute-Optimal Allocation: empirical findings (Chinchilla, compute-optimal scaling) suggest an optimal ratio of training tokens to model parameters for a given compute budget. The fitted loss takes the form L(N, D) = E + A/N^α + B/D^β, where N is the parameter count, D is the number of training tokens, and compute is roughly C ≈ 6ND. Roughly: for compute-optimal training, doubling the model size calls for doubling the training tokens. A compute-optimal allocation sketch appears below.

Carbon and Environmental Considerations: pretraining energy consumption and carbon footprint are a growing concern, addressed through efficient architectures, better hardware utilization, and renewable energy sourcing.

Data Governance and Licensing: licensing considerations for training data, including copyright, fair use, and licensing agreements with original sources, plus transparency about training data composition.

Rare Capabilities and Task-Specific Tuning: some capabilities (e.g., code generation, reasoning) benefit from task-specific pretraining stages. Curriculum learning, training on easy examples first, improves sample efficiency.

Evaluation After Data Curation: multiple benchmark evaluations (MMLU, HumanEval, GLUE, etc.) assess the impact of data changes; controlled experiments quantify the value of additions and removals.
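The quality-scoring heuristics above can be combined into a simple rule-based filter. Below is a minimal Python sketch with illustrative, untuned thresholds; the reference-model perplexity is assumed to be computed elsewhere and passed in.

```python
def passes_quality_filter(doc: str, ref_perplexity: float) -> bool:
    """Rule-of-thumb quality filter combining simple heuristics with a
    reference-model perplexity score (all thresholds are illustrative)."""
    words = doc.split()
    if len(words) < 50:                        # too short to be a useful document
        return False
    if ref_perplexity > 1000.0:                # very unusual text under the reference LM
        return False
    # Punctuation density: documents with almost no punctuation are often boilerplate.
    punct = sum(doc.count(c) for c in ".,;:!?")
    if punct / max(len(words), 1) < 0.01:
        return False
    # Capitalization pattern: mostly upper-case text is usually headers or spam.
    letters = [c for c in doc if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.5:
        return False
    return True
```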
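Near-duplicate detection with MinHash can be sketched in pure Python: each document is shingled into word n-grams, the shingle set is compressed into a short signature, and the fraction of matching signature positions estimates Jaccard similarity. The shingle size, number of hash permutations, and 0.8 threshold below are illustrative choices, not fixed standards.

```python
import hashlib

def shingles(text: str, n: int = 5) -> set[str]:
    """Word n-gram shingles of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def minhash_signature(sh: set[str], num_perm: int = 64) -> list[int]:
    """One min-hash per seeded hash function; compactly summarizes the shingle set."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(), "big")
            for s in sh
        ))
    return sig

def estimated_jaccard(sig_a: list[int], sig_b: list[int]) -> float:
    """Fraction of matching signature positions estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Two documents are flagged as near-duplicates if similarity exceeds a threshold (e.g. 0.8).
doc_a = "the quick brown fox jumps over the lazy dog near the river bank today"
doc_b = "the quick brown fox jumps over the lazy dog near the river bank yesterday"
sim = estimated_jaccard(minhash_signature(shingles(doc_a)),
                        minhash_signature(shingles(doc_b)))
print(f"estimated Jaccard: {sim:.2f}, near-duplicate: {sim > 0.8}")
```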
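A common form of contamination detection via substring matching is word n-gram overlap between benchmark examples and training documents. The sketch below uses a 13-word window, in line with some published decontamination pipelines, but the exact window size is a tunable assumption.

```python
def ngram_set(text: str, n: int = 13) -> set[tuple[str, ...]]:
    """Lower-cased word n-grams used as a contamination fingerprint."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(benchmark_example: str, training_doc: str, n: int = 13) -> bool:
    """Flag the benchmark example if any of its n-grams appears verbatim in the training doc."""
    return bool(ngram_set(benchmark_example, n) & ngram_set(training_doc, n))
```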
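The compute-optimal trade-off can be made concrete by minimizing the parametric loss above subject to a fixed FLOP budget. The sketch below sweeps candidate parameter counts and picks the loss-minimizing split; the constants are approximately the published Chinchilla fits and are used only for illustration.

```python
import numpy as np

# Parametric loss in the Chinchilla form L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are roughly the Hoffmann et al. (2022) fits; treat them as illustrative.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(N, D):
    return E + A / N**alpha + B / D**beta

def compute_optimal(C: float):
    """For a fixed FLOP budget C ~ 6*N*D, sweep parameter counts N and
    return the (N, D) pair with the lowest predicted loss."""
    N = np.logspace(6, 13, 4000)   # candidate parameter counts, 1M .. 10T
    D = C / (6.0 * N)              # tokens implied by the budget
    i = int(np.argmin(loss(N, D)))
    return N[i], D[i]

for C in (1e21, 1e23, 1e25):
    N_opt, D_opt = compute_optimal(C)
    # Both N_opt and D_opt grow roughly as sqrt(C): doubling compute lets you
    # scale parameters and tokens by about the same factor.
    print(f"C={C:.0e}: N~{N_opt:.1e} params, D~{D_opt:.1e} tokens")
```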
LLM pretraining data curation is increasingly important—strategic data selection trumps brute-force scaling for efficient capability development.