t-closeness, training techniques
T-closeness requires that the distribution of the sensitive attribute within each equivalence class stays within a distance t of its distribution in the overall table.
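A minimal check of this property, using total variation distance as a simple stand-in for the Earth Mover's Distance of the original formulation (the group labels and diagnoses below are made up):

```python
from collections import Counter

def t_closeness_violations(groups, t=0.2):
    """Flag equivalence classes whose sensitive-value distribution drifts more
    than t (total variation distance) from the overall distribution."""
    all_values = [v for values in groups.values() for v in values]
    overall = {k: c / len(all_values) for k, c in Counter(all_values).items()}
    flagged = []
    for gid, values in groups.items():
        local = Counter(values)
        dist = 0.5 * sum(abs(local.get(k, 0) / len(values) - p) for k, p in overall.items())
        if dist > t:
            flagged.append((gid, round(dist, 3)))
    return flagged

groups = {"age 30-39": ["flu", "flu", "cancer"], "age 40-49": ["flu", "cold", "cold"]}
print(t_closeness_violations(groups))   # both classes drift by 0.333 > t = 0.2
```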
Zero-shot instruction-following model.
T0 trains models on prompted datasets converted to unified text-to-text format.
Lightweight control adapter.
Treats all NLP tasks as text-to-text problems.
Remove low-probability tail.
Take-back programs enable manufacturers to collect used products for recycling or proper disposal.
Learn to redistribute attention across heads.
Task allocation assigns responsibilities to agents based on capabilities and load.
Add/subtract task vectors.
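A minimal sketch of this task-vector arithmetic, assuming plain dictionaries of tensors with matching keys (function names are illustrative):

```python
import torch

def task_vector(base_state, finetuned_state):
    """Task vector: fine-tuned weights minus base weights, parameter by parameter."""
    return {k: finetuned_state[k] - base_state[k] for k in base_state}

def apply_task_vectors(base_state, vectors, alpha=1.0):
    """Add task vectors (scaled by alpha) to a base model; a negative alpha
    subtracts a task, which is how "forgetting" a behaviour is expressed."""
    merged = {k: v.clone() for k, v in base_state.items()}
    for vec in vectors:
        for k in merged:
            merged[k] = merged[k] + alpha * vec[k]
    return merged

# Toy usage with one-parameter "models"
base = {"w": torch.tensor([1.0, 1.0])}
finetuned = {"w": torch.tensor([1.5, 0.5])}
print(apply_task_vectors(base, [task_vector(base, finetuned)], alpha=-1.0))
# {'w': tensor([0.5000, 1.5000])} -- the task has been negated
```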
Task decomposition breaks complex goals into manageable subtasks.
Task diversity in instruction tuning exposes models to varied problem types.
Pre-train for specific downstream task.
Taylor expansion pruning approximates loss change from removing weights using Taylor series.
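A first-order sketch of this scoring, where importance is |weight x gradient| per parameter (channel-level pruning would sum these scores per filter):

```python
import torch

def taylor_importance(model, loss):
    """First-order Taylor score per parameter: |weight * gradient|, an
    estimate of how much the loss would change if that weight were zeroed."""
    loss.backward()
    scores = {}
    for name, p in model.named_parameters():
        if p.grad is not None:
            scores[name] = (p.detach() * p.grad.detach()).abs()
    return scores

# Prune the lowest-scoring weights, e.g. by building a binary mask that
# zeroes the bottom k% of scores[name] in each layer.
```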
TBATS combines Box-Cox transformation, Fourier-based seasonality, ARMA errors, and trend components to model complex seasonal time series.
Physical parameters used in device/process simulation.
Temporal Convolutional Networks use dilated causal convolutions for sequence modeling with long effective history.
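A minimal PyTorch sketch of one dilated causal convolution block; stacking blocks with growing dilation is what yields the long effective history (sizes below are illustrative):

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """One dilated causal convolution: the output at time t only sees inputs <= t."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation      # left padding keeps causality
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                            # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))      # pad only on the left
        return self.conv(x)

# Dilations 1, 2, 4, 8 grow the receptive field exponentially with depth.
tcn = nn.Sequential(*[CausalConv1d(16, dilation=2 ** i) for i in range(4)])
out = tcn(torch.randn(2, 16, 100))                   # -> (2, 16, 100)
```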
Training-free ensemble NAS combines multiple zero-cost proxies, improving the reliability of architecture evaluation.
Teacher-student curriculum learning uses a teacher model to assess sample difficulty and guide curriculum design for student training.
General paradigm for distillation.
Teacher-student training transfers knowledge from complex to simple models through soft targets.
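A common formulation of the soft-target loss, sketched in PyTorch (the mixing weight alpha and temperature T are illustrative hyperparameters):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """KL divergence between temperature-softened teacher and student
    distributions, mixed with the usual cross-entropy on hard labels."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1 - alpha) * ce
```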
Intermediate model between teacher and student.
Find areas needing refactoring.
AI tech debt: hacky prompts, hardcoded logic, missing tests. Schedule time to refactor and maintain.
Adjust temperature for better calibrated probabilities.
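A minimal temperature-scaling sketch: fit a single scalar T on held-out logits by minimizing negative log-likelihood, then divide test logits by it (optimizer settings are illustrative):

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, steps=200, lr=0.01):
    """Learn one scalar T on validation logits; dividing logits by T rescales
    confidence without changing which class is predicted."""
    log_T = torch.zeros(1, requires_grad=True)       # optimize log T so T stays positive
    opt = torch.optim.Adam([log_T], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_T.exp(), val_labels)
        loss.backward()
        opt.step()
    return log_T.exp().item()

# calibrated_probs = F.softmax(test_logits / T, dim=-1)
```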
Temperature parameter in distillation softens predictions, revealing relative class probabilities.
Soften probability distributions.
Investigate THB failures.
Encode information in spike timing.
Temporal consistency ensures smooth transitions and coherent motion across video frames.
Temporal Fusion Transformer combines LSTM encoder-decoder with multi-head attention for interpretable multi-horizon time series forecasting with covariates.
GNNs for dynamic graphs.
Extract time-related medical info.
Temporal point process GNNs model event sequences on graphs through learned intensity functions.
Temporal point processes model event sequences in continuous time by specifying conditional intensity functions governing event occurrence rates.
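A toy conditional intensity for the simplest case, an exponential-kernel Hawkes process (the parameters mu, alpha, beta are illustrative):

```python
import math

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity lambda(t): a base rate plus an exponentially
    decaying bump for every past event, so recent events raise the
    instantaneous rate of the next one."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in history if ti < t)

print(hawkes_intensity(5.0, [1.0, 4.5]))   # well above mu because of the event at t=4.5
```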
Temporal random walks respect edge timestamps when sampling paths for representation learning.
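A minimal sampler illustrating the timestamp constraint (the edge-list format and toy graph below are made up):

```python
import random

def temporal_walk(edges, start, length):
    """Sample a walk where each step uses an edge whose timestamp is not
    earlier than the previous step's, so the path respects time order.
    `edges` maps node -> list of (neighbor, timestamp)."""
    walk, node, last_t = [start], start, float("-inf")
    for _ in range(length):
        candidates = [(v, t) for v, t in edges.get(node, []) if t >= last_t]
        if not candidates:
            break
        node, last_t = random.choice(candidates)
        walk.append(node)
    return walk

edges = {"a": [("b", 1), ("c", 3)], "b": [("c", 2)], "c": [("a", 4)]}
print(temporal_walk(edges, "a", 3))   # e.g. ['a', 'b', 'c', 'a']
```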
Temporal smoothing in dynamic graphs regularizes learned representations to change gradually over time.
Represent chemical tensors efficiently.
Tensor decomposition factorizes weight tensors, reducing parameters while maintaining capacity.
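A minimal example for a 2-D weight tensor: truncated SVD splits one matrix into two thin factors (rank 64 is illustrative):

```python
import torch

def low_rank_factorize(weight, rank):
    """Split one (out, in) weight matrix into two thin factors via truncated SVD,
    cutting parameters from out*in to rank*(out + in)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]          # (out, rank)
    B = Vh[:rank, :]                    # (rank, in)
    return A, B                         # weight is approximately A @ B

W = torch.randn(512, 2048)
A, B = low_rank_factorize(W, rank=64)
print((W - A @ B).norm() / W.norm())    # relative error of the rank-64 approximation
```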
Tensor field networks achieve equivariance through tensor product operations on irreducible representations of rotation groups.
Outer product of modality features.
Split individual tensors/layers across devices.
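A single-process sketch of column-wise tensor parallelism, with NumPy arrays standing in for per-device shards (the two-way split is illustrative):

```python
import numpy as np

# Column-parallel linear layer simulated on one machine: each "device" holds
# a vertical slice of the weight matrix, computes its slice of the output,
# and the slices are concatenated (in practice via an all-gather).
x = np.random.rand(8, 64)                 # batch of activations
W = np.random.rand(64, 256)               # full weight, never stored on one device
shards = np.split(W, 2, axis=1)           # each "device" holds a (64, 128) shard
partial_outputs = [x @ shard for shard in shards]
y = np.concatenate(partial_outputs, axis=1)
assert np.allclose(y, x @ W)              # same result as the unsharded layer
```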
Tensor train decomposition chains matrices through successive products for efficient compression.
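A small sketch of how a full tensor is rebuilt from TT cores by chaining contractions (shapes and ranks below are illustrative):

```python
import numpy as np

def tt_reconstruct(cores):
    """Rebuild a full tensor from TT cores by successive contractions.
    Each core has shape (r_prev, n_k, r_next); boundary ranks are 1."""
    result = cores[0]                                    # (1, n_1, r_1)
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=1)      # contract adjacent ranks
    return result.squeeze(axis=(0, -1))                  # drop the boundary ranks of 1

# A rank-2 TT representation of a 4 x 5 x 6 tensor stores 8 + 20 + 12 = 40 entries
# instead of 120.
cores = [np.random.rand(1, 4, 2), np.random.rand(2, 5, 2), np.random.rand(2, 6, 1)]
print(tt_reconstruct(cores).shape)                       # (4, 5, 6)
```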
TensorBoard visualizes training. Loss curves, histograms, graphs.
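A minimal logging sketch, assuming PyTorch's SummaryWriter wrapper and the tensorboard package are installed:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/demo")              # logs go under ./runs/demo
for step in range(100):
    loss = 1.0 / (step + 1)                      # placeholder training loss
    writer.add_scalar("loss/train", loss, step)  # rendered as a loss curve
    writer.add_histogram("weights/layer0", torch.randn(256), step)
writer.close()
# Then: tensorboard --logdir runs
```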
TensorFlow Lite provides lightweight runtime for mobile and embedded deployment with optimization tools.
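A typical conversion sketch with post-training optimization enabled ("my_saved_model" is a placeholder path):

```python
import tensorflow as tf

# Convert a SavedModel to a .tflite flatbuffer; Optimize.DEFAULT enables
# post-training quantization of the weights.
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```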
NVIDIA's optimized library for LLM inference.
TensorRT optimizes trained models for NVIDIA GPUs through fusion, quantization, and kernel selection.
Quantize gradients to {-1, 0, +1}.
Ternary networks use three-level weights providing expressiveness between binary and full precision.
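A sketch of threshold-based ternarization, using the 0.7 x mean(|w|) threshold heuristic popularized by ternary weight networks:

```python
import torch

def ternarize(w, delta_scale=0.7):
    """Map weights to {-1, 0, +1} with a magnitude threshold; the scale alpha
    (mean of the surviving magnitudes) keeps the layer's output range roughly intact."""
    delta = delta_scale * w.abs().mean()
    t = torch.zeros_like(w)
    t[w > delta] = 1.0
    t[w < -delta] = -1.0
    alpha = w.abs()[t != 0].mean() if (t != 0).any() else torch.tensor(0.0)
    return t, alpha                       # use alpha * t in place of w at inference

w = torch.randn(4, 4)
t, alpha = ternarize(w)
print(t, alpha)
```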