TorchScript, infrastructure
PyTorch's JIT compiler.
653 technical terms and definitions
TorchScript creates serializable and optimizable representations of PyTorch models.
TorchScript compiles PyTorch models to a graph representation via tracing or scripting. Enables deployment without a Python runtime.
TorchServe serves PyTorch models for production deployment.
Total cost of ownership includes purchase price plus logistics, inventory, quality, and risk costs.
Total jitter combines random and deterministic components at a specified bit error rate.
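A minimal sketch of the common dual-Dirac estimate, assuming total jitter is computed as deterministic peak-to-peak jitter plus the random component scaled by the Gaussian tail quantile for the target bit error rate:

```python
from statistics import NormalDist

def total_jitter(dj_pp, rj_rms, ber=1e-12):
    """Dual-Dirac estimate: TJ(BER) = DJ(pk-pk) + 2 * Q(BER) * RJ(rms).

    Q(BER) is the Gaussian tail quantile for the target bit error
    rate; it is roughly 7.03 at BER = 1e-12.
    """
    q = -NormalDist().inv_cdf(ber)
    return dj_pp + 2 * q * rj_rms

# e.g. 20 ps deterministic pk-pk jitter, 1.5 ps rms random jitter
tj = total_jitter(dj_pp=20.0, rj_rms=1.5)
```

At BER = 1e-12 the random term contributes about 14 times the rms value, which is why small random jitter still dominates tight eye budgets.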
Holistic maintenance approach.
Surface metal contamination.
Maximum thickness difference on wafer.
Touchdown detection senses when probe tips contact wafer surface through sudden resistance or capacitance change.
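A toy sketch of the detection idea, assuming a simple threshold on the sample-to-sample change in the sensed signal (real probers use tuned criteria and filtering; the threshold here is illustrative):

```python
def detect_touchdown(readings, threshold):
    """Return the index of the first sample whose change from the
    previous sample meets the threshold, modeling a sudden
    resistance or capacitance jump at probe contact."""
    for i in range(1, len(readings)):
        if abs(readings[i] - readings[i - 1]) >= threshold:
            return i
    return None  # no contact detected

# capacitance readings; contact occurs at sample index 4
samples = [0.10, 0.11, 0.10, 0.12, 0.55, 0.56]
idx = detect_touchdown(samples, threshold=0.2)  # → 4
```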
Separate exhaust for toxic or flammable gases.
Toxicity bias occurs when models generate more toxic content for certain groups.
Model trained to detect harmful language.
Identify toxic content.
Toxicity detection identifies offensive, abusive, or hateful language in text.
Classify text for hate speech, offensive language, or harmful content.
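A deliberately naive illustration of the classification task, assuming a keyword blocklist as a stand-in for a trained classifier (production systems use learned models; the terms below are hypothetical placeholders):

```python
# Toy baseline only: real toxicity detection uses trained classifiers,
# not keyword lists, since keywords miss context and paraphrase.
BLOCKLIST = {"idiot", "stupid"}  # hypothetical placeholder terms

def flag_toxic(text):
    """Flag text containing any blocklisted token."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return bool(tokens & BLOCKLIST)

flag_toxic("You are an idiot!")   # → True
flag_toxic("Have a nice day.")    # → False
```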
Remove harmful content.
Predict toxic effects of compounds.
Total Productive Maintenance maximizes equipment effectiveness through operator involvement and proactive maintenance.
Google's custom ASIC for ML workloads.
TPUs are Google's custom chips for ML. Available on GCP. Optimized for large batch training and inference.
Examine execution timeline.
Detailed time-series data from tool sensors during processing.
Link measurements to standards.
Track chips from wafer to customer for quality and recalls.
Efficient influence computation.
TracIn traces model predictions back to influential training examples through gradient similarity.
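A minimal sketch of the TracIn idea: the influence of a training example on a test example is the sum over training checkpoints of learning-rate-weighted dot products of their loss gradients. The gradients here are toy two-dimensional vectors standing in for real per-example gradients:

```python
def tracin_influence(train_grads, test_grads, lrs):
    """TracIn score: sum over checkpoints t of
    lr_t * <grad_train_t, grad_test_t>.

    train_grads, test_grads: per-checkpoint gradient vectors (lists).
    lrs: learning rate at each checkpoint.
    """
    total = 0.0
    for g_tr, g_te, lr in zip(train_grads, test_grads, lrs):
        total += lr * sum(a * b for a, b in zip(g_tr, g_te))
    return total

score = tracin_influence(
    train_grads=[[1.0, 0.0], [0.5, 0.5]],
    test_grads=[[1.0, 1.0], [1.0, 0.0]],
    lrs=[0.1, 0.1],
)  # 0.1*1.0 + 0.1*0.5 = 0.15
```

A positive score means the training example's gradient steps tended to reduce the test example's loss, i.e. it was helpful for that prediction.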
Balance accuracy and robustness.
Route percentage of traffic to different versions.
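A small sketch of weighted traffic splitting, assuming per-request random routing by version weight (real routers often also pin users to a version for session consistency):

```python
import random

def route(versions, weights, rng=random):
    """Pick a model version with probability proportional to its weight."""
    return rng.choices(versions, weights=weights, k=1)[0]

# send roughly 90% of traffic to v1 and 10% to the canary v2
choice = route(["v1", "v2"], [90, 10])
```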
Older, larger process nodes still used for cost-sensitive products.
Trailing-edge nodes are mature processes offering stability and cost advantages.
Total FLOPS for training.
Predict computational cost.
Total compute and time required to train a model.
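A rule-of-thumb sketch from the scaling-law literature, assuming training compute for a dense transformer is roughly 6 FLOPs per parameter per token (an approximation, not an exact operation count):

```python
def estimate_training_flops(n_params, n_tokens):
    """Approximate training compute: ~6 * parameters * tokens FLOPs
    (2 for the forward pass, 4 for the backward pass)."""
    return 6 * n_params * n_tokens

# e.g. a 7B-parameter model trained on 1T tokens
flops = estimate_training_flops(7e9, 1e12)  # ≈ 4.2e22 FLOPs
```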
Training data attribution identifies which training examples most influenced specific predictions.
Attempt to extract memorized training examples from model.
Tradeoff between data aspects.
Measure computational efficiency.
Manage complex training workflows.
Massive-scale distributed training.
Optimize end-to-end training workflow.
Estimate training duration.
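A back-of-envelope sketch of duration estimation: wall-clock time is total training compute divided by achieved cluster throughput. The device peak of 1e15 FLOP/s and 40% utilization in the example are illustrative assumptions:

```python
def estimate_training_days(total_flops, device_flops_per_s,
                           n_devices, utilization):
    """Wall-clock estimate: total compute / achieved throughput.

    utilization is the achieved fraction of peak (MFU),
    commonly in the 0.3-0.5 range for large training runs.
    """
    seconds = total_flops / (device_flops_per_s * n_devices * utilization)
    return seconds / 86400  # seconds per day

# e.g. 4.2e22 FLOPs on 256 accelerators at 1e15 FLOP/s peak, 40% MFU
days = estimate_training_days(4.2e22, 1e15, 256, 0.4)  # ≈ 4.7 days
```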
Training verification confirms personnel understand and can execute procedures.
Trajectory buffers in offline RL store complete episodes preserving temporal structure for algorithms requiring sequential context.
Convolution along motion trajectories.
Predict future paths of agents.
Use test set structure during prediction.
Use unlabeled target data during training.
TransE embeds knowledge graph triples so that head plus relation approximates tail in vector space.
Translational embedding for KGs.
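A minimal TransE scoring sketch: a triple (head, relation, tail) is scored by the distance between head + relation and tail, so plausible triples get scores near zero. The 2-D vectors below are toy embeddings:

```python
def transe_score(head, rel, tail):
    """L2 distance ||head + relation - tail||; lower = more plausible."""
    diff = [h + r - t for h, r, t in zip(head, rel, tail)]
    return sum(d * d for d in diff) ** 0.5

# a triple whose embeddings satisfy h + r = t scores 0
good = transe_score([1.0, 0.0], [0.0, 1.0], [1.0, 1.0])  # → 0.0
bad = transe_score([1.0, 0.0], [0.0, 1.0], [0.0, 0.0])   # larger
```

Training pushes observed triples toward low scores and corrupted (negative-sampled) triples toward high scores, typically via a margin ranking loss.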