
AI Factory Glossary

288 technical terms and definitions


model artifact management, mlops

Store and version model checkpoints.

model artifact,store,manage

Model artifacts are the stored outputs of training: weights, config, and metadata.
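
A minimal sketch of writing such an artifact, assuming PyTorch; the function and field names are illustrative.

    import torch

    def save_artifact(model, config, path, version):
        # Bundle weights, config, and metadata into one checkpoint file.
        torch.save(
            {
                "state_dict": model.state_dict(),   # weights
                "config": config,                   # architecture / hyperparameters
                "metadata": {"version": version},   # provenance for later lookup
            },
            path,
        )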

model averaging,machine learning

Average predictions from multiple models.
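
A minimal sketch of prediction averaging, assuming NumPy and models exposing a predict_proba-style method; names are illustrative.

    import numpy as np

    def averaged_prediction(models, x):
        # Stack per-model class probabilities and average across models.
        probs = np.stack([m.predict_proba(x) for m in models])
        return probs.mean(axis=0)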

model card, evaluation

Model cards document model details, performance, and limitations for transparency.

model card,documentation

Documentation of model capabilities, limitations, and biases.

model card,documentation,transparency

Model cards document model details: training, limitations, intended use. Responsible AI practice.

model card,documentation,transparency

Model cards document model capabilities, limitations, intended use, and bias. A standard for responsible release.

model cards documentation, documentation

Standardized model documentation.

model checking,software engineering

Exhaustively verify system properties.

model compression for edge deployment, edge ai

Reduce model size for edge devices.

model compression for mobile,edge ai

Reduce model size for phones and IoT devices.

model compression, model optimization

Model compression reduces the size and computation of neural networks while maintaining performance.

model compression,model optimization

Techniques to reduce model size (pruning, quantization, distillation).
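
A minimal sketch of one of these techniques, post-training dynamic quantization, assuming PyTorch's torch.quantization API; the toy model is illustrative.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    # Store Linear weights as int8 and quantize activations on the fly at inference.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)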

model conversion, model optimization

Model conversion translates trained models between frameworks and formats for deployment.

model discrimination design, doe

Distinguish between competing models.

model distillation for interpretability, explainable ai

Distill into an interpretable student model.

model editing,model training

Directly update model weights to fix specific factual errors or behaviors.

model ensemble rl, reinforcement learning advanced

Model ensemble reinforcement learning trains multiple dynamics models to quantify uncertainty and improve decision-making robustness.

model evaluation, evaluation

Model evaluation measures performance across metrics and test sets.

model extraction attack,ai safety

Steal a model by querying it repeatedly.

model extraction, interpretability

Model extraction attacks replicate model functionality through black-box querying.

model extraction,stealing,query

Model extraction steals a model via queries to build a substitute model. Protect with rate limiting.

model fingerprint,unique,identify

Model fingerprints identify models from their behavior: unique responses to probe inputs.

model flops utilization, mfu, optimization

Ratio of achieved model FLOPs per second to the hardware's theoretical peak; measures how effectively compute is utilized during training.
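
A worked sketch: MFU is achieved model FLOPs per second divided by aggregate peak hardware FLOPs per second. The 6*N FLOPs-per-token estimate (forward plus backward) and all numbers below are illustrative assumptions.

    params = 7e9              # model parameters N (assumed)
    tokens_per_sec = 3.0e4    # observed training throughput (assumed)
    peak_flops = 312e12       # per-accelerator peak FLOP/s (assumed)
    n_gpus = 8

    achieved = 6 * params * tokens_per_sec        # ~6*N FLOPs per token, fwd + bwd
    mfu = achieved / (peak_flops * n_gpus)
    print(f"MFU = {mfu:.1%}")                     # ~50% with these numbers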

model hub,huggingface,weights

Hugging Face Hub hosts open models and datasets. Download weights, run locally, fine-tune, share.

model inversion attack,ai safety

Reconstruct training data from model parameters or outputs.

model inversion attacks, privacy

Reconstruct training data from model.

model inversion defense,privacy

Prevent reconstruction of training data.

model inversion, interpretability

Model inversion attacks reconstruct training data from model parameters or outputs.

model merging,model training

Combine weights from multiple fine-tuned models to get benefits of both.
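
A minimal sketch of weight-space merging by interpolating two state dicts, assuming PyTorch models fine-tuned from the same base so parameter shapes match; floating-point parameters only.

    import torch

    def merge_state_dicts(sd_a, sd_b, alpha=0.5):
        # Linear interpolation of parameters; alpha=0.5 is a plain average.
        return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

    # merged = merge_state_dicts(model_a.state_dict(), model_b.state_dict())
    # model_a.load_state_dict(merged)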

model monitoring,mlops

Track model performance metrics and detect degradation.
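
A minimal sketch of a degradation check that compares accuracy over a rolling window against a baseline; the threshold and class names are illustrative.

    from collections import deque

    class AccuracyMonitor:
        def __init__(self, baseline, window=1000, tolerance=0.05):
            self.baseline = baseline
            self.tolerance = tolerance
            self.window = deque(maxlen=window)

        def record(self, correct: bool):
            self.window.append(1 if correct else 0)

        def degraded(self) -> bool:
            # Flag degradation when rolling accuracy drops below baseline - tolerance.
            if not self.window:
                return False
            return sum(self.window) / len(self.window) < self.baseline - self.tolerance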

model parallelism strategies,distributed training

Techniques to split models across GPUs (tensor, pipeline, expert).

model parallelism,model training

Split model layers across devices; each device holds a subset of the parameters.
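
A minimal sketch of layer-wise model parallelism across two GPUs, assuming PyTorch and two visible CUDA devices; layer sizes are illustrative.

    import torch
    import torch.nn as nn

    class TwoDeviceModel(nn.Module):
        def __init__(self):
            super().__init__()
            # Each device holds only its own subset of the parameters.
            self.block1 = nn.Linear(1024, 1024).to("cuda:0")
            self.block2 = nn.Linear(1024, 10).to("cuda:1")

        def forward(self, x):
            x = torch.relu(self.block1(x.to("cuda:0")))
            return self.block2(x.to("cuda:1"))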

model predictive control in semiconductor, process control

Use predictive models for process control.

model predictive control, manufacturing operations

Model predictive control optimizes future actions using process models and constraints.

model predictive control, mpc, control theory

Optimize actions using predictive model.

model registry,mlops

Central repository for storing and versioning trained models.

model registry,version,deploy

A model registry versions and stages models; examples include MLflow, W&B, and SageMaker.
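
A minimal sketch of registering a model version with MLflow; the dataset, model, and registry name are illustrative, and registering a version typically requires a registry-backed tracking server.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    model = LogisticRegression().fit(X, y)

    with mlflow.start_run():
        # Log the fitted model and create a new version under a registered name.
        mlflow.sklearn.log_model(
            sk_model=model,
            artifact_path="model",
            registered_model_name="demo-classifier",
        )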

model retraining,mlops

Periodically retrain model on fresh data to maintain performance.

model routing, llm optimization

Model routing directs requests to appropriate models based on query characteristics.
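
A minimal sketch of heuristic routing between a small and a large model based on query length; the model names and threshold are purely illustrative.

    def route(query: str) -> str:
        # Cheap heuristic: short queries without code go to a small model,
        # everything else goes to a larger, more capable one.
        if len(query.split()) < 30 and "```" not in query:
            return "small-fast-model"
        return "large-capable-model"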

model server,serving,runtime

Model servers (vLLM, TGI, Triton) host models for inference. Handle batching, scaling, API.

model serving platform,infrastructure

Infrastructure for deploying models (Seldon, KServe, BentoML).

model serving,deployment

Infrastructure to deploy models and handle inference requests.
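
A minimal sketch of a serving endpoint that exposes a model behind an HTTP API, assuming FastAPI and Pydantic; the predict function is a placeholder for a real model call.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Request(BaseModel):
        inputs: list[float]

    def predict(inputs):
        # Placeholder for loading and calling a real model.
        return sum(inputs)

    @app.post("/predict")
    def serve(req: Request):
        # Each HTTP request becomes one inference call.
        return {"prediction": predict(req.inputs)}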

model size,model training

Disk space required to store model weights.

model soup, model merging

Average the weights of multiple fine-tuned models.

model stealing, privacy

Replicate a model by querying it.

model stitching for understanding, explainable ai

Connect different model parts.

model stitching, model merging

Connect different model parts.

model theft,extraction,protect

Model extraction attacks steal a model via API queries. Protect with rate limits, output perturbation, and watermarks.

model verification, security

Verify model hasn't been tampered with.