mlflow tracking,experiment,log
MLflow Tracking logs experiments: params, metrics, artifacts. Compare runs. Model registry.
751 technical terms and definitions
Platform for ML lifecycle management.
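The tracking idea above (runs with logged params, metrics, and artifacts) can be sketched with a toy tracker in plain Python. This is not the real MLflow API; the `Run` class and its method names are a stand-in that mirrors its shape:

```python
import json
import time

# Toy experiment tracker illustrating the MLflow Tracking idea:
# each run records params and metric histories for later comparison.
class Run:
    def __init__(self, name):
        self.name = name
        self.start = time.time()
        self.params = {}
        self.metrics = {}

    def log_param(self, key, value):
        self.params[key] = value

    def log_metric(self, key, value):
        # Metrics keep a history so runs can be compared over steps.
        self.metrics.setdefault(key, []).append(value)

    def to_json(self):
        return json.dumps({"name": self.name,
                           "params": self.params,
                           "metrics": self.metrics})

run = Run("baseline")
run.log_param("lr", 0.01)
run.log_metric("accuracy", 0.91)
run.log_metric("accuracy", 0.93)  # logged again at a later step
print(run.to_json())
```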
Compiler infrastructure for ML.
MLIR is multi-level IR for ML compilers. Connects frameworks to hardware. Apache TVM, IREE use MLIR.
Practices for deploying, monitoring, and maintaining ML systems in production.
MLOps flows include versioning models, registries, canary deploys, rollback, and monitoring for drift.
Token-mixing and channel-mixing MLPs.
Vision architecture using only MLPs.
MLServer is Seldon's inference server. Supports multi-model serving and standard inference protocols.
MLX is Apple's ML framework for Apple Silicon. Exploits unified memory. Good for local Mac inference.
Medical knowledge test.
MMDetection is OpenMMLab detection toolbox. Many models and configs.
Tests knowledge across 57 subjects.
57 subjects knowledge test.
Massive Multitask Language Understanding tests knowledge across 57 subjects.
Maximal Marginal Relevance (MMR) balances relevance with diversity when ranking recommendations or search results.
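The balance MMR strikes can be shown with a small greedy implementation: each pick maximizes `lam * relevance - (1 - lam) * redundancy`, where redundancy is the similarity to anything already selected. A minimal sketch with precomputed similarity scores (the inputs below are illustrative):

```python
def mmr_select(query_sims, doc_sims, k, lam=0.7):
    """Greedy Maximal Marginal Relevance.

    query_sims: relevance of each doc to the query, e.g. [0.9, 0.8, ...]
    doc_sims:   doc-doc similarity matrix (list of lists)
    Returns indices of k docs balancing relevance and diversity.
    """
    selected = []
    candidates = list(range(len(query_sims)))
    while candidates and len(selected) < k:
        def score(d):
            # Penalty for being similar to something already chosen.
            redundancy = max((doc_sims[d][s] for s in selected), default=0.0)
            return lam * query_sims[d] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Docs 0 and 1 are near-duplicates (similarity 0.95), so MMR skips doc 1
# in favor of the less relevant but more diverse doc 2.
doc_sims = [[1.0, 0.95, 0.1],
            [0.95, 1.0, 0.1],
            [0.1, 0.1, 1.0]]
picks = mmr_select([0.9, 0.85, 0.3], doc_sims, k=2, lam=0.5)
# picks == [0, 2]
```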
MnasNet performs mobile neural architecture search optimizing accuracy and latency on target devices using reinforcement learning.
MobileNet uses depthwise separable convolutions. Far fewer parameters than standard convolutions. Designed for mobile inference.
Mobile deployment needs small models (Core ML, TFLite). Optimize for battery and memory. On-device inference preserves privacy.
Efficient CNNs for mobile devices.
MobileNet architecture uses depthwise separable convolutions for efficient mobile deployment.
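The parameter savings from depthwise separable convolutions can be checked with simple arithmetic (the layer sizes below are illustrative, not from any specific MobileNet variant):

```python
# Parameter count for one conv layer: standard vs depthwise separable,
# the trick behind MobileNet's efficiency.
def standard_conv_params(k, c_in, c_out):
    # A k x k filter spanning all input channels, per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 conv mixes channels
    return depthwise + pointwise

std = standard_conv_params(3, 256, 256)        # 589824
sep = depthwise_separable_params(3, 256, 256)  # 67840
print(f"reduction: {std / sep:.1f}x")          # roughly 8.7x here
```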
Inverted residuals with linear bottlenecks.
MobileNetV2 adds inverted residuals and linear bottlenecks improving efficiency and accuracy.
Hardware-aware network design.
MobileNetV3 uses NAS-discovered architectures with squeeze-excitation and h-swish activation.
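The h-swish activation mentioned above is a cheap piecewise approximation of swish (`x * sigmoid(x)`), defined as `x * ReLU6(x + 3) / 6`. A scalar sketch:

```python
def relu6(x):
    # ReLU capped at 6, cheap on mobile hardware.
    return min(max(x, 0.0), 6.0)

def h_swish(x):
    # Hard swish from MobileNetV3: x * ReLU6(x + 3) / 6.
    return x * relu6(x + 3.0) / 6.0
```

It is zero for x <= -3, identity for x >= 3, and smooth-ish in between, avoiding the sigmoid's cost.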
Simulate carrier mobility.
Carrier mobility fluctuations.
Monotonic Chunkwise Attention processes fixed-size chunks online enabling low-latency streaming ASR.
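The chunkwise component of MoChA can be sketched as soft attention restricted to a fixed-size window. This is a simplification: real MoChA also learns a monotonic stopping point that decides where each chunk ends, which is omitted here (the `end` index is given):

```python
import math

def chunk_attention(scores, end, chunk_size):
    """Softmax attention over a fixed-size chunk ending at index `end`,
    as in MoChA's chunkwise component (monotonic endpoint selection omitted)."""
    start = max(0, end - chunk_size + 1)
    window = scores[start:end + 1]
    top = max(window)
    exps = [math.exp(s - top) for s in window]  # stable softmax
    total = sum(exps)
    weights = [0.0] * len(scores)
    for i, e in zip(range(start, end + 1), exps):
        weights[i] = e / total
    return weights

w = chunk_attention([0.1, 2.0, 0.3, 1.5, 0.2], end=3, chunk_size=2)
# Only indices 2 and 3 get nonzero weight; future frames are never attended,
# which is what makes streaming (low-latency) ASR possible.
```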
Generate mock objects for testing.
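The standard-library `unittest.mock` module does exactly this: a `Mock` stands in for a real dependency, returns canned values, and records calls for assertion. A minimal example (the `check_health` function and `/health` path are illustrative):

```python
from unittest.mock import Mock

# A mock stands in for a real HTTP client so the test needs no network.
fetcher = Mock()
fetcher.get.return_value = {"status": "ok"}

def check_health(client):
    return client.get("/health")["status"] == "ok"

result = check_health(fetcher)
# The mock recorded the call, so we can assert how it was used.
fetcher.get.assert_called_once_with("/health")
```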
Contrastive learning with momentum encoder and memory bank.
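The momentum encoder in MoCo is updated as an exponential moving average of the query encoder, so the keys in the memory bank stay consistent across steps. The update rule, sketched on flat weight lists for illustration:

```python
# Momentum (EMA) update of MoCo's key encoder:
# key_weights trail query_weights, changing slowly for stable keys.
def momentum_update(query_weights, key_weights, m=0.999):
    return [m * k + (1 - m) * q
            for q, k in zip(query_weights, key_weights)]

# With m = 0.9 the key encoder moves 10% toward the query encoder per step.
key = momentum_update([1.0, 2.0], [0.0, 0.0], m=0.9)
# key is approximately [0.1, 0.2]
```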
Modal is a serverless GPU compute platform. Per-second billing. Easy deployment of Python functions.
Randomly drop modalities during training.
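Modality dropout can be sketched as randomly removing whole modalities from a training sample, with a guard so at least one modality always survives (the guard and modality names here are illustrative choices, not a fixed recipe):

```python
import random

def modality_dropout(sample, p=0.3, rng=random):
    """Randomly drop whole modalities during training so the model
    learns to cope with missing inputs; always keeps at least one."""
    keys = list(sample)
    kept = {k: v for k, v in sample.items() if rng.random() >= p}
    if not kept:  # never drop everything
        k = rng.choice(keys)
        kept[k] = sample[k]
    return kept

batch = {"audio": [0.1, 0.2], "video": [0.5], "text": [1, 2, 3]}
out = modality_dropout(batch, p=0.3)
# `out` is a subset of the modalities, never empty.
```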
Generate missing modalities.
Multiple optima connected by low-loss paths.
Blend different optima.
Restrict who can use or modify models.
Store and version model checkpoints.
Model artifacts store trained models: weights, config, metadata.
Average predictions from multiple models.
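Averaging class probabilities across an ensemble is a one-liner; a sketch with illustrative numbers:

```python
# Ensemble by averaging class probabilities from several models.
def average_predictions(prob_lists):
    n = len(prob_lists)
    return [sum(ps) / n for ps in zip(*prob_lists)]

# Three models' [class0, class1] probabilities, averaged element-wise.
avg = average_predictions([[0.9, 0.1], [0.6, 0.4], [0.3, 0.7]])
# avg is approximately [0.6, 0.4]
```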
Model cards document model details, performance, and limitations for transparency.
Documentation of model capabilities, limitations, and biases.
Model cards document model details: training, limitations, intended use. Responsible AI practice.
Model cards document model capabilities, limitations, intended use, bias. Standard for responsible release.
Standardized model documentation.
Exhaustively verify system properties.
Reduce model size for edge devices.
Reduce model size for phones and IoT devices.
Model compression reduces the size and computation of neural networks while maintaining performance.
Techniques to reduce model size (pruning, quantization, distillation).
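One of the compression techniques above, magnitude pruning, can be sketched in a few lines: rank weights by absolute value and zero out the smallest fraction (the threshold rule here is a simple illustrative choice):

```python
# Magnitude pruning: weights below a cutoff are zeroed,
# shrinking the effective model while keeping the largest weights.
def prune(weights, sparsity=0.5):
    ranked = sorted(abs(w) for w in weights)
    cutoff = ranked[int(sparsity * len(weights))]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

pruned = prune([0.05, -0.8, 0.01, 0.6], sparsity=0.5)
# the two small weights are zeroed: [0.0, -0.8, 0.0, 0.6]
```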
Model conversion translates trained models between frameworks and formats for deployment.