AI Factory Glossary

9,967 technical terms and definitions

mlp-mixer,computer vision

Vision architecture using only MLPs.

mlserver,seldon,inference

MLServer is Seldon's open-source inference server. It supports multi-model serving and implements the V2 Inference Protocol.

mlx,apple silicon,mac

MLX is Apple's array framework for machine learning on Apple Silicon. Unified memory lets the CPU and GPU share arrays without copies. A good choice for local inference on Macs.

mmcu, evaluation

MMCU (Massive Multitask Chinese Understanding) is a Chinese-language knowledge benchmark spanning domains such as medicine, law, psychology, and education.

mmdetection,object detection,toolbox

MMDetection is OpenMMLab's object detection toolbox, providing implementations of many detection models along with ready-made configs.

mmlu (massive multitask language understanding),mmlu,evaluation

MMLU (Massive Multitask Language Understanding) is a multiple-choice benchmark testing knowledge across 57 subjects, from STEM to the humanities and professional topics.

mmr rec, mmr, recommendation systems

Maximal Marginal Relevance (MMR) for recommendations re-ranks results to balance relevance to the user or query against diversity within the result set.
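
The selection loop behind MMR can be sketched in a few lines of plain Python (a minimal sketch; the similarity function, vectors, and λ value are all illustrative):

```python
def mmr_rerank(query_vec, cand_vecs, sim, lam=0.7, k=3):
    """Greedy Maximal Marginal Relevance re-ranking.

    Each step picks the candidate maximizing
    lam * sim(query, c) - (1 - lam) * max_{s in selected} sim(c, s).
    Returns indices into cand_vecs in selection order.
    """
    selected, pool = [], list(range(len(cand_vecs)))
    while pool and len(selected) < k:
        def score(i):
            redundancy = max((sim(cand_vecs[i], cand_vecs[j]) for j in selected),
                             default=0.0)
            return lam * sim(query_vec, cand_vecs[i]) - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

query = [1.0, 0.0]
docs = [[0.9, 0.1], [0.85, 0.1], [0.1, 0.9]]  # docs 0 and 1 are near-duplicates
print(mmr_rerank(query, docs, dot, lam=0.3, k=3))  # [0, 2, 1]
```

With a low λ the near-duplicate document 1 is demoted below the diverse document 2; with λ = 1.0 the ranking is pure relevance, [0, 1, 2].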

mnasnet, neural architecture search

MnasNet performs mobile neural architecture search, optimizing both accuracy and latency on target devices via reinforcement learning.

mobile net,depthwise,separable

MobileNet uses depthwise separable convolutions, giving far fewer parameters than standard convolutions. Designed for mobile inference.
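
The parameter savings are easy to verify arithmetically (a sketch; the layer shape is illustrative and bias terms are omitted):

```python
def standard_conv_params(k, c_in, c_out):
    # One k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in  # one k x k filter per input channel
    pointwise = c_in * c_out  # 1x1 convolution to mix channels
    return depthwise + pointwise

# Example: a 3x3 convolution taking 128 channels to 256.
std = standard_conv_params(3, 128, 256)        # 294912
sep = depthwise_separable_params(3, 128, 256)  # 1152 + 32768 = 33920
print(std, sep, round(std / sep, 1))  # roughly 8.7x fewer parameters
```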

mobile,ios,android,on device

Mobile deployment needs small models in mobile formats such as Core ML (iOS) and TFLite (Android). Optimize for battery and memory; run on-device for privacy.

mobilenet architecture, computer vision, model optimization

The MobileNet family of efficient CNNs targets mobile devices, using depthwise separable convolutions to cut parameters and compute.

mobilenetv2, computer vision, model optimization

MobileNetV2 adds inverted residual blocks with linear bottlenecks, improving both efficiency and accuracy over the original MobileNet.

mobilenetv3, computer vision, model optimization

MobileNetV3 uses hardware-aware, NAS-discovered architectures with squeeze-and-excitation blocks and the h-swish activation.

mobility modeling, simulation

Simulate charge-carrier mobility in semiconductor devices.

mobility variation, device physics

Device-to-device fluctuations in charge-carrier mobility.

mocha, audio & speech

Monotonic Chunkwise Attention (MoChA) attends over small fixed-size chunks online, enabling low-latency streaming ASR.

mock generation, code ai

Generate mock objects for testing.

moco (momentum contrast),moco,momentum contrast,self-supervised learning

Contrastive learning using a momentum-updated encoder and a queue-based memory bank of negatives.

modal,serverless,gpu

Modal is a serverless GPU compute platform with per-second billing; Python functions deploy with minimal setup.

modality dropout, multimodal ai

Randomly drop entire modalities during training so the model learns to be robust to missing inputs.
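
A minimal sketch of the idea in plain Python, assuming per-modality feature vectors (the modality names and shapes are illustrative):

```python
import random

def modality_dropout(features, p=0.3, rng=None):
    """Zero out each modality independently with probability p.

    Always keeps at least one modality so the training example
    is never completely empty.
    """
    rng = rng or random.Random()
    kept = {name for name in features if rng.random() >= p}
    if not kept:  # guarantee one surviving modality
        kept = {rng.choice(sorted(features))}
    return {name: (feats if name in kept else [0.0] * len(feats))
            for name, feats in features.items()}

sample = {"audio": [0.2, 0.4], "vision": [1.0, 0.5], "text": [0.7, 0.1]}
print(modality_dropout(sample, p=0.5, rng=random.Random(0)))
```

Downstream layers see zeros for the dropped modality, so the fusion network cannot rely on any single input being present.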

modality hallucination, multimodal ai

Generate features for a missing modality from the modalities that are present.

mode connectivity, theory

Distinct optima in the loss landscape are often connected by paths of low loss.

mode interpolation, model merging

Blend the weights of different optima, e.g. by linear interpolation between checkpoints.
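
For two checkpoints of the same architecture, linear interpolation is one line per tensor (a minimal sketch over flat weight lists; real code would walk a state dict):

```python
def interpolate_weights(w1, w2, t):
    """Elementwise (1 - t) * w1 + t * w2 for 0 <= t <= 1."""
    return [(1 - t) * a + t * b for a, b in zip(w1, w2)]

w_a = [0.0, 2.0, 4.0]  # weights at optimum A
w_b = [1.0, 0.0, 4.0]  # weights at optimum B
print(interpolate_weights(w_a, w_b, 0.5))  # [0.5, 1.0, 4.0]
```

Evaluating the loss at several values of t along this path is the standard probe for (linear) mode connectivity.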

model access control,security

Restrict who can use or modify models.

model artifact management, mlops

Store and version model checkpoints.

model artifact,store,manage

Model artifacts are the stored outputs of training: weights, configuration, and metadata.

model averaging,machine learning

Average the predictions of multiple models to reduce variance and improve accuracy.
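
For probabilistic classifiers this is just an elementwise mean over each model's predicted distribution (a minimal sketch):

```python
def average_predictions(predictions):
    """Elementwise mean over a list of equal-length probability vectors."""
    n = len(predictions)
    return [sum(p[i] for p in predictions) / n
            for i in range(len(predictions[0]))]

# Three models' class probabilities for one example.
preds = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.5, 0.4, 0.1]]
print(average_predictions(preds))  # ≈ [0.6, 0.3, 0.1]
```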

model card, documentation, transparency, evaluation

Model cards are standardized documentation of a model's training data, performance, capabilities, limitations, biases, and intended use. A core responsible-AI practice for model release.

model checking,software engineering

Exhaustively verify system properties.

model compression for edge deployment, edge ai

Reduce model size and compute so models run on phones and other edge/IoT devices.

model compression, model optimization

Model compression reduces the size and computation of neural networks while maintaining performance; common techniques include pruning, quantization, and distillation.

model conversion, model optimization

Model conversion translates trained models between frameworks and formats for deployment.

model discrimination design, doe

Design experiments that best discriminate between competing candidate models.

model distillation for interpretability, explainable ai

Distill a black-box model into an interpretable student model.

model editing,model training

Directly update model weights to fix specific factual errors or behaviors.

model ensemble rl, reinforcement learning advanced

Model ensemble reinforcement learning trains multiple dynamics models to quantify uncertainty and improve decision-making robustness.

model evaluation, evaluation

Model evaluation measures performance across metrics and test sets.

model extraction attack, security, ai safety

Model extraction attacks replicate a model's functionality by repeatedly querying it as a black box, effectively stealing the model.