Model serving is the infrastructure that deploys trained models and handles inference requests at scale in production.

Core functions: load the model, receive requests, preprocess input, run inference, postprocess output, return the response (minimal serve-loop sketch below).

Key properties:
- Low latency: fast responses for real-time applications.
- High throughput: handle many requests per second.
- Scalability: add capacity as demand grows.
- Reliability: degrade and recover gracefully on failures.

Serving frameworks: TorchServe (PyTorch), TF Serving (TensorFlow), Triton Inference Server (NVIDIA, multi-framework), vLLM (specialized for LLMs), Ray Serve.

Deployment patterns:
- REST API: HTTP endpoints, widely compatible.
- gRPC: efficient binary protocol, lower overhead than REST over HTTP/JSON.
- Batch processing: collect requests into batches so the accelerator runs fewer, larger forward passes (batching sketch below).

Architecture components: load balancer, model servers, request queue, caching layer, monitoring.

LLM serving: special considerations include KV caching, continuous batching, and speculative decoding. Dedicated engines: vLLM, TGI (Hugging Face); see the vLLM sketch below.

Scaling strategies: horizontal scaling (more replicas), GPU sharing, multi-model serving.

Monitoring: track latency (p50, p99), throughput, error rate, and GPU utilization (percentile sketch below).

Essential for production AI. The sketches below illustrate the serve loop, dynamic batching, vLLM generation, and percentile tracking.
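A minimal sketch of the serve loop using FastAPI, assuming a hypothetical DummyModel and a /predict route (neither comes from any specific framework): load once at startup, then preprocess, infer, and postprocess per request. Run with `uvicorn app:app`.

```python
# Minimal serve loop: load once, then per request do
# preprocess -> inference -> postprocess -> respond.
# DummyModel and /predict are illustrative placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    score: float

class DummyModel:  # stand-in for a real trained model
    def predict(self, x: list[float]) -> float:
        return sum(x) / max(len(x), 1)

app = FastAPI()
model = DummyModel()  # loaded once at startup, reused across requests

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    x = [v / 255.0 for v in req.features]      # preprocess (toy normalization)
    y = model.predict(x)                       # run inference
    return PredictResponse(score=round(y, 4))  # postprocess + return response
```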
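A sketch of dynamic batching with asyncio, under stated assumptions: `max_batch` and `max_wait_ms` are illustrative knobs, and `model_fn` stands in for a real batched forward pass. Requests that arrive within a short window are grouped into one model call.

```python
# Dynamic batching sketch: hold the first request briefly, fill the
# batch from the queue until it is full or the wait window closes,
# then run one batched model call and fan results back out.
import asyncio

class Batcher:
    def __init__(self, model_fn, max_batch=8, max_wait_ms=10):
        self.model_fn = model_fn          # batched inference function (assumed)
        self.max_batch = max_batch
        self.max_wait = max_wait_ms / 1000
        self.queue: asyncio.Queue = asyncio.Queue()

    async def submit(self, item):
        fut = asyncio.get_running_loop().create_future()
        await self.queue.put((item, fut))
        return await fut                  # resolves when the batch runs

    async def run(self):
        while True:
            item, fut = await self.queue.get()
            batch, futs = [item], [fut]
            deadline = asyncio.get_running_loop().time() + self.max_wait
            # Fill the batch until it is full or the window closes.
            while len(batch) < self.max_batch:
                timeout = deadline - asyncio.get_running_loop().time()
                if timeout <= 0:
                    break
                try:
                    item, fut = await asyncio.wait_for(self.queue.get(), timeout)
                except asyncio.TimeoutError:
                    break
                batch.append(item)
                futs.append(fut)
            for f, out in zip(futs, self.model_fn(batch)):  # one batched call
                f.set_result(out)

async def main():
    batcher = Batcher(lambda xs: [x * 2 for x in xs])  # toy "model"
    asyncio.create_task(batcher.run())
    results = await asyncio.gather(*(batcher.submit(i) for i in range(5)))
    print(results)  # [0, 2, 4, 6, 8] -- served as one batch, not five calls

asyncio.run(main())
```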
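Continuous batching and paged KV caching are handled inside engines like vLLM rather than in user code. The sketch below uses vLLM's offline generation API, assuming vLLM is installed and a GPU is available; the checkpoint name is just an example.

```python
# Sketch of vLLM's offline generation API (assumes `pip install vllm`).
# Continuous batching and KV caching happen inside the engine: token
# steps are batched across requests, not run one request at a time.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # example checkpoint
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = [
    "Model serving is",
    "The main bottleneck in LLM inference is",
]
for out in llm.generate(prompts, params):
    print(out.prompt, "->", out.outputs[0].text)
```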
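A minimal sketch of p50/p99 tracking over a sliding window of recent request latencies; the class and window size are illustrative, and real deployments usually export histograms to a metrics system instead of computing percentiles in-process.

```python
# Nearest-rank percentiles over a bounded window of recent latencies.
# p50 summarizes typical latency; p99 exposes the slow tail that
# averages hide.
from collections import deque

class LatencyTracker:
    def __init__(self, window: int = 1000):
        self.samples: deque = deque(maxlen=window)  # keep only recent requests

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def percentile(self, p: float) -> float:
        ordered = sorted(self.samples)
        idx = min(int(len(ordered) * p / 100), len(ordered) - 1)
        return ordered[idx]

tracker = LatencyTracker()
for ms in [12, 15, 11, 240, 13, 14, 16, 12, 13, 500]:
    tracker.record(ms)
print("p50:", tracker.percentile(50))  # typical latency
print("p99:", tracker.percentile(99))  # tail latency, dominated by outliers
```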