Model Parallelism Strategies

Keywords: model parallelism strategies, distributed training

Model parallelism strategies split large neural networks across multiple GPUs when a model does not fit in a single GPU's memory, enabling training and inference of models with billions to trillions of parameters.

The main parallelism types are: (1) tensor parallelism (TP), which splits individual layers across GPUs (e.g., weight matrices split column-wise or row-wise); (2) pipeline parallelism (PP), which assigns different layers to different GPUs and processes micro-batches in pipeline fashion; (3) expert parallelism (EP), which distributes MoE experts across GPUs; and (4) sequence parallelism (SP), which splits activations along the sequence dimension.

Tensor parallelism splits matrix multiplications across GPUs: each GPU computes a partial result, and a collective such as an all-reduce combines the pieces. Because this communication happens at every layer, TP adds latency and requires fast inter-GPU links (NVLink); it works best within a node (typically 8 GPUs). A single-process sketch of the column and row splits appears below.

Pipeline parallelism assigns contiguous layer ranges to GPUs: GPU 1 processes layers 1-20, GPU 2 processes layers 21-40, and so on. Micro-batching keeps the pipeline full to limit the bubble (idle time); the bubble overhead is approximately (p - 1)/m, where p is the number of pipeline stages and m the number of micro-batches (see the worked numbers below). PP communicates less than TP and works best across nodes.

Data parallelism (DP) replicates the model on each GPU and splits the data batch, all-reducing gradients after the backward pass. It is the simplest form but requires the model to fit on a single GPU. ZeRO (DeepSpeed) partitions optimizer states, gradients, and optionally parameters across the data-parallel GPUs, combining the memory efficiency of model parallelism with the simplicity of data parallelism; PyTorch's FSDP implements the same idea (a sketch follows below).

3D parallelism combines TP (intra-node), PP (inter-node), and DP (across node groups), and is used by Megatron-LM and DeepSpeed to train 100B+ parameter models. Common configurations: (1) 7B model: TP=1 or 2, DP=N; (2) 70B model: TP=8, PP=4, DP=N; (3) 175B+ models: full 3D parallelism. The parallelism degrees must factor the total GPU count, as the last example below checks.

Framework support includes Megatron-LM (NVIDIA), DeepSpeed (Microsoft), FSDP (PyTorch), and Alpa (automatic parallelization).
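The column and row splits used by tensor parallelism can be verified in a single process. The sketch below is illustrative only: tensor slices stand in for per-GPU shards, and torch.cat / sum stand in for the all-gather and all-reduce collectives a real implementation would issue.

```python
# Minimal single-process sketch of tensor parallelism (illustrative
# names and shapes; no real multi-GPU communication is performed).
import torch

torch.manual_seed(0)
batch, d_in, d_out, n_gpus = 4, 8, 16, 2

x = torch.randn(batch, d_in)
W = torch.randn(d_in, d_out)

# Column parallelism: each "GPU" holds a vertical slice of W and
# computes a slice of the output; concatenation plays the role of
# an all-gather across GPUs.
w_cols = W.chunk(n_gpus, dim=1)            # each shard: [d_in, d_out / n_gpus]
partial = [x @ w for w in w_cols]          # per-GPU output slices
y_col = torch.cat(partial, dim=1)          # "all-gather" along columns

# Row parallelism: each "GPU" holds a horizontal slice of W and the
# matching slice of x's features; summation plays the role of an
# all-reduce across GPUs.
w_rows = W.chunk(n_gpus, dim=0)            # each shard: [d_in / n_gpus, d_out]
x_cols = x.chunk(n_gpus, dim=1)            # each shard: [batch, d_in / n_gpus]
y_row = sum(xc @ wr for xc, wr in zip(x_cols, w_rows))  # "all-reduce" (sum)

# Both shardings reproduce the unsharded matmul.
assert torch.allclose(y_col, x @ W, atol=1e-5)
assert torch.allclose(y_row, x @ W, atol=1e-5)
```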
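The (p - 1)/m bubble estimate is easy to evaluate numerically. The helper below is hypothetical, but the arithmetic shows why pipeline schedules need many more micro-batches than stages.

```python
# Hypothetical helper illustrating the (p - 1) / m bubble estimate
# for a GPipe-style pipeline schedule.
def pipeline_bubble_fraction(p: int, m: int) -> float:
    """Idle (bubble) time relative to useful compute, for p pipeline
    stages and m micro-batches."""
    return (p - 1) / m

# With 4 stages, 4 micro-batches leave 75% bubble overhead;
# 32 micro-batches shrink it to under 10%.
for m in (4, 8, 16, 32):
    print(f"p=4, m={m:>2}: bubble ≈ {pipeline_bubble_fraction(4, m):.2%}")
```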
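ZeRO-style sharding is available natively in PyTorch as FSDP, whose default full-sharding strategy corresponds to ZeRO stage 3. Below is a minimal training-step sketch, assuming the script is launched with torchrun (one process per GPU); the model, sizes, and loss are placeholders, not a recommended configuration.

```python
# Sketch of ZeRO-style sharded data parallelism with PyTorch FSDP.
# Assumes launch via torchrun with one process per GPU.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
model = FSDP(model.cuda())   # shards parameters and gradients across ranks

# Optimizer state built over the sharded parameters is itself sharded.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device="cuda")  # placeholder batch
loss = model(x).pow(2).mean()            # placeholder loss
loss.backward()                          # gradients reduce-scattered across ranks
optimizer.step()                         # each rank updates only its own shard
```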
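For the 3D configurations above, the parallelism degrees must factor the total GPU count: world_size = TP × PP × DP. A hypothetical helper makes that check explicit, using numbers that mirror the 70B configuration.

```python
# Illustrative check that parallelism degrees factor the GPU count.
def data_parallel_degree(world_size: int, tp: int, pp: int) -> int:
    """DP degree implied by a TP x PP layout over world_size GPUs."""
    assert world_size % (tp * pp) == 0, "TP * PP must divide the GPU count"
    return world_size // (tp * pp)

# 256 GPUs with TP=8 (intra-node) and PP=4 (inter-node) leave DP=8.
print(data_parallel_degree(256, tp=8, pp=4))  # -> 8
```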
