Tiancheng Chen, Ales Kubicek, Langwen Huang, and Torsten Hoefler:
CrossPipe: Towards Optimal Pipeline Schedules for Cross-Datacenter Training
(In Proceedings of the 2025 USENIX Annual Technical Conference, presented in Boston, MA, USA, pages 1089--1108, USENIX Association, ISBN: 978-1-939133-48-9, Jul. 2025)
Training large language models (LLMs) now requires resources that exceed a single datacenter, making cross-datacenter strategies increasingly crucial. We present CrossPipe, a framework designed to optimize model training across geographically distributed datacenters by explicitly modeling and mitigating the impact of network latency and limited bandwidth. It enables unified analysis and optimization incorporating both pipeline parallelism (PP) and opportunities for overlapping data parallelism (DP) communication. CrossPipe generates optimized pipeline schedules using either solver-based optimal or fast near-optimal greedy algorithms, built upon a flexible execution engine that separates scheduling logic from communication details. Our evaluation shows that CrossPipe reduces training time by up to 33.6% compared to traditional pipeline schedules under identical memory constraints. When memory constraints are relaxed, CrossPipe maintains strong performance despite communication delays, approaching the efficiency of idealized schedules without delays. CrossPipe offers improved scalability and resource utilization, particularly in environments with high network latency or limited bandwidth.
@inproceedings{tiancheng2025crosspipe, author={Tiancheng Chen and Ales Kubicek and Langwen Huang and Torsten Hoefler}, title={{CrossPipe: Towards Optimal Pipeline Schedules for Cross-Datacenter Training}}, year={2025}, month={Jul.}, pages={1089--1108}, booktitle={Proceedings of the 2025 USENIX Annual Technical Conference}, location={Boston, MA, USA}, publisher={USENIX Association}, isbn={978-1-939133-48-9}, source={http://www.unixer.de/~htor/publications/}, }