Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
(ACM Comput. Surv., Vol. 52, No. 4, pages 65:1--65:43, ACM, ISSN: 0360-0300, Aug. 2019)
Abstract
Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. Specifically, we present trends in DNN architectures and the resulting implications on parallelization strategies. We discuss the different types of concurrency in DNNs; synchronous and asynchronous stochastic gradient descent; distributed system architectures; communication schemes; and performance modeling. Based on these approaches, we extrapolate potential directions for parallelism in deep learning.
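To make the survey's central notion of synchronous data-parallel SGD concrete, the following is a minimal, self-contained sketch (not code from the paper): each simulated worker computes a gradient on its own shard of the mini-batch, the gradients are averaged (standing in for an allreduce), and all replicas apply the same update. The linear model, learning rate, and worker count are illustrative assumptions.

import numpy as np

def grad(w, X, y):
    # Gradient of mean squared error for a linear model y approx X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
y = X @ rng.normal(size=10) + 0.01 * rng.normal(size=256)

n_workers, lr = 4, 0.1
w = np.zeros(10)  # shared model replica, kept identical on all workers
for step in range(100):
    # Each worker sees one shard of the global mini-batch.
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    local_grads = [grad(w, Xs, ys) for Xs, ys in shards]
    # "Allreduce" step: average gradients so every replica applies the same update.
    g = np.mean(local_grads, axis=0)
    w -= lr * g

In an asynchronous variant, by contrast, each worker would apply its local gradient to a parameter server without waiting for the others, trading gradient staleness for reduced synchronization cost.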
ACM Stats
http://doi.acm.org/10.1145/3320060
BibTeX
@article{distdl-preprint,
  author    = {Tal Ben-Nun and Torsten Hoefler},
  title     = {{Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis}},
  journal   = {ACM Comput. Surv.},
  year      = {2019},
  month     = {Aug.},
  volume    = {52},
  number    = {4},
  pages     = {65:1--65:43},
  publisher = {ACM},
  issn      = {0360-0300},
  source    = {http://www.unixer.de/~htor/publications/},
}