Publications of Torsten Hoefler
Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

T. Ben-Nun, T. Hoefler:

 Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis

(CoRR, vol. abs/1802.09941, Feb. 2018)


Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. Specifically, we present trends in DNN architectures and the resulting implications on parallelization strategies. We discuss the different types of concurrency in DNNs; synchronous and asynchronous stochastic gradient descent; distributed system architectures; communication schemes; and performance modeling. Based on these approaches, we extrapolate potential directions for parallelism in deep learning.
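The abstract contrasts synchronous and asynchronous stochastic gradient descent across distributed workers. As a rough illustration (a minimal sketch, not the paper's code, with hypothetical data and parameter choices), synchronous data-parallel SGD has each worker compute a gradient on its own data shard, averages the gradients (the effect of an allreduce), and applies the same update on every worker:

```python
# Hedged sketch of synchronous data-parallel SGD: two simulated workers,
# each holding a shard of data for the toy model y = w * x with w_true = 3.
data = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
shards = [data[0:2], data[2:4]]  # one shard per worker

def local_grad(w, shard):
    # Mean gradient of the squared error (w*x - y)^2 over one worker's shard.
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

w, lr = 0.0, 0.05
for _ in range(300):
    # Averaging the per-worker gradients emulates an allreduce; every
    # worker then applies the identical update, keeping replicas in sync.
    g = sum(local_grad(w, s) for s in shards) / len(shards)
    w -= lr * g

print(round(w, 3))  # converges toward 3.0
```

In the asynchronous variants surveyed in the paper, workers would instead push gradients to a parameter store without waiting for each other, trading gradient staleness for reduced synchronization cost.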





@article{bennun2018demystifying,
  author={T. Ben-Nun and T. Hoefler},
  title={{Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis}},
  journal={CoRR},
  volume={abs/1802.09941},
  year={2018},
}

© Torsten Hoefler