Publications of Torsten Hoefler
Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Y. Oyama, T. Ben-Nun, T. Hoefler, S. Matsuoka:

Accelerating Deep Learning Frameworks with Micro-batches

(presented in Belfast, UK, IEEE, Sep. 2018; to appear in the IEEE International Conference on Cluster Computing (Cluster'18))

Abstract

cuDNN is a low-level library that provides GPU kernels frequently used in deep learning. In particular, cuDNN implements several equivalent convolution algorithms whose performance and memory footprint may vary considerably depending on the layer dimensions. When cuDNN selects an algorithm automatically, the decision is made on a per-layer basis, so it often resorts to slower algorithms that fit the workspace size constraints. We present µ-cuDNN, a thin wrapper library for cuDNN that transparently divides layers' mini-batch computation into multiple micro-batches, both on a single GPU and on a heterogeneous set of GPUs. Based on dynamic programming and integer linear programming (ILP), µ-cuDNN enables faster algorithms by decreasing the workspace requirements. At the same time, µ-cuDNN does not decrease the accuracy of the results, effectively decoupling statistical efficiency from hardware efficiency. We demonstrate the effectiveness of µ-cuDNN for the Caffe and TensorFlow frameworks, achieving speedups of 1.63x for AlexNet and 1.21x for ResNet-18 on the P100-SXM2 GPU. We also show that µ-cuDNN achieves speedups of up to 4.54x, and of 1.60x on average, for DeepBench's convolutional layers on the V100-SXM2 GPU. In a distributed setting, µ-cuDNN attains a 2.20x speedup over a single GPU when training ResNet-18 on a heterogeneous GPU cluster. These results indicate that micro-batches can seamlessly increase the performance of deep learning workloads while maintaining the same overall memory footprint.


BibTeX

@inproceedings{ucudnn,
  author={Y. Oyama and T. Ben-Nun and T. Hoefler and S. Matsuoka},
  title={{Accelerating Deep Learning Frameworks with Micro-batches}},
  booktitle={Proceedings of the IEEE International Conference on Cluster Computing (Cluster'18)},
  year={2018},
  month={Sep.},
  location={Belfast, UK},
  publisher={IEEE},
  note={To appear in IEEE International Conference on Cluster Computing (Cluster'18)},
  source={http://www.unixer.de/~htor/publications/},
}

© Torsten Hoefler