Many libraries in the HPC field use sophisticated algorithms with clear theoretical scalability expectations. However, hardware constraints or programming bugs may sometimes render these expectations inaccurate or even plainly wrong. While algorithm and performance engineers have already been advocating the systematic combination of analytical performance models with practical measurements for a very long time, we go one step further and show how this comparison can become part of automated testing procedures. The most important applications of our method include initial validation, regression testing, and benchmarking to compare implementation and platform alternatives. Advancing the concept of performance assertions, we verify asymptotic scaling trends rather than precise analytical expressions, relieving the developer from the burden of having to specify and maintain very fine-grained and potentially non-portable expectations. In this way, scalability validation can be continuously applied throughout the whole development cycle with very little effort. Using MPI and parallel sorting algorithms as examples, we show how our method can help uncover non-obvious limitations of both libraries and underlying platforms.
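The core idea of the abstract — asserting an asymptotic scaling trend rather than an exact analytical runtime — can be illustrated with a small sketch. This is not the authors' tool; it is a hypothetical example that counts comparisons in insertion sort as a deterministic cost proxy, fits the empirical exponent on a log-log scale, and fails an assertion if the observed trend deviates from the expected one. All function names here (`insertion_sort_comparisons`, `loglog_slope`, `assert_scaling`) are made up for illustration.

```python
import math

def insertion_sort_comparisons(xs):
    """Count element comparisons made by insertion sort (a cost proxy)."""
    xs = list(xs)
    comps = 0
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0:
            comps += 1
            if xs[j] > key:
                xs[j + 1] = xs[j]
                j -= 1
            else:
                break
        xs[j + 1] = key
    return comps

def loglog_slope(ns, costs):
    """Least-squares slope of log(cost) vs. log(n): the empirical exponent."""
    lx = [math.log(n) for n in ns]
    ly = [math.log(c) for c in costs]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    den = sum((x - mx) ** 2 for x in lx)
    return num / den

def assert_scaling(expected_exponent, ns, costs, tol=0.2):
    """Scalability assertion: fail if the observed trend deviates."""
    slope = loglog_slope(ns, costs)
    assert abs(slope - expected_exponent) <= tol, (
        f"expected ~n^{expected_exponent}, observed ~n^{slope:.2f}")
    return slope

if __name__ == "__main__":
    # Reversed input is the worst case for insertion sort: Theta(n^2) comparisons.
    ns = [100, 200, 400, 800]
    costs = [insertion_sort_comparisons(range(n, 0, -1)) for n in ns]
    slope = assert_scaling(2.0, ns, costs)
    print(f"observed exponent: {slope:.2f}")
```

Because the check targets only the exponent (with a tolerance), it tolerates constant-factor differences across platforms, which is the portability argument the abstract makes for trend-based rather than fine-grained expectations.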
@article{shudler2019engineering,
  author    = {Sergei Shudler and Yannick Berens and Alexandru Calotoiu and Torsten Hoefler and Alexandre Strube and Felix Wolf},
  title     = {{Engineering Algorithms for Scalability through Continuous Validation of Performance Expectations}},
  journal   = {IEEE Transactions on Parallel and Distributed Systems},
  year      = {2019},
  month     = aug,
  volume    = {30},
  number    = {8},
  pages     = {1768--1785},
  publisher = {IEEE},
  source    = {http://www.unixer.de/~htor/publications/},
}