Scalable and Efficient AI: From Supercomputers to Smartphones
Keynote talk at the 2023 Federated Computing Research Conference (presented in Orlando, FL, USA, Jun. 2023)
Abstract
Billion-parameter artificial intelligence models have demonstrated exceptional performance in a large variety of tasks ranging from natural language processing, computer vision, and image generation to mathematical reasoning and algorithm generation. These models usually require large parallel computing systems, often called 'AI Supercomputers', to be trained initially. We will outline several techniques, ranging from data ingestion and parallelization to accelerator optimization, that improve the efficiency of such training systems. Yet, training large models accounts for only a small fraction of practical artificial intelligence computations. Efficient inference is even more challenging - models with hundreds of billions of parameters are expensive to use. We continue by discussing model compression and optimization techniques, such as fine-grained sparsity and quantization, that reduce model size and significantly improve efficiency during inference. These techniques may eventually enable inference with powerful models on hand-held devices.
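The two compression techniques named in the abstract can be illustrated concretely. The sketch below, assuming a single float32 weight matrix as a stand-in for one layer of a large model, applies magnitude-based fine-grained (unstructured) sparsity and symmetric int8 post-training quantization; the function names and parameters are illustrative, not from the talk.

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a large model.
rng = np.random.default_rng(0)
weights = rng.normal(size=(1024, 1024)).astype(np.float32)

def magnitude_prune(w, sparsity=0.9):
    """Fine-grained (unstructured) sparsity: zero the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0).astype(w.dtype)

def quantize_int8(w):
    """Symmetric post-training quantization of float32 weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

sparse_w = magnitude_prune(weights, sparsity=0.9)
q, scale = quantize_int8(weights)
dequant = q.astype(np.float32) * scale

print(weights.nbytes / q.nbytes)                   # 4.0: int8 storage is 4x smaller
print(np.count_nonzero(sparse_w) / sparse_w.size)  # ~0.10: only 10% of weights remain
print(np.abs(weights - dequant).max() < scale)     # error bounded by one quantization step
```

In practice the two techniques compose: a pruned-and-quantized layer stores only the nonzero int8 values plus index metadata, which is where the large inference-time savings come from.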
Documents
download slides:
Recorded talk (best effort)
BibTeX
@misc{hoefler-fcrc,
  author={Torsten Hoefler},
  title={{Scalable and Efficient AI: From Supercomputers to Smartphones}},
  year={2023},
  month={Jun.},
  location={Orlando, FL, USA},
  source={http://www.unixer.de/~htor/publications/},
}