Publications of Torsten Hoefler
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, Dan Alistarh:

GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers

(In The Eleventh International Conference on Learning Representations, May 2023)

Abstract

Generative Pre-trained Transformer (GPT) models set themselves apart through breakthrough performance across complex language modelling tasks, but also by their extremely high computational and storage costs. Specifically, due to their massive size, even inference for large, highly accurate GPT models may require multiple performant GPUs, which limits the usability of such models. While there is emerging work on relieving this pressure via model compression, the applicability and performance of existing compression techniques are limited by the scale and complexity of GPT models. In this paper, we address this challenge and propose GPTQ, a new one-shot weight quantization method based on approximate second-order information, that is both highly accurate and highly efficient. Specifically, GPTQ can quantize GPT models with 175 billion parameters in approximately four GPU hours, reducing the bitwidth down to 3 or 4 bits per weight, with negligible accuracy degradation relative to the uncompressed baseline. Our method more than doubles the compression gains relative to previously proposed one-shot quantization methods while preserving accuracy, allowing us for the first time to execute a 175-billion-parameter model on a single GPU. We show experimentally that these improvements can be leveraged for end-to-end inference speedups over FP16 of around 2x when using high-end GPUs (NVIDIA A100) and 4x when using more cost-effective ones (NVIDIA A6000).
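
The core idea can be illustrated with a short sketch: quantize a layer's weight matrix column by column and, after each column, use the inverse of a Hessian proxy built from a small calibration set (the "approximate second-order information" mentioned above) to adjust the not-yet-quantized columns so they compensate the rounding error. The NumPy sketch below is illustrative only and assumes a simple symmetric per-row grid; names such as gptq_quantize_layer are hypothetical, and the authors' released implementation additionally uses Cholesky factorizations, weight grouping, and lazy batched updates that are omitted here. As a back-of-envelope check of the memory claim: 175 billion weights occupy roughly 350 GB in FP16 but only about 88 GB at 4 bits or 66 GB at 3 bits, which is what makes single-GPU execution feasible.

import numpy as np

def gptq_quantize_layer(W, H, bits=4, damp=0.01):
    """Illustrative sketch of GPTQ-style one-shot weight quantization for one linear layer.

    W : (rows, cols) weight matrix of the layer.
    H : (cols, cols) Hessian proxy accumulated from calibration data, e.g.
        H = 2 * X @ X.T over the layer's inputs X (approximate second-order information).
    Columns are quantized one at a time; each column's rounding error is propagated
    to the remaining columns via the inverse Hessian so they can compensate for it.
    """
    W = W.astype(np.float64).copy()
    cols = W.shape[1]

    # Dampen the Hessian for numerical stability, then invert it once.
    H = H + damp * np.mean(np.diag(H)) * np.eye(cols)
    Hinv = np.linalg.inv(H)

    # Simple symmetric per-row quantization grid (the paper uses finer per-group scales).
    qmax = 2 ** (bits - 1) - 1
    scale = np.maximum(np.max(np.abs(W), axis=1) / qmax, 1e-12)

    Q = np.zeros_like(W)
    for j in range(cols):
        w = W[:, j]
        q = np.clip(np.round(w / scale), -qmax, qmax) * scale   # quantize column j
        Q[:, j] = q
        err = (w - q) / Hinv[j, j]                               # error weighted by second-order info
        W[:, j + 1:] -= np.outer(err, Hinv[j, j + 1:])           # compensate remaining columns
    return Q

For a linear layer, the Hessian proxy can be accumulated as H = 2 * X @ X.T over calibration inputs X of shape (in_features, n_samples); the returned Q then stands in for W at inference time.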


BibTeX

@inproceedings{frantar2023gptq,
  author={Elias Frantar and Saleh Ashkboos and Torsten Hoefler and Dan Alistarh},
  title={{GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers}},
  year={2023},
  month={May},
  booktitle={The Eleventh International Conference on Learning Representations},
  source={http://www.unixer.de/~htor/publications/},
}

