LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale (2022)

[Submitted on 15 Aug 2022 (v1), last revised 10 Nov 2022 (this version, v2)]

Abstract: Large language models have been widely adopted but require significant GPU
memory for inference. We develop a procedure for Int8 matrix multiplication for
the feed-forward and attention projection layers in transformers, which cuts the
memory needed for inference in half while retaining full-precision performance.
With our method, a 175B-parameter 16/32-bit checkpoint can be loaded, converted
to Int8, and used immediately without performance degradation. This is made
possible by understanding and working around properties of highly systematic
emergent features in transformer language models that dominate attention and
transformer predictive performance. To cope with these features, we develop a
two-part quantization procedure, LLM.int8(). We first use vector-wise
quantization, with separate normalization constants for each inner product in
the matrix multiplication, to quantize most of the features. For the
emergent outliers, however, we include a new mixed-precision decomposition scheme
that isolates the outlier feature dimensions into a 16-bit matrix
multiplication, while more than 99.9% of values are still multiplied in 8-bit.
Using LLM.int8(), we show empirically that it is possible to perform inference in
LLMs with up to 175B parameters without any performance degradation. This
result makes such models much more accessible, for example making it possible
to use OPT-175B/BLOOM on a single server with consumer GPUs. We open-source our
software.
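
The two-part procedure can be illustrated with a short sketch. The NumPy code below is a minimal, illustrative reconstruction of the idea described in the abstract (vector-wise absmax quantization plus a mixed-precision decomposition for outlier feature dimensions), not the paper's actual implementation; the function names and the outlier threshold of 6.0 are assumptions chosen for illustration, and the open-sourced software provides the real GPU kernels.

import numpy as np

def vectorwise_quantize(A, axis):
    # Absmax quantization to int8, with one normalization constant per row
    # (axis=1) or per column (axis=0) of A.
    scale = np.max(np.abs(A), axis=axis, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # guard against all-zero vectors
    q = np.clip(np.round(A / scale), -127, 127).astype(np.int8)
    return q, scale

def llm_int8_matmul(X, W, threshold=6.0):
    # Mixed-precision product X @ W: the few "outlier" feature dimensions
    # (columns of X whose magnitude reaches the threshold) go through a
    # 16-bit matmul; everything else is multiplied in int8 and dequantized.
    outlier = np.where(np.abs(X).max(axis=0) >= threshold)[0]
    regular = np.setdiff1d(np.arange(X.shape[1]), outlier)

    # 16-bit path for the outlier dimensions.
    out = (X[:, outlier].astype(np.float16)
           @ W[outlier, :].astype(np.float16)).astype(np.float32)

    # 8-bit path: row-wise scales for X, column-wise scales for W,
    # int32 accumulation, then per-(row, column) dequantization.
    Xq, sx = vectorwise_quantize(X[:, regular], axis=1)   # sx: (rows, 1)
    Wq, sw = vectorwise_quantize(W[regular, :], axis=0)   # sw: (1, cols)
    out += (Xq.astype(np.int32) @ Wq.astype(np.int32)).astype(np.float32) * sx * sw
    return out

# Example: the result approximates the fp32 product up to quantization error.
X = np.random.randn(4, 16).astype(np.float32)
W = np.random.randn(16, 8).astype(np.float32)
print(np.abs(llm_int8_matmul(X, W) - X @ W).max())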

Submission history

From: Tim Dettmers


[v1] Mon, 15 Aug 2022 17:08:50 UTC (161 KB)
[v2] Thu, 10 Nov 2022 18:14:31 UTC (163 KB)
