[Submitted on 22 May 2023]


Abstract: Transformers have revolutionized almost all natural language processing (NLP)
tasks but suffer from memory and computational complexity that scales
quadratically with sequence length. In contrast, recurrent neural networks
(RNNs) exhibit linear scaling in memory and computational requirements but
struggle to match the performance of Transformers due to limitations in
parallelization and scalability. We propose a novel model architecture,
Receptance Weighted Key Value (RWKV), that combines the efficient
parallelizable training of Transformers with the efficient inference of RNNs.
Our approach leverages a linear attention mechanism and allows us to formulate
the model as either a Transformer or an RNN, which parallelizes computations
during training and maintains constant computational and memory complexity
during inference, leading to the first non-transformer architecture to be
scaled to tens of billions of parameters. Our experiments reveal that RWKV
performs on par with similarly sized Transformers, suggesting that future work
can leverage this architecture to create more efficient models. This work
presents a significant step towards reconciling the trade-offs between
computational efficiency and model performance in sequence processing tasks.
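
The dual formulation described above can be illustrated with a toy example. The following is a minimal NumPy sketch, not the actual RWKV equations (which use learned channel-wise decay and a bonus term for the current token), showing how an exponentially decayed linear-attention-style operator can be computed either recurrently with constant-size state (RNN mode) or all at once over the sequence (parallel, Transformer-like mode). All names, shapes, and the scalar decay are illustrative assumptions.

```python
# Sketch only: a simplified exponentially decayed linear-attention operator,
# computed two equivalent ways, to illustrate the RNN/Transformer duality the
# abstract describes. Not the authors' exact RWKV formulation.
import numpy as np

def recurrent_mode(k, v, w):
    """RNN mode: one step at a time, constant-size state (num/den accumulators)."""
    T, d = v.shape
    num = np.zeros(d)
    den = 0.0
    out = np.zeros_like(v)
    decay = np.exp(-w)  # per-step exponential decay of past contributions
    for t in range(T):
        num = decay * num + np.exp(k[t]) * v[t]
        den = decay * den + np.exp(k[t])
        out[t] = num / den
    return out

def parallel_mode(k, v, w):
    """Parallel mode: the same outputs via a full T x T causal weight matrix,
    so the whole sequence can be processed at once during training."""
    T, _ = v.shape
    idx = np.arange(T)
    # weight of position i in output t: exp(-(t - i) * w + k_i), for i <= t
    logits = -(idx[:, None] - idx[None, :]) * w + k[None, :]
    mask = idx[None, :] <= idx[:, None]
    weights = np.where(mask, np.exp(logits), 0.0)
    return weights @ v / weights.sum(axis=1, keepdims=True)

# Both modes produce identical outputs on random inputs.
T, d = 8, 4
rng = np.random.default_rng(0)
k, v = rng.normal(size=T), rng.normal(size=(T, d))
assert np.allclose(recurrent_mode(k, v, 0.5), parallel_mode(k, v, 0.5))
```

The recurrent form carries only a fixed-size state between steps, which is what gives constant per-token memory and compute at inference time, while the parallel form exposes the same computation as a matrix product suitable for parallel training.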

Submission history

From: Quentin Anthony



[v1] Mon, 22 May 2023 13:57:41 UTC (5,484 KB)
