this post was submitted on 18 Jul 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


In this work, we propose Retentive Network (RetNet) as a foundation architecture for large language models, simultaneously achieving training parallelism, low-cost inference, and good performance. We theoretically derive the connection between recurrence and attention. Then we propose the retention mechanism for sequence modeling, which supports three computation paradigms, i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel representation allows for training parallelism. The recurrent representation enables low-cost O(1) inference, which improves decoding throughput, latency, and GPU memory usage without sacrificing performance. The chunkwise recurrent representation facilitates efficient long-sequence modeling with linear complexity, where each chunk is encoded in parallel while the chunks are summarized recurrently. Experimental results on language modeling show that RetNet achieves favorable scaling results, parallel training, low-cost deployment, and efficient inference. These intriguing properties make RetNet a strong successor to the Transformer for large language models. Code will be available at this https URL.
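To make the paradigms the abstract mentions a bit more concrete, here is a minimal NumPy sketch of the parallel and recurrent forms of retention. It is a toy under simplifying assumptions (single head, no xPos-style rotation, no group normalization, made-up dimensions), not the authors' released implementation:

```python
# Toy sketch of retention (simplified: single head, no rotation or normalization).
import numpy as np

def retention_parallel(Q, K, V, gamma):
    """Parallel form used for training: (Q K^T * D) V,
    where D[n, m] = gamma^(n - m) for n >= m and 0 otherwise."""
    n = Q.shape[0]
    idx = np.arange(n)
    D = np.where(idx[:, None] >= idx[None, :],
                 gamma ** (idx[:, None] - idx[None, :]), 0.0)
    return (Q @ K.T * D) @ V

def retention_recurrent(Q, K, V, gamma):
    """Recurrent form used for decoding: S_n = gamma * S_{n-1} + k_n^T v_n,
    output_n = q_n S_n. The state S has constant size, hence O(1) per token."""
    S = np.zeros((K.shape[1], V.shape[1]))
    out = []
    for q, k, v in zip(Q, K, V):
        S = gamma * S + np.outer(k, v)
        out.append(q @ S)
    return np.stack(out)

# Both forms produce the same output; only the cost profile differs.
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 8, 4))
assert np.allclose(retention_parallel(Q, K, V, 0.9),
                   retention_recurrent(Q, K, V, 0.9))
```

The chunkwise form described in the paper mixes the two: tokens within a chunk use the parallel path, while a small recurrent state carries information across chunks, which is what gives the claimed linear complexity on long sequences.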

[–] Kerfuffle 2 points 1 year ago

It's about time we started looking into alternatives to the transformer model.

People have been looking into alternatives. If you read the paper, you can see that they compare their approach to a bunch of different alternatives/modifications. Naturally, they claim theirs comes out looking very favorable, but we'll have to wait and see whether the models/code they release actually perform as well as they're saying, and whether there are non-obvious downsides.

It's not an easy thing to get right.