this post was submitted on 02 Aug 2023
14 points (93.8% liked)

LocalLLaMA

2292 readers

Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 2 years ago

I've been using airoboros-l2-70b for writing fiction, and while overall I'd describe the results as excellent and better than any llama1 model I've used, it doesn't seem to be living up to the promise of 4k token sequence length.

Around 2,500 tokens, output quality degrades rapidly: it either starts repeating previous text verbatim, or becomes incoherent (grammar, punctuation, and capitalization disappear, leaving a salad of vaguely related words).
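A cheap way to catch the verbatim-repetition failure mode automatically is to check whether the tail of the generation already occurred earlier in the text. This is just a sketch of a hypothetical helper (not part of kobold.cpp), but it's enough to flag loop-y output in a script:

```python
def repeats_verbatim(text: str, tail_len: int = 64) -> bool:
    """Return True if the last `tail_len` characters already
    occurred earlier in the text (a sign of loop-y output)."""
    if len(text) < 2 * tail_len:
        return False
    tail = text[-tail_len:]
    return tail in text[:-tail_len]
```

You could run this on each chunk as it streams in and stop generation (or bump repetition penalty) once it trips.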

Any other experiences with llama2 and long context? Does the base model work better? Are other fine-tunes behaving similarly? I'll try myself eventually, but the 70b models are chunky downloads, and experimentation takes a while at 1 t/s.

(I'm using GGML Q4_K_M on kobold.cpp, with RoPE scaling off, as you're supposed to with llama2.)
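For anyone trying to reproduce this setup, the launch looks roughly like the following. This is a sketch: the model filename is illustrative, and the flag names assume a recent koboldcpp build, so check `python koboldcpp.py --help` for your version:

```shell
# Run the 70B q4_K_M GGML quant with the full 4k context window.
# Llama 2 was trained at 4096 tokens, so no RoPE scaling is applied.
python koboldcpp.py --model airoboros-l2-70b.ggmlv3.q4_K_M.bin \
    --contextsize 4096
```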

[–] [email protected] 3 points 1 year ago (7 children)

No experience, but just adding that long-context models have a tendency to 'forget' what's in the middle of the text. Worth noting if you work on long texts, I assume. I can't remember the paper, though. There are so many...

[–] flamdragparadiddle 4 points 1 year ago (6 children)

Lost in the middle: https://arxiv.org/abs/2307.03172

This happens for all models, not just Llama, and it's really frustrating to deal with.
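The effect from that paper is easy to probe yourself: bury a known fact at different depths in a long filler context and see where retrieval fails. A minimal harness might look like this (the actual model call is left as a comment, since it depends on your backend; the filler and "magic number" are made up for illustration):

```python
def build_probe(filler_lines, needle, depth):
    """Insert `needle` at a relative depth (0.0 = start, 1.0 = end)
    of the filler context, then ask the model to retrieve it."""
    lines = list(filler_lines)
    pos = int(depth * len(lines))
    lines.insert(pos, needle)
    context = "\n".join(lines)
    return context + "\n\nQuestion: repeat the line containing the magic number."

filler = [f"This is distractor sentence number {i}." for i in range(200)]
needle = "The magic number is 7481."

for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    prompt = build_probe(filler, needle, depth)
    # Send `prompt` to your model and score whether the answer
    # contains "7481". "Lost in the middle" predicts the worst
    # accuracy near depth 0.5.
    print(depth, needle in prompt)  # sanity check: needle is present
```

Plotting accuracy against depth should reproduce the paper's U-shaped curve if your model is affected.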

[–] [email protected] 4 points 1 year ago (4 children)

But is that a bug or a feature? I think it's plausible that relevant information most likely sits either at the beginning of a document or in the previous few lines, so that's where attention should be focused.

Like when you get an assignment, the important instructions are at the beginning and not somewhere in the middle. And when writing a document or a book, the most important thing is your current sentence fits in with that paragraph. At that point you don't worry about remembering exactly what the hobbits did back in the Shire.

I remember reading some criticism of that paper, but I can't comment on the technical aspects.

[–] noneabove1182 3 points 1 year ago

You raise an interesting point, though: most training examples likely follow exactly the pattern you suggest. There would have to be a large amount of training data specifically focused on middle content, and there probably just isn't enough of it in the dataset.
