Lost in the middle: https://arxiv.org/abs/2307.03172
Happens for all models, not just Llama, and it is really frustrating to deal with.
We call it "glopping"
For me, FFVII wins. But only because I played through the remake recently, rediscovered how much I love it, and then played the original again.
To Zanarkand is still one of my all time favourites
In my application (summarising excerpts from several papers) it is a bug. I had assumed the context would be given equal weight throughout, but the distribution of information in the generated summaries suggests it follows the lost-in-the-middle shape. This is most evident when the early chunks of text say something contradicted by the middle. I'd expect the models to at least mention the contradiction, but it hasn't come up in any of the summaries I've looked at.
I can see what you mean: when generating text you need to pay most attention to what you just wrote, but you also don't want to claim the hobbits started out in Mordor. I have no idea how to mitigate it, other than making the context short enough that it is all 'remembered'.
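One way to act on that "keep the context short" idea is map-reduce style summarisation: split the text into chunks small enough that nothing sits deep in the middle of the context, summarise each chunk on its own, then summarise the summaries. A minimal sketch, where `call_model` is a hypothetical stand-in for whatever LLM API you're using (not a real library call):

```python
def chunk_text(text, max_chars=2000):
    # Split into chunks no longer than max_chars, breaking on paragraph
    # boundaries so each chunk stays coherent.
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def summarise(text, call_model, max_chars=2000):
    # Map step: summarise each short chunk independently, so no passage
    # is stranded in the middle of a long context window.
    partial = [call_model(f"Summarise:\n\n{c}")
               for c in chunk_text(text, max_chars)]
    # Reduce step: combine the partial summaries; recurse if the
    # combined text is itself still too long for one pass.
    combined = "\n\n".join(partial)
    if len(combined) > max_chars:
        return summarise(combined, call_model, max_chars)
    return call_model(f"Combine these summaries into one:\n\n{combined}")
```

The trade-off is that cross-chunk contradictions (like the one described above) can only surface in the reduce step, so the chunk summaries need to preserve claims verbatim enough for the final pass to notice the conflict.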
If you remember where you read some criticism, I'd be very grateful for a link. That paper is doing a lot of heavy lifting in how I understand what I'm seeing, so it would be good to know where the holes in it are.