flamdragparadiddle

joined 2 years ago
[–] flamdragparadiddle 2 points 1 year ago (2 children)

In my application (summarising excerpts from several papers) it is a bug. I had assumed the context would be given equal weight throughout, but the distribution of information in the generated summaries suggests it follows the 'lost in the middle' shape. This is most evident when the early chunks of text say something that is contradicted by the middle. I'd expect the models to at least mention the contradiction, but it hasn't come up in any of the summaries I've looked at.

I can see what you mean: when generating text you need to pay most attention to what you just wrote, but you also don't want to claim the hobbits started out in Mordor. I have no idea how to mitigate it, other than making the context short enough that all of it is 'remembered'.
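One way I've thought about doing that shortening is a map-reduce style pass: summarise each excerpt on its own so nothing sits in the middle of a long prompt, then summarise the summaries. Very rough sketch of what I mean below; `llm_complete` is just a stand-in for whatever completion call you're using, not a real API.

```python
def llm_complete(prompt: str) -> str:
    """Stand-in for whatever completion API is actually in use."""
    raise NotImplementedError

def summarise_excerpts(excerpts: list[str]) -> str:
    # Map: summarise each excerpt separately, so every call has a short
    # context and nothing gets "lost in the middle".
    partial = [
        llm_complete(f"Summarise the key claims in this excerpt:\n\n{text}")
        for text in excerpts
    ]
    # Reduce: combine the short summaries, asking explicitly about
    # contradictions so they aren't glossed over.
    combined = "\n\n".join(partial)
    return llm_complete(
        "Combine these summaries into one, and point out any claims "
        "that contradict each other:\n\n" + combined
    )
```

No idea yet whether the contradictions survive the reduce step, but at least each call stays well inside the window.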

If you remember where you read some criticism, I'd be very grateful for a link. That paper is doing a lot of heavy lifting in how I understand what I'm seeing, so it would be good to know where the holes in it are.

[–] flamdragparadiddle 4 points 1 year ago (6 children)

Lost in the middle: https://arxiv.org/abs/2307.03172

Happens for all models, not just Llama, and it is really frustrating to deal with.

[–] flamdragparadiddle 3 points 1 year ago

We call it "glopping"


Gorilla is an LLM that can learn to use APIs, and I'd like to try getting it to use some that I work with.

There's a GGML here and the original repo is here. They have instructions for adding an API, but I don't really understand them, at least not well enough to add a generic one.

It looks really good though, which is why I'm excited about it! I think it should be possible to use generic APIs like this, if I understand it correctly.
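If I'm reading the repo right, adding an API mostly comes down to describing each endpoint in a structured entry that the model is then trained or retrieved against. The snippet below is only my guess at the general shape, for illustration; the field names are made up, not Gorilla's actual schema, so check the repo's instructions for the real format.

```python
# Hypothetical sketch of an API description entry.
# Field names and values are illustrative guesses, not Gorilla's real schema.
api_entry = {
    "api_name": "my_internal_search",
    "api_call": "client.search(query: str, limit: int = 10)",
    "functionality": "Full-text search over an internal document store.",
    "example_code": "results = client.search('quarterly report', limit=5)",
}
```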

[–] flamdragparadiddle 1 points 2 years ago

For me, FFVII wins. But only because I played through the remake recently and rediscovered how much I love it, and then played the original again.

To Zanarkand is still one of my all-time favourites.