this post was submitted on 02 Aug 2023
14 points (93.8% liked)
LocalLLaMA
2265 readers
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
founded 1 year ago
I've read other people complaining about this, too. Maybe try the base model. I'm not sure whether it's the fine-tune or Llama 2 itself that's at fault.
There are ways to measure that: perplexity across the context window, plus whatever evaluations people used to confirm that the methods extending the first LLaMA's context past 2048 tokens actually worked. But I didn't find such measurements for Llama 2, at least not with a quick Google search.
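If you want to check this yourself, here's a minimal sketch using the `transformers` library (the model name and input file are placeholders, and this is just the rough idea, not a rigorous benchmark): compute perplexity over progressively longer prefixes of a long document and watch whether it stays flat or blows up as the context fills.

```python
# Sketch: measure perplexity at increasing context lengths to see whether
# quality degrades as the context fills up. Assumes `torch` and
# `transformers` are installed; model and file are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

text = open("long_document.txt").read()  # any text longer than max context
ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

for ctx_len in (512, 1024, 2048, 4096):
    chunk = ids[:, :ctx_len]
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over the sequence.
        loss = model(chunk, labels=chunk).loss
    print(f"context {ctx_len}: perplexity {torch.exp(loss).item():.2f}")
```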
Edit: People also mentioned that Llama 2 uses a different attention mechanism (grouped-query attention) in the 70B version, so this might be specific to 70B. Make sure to use the most recent version of KoboldCPP or whatever you use, and to configure the scaling correctly. At 4096 it shouldn't need any context scaling as far as I understand, since that's Llama 2's native context length.
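For reference, the usual linear RoPE scaling rule of thumb (my understanding of the common convention, not necessarily KoboldCPP's exact internals) gives a frequency scale of 1.0 at Llama 2's native 4096 tokens, i.e. no scaling needed:

```python
# Sketch of the linear RoPE scaling rule of thumb:
# frequency scale = native context / target context.
NATIVE_CTX = 4096  # Llama 2's training context length

def rope_freq_scale(target_ctx: int, native_ctx: int = NATIVE_CTX) -> float:
    """Linear RoPE frequency scale; 1.0 means no scaling needed."""
    return min(1.0, native_ctx / target_ctx)

print(rope_freq_scale(4096))  # 1.0 -> no scaling at the native length
print(rope_freq_scale(8192))  # 0.5 -> compress positions to stretch context
```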