this post was submitted on 30 Oct 2023
35 points (94.9% liked)
LocalLLaMA
Looks like the deduped dataset is about 5T tokens. Nothing to sneeze at, for sure.
I thought they claimed the deduped dataset is the 20.5T number; where did you see 5T? Either way, that would still be awesome, especially when you consider the theory that quality is most limited by datasets and that Llama 2 was trained on only 2T. This could be huge.
Maybe I misread it, but this was the source of the 5T remark:
https://news.ycombinator.com/item?id=38077521#38080442
I think the implication is that the dataset is even more useful if you don't jam the whole thing into your training run, but instead filter it down to a reasonable number of tokens, around 5T, and train on that subset instead.
I could be wrong, because they do explicitly say deduplicating, but it's phrased oddly either way.
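To make the "filter down to ~5T" idea concrete, here's a rough sketch of what that could look like: stream documents, keep the ones that pass some quality threshold, and stop once a token budget is hit. The field names (`text`, `quality_score`), the cutoff, and the tokenizer interface are all placeholders for illustration, not the dataset's actual schema or pipeline.

```python
# Hypothetical sketch of filtering a large pretraining corpus down to a token budget.
# Field names (text, quality_score), the threshold, and the budget are assumptions
# for illustration only, not the real dataset schema or training pipeline.

def filter_to_token_budget(documents, tokenizer,
                           budget_tokens=5_000_000_000_000,  # ~5T tokens
                           min_quality=0.5):
    """Yield documents above a quality threshold until the token budget is spent."""
    total = 0
    for doc in documents:
        if doc.get("quality_score", 0.0) < min_quality:
            continue  # skip documents below the (made-up) quality cutoff
        n_tokens = len(tokenizer.encode(doc["text"]))
        if total + n_tokens > budget_tokens:
            break  # budget reached; train on what was kept
        total += n_tokens
        yield doc
```

In practice this would run as a distributed pipeline rather than a single loop, but the idea is the same: the ~20.5T deduped pool is the raw material, and the subset you actually train on can be much smaller.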