
LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

[–] [email protected] 1 points 9 months ago (1 children)

How well do the OpenLlama models perform against Llama2? AIUI the training data used for OpenLlama is the same?

[–] [email protected] 7 points 9 months ago* (last edited 9 months ago)

The training data for OpenLlama is called RedPajama, if I'm not mistaken. It's a reproduction of what Meta used to train the first LLaMA. Back then, they listed the datasets in the scientific paper; nowadays, they and their competitors don't do that anymore.

OpenLlama performs about as well as (slightly worse than) the first official LLaMA, and both perform worse than Llama2. It's not night and day, but I think a noticeable improvement. And Llama2 has twice the context length (4096 tokens vs. 2048), which is a huge improvement for some use cases.
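You can check the context windows yourself from the model configs on Hugging Face. A minimal sketch (the `meta-llama` repo is gated, so it assumes you've accepted Meta's license on the Hub and are logged in):

```python
from transformers import AutoConfig

# OpenLLaMA (the RedPajama reproduction of the first LLaMA)
open_llama = AutoConfig.from_pretrained("openlm-research/open_llama_7b")

# Llama 2 (gated repo: requires accepting Meta's license on the Hub)
llama2 = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf")

print(open_llama.max_position_embeddings)  # 2048
print(llama2.max_position_embeddings)      # 4096
```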

If you're looking for models with a different license, there are more options. Mistral is Apache 2.0, and there are several others with permissive licenses.

If you're looking for info on what datasets the big players use, forget it (in my opinion). The companies are all involved in legal battles over copyright and have stopped publishing what they use. Many (except for Meta) have kept it a trade secret from the beginning and never shared such information. It's unscientific because it doesn't allow for reproducibility. But AI is expensive, and everyone is currently trying to get obscenely rich with it or striving for world domination.

But datasets are available: the RedPajama one, plus several other collections for various purposes... There are lots of datasets for fine-tuning and a whole community around that. It's just for the base/foundation models that we don't have access to a current state-of-the-art dataset.
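For example, RedPajama can be streamed straight from the Hugging Face Hub without downloading the whole ~1T-token corpus. A sketch, assuming the `togethercomputer/RedPajama-Data-1T` dataset id and its per-source subsets (check the dataset card for the exact names):

```python
from datasets import load_dataset

# Stream one source subset of RedPajama instead of downloading all of it
ds = load_dataset(
    "togethercomputer/RedPajama-Data-1T",
    "arxiv",          # subset name; an assumption, see the dataset card
    split="train",
    streaming=True,
)

# Peek at the first document
sample = next(iter(ds))
print(sample["text"][:200])
```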