this post was submitted on 26 Jan 2025
209 points (95.6% liked)

Memes

46343 readers
1688 users here now

Rules:

  1. Be civil and nice.
  2. Try not to excessively repost; as a rule of thumb, wait at least 2 months if you have to.

founded 5 years ago
[–] [email protected] -2 points 1 day ago (1 children)

Uh yeah, that's because people publish data to huggingface. GitHub isn't made for huge data files in case you weren't aware. You can scroll down to datasets here https://huggingface.co/deepseek-ai

[–] [email protected] 7 points 1 day ago (1 children)

That's the "prover" dataset, i.e., the evaluation dataset mentioned in the articles I linked you to. It's for checking the output; it is not the training data.

It's also only 20 MB, which is minuscule not just for a training dataset but even for what you seem to think is a "huge data file" in general.
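For scale, a back-of-envelope comparison in Python (the corpus size here is an assumed ballpark for illustration, not DeepSeek's actual, unpublished figure):

```python
# Back-of-envelope scale comparison.
# The 10 TB corpus size is an assumed ballpark for illustration,
# not DeepSeek's actual (unpublished) training data size.
eval_set_bytes = 20 * 1024**2        # the ~20 MB "prover" dataset
assumed_corpus_bytes = 10 * 1024**4  # assume ~10 TB of training text
ratio = assumed_corpus_bytes / eval_set_bytes
print(f"An assumed 10 TB corpus is ~{ratio:,.0f}x larger than 20 MB")
```

Even under that modest assumption, the evaluation set is about half a million times smaller than a plausible training corpus.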

You really need to stop digging and admit this is one more thing you have surface-level understanding of.

[–] [email protected] -3 points 1 day ago (1 children)

Do show me a published data set of the kind you're demanding.

[–] [email protected] 11 points 1 day ago* (last edited 1 day ago) (2 children)

Since you're definitely asking this in good faith and not just downvoting and making nonsense sealioning requests in an attempt to make me shut up, sure! Here are three.

https://commoncrawl.org/

https://github.com/togethercomputer/RedPajama-Data

https://huggingface.co/datasets/legacy-datasets/wikipedia/tree/main/

Oh, and it's not me demanding. It's the OSI defining what an open-source AI model is. I'm sure once you've asked all your questions, you'll circle back around to whether you disagree with their definition or not.
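To make the distinction concrete, here's a toy Python sketch of that definition's core idea: a release only counts as fully open if the weights, the training code, and the training data are all published. The component names are illustrative shorthand, not the OSI's actual criteria.

```python
# Toy illustration: a release is only "fully open" if every
# required component is published, not just the weights.
# Component names are illustrative, not the OSI's actual wording.
REQUIRED_COMPONENTS = {"weights", "training_code", "training_data"}

def is_fully_open(published: set) -> bool:
    """Return True only if all required components are published."""
    return REQUIRED_COMPONENTS <= published

# A DeepSeek-style release: weights and code, but no training data.
print(is_fully_open({"weights", "training_code"}))  # False
# A release that also publishes its training data.
print(is_fully_open({"weights", "training_code", "training_data"}))  # True
```

Under this framing, publishing weights alone (or weights plus an evaluation set) doesn't satisfy the definition.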

[–] [email protected] 1 points 9 hours ago (1 children)

Thank you for posting those links. While I'm not sure the person you replied to was asking in good faith, I myself wanted to see an example after reading the discussion.

Seems like even if it's not fully open source, it's a step in the right direction in a world where terms like "open" and "non-profit" have been co-opted by corporations and stripped of their original meaning.

[–] [email protected] 1 points 6 hours ago

It's certainly better than "Open"AI being completely closed and secretive with their models. But as people have discovered in the last 24 hours, DeepSeek is pretty strongly trained to be protective of the Chinese government's policy on, uh, truth. If this were a truly open-source model, someone could "fork" it and remake it without those limitations. That's the spirit of "open source," even if the actual term "source" is a bit misapplied here.

As it is, without the original training data, an attempt to remake the model would run into the issues DeepSeek themselves had with their "zero" release, where it would frequently respond in a gibberish mix of English, Mandarin, and programming code. They had to supply specific data to make it stop doing this, data we don't have access to.