this post was submitted on 26 Jan 2025
213 points (95.7% liked)

[–] [email protected] 18 points 1 week ago (2 children)

What's a deepseek? Sounds like a search engine?

[–] [email protected] 27 points 1 week ago (3 children)

DeepSeek is a Chinese AI company that released DeepSeek R1, a direct competitor to ChatGPT.

[–] [email protected] 26 points 1 week ago (1 children)

You forgot to mention that it's open source.

[–] [email protected] 2 points 1 week ago (2 children)

Is it actually open source, or are we using the fake definition of "open source AI" that the OSI has massaged into being so corpo-friendly that the training data itself can be kept a secret?

[–] [email protected] 3 points 1 week ago (1 children)

The code is open, the weights are published, and so is the paper describing the algorithm. At the end of the day, anybody can train their own model from scratch on open data if they don't want to use the official one.
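
To make that concrete, here's a rough sketch of what "the weights are published" means in practice, assuming the Hugging Face transformers library and the distilled DeepSeek-R1-Distill-Qwen-1.5B checkpoint name (an assumption on my part; the full R1 model is far too large to run like this):

```python
# Rough sketch: load published DeepSeek weights with Hugging Face transformers.
# The checkpoint name below is assumed; the full R1 model needs far more hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a quick prompt locally against the downloaded weights.
inputs = tokenizer("What is DeepSeek?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```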

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (1 children)

The training data is the important piece, and if that's not open, then it's not open source.

I don't want the data so I can avoid using the official model. I want the data so that I can reproduce the model. Without the training data you can't reproduce the model, and if you can't do that, it's not open source.

The idea that a normal person can scrape data of the same quantity and quality as any company or government, and then tune the weights enough to recreate the model, is absurd.

[–] [email protected] 1 points 1 week ago (1 children)

What ultimately matters is the algorithm that makes DeepSeek efficient. Models come and go very quickly, and that part isn't all that valuable. If people are serious about wanting a fully open model, they can build one. You can use something like Petals to distribute the work of training, too.
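
For the Petals part, a minimal sketch of what that looks like, assuming the petals Python package and that a model like petals-team/StableBeluga2 is still being served on the public swarm (fine-tuning works similarly, with a small adapter trained locally while the frozen blocks run remotely):

```python
# Rough sketch: run a large model over the public Petals swarm instead of locally.
# Assumes "petals-team/StableBeluga2" is still served by volunteers on the swarm.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # assumed swarm-hosted model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# The transformer blocks run on other people's machines; only the embeddings stay local.
inputs = tokenizer('A cat in French is "', return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=3)
print(tokenizer.decode(outputs[0]))
```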

[–] [email protected] 1 points 1 week ago (1 children)

That's fine if you think the algorithm is the most important thing. I think the training data is equally important, and I'm so frustrated by the bastardization of the meaning of "open source" as it's applied to LLMs.

It's like a software project that provides a thin wrapper over a proprietary library you must link against, and then calls itself open source. The wrapper is open, but the actual substance providing the functionality isn't.

It'd be fine if we could just use more honest language like "open weight", but "open source" means something different.

[–] [email protected] 1 points 1 week ago (1 children)

Again, if people feel strongly about this, there's a very clear way to address the problem instead of whinging about it.

[–] [email protected] -1 points 1 week ago* (last edited 1 week ago) (1 children)

Yes. That solution would be to not lie about it by calling something that isn't open source "open source".

[–] [email protected] 2 points 1 week ago (1 children)

There's plenty of debate on what qualifies as an open source model last I checked, but I wasn't expecting honesty from you there anyway.

[–] [email protected] 0 points 1 week ago (1 children)

You won't see me on the side of the "debate" that launders language in defense of the owning class ¯\_(ツ)_/¯

[–] [email protected] 1 points 1 week ago

Nobody is doing that, but keep making bad faith arguments if you feel the need to.

[–] [email protected] 1 points 1 week ago (1 children)
[–] [email protected] 3 points 1 week ago

I'm not seeing the training data here... so it looks like the answer is yes, it's not actually open source.

[–] [email protected] 7 points 1 week ago (2 children)

Nice! What are they competing for? I'm new to this AI business thing.

[–] MajorSauce 23 points 1 week ago

So far, they are training models extremely efficiently while the US gatekeeps their GPU supply and does everything it can to slow their progress. Any innovation that makes models more efficient to train and operate is great for the accessibility of the technology and for reducing the environmental impact of this (so far) very wasteful tech.

[–] [email protected] 4 points 1 week ago

Market share, in a market whose future value is still largely speculation.