this post was submitted on 28 Jan 2025
46 points (87.1% liked)

Technology


DeepSeek, the viral AI company, has released a new set of multimodal AI models that it claims can outperform OpenAI’s DALL-E 3.

The models, which are available for download from the AI dev platform Hugging Face, are part of a new model family that DeepSeek is calling Janus-Pro. They range in size from 1 billion to 7 billion parameters. Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.

Janus-Pro is under an MIT license, meaning it can be used commercially without restriction.

top 3 comments
[–] [email protected] 2 points 1 week ago (2 children)

I'm so confused about DeepSeek. I don't understand what makes it open source; as far as I can tell, some clients for accessing DeepSeek are open source, but you need to log in to use it. What's open source about it? I'm guessing none of this means you can run it locally, can you?

[–] Avenging5 1 points 3 days ago

yes, the code is on GitHub, or you can use Ollama or Hugging Face to download the model and run it on your own machine
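As a rough sketch of what that looks like in practice (assuming Ollama is installed; the `deepseek-r1:7b` tag is one of the distilled variants in Ollama's library and may change over time):

```shell
# Pull a distilled DeepSeek-R1 variant from the Ollama library
# (assumes Ollama is installed; model tags may change over time)
ollama pull deepseek-r1:7b

# Start an interactive local chat session with the downloaded model
ollama run deepseek-r1:7b
```

Everything here runs on your own machine; no login or API key is involved.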

[–] [email protected] 4 points 1 week ago

As far as I understand, the training data is closed, but the training methodology is open source, which lets independent parties recreate the model from scratch and see similar results. Not only can you download the full >400 GB model from Hugging Face or Ollama, but they also offer distilled versions of the model that are small enough to run on something like a Raspberry Pi. I'm running it locally on my machine at home with Perplexica (a perplexity.ai lookalike with search capabilities)
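For example, one of the smaller distilled checkpoints can be fetched directly from Hugging Face with the `huggingface-cli` tool (a sketch, assuming `huggingface_hub` is installed; `DeepSeek-R1-Distill-Qwen-1.5B` is one of the published distilled repos, and the local directory name is arbitrary):

```shell
# Download a small distilled DeepSeek-R1 checkpoint from Hugging Face
# (assumes the huggingface_hub package is installed; repo id is one of the
#  published distilled variants, local-dir is an arbitrary choice)
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
    --local-dir ./deepseek-r1-1.5b
```

The downloaded weights can then be loaded by any compatible runtime, entirely offline.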