this post was submitted on 02 Feb 2025
212 points (96.9% liked)

United States | News & Politics

[–] [email protected] 12 points 1 day ago (2 children)

I had zero interest in downloading this shit before because LLMs are just lying slop machines, but if it becomes illegal, I will go out of my way to.

[–] [email protected] 2 points 1 day ago (3 children)

I'm with you on principle, but the thing is also like 600 GB of data. Not sure I have the disk space to take a stand on this one.

[–] [email protected] 1 points 1 day ago (1 children)

There are no lite versions? I was trying to find a small LLM I can run on an old machine, take it off the internet (or just firewall it), and play around with to see if there's anything worth learning there for me. I was looking at the lite version of Llama, but when I tried to run the install on Mint I ran into some issues, and then I'd had too many drinks to focus on it, so I went back to something else. Maybe next weekend. If you have any recommendations, I'm all ears.

[–] [email protected] 1 points 1 day ago

There are finetunes of Llama, Qwen, etc. based on DeepSeek that implement the same pre-response thinking logic, but they are ultimately still the smaller models with some tuning. If you want to run locally and don't have tens of thousands to throw at datacenter-scale GPUs, those are your best bet, but they differ from what you'd get in the DeepSeek app.
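
If it helps, here's a rough sketch of what loading one of those distills locally can look like with Hugging Face transformers. The model ID, prompt, and token budget are just placeholders I'm assuming, not a specific recommendation, and it assumes you have `transformers` and `torch` installed:

```python
# Rough sketch, not a drop-in recipe: assumes `transformers` and `torch` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # placeholder; pick whichever distill fits your machine

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on CPU by default; fine for an old, offline box

# These distills emit their "thinking" before the final answer, so leave room in max_new_tokens.
messages = [{"role": "user", "content": "Explain what a hash table is in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```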

[–] [email protected] 2 points 1 day ago (1 children)

No worries. I've got some 20TB drives I can throw these on 😁

Now, running a model that large... Well, I'll just have to stick with 8-13B parameter models.

[–] [email protected] 1 points 1 day ago

You can actually run the model, but it just goes very slowly… I run the 70B model on my M1 MBP, and it technically "requires" 128 GB of VRAM - it still runs, just not super fast (though I'd say it's usable in this case, at about 1 word per ~300 ms).
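
For what it's worth, the usual way to squeeze a model that big onto limited hardware is a quantized GGUF through llama.cpp. A minimal sketch with the llama-cpp-python bindings, where the file name and the layer-offload count are placeholders for whatever you actually downloaded and whatever fits in your VRAM:

```python
# Just a sketch: assumes llama-cpp-python is installed and you've already
# downloaded a quantized GGUF of the model (the file name below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,       # context window; bigger costs more RAM
    n_gpu_layers=0,   # raise this to offload layers onto whatever VRAM you do have
)

out = llm("Explain RAID 0 vs RAID 1 in one short paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```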

[–] [email protected] 1 points 1 day ago

I'd have to remove all my games, the operating system, and run my SSDs in RAID to be able to fit that and still generate things with it. 😮