
Ollama - Local LLMs for everyone!


A place to discuss Ollama: basic use, extensions and add-ons, integrations, and using it in custom code to create agents.


Do you use it to help with schoolwork / work? Maybe to help you code projects, or to help teach you how to do something?

What are your preferred models and why?

[–] [email protected] 3 points 5 days ago (1 children)

The Mac Mini should support a slew of models because of the unified memory, right? I'm using the Gemma3 12B model for local development on my work project, on a laptop with a 4090M. The laptop/4090M kind of sucks tbh; my employer definitely wasted their money, but it wasn't up to me.

How much RAM is in the Mini? Gemma3 27B is around 17GB, so that should fit entirely in unified memory. The 12B version is only about 8GB, so I'd expect that to work on your 3060.
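
If it helps, here's a minimal sketch of trying that yourself with the official `ollama` Python client (`pip install ollama`). The model tags and sizes are assumptions on my part; check the Ollama library page for the exact quantizations you have available.

```python
# Minimal sketch (assumed model tags) using the official `ollama` Python
# client: pull Gemma3 12B and send it a prompt. If the model fits in
# VRAM / unified memory, it should run without spilling to CPU.
import ollama

MODEL = "gemma3:12b"  # ~8 GB quantized; "gemma3:27b" is roughly 17 GB

ollama.pull(MODEL)  # downloads the weights if they aren't already local

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "In one sentence, what is unified memory?"}],
)
print(response["message"]["content"])
```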

You could probably also find much more slimmed-down models on Hugging Face that focus on the specific thing you care about. You don't need a model trained on all of Shakespeare's works if you just want your local LLM to explain code you're reviewing.
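
As a rough illustration of that idea (the model tag and prompt below are just examples I picked, not recommendations from anyone in this thread), a small code-tuned model can handle the code-explanation case on its own:

```python
# Sketch: use a small, code-focused model instead of a big general one.
# The model tag is an assumption; swap in whatever coder model you have pulled.
import ollama

snippet = '''
def add_user(users, name):
    users = users.append(name)  # append() returns None
    return users
'''

review = ollama.chat(
    model="qwen2.5-coder:7b",
    messages=[
        {"role": "system", "content": "You are a concise code reviewer."},
        {"role": "user", "content": f"Explain the bug in this function:\n{snippet}"},
    ],
)
print(review["message"]["content"])
```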

[–] [email protected] 2 points 5 days ago

My Mac mini (32GB) can run 12B-parameter models at around 13 tokens/sec, and my 3060 manages roughly double that. However, both machines have a hard time keeping up with larger models. I'll have to look into some special-purpose models.
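
For what it's worth, those tokens/sec numbers are easy to reproduce from the timing fields Ollama returns with a non-streamed reply. A rough sketch (model and prompt are placeholders):

```python
# Rough sketch for measuring generation speed from Ollama's response
# metadata: eval_count is tokens generated, eval_duration is nanoseconds.
import ollama

resp = ollama.chat(
    model="gemma3:12b",  # placeholder; use whichever model you're testing
    messages=[{"role": "user", "content": "Write a haiku about unified memory."}],
)

tokens = resp["eval_count"]
seconds = resp["eval_duration"] / 1e9
print(f"{tokens} tokens in {seconds:.2f} s -> {tokens / seconds:.1f} tokens/sec")
```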