this post was submitted on 28 Jun 2025
7 points (88.9% liked)
Ollama - Local LLMs for everyone!
A place to discuss Ollama, from basic use, extensions and addons, integrations, and using it in custom code to create agents.
I run an M1 MacBook Pro with 32 GB of RAM; I'd recommend going for more RAM if you can. I have no idea how that compares with dedicated GPU setups.
The M4 Mac mini with 16 GB is going to be too small for most models to run well. The models I run are Phi-4 (8.5 GB), Gemma 3n (12-15 GB), Magistral Small (12 GB), and DeepSeek R1 Qwen3 8B (4 GB).
The ~8 GB models are the smallest ones that are still really useful, and the 12-16 GB models are much more reliable than they are.
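A rough way to sanity-check this before buying: the size `ollama list` reports for a model is close to what the weights take up in memory, and you need headroom on top of that for the KV cache and the OS. Here's a quick back-of-the-envelope sketch in Python (the usable-memory fraction and headroom numbers are my own guesses, not anything official):

```python
# Back-of-the-envelope fit check for a Mac with unified memory.
# Assumptions (mine, not Ollama's): macOS only hands a fraction of
# unified memory to the GPU, and the KV cache plus the runtime need
# a couple of GB on top of the model file itself.

def fits(model_size_gb: float, ram_gb: float,
         usable_fraction: float = 0.75,  # share of RAM actually available (assumed)
         headroom_gb: float = 2.0) -> bool:  # KV cache + runtime (assumed)
    """True if the model plus headroom fits in the usable slice of RAM."""
    return model_size_gb + headroom_gb <= ram_gb * usable_fraction

# Sizes as reported by `ollama list` for the models I mentioned above
# (Gemma 3n is listed as 12-15 GB, so I'm using the midpoint).
models = {"Phi-4": 8.5, "Gemma 3n": 13.5, "Magistral Small": 12.0,
          "DeepSeek R1 Qwen3 8B": 4.0}
for name, size in models.items():
    for ram in (16, 32):
        verdict = "ok" if fits(size, ram) else "too tight"
        print(f"{name} ({size} GB) on {ram} GB RAM: {verdict}")
```

By that math, the 12-15 GB models are exactly the ones that stop fitting on a 16 GB machine, which is why I think the M4 mini will feel small.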
I do have a preorder in for the Framework Desktop, which I think is going to be good value for money, but there aren't comprehensive reviews out for it yet. The Mac, on the other hand, already has a lot of good reviews of its performance.