this post was submitted on 28 Jun 2025
7 points (88.9% liked)

Ollama - Local LLMs for everyone!


A place to discuss Ollama, from basic use, extensions and addons, integrations, and using it in custom code to create agents.

founded 1 week ago

What hardware is recommended for adequate performance and low power consumption? I've seen the Mac mini recommended a few times. Would the 16 GB model be enough, or do I need the 24 GB model? What about the Framework Desktop? It should be very good as well, but what will it consume at idle?

top 2 comments
[–] [email protected] 3 points 3 days ago

I run an M1 Mac Pro with 32 GB of RAM; I'd recommend going for more RAM if you can. I have no idea how that compares with dedicated-GPU setups.

The M4 mini with 16 GB is going to be too small for most models to run well. The models I run are Phi-4 (8.5 GB), Gemma 3n (12-15 GB), Magistral Small (12 GB), and DeepSeek R1 Qwen3 8B (4 GB).

The ~8 GB models are about the smallest that are genuinely useful, and the 12-16 GB ones are much more reliable.
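Those sizes follow a rough rule of thumb: a quantized model needs roughly (parameters × bits per weight ÷ 8) bytes for the weights, plus some headroom for the KV cache and runtime. A minimal sketch (the function name and the 20% overhead factor are my own assumptions, not anything from Ollama):

```python
def est_model_ram_gb(params_billions, bits_per_weight=4, overhead=1.2):
    """Rough RAM estimate for a quantized model.

    Weights take params * bits/8 bytes; the extra ~20% (an assumed
    fudge factor) covers KV cache and runtime overhead.
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# An 8B model at 4-bit quantization:
print(round(est_model_ram_gb(8), 1))   # -> 4.8
# The same model at 8-bit roughly doubles that:
print(round(est_model_ram_gb(8, bits_per_weight=8), 1))   # -> 9.6
```

This lines up with the figures above: an 8B model at 4-bit lands around 4-5 GB, which is why 16 GB machines get tight once you want the larger, more reliable models.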

I do have a preorder on the Framework Desktop, which I think is going to be good value for money, but there aren't comprehensive reviews out yet. The Mac, on the other hand, already has plenty of good performance reviews.

[–] [email protected] 1 point 3 days ago

Yep, I keep reading that 32 GB is considered the minimum. I also see that Ollama can split a model between GPU VRAM and system RAM, so the more RAM you have the better, starting with VRAM.
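The split works by offloading as many transformer layers as fit into VRAM and running the rest on the CPU (you can see the resulting CPU/GPU percentage with `ollama ps`). A sketch of the idea, assuming equal-sized layers; the function name and the example numbers are illustrative, not Ollama's actual placement logic:

```python
def layers_on_gpu(model_bytes, n_layers, free_vram_bytes):
    """Estimate how many layers fit in VRAM, assuming all layers
    are the same size; the remaining layers run from system RAM."""
    per_layer = model_bytes / n_layers
    return min(n_layers, int(free_vram_bytes // per_layer))

# A 12 GB model with 40 layers and 8 GB of free VRAM:
print(layers_on_gpu(12e9, 40, 8e9))    # -> 26 layers on GPU, 14 on CPU
# With 16 GB of VRAM the whole model fits:
print(layers_on_gpu(12e9, 40, 16e9))   # -> 40
```

That partial offload is why a model slightly bigger than your VRAM still runs, just slower, and why unified-memory machines like the Mac or the Framework Desktop sidestep the split entirely.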