Have you considered the Framework Desktop? It uses AMD Strix Halo. For the same cost as the Mac mini (~$2k?), you get around 90GB of VRAM out of a 128GB unified RAM configuration.
Interesting, lots of "bang for the buck". I'll check it out
Yup! They even had a demo clustering five of them to run full DeepSeek.
Depends on what model you want to run?
Of course. I haven't looked at models >9B so far, so I have to decide whether I want to run larger models quickly, or even larger models quickly-but-not-as-quickly-as-on-a-Mac-Studio.
Or I could just spend the money on API credits :D
Use API credits. 64GB can barely run a 70B model. I have a MacBook Pro M3 Max with 128GB and can run those and even slightly bigger models, but the results are underwhelming. I didn't buy it only for LLMs, but if I had, I'd be disappointed.
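For anyone wondering why 64GB is tight for a 70B model, here's a rough back-of-envelope sketch of the memory math (weights plus KV cache). The layer/head numbers are Llama-3-70B-ish assumptions and the bits-per-weight values are illustrative, so treat it as an estimate, not a hard rule:

```python
def estimate_memory_gb(params_b: float, bits_per_weight: float,
                       context_tokens: int = 8192,
                       layers: int = 80, kv_heads: int = 8,
                       head_dim: int = 128) -> float:
    """Approximate memory (GB) = weights + KV cache, ignoring activations and runtime overhead."""
    # Weights: parameter count * bits per weight, converted to bytes.
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: 2 tensors (K and V) * layers * kv_heads * head_dim * 2 bytes (fp16) per token.
    kv_gb = 2 * layers * kv_heads * head_dim * 2 * context_tokens / 1e9
    return weights_gb + kv_gb

# A 70B dense model with an 8K context (assumed shape: 80 layers, 8 KV heads, head_dim 128):
print(f"70B @ ~4.5 bpw (Q4-ish): {estimate_memory_gb(70, 4.5):.0f} GB")  # ~42 GB
print(f"70B @ 8 bpw (Q8):        {estimate_memory_gb(70, 8):.0f} GB")    # ~73 GB
print(f"70B @ 16 bpw (fp16):     {estimate_memory_gb(70, 16):.0f} GB")   # ~143 GB
```

So a 4-bit 70B quant just about fits in 64GB once the OS and other apps take their share, which matches the "barely runs" experience; 128GB of unified memory gives you room for higher-quality quants or longer contexts, but it doesn't change the model's underlying quality.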