Hmmh, the 4090 is kind of the wrong choice for this, due to its memory bus width... For AI workloads, and especially if you want to attach a lot of memory, you want the widest bus possible, since LLM inference is mostly memory-bandwidth bound.
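To put rough numbers on it: theoretical bandwidth is just bus width times the per-pin data rate. A quick back-of-envelope sketch in Python (specs quoted from memory, so treat the exact figures as approximate):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical memory bandwidth: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# RTX 4090: 384-bit bus, GDDR6X at 21 Gbps per pin -> ~1 TB/s
print(bandwidth_gb_s(384, 21.0))   # 1008.0 GB/s

# A100 80GB: 5120-bit HBM2e bus -> roughly double that, ~2 TB/s
print(bandwidth_gb_s(5120, 3.2))   # 2048.0 GB/s
```

That bandwidth is roughly what caps your tokens/s, since the whole set of weights has to stream through the GPU for every generated token.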