this post was submitted on 25 Aug 2023
22 points (100.0% liked)
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
you are viewing a single comment's thread
The memory bandwidth stinks compared to a discrete GPU. That's the reason. It's still possible, though.
The question is, though, would it be better than just a CPU with lots of RAM?
Yes, it seems so according to this person’s testing: https://youtu.be/HPO7fu7Vyw4
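For a rough sense of why bandwidth is the limiter: during autoregressive decoding, every generated token has to stream all of the model's weights through memory once, so bandwidth divided by model size gives an upper bound on tokens per second. Here is a minimal back-of-the-envelope sketch in Python; the bandwidth figures are approximate spec-sheet numbers and the 4-bit 7B model size is a hypothetical, not measurements from the linked video:

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound LLM:
# each generated token streams all model weights through memory once,
# so tokens/sec <= memory bandwidth / model size in bytes.

MODEL_BYTES = 7e9 * 0.5  # hypothetical 7B model at ~4-bit quantization (~3.5 GB)

# Approximate spec-sheet bandwidths in GB/s (illustrative assumptions,
# not benchmark results).
bandwidths_gbs = {
    "plain CPU, dual-channel DDR5": 80,
    "Apple M2 Ultra unified memory": 800,
    "discrete GPU (RTX 4090, GDDR6X)": 1008,
}

for hardware, gbs in bandwidths_gbs.items():
    tokens_per_s = gbs * 1e9 / MODEL_BYTES
    print(f"{hardware}: <= ~{tokens_per_s:.0f} tokens/s")
```

Real throughput lands below these bounds (compute, KV-cache reads, and software overhead all take a cut), but the ratios show the point: unified memory sits an order of magnitude above a plain CPU setup, even if it still trails a discrete GPU.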