this post was submitted on 25 Aug 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


Is it just memory bandwidth? Or is it that AMD isn't supported well enough by PyTorch for most products? Or some combination of the two?

[–] [email protected] 8 points 1 year ago (1 children)

The memory bandwidth stinks compared to a discrete GPU. That's the reason. It's still possible, though.
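The bandwidth point can be made concrete with back-of-the-envelope arithmetic: generating one token requires streaming every model weight from memory once, so tokens/sec is capped at roughly bandwidth ÷ model size. A minimal sketch — the model size and bandwidth figures below are illustrative assumptions, not measurements of any specific hardware:

```python
# Rough upper bound for memory-bandwidth-bound token generation:
# each token reads all weights once, so tokens/s <= bandwidth / model size.

def max_tokens_per_sec(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on generation speed when memory bandwidth is the bottleneck."""
    return bandwidth_gb_s / model_size_gb

# A 7B-parameter model quantized to ~4 bits is roughly 4 GB of weights (assumed).
model_gb = 4.0

# Illustrative bandwidth figures (assumptions; check your own hardware):
configs = {
    "dual-channel DDR4 CPU (~50 GB/s)": 50.0,
    "APU sharing the same system RAM (~50 GB/s)": 50.0,
    "discrete GPU with GDDR6 (~450 GB/s)": 450.0,
}

for name, bw in configs.items():
    print(f"{name}: <= {max_tokens_per_sec(model_gb, bw):.0f} tokens/s")
```

Note the APU row: because an integrated GPU reads the same system RAM as the CPU, compute power alone can't lift it past the same bandwidth ceiling.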

[–] [email protected] 1 points 1 year ago (1 children)

The question is, though, would it be better than just a CPU with lots of RAM?

[–] [email protected] 3 points 1 year ago

Yes, it seems so according to this person’s testing: https://youtu.be/HPO7fu7Vyw4