this post was submitted on 06 Sep 2023
26 points (93.3% liked)
LocalLLaMA
2269 readers
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
founded 1 year ago
you are viewing a single comment's thread
I've only ever used 7B large language models on my RX 6950 XT, but PyTorch had (or still has) some nasty AMD VRAM bugs that kept it from fully utilizing my VRAM (more like only a quarter of it).
It seems the sad truth is that high-performance inference and training of models just aren't in good shape on AMD cards as of now.
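For anyone hitting the same thing, here's a minimal sanity check I'd start with (assuming a ROCm build of PyTorch, which exposes AMD GPUs through the regular torch.cuda API) to compare what PyTorch thinks the card has against the RX 6950 XT's actual 16 GiB:

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs show up through the torch.cuda
# interface, so the usual CUDA memory introspection calls work unchanged.
assert torch.cuda.is_available(), "No GPU visible to PyTorch"

props = torch.cuda.get_device_properties(0)
total = props.total_memory / 1024**3
allocated = torch.cuda.memory_allocated(0) / 1024**3
reserved = torch.cuda.memory_reserved(0) / 1024**3

print(f"Device:     {props.name}")
print(f"Total VRAM: {total:6.1f} GiB")  # should read ~16 GiB on an RX 6950 XT
print(f"Allocated:  {allocated:6.1f} GiB")
print(f"Reserved:   {reserved:6.1f} GiB")
```

If total_memory already reports far less than the card actually has, the problem sits at the ROCm/driver level rather than in your model code.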
Interesting
Do you only use LLMs, or also Stable Diffusion?