this post was submitted on 27 Jan 2024
24 points (96.2% liked)
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
you are viewing a single comment's thread
Has that changed recently? I've run ROCm successfully on an RX 6800. I seem to recall the card was supported; it was the host OS (Arch) that wasn't.
When I tried it, maybe a year or so ago, that version of ROCm (5.4.2, I think) only supported four chipsets, but I don't remember which card models those were, since they were only listed by their internal chip names. Mine (a 5700 XT) wasn't supported at the time.
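In case it helps anyone map card models to those internal names: a minimal sketch, assuming the `rocminfo` utility that ships with ROCm is installed, that prints the gfx target(s) your GPU reports. The RX 6800 is gfx1030 (Navi 21) and the 5700 XT is gfx1010 (Navi 10).

```python
# Minimal sketch: list the gfx target names ROCm reports for installed GPUs.
# Assumes the `rocminfo` utility from the ROCm stack is on PATH.
import re
import subprocess

def rocm_gfx_targets() -> list[str]:
    """Return gfx target names (e.g. 'gfx1030') found in rocminfo's output."""
    out = subprocess.run(["rocminfo"], capture_output=True, text=True, check=True).stdout
    # Each GPU agent is listed with a line like "  Name:  gfx1030"
    return sorted(set(re.findall(r"\bgfx[0-9a-f]+\b", out)))

if __name__ == "__main__":
    print(rocm_gfx_targets())  # e.g. ['gfx1030'] on an RX 6800 (Navi 21)
```

Cards that aren't in the official support matrix (the 5700 XT, for instance) are often run anyway by exporting `HSA_OVERRIDE_GFX_VERSION` (commonly `10.3.0`, so the runtime treats the card as gfx1030), but that's an unofficial workaround rather than real support.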
No, GFX1030 is still supported.
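For anyone who wants to reproduce the "ROCm works on an RX 6800" check above end to end, a quick sanity check, assuming a ROCm build of PyTorch (its HIP backend is exposed through the `torch.cuda` namespace):

```python
# Minimal sketch: confirm that a ROCm build of PyTorch can see the GPU.
import torch

print("HIP runtime:", torch.version.hip)              # None on a CUDA-only build
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))   # e.g. "AMD Radeon RX 6800"
else:
    print("No ROCm-visible GPU found")
```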