this post was submitted on 25 Aug 2023
22 points (100.0% liked)
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
you are viewing a single comment's thread
I've gotten LLaMA running locally using CLBlast on an AMD GPU, with the CPU working on it simultaneously (basically the APU execution pathway); rough build/run commands are at the end of this comment.
AMD is seriously slacking when it comes to machine learning. The hardware is uber powerful, but, just like everyone complains, the software isn't there.
ROCm doesn't even work on Windows, FFS.
You can run models on almost anything, but token generation is extremely slow: something like 0.2-0.6 tokens per second, so a response that needs at least ~100 tokens to be coherent can take upwards of 5 minutes, which is abysmal.
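For anyone curious, the setup was roughly along these lines; the model path and the -ngl / -t values are placeholders rather than the exact numbers I used, so adjust them for your own hardware:

```
# Build llama.cpp with the CLBlast (OpenCL) backend
make clean && make LLAMA_CLBLAST=1

# Offload some layers to the GPU with -ngl and leave the rest to CPU threads (-t).
# The model path and the -ngl / -t values here are just placeholders.
./main -m ./models/7B/model.gguf -ngl 20 -t 8 -p "Hello, how are you?"
```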
Isn't Windows for gaming and weird proprietary applications like Photoshop?
If you're using llama.cpp, some ROCm stuff recently got merged in. It works pretty well, at least on my 6600. I believe there were instructions for getting it working on Windows in the pull.
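If it helps, the Linux build looks roughly like this; the paths, the -ngl value, and the gfx override are assumptions/placeholders, not exact instructions from the pull:

```
# Build llama.cpp with the ROCm/hipBLAS backend (assumes ROCm is installed under /opt/rocm)
make clean && make LLAMA_HIPBLAS=1 CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++

# Consumer cards like the RX 6600 (gfx1032) aren't on ROCm's official support list,
# so spoofing a supported RDNA2 arch is the usual unofficial workaround.
HSA_OVERRIDE_GFX_VERSION=10.3.0 ./main -m ./models/7B/model.gguf -ngl 32 -p "Hello, how are you?"
```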
Thank you so much! I'll be sure to check that out / get it updated