this post was submitted on 02 Jul 2023
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
you are viewing a single comment's thread
I have an RX 6650 XT and I generally use llama.cpp with the ROCm patch (tested up to commit ac7876ac20124a15a44fd6317721ff1aa2538806).
It works great with around 25 layers offloaded to the GPU on my 8 GB card, or 18 if you want to do something else GPU-related at the same time (like watching a hardware-accelerated video).
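Roughly, the offload looks like this on the llama.cpp command line; this is just a minimal sketch, and the model path, prompt, and token count are placeholders rather than anything from my setup. The `-ngl` / `--n-gpu-layers` flag is what controls how many layers go to the GPU:

```bash
# Offload ~25 layers to the 8 GB GPU; drop to ~18 if the GPU is busy with other work.
# Model path and prompt are placeholders.
./main -m ./models/llama-7b.ggmlv3.q4_0.bin -ngl 25 -p "Hello," -n 128
```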
To be fair, I haven't updated llama.cpp in a long time, and it has gone through a lot of changes in the meantime, like the addition of the `LLAMA_CUDA_DMMV_X`, `LLAMA_CUDA_DMMV_Y`, and `LLAMA_CUDA_KQUANTS_ITER` parameters. So your mileage may vary, and it's possible you'll have to manually modify the PR before merging it, so it's not really a one-click experience if you want the best performance. It currently doesn't support SuperHOT or similar techniques, mainly because new ones are being pushed out every day and the developers are waiting to see which will be the real winner.
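As a rough sketch of what the build can look like: the branch name, flag values, and the `LLAMA_HIPBLAS=1` option here are assumptions on my part (the ROCm PR's own instructions may differ), and the tuning values are examples, not recommendations.

```bash
# Pin to the commit mentioned above, then merge the ROCm PR branch
# (branch name is a placeholder; merge conflicts may need manual fixing, as noted above).
git checkout ac7876ac20124a15a44fd6317721ff1aa2538806
git merge <rocm-pr-branch>

# Build with hipBLAS/ROCm (assuming the PR exposes LLAMA_HIPBLAS=1) and pass the
# tunable kernel parameters mentioned above; the values shown are illustrative.
make LLAMA_HIPBLAS=1 LLAMA_CUDA_DMMV_X=64 LLAMA_CUDA_DMMV_Y=2 LLAMA_CUDA_KQUANTS_ITER=2
```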
But I've gone a bit off-topic. I think the easiest option, as the other commenter said, is to just go with kobold.cpp. I personally didn't have a good experience with text-generation-webui, but a lot of people swear by it.
Yes, thank you for the information, I really appreciate it! I decided to go with kobold.cpp for the meantime, using CLBlast, which works way better overall than standard CPU inference. But I'm also looking into the ROCm llama.cpp support, which I'm currently trying out.
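For anyone else landing here, a minimal sketch of launching kobold.cpp with CLBlast and GPU offload; the device indices, layer count, and model path are assumptions, not something from this thread:

```bash
# --useclblast takes the OpenCL platform and device index; "0 0" is a guess for a
# single-GPU system. --gpulayers plays the same role as llama.cpp's -ngl.
python koboldcpp.py --useclblast 0 0 --gpulayers 25 ./models/llama-7b.ggmlv3.q4_0.bin
```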