this post was submitted on 09 Sep 2024
Technology
Hmm, gotcha. I just tried out a fresh copy of text-gen-webui, and it seems the latest version is borked with ROCm (I get a "CUDA error: invalid device function" error).

My next recommendation, then, would be LM Studio, which to my knowledge can still expose an OpenAI-compatible API endpoint to be used in SillyTavern. I've used it in the past, and I didn't even need to run it within Distrobox (I have all of the ROCm stuff installed locally, but I generally run most AI stuff in Distrobox since it tends to require an older version of Python than Arch currently ships). They've also recently started supporting running GGUF models via Vulkan, which I assume may not require the ROCm stuff to be installed at all.
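To illustrate what "OpenAI-compatible endpoint" means here: anything that can talk to OpenAI's chat completions API (like SillyTavern) can point at LM Studio's local server instead. A minimal sketch in Python, assuming the default base URL of http://localhost:1234/v1 (adjust if you changed the port) and a placeholder model name:

```python
import json
import urllib.request

# LM Studio's local server default base URL (assumption; change if needed).
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat completion payload.

    "local-model" is a placeholder; LM Studio generally serves
    whichever model you have loaded regardless of this field.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

def send(payload):
    """POST the payload to the chat completions route and return the reply text."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

payload = build_chat_request("Say hi in five words.")
# send(payload)  # uncomment with LM Studio's local server running
```

SillyTavern does the equivalent of this for you once you give it the base URL, so if a quick request like this works, the SillyTavern hookup should too.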
Might be worth a shot. I just downloaded the latest version (the UI has definitely changed a bit since I last used it), grabbed a copy of the Gemma model, and ran it; it worked without issue for me directly on the host.
The advanced configuration settings no longer seem to mention GPU acceleration directly like they used to, but I can see it utilizing GPU resources in nvtop right now, and the generation speed (83 tokens per second in my screenshot) couldn't possibly have come from the CPU, so it seems to be fine on my side.

I tried LM Studio (directly in Bazzite) and it has the same issue of not running on the GPU. It also always seems to stop generating anything after a few moments when I use it with SillyTavern.
I tried the koboldcpp AUR package through Distrobox, and when I select the ROCm option it crashes with a CUDA error, lol. With the Vulkan option it still seems to run on the CPU for some reason.
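One thing worth checking for the Vulkan-on-CPU symptom: koboldcpp only runs on the GPU if layers are actually offloaded to it, so if the layer count ends up at 0 it falls back to the CPU even with the Vulkan backend selected. A hedged sketch of what that invocation might look like (the paths are placeholders, and 33 is just an example layer count; use whatever fits in your VRAM):

```shell
# Select the Vulkan backend and explicitly offload layers to the GPU;
# with no layers offloaded, generation stays on the CPU.
python koboldcpp.py --usevulkan --gpulayers 33 --model /path/to/model.gguf
```

If nvtop still shows no GPU activity with layers offloaded, it may be picking the wrong Vulkan device, which would be a separate issue.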