this post was submitted on 10 Jun 2023
12 points (100.0% liked)

LocalLLaMA

I’ve been using llama.cpp, gpt-llama and chatbot-ui for a while now, and I’m very happy with it. However, I’m now looking into a more stable setup that runs entirely on the GPU. Is llama.cpp still a good candidate for that?
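
For context, a GPU-only llama.cpp setup was already possible at the time. A minimal sketch, assuming an NVIDIA card with the CUDA toolkit installed and a mid-2023 build of llama.cpp (the model path is a placeholder):

```
# Build with cuBLAS support so inference can run on the GPU
make clean && LLAMA_CUBLAS=1 make

# Offload all layers to the GPU; an -ngl value above the model's
# actual layer count simply offloads everything
./main -m models/7B/ggml-model-q4_0.bin -ngl 99 -p "Hello"
```

If GPU memory runs out, lowering `-ngl` keeps the remaining layers on the CPU.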

[–] [email protected] 3 points 2 years ago (4 children)

GPTQ-for-LLaMa with oobabooga's text-generation-webui works pretty well. I’m not sure to what extent it uses the CPU, but my GPU is at 100% during inference, so it seems to be doing most of the work.
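
For reference, launching the webui with a 4-bit GPTQ model looked roughly like this in mid-2023 builds; the model folder name is a placeholder, and the flags are those the GPTQ-for-LLaMa loader used at the time:

```
# Start text-generation-webui with a 4-bit GPTQ LLaMA model;
# --api exposes an HTTP endpoint that external UIs can talk to
python server.py --model llama-13b-4bit-128g \
  --wbits 4 --groupsize 128 --model_type llama --api
```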

[–] [email protected] 1 points 2 years ago (3 children)

I've looked at that before. Do you use it with any UI?

[–] Equality_for_apples 1 points 2 years ago

Personally, I have nothing but issues with Ooba's UI, so I connect SillyTavern to it, or to KoboldCpp instead. Works great.
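
As a sketch of that setup, assuming a mid-2023 KoboldCpp build with GPU offload support (model path, layer count, and port are placeholders):

```
# Run KoboldCpp as a backend, offloading 40 layers to the GPU
# via CLBlast; it serves a KoboldAI-compatible API
python koboldcpp.py models/7B/ggml-model-q4_0.bin \
  --useclblast 0 0 --gpulayers 40 --port 5001
```

SillyTavern should then be able to connect using its KoboldAI API type, pointed at http://127.0.0.1:5001/api.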
