this post was submitted on 10 Jun 2023
12 points (100.0% liked)

LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.


I’ve been using llama.cpp, gpt-llama, and chatbot-ui for a while now, and I’m very happy with it. However, I’m now looking into a more stable setup using only the GPU. Is llama.cpp still a good candidate for that?

top 8 comments
[–] Hudsonius@lemmy.ml 3 points 2 years ago (1 children)

GPTQ-for-LLaMA with Oobabooga works pretty well. I’m not sure to what extent it uses the CPU, but my GPU is at 100% during inference, so it seems to be doing most of the work.

[–] bia@lemmy.ml 1 points 2 years ago (2 children)

I've looked at that before. Do you use it with any UI?

[–] Hudsonius@lemmy.ml 3 points 2 years ago (1 children)

Yeah, it’s called Text Generation Web UI. If you check out the Oobabooga Git repo, it goes into good detail. From what I can tell it’s based on the AUTOMATIC1111 UI for Stable Diffusion.

[–] dragonfyre13 2 points 2 years ago

It's using Gradio, which is what auto1111 also uses. Both projects are pretty heavy modifications/extensions that push Gradio to its limits, but that's the package being used in both. Note that it also has an API (check out the --api flag, I believe), and depending on what you want to do, there are various UIs that can hook into the Text Gen Web UI (oobabooga) API.
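As a quick illustration, here's a minimal Python sketch of calling that API, assuming the legacy blocking endpoint at /api/v1/generate on the default port 5000 (payload fields and ports differ between versions, so check the examples shipped with the repo):

```python
# Minimal sketch: query Text Generation Web UI over HTTP.
# Assumes the server was launched with --api and exposes the legacy
# blocking endpoint /api/v1/generate on port 5000; payload fields
# vary by version, so treat this as illustrative only.
import requests

def generate(prompt: str, host: str = "http://localhost:5000") -> str:
    payload = {
        "prompt": prompt,
        "max_new_tokens": 200,  # cap on generated tokens
        "temperature": 0.7,     # sampling temperature
    }
    resp = requests.post(f"{host}/api/v1/generate", json=payload, timeout=120)
    resp.raise_for_status()
    # The legacy API wraps the completion as {"results": [{"text": "..."}]}
    return resp.json()["results"][0]["text"]

print(generate("Explain GPU layer offloading in one sentence."))
```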

[–] Equality_for_apples 1 points 2 years ago

Personally, I have nothing but issues with Ooba's UI, so I connect SillyTavern to it, or to KoboldCpp. Works great.

[–] gh0stcassette@lemmy.world 1 points 2 years ago* (last edited 2 years ago) (1 children)

Llama.cpp recently added CUDA acceleration for generation (previously only ingesting the prompt was GPU-accelerated), and in my experience it's faster than GPTQ unless you can fit absolutely 100% of the model in VRAM. If literally a single layer is offloaded to the CPU, GPTQ's performance immediately becomes like 30-40% worse than an equivalent CPU offload with llama.cpp.
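For illustration, a minimal sketch of that kind of GPU offload through the llama-cpp-python bindings (assuming they were installed with CUDA/cuBLAS support; the model path and layer count are placeholders):

```python
# Minimal sketch of GPU offload with the llama-cpp-python bindings
# (assumes they were built with CUDA/cuBLAS support). The model path
# is a placeholder; n_gpu_layers controls how many transformer layers
# are pushed to the GPU, so set it high enough to cover the whole
# model if it fits in VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/13b-q4.bin",  # hypothetical quantized model file
    n_gpu_layers=40,                   # offload (up to) 40 layers to the GPU
    n_ctx=2048,                        # context window
)

out = llm(
    "Q: Is llama.cpp a good fit for GPU-only inference?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```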

[–] bia@lemmy.ml 0 points 2 years ago (1 children)

Haven't been able to test that out, but saw the change. Particularly interesting for my use case.

[–] gh0stcassette@lemmy.world 1 points 2 years ago* (last edited 2 years ago)

What use case would that be?

I can get like 8 tokens/s running 13B models in Q3_K_L quantization on my laptop, about 2.2 for 33B, and 1.5 for 65B (I bought 64 GB of RAM to be able to run larger models lol). 7B was STUPID fast because the entire model fits inside my (8 GB) GPU, but 7B models mostly suck (wizard-vicuna-uncensored is decent, every other one I've tried was not).
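For a rough sense of what fits in VRAM, here's a back-of-envelope sketch; the bits-per-weight and KV-cache headroom numbers are ballpark assumptions, not measurements:

```python
# Back-of-envelope: does a quantized model fit on an 8 GB GPU?
# The bits-per-weight and KV-cache headroom figures are rough assumptions;
# real numbers depend on the quant format and context size.
def approx_model_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights alone, in GB."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

VRAM_GB = 8.0      # the 8 GB laptop GPU from the comment above
HEADROOM_GB = 1.5  # rough allowance for KV cache and scratch buffers

for size_b in (7, 13, 33, 65):
    gb = approx_model_gb(size_b, 4.5)  # ~4.5 bits/weight assumed for a 4-bit quant
    verdict = "fits fully" if gb + HEADROOM_GB < VRAM_GB else "needs CPU offload"
    print(f"{size_b:>2}B @ ~4.5 bpw: ~{gb:.1f} GB of weights -> {verdict}")
```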
