As far as I know, they are different types of quantization.
The main difference to keep in mind as an end user is that, currently, GPTQ needs the full model loaded into VRAM (your GPU's memory), while GGML can split the layers between system RAM and VRAM.
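For example, here's a minimal sketch of that layer splitting using the llama-cpp-python bindings (the model path is hypothetical, and exact parameter names can vary between versions):

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed
# (pip install llama-cpp-python); the model path below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-7b.ggmlv3.q4_0.bin",  # hypothetical GGML file
    n_gpu_layers=20,  # offload 20 layers to VRAM; the rest stay in system RAM
)

# n_gpu_layers=0 would run fully on CPU; a large value offloads everything.
output = llm("Q: What is quantization? A:", max_tokens=64)
print(output["choices"][0]["text"])
```

With GPTQ there's no equivalent knob: the whole quantized model has to fit in VRAM.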
Performance-wise, I think it depends on the foundation model used. Some time ago someone (The_Bloke?) did some testing, but I read it on Reddit and don't feel like going to search for it.
There's an interesting post on Hugging Face (Link), but it's pretty old and things may have changed since then (for example, GGML has gone through several format iterations).
I'm just going by memory, so take everything I wrote with a pinch of salt. I've never personally used GPTQ.
Also, llama.cpp offers very fast performance with GGML models compared to running them through transformers, and it's sometimes even faster than ExLlama.
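If you want to check the speed claims on your own hardware, a rough tokens-per-second measurement is easy to put together (again just a sketch assuming llama-cpp-python; the model path is hypothetical):

```python
# Rough throughput check, assuming llama-cpp-python; the path is hypothetical.
import time
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-7b.ggmlv3.q4_0.bin", n_gpu_layers=20)

start = time.perf_counter()
out = llm("Once upon a time", max_tokens=128)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]  # tokens actually produced
print(f"~{generated / elapsed:.1f} tokens/sec")
```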