this post was submitted on 14 Jun 2023
22 points (100.0% liked)

LocalLLaMA

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

Promising stuff from their repo, claiming "exceptional performance, achieving a [HumanEval] pass@1 score of 57.3, surpassing the open-source SOTA by approximately 20 points."

https://github.com/nlpxucan/WizardLM
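
For context, the pass@1 number above is the standard HumanEval metric from the Codex evaluation (the thread doesn't define it, so this is just the usual definition): generate n samples per problem, count the c samples that pass the unit tests, and estimate

$$\text{pass@}k = \mathbb{E}_{\text{problems}}\left[1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}\right]$$

With k = 1 this reduces to the average per-problem success rate, so a score of 57.3 means roughly 57% of HumanEval problems are solved by the first sampled completion.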

top 12 comments
[–] [email protected] 4 points 1 year ago (1 children)

From the Twitter post

New StarCoder coding model from @WizardLM_AI

"WizardCoder-15B-v1.0 model achieves 57.3 pass@1 on the HumanEval Benchmarks .. 22.3 points higher than the SOTA open-source Code LLMs."

My quants: https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GGML https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GPTQ

Original: https://huggingface.co/WizardLM/WizardCoder-15B-V1.0

11:21 AM · Jun 14, 2023
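
If anyone just wants to grab one of those GGML files without cloning the whole repo, here's a minimal sketch with huggingface_hub (the quant filename below is a placeholder, check the repo's file list for the real names):

```python
from huggingface_hub import hf_hub_download

# Download a single quantized file from the GGML repo linked above.
# NOTE: the filename is a placeholder; pick an actual .bin from the repo's "Files" tab.
path = hf_hub_download(
    repo_id="TheBloke/WizardCoder-15B-1.0-GGML",
    filename="wizardcoder-15b-1.0.ggmlv3.q4_0.bin",  # placeholder quant name
)
print(path)  # local cache path of the downloaded file
```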

[–] [email protected] 2 points 1 year ago (1 children)

On TheBloke's Hugging Face repo it says the GGML quants are not compatible with llama.cpp. Anyone know why?

[–] Kerfuffle 4 points 1 year ago* (last edited 1 year ago) (1 children)

It's a different type of model. llama.cpp only supports LLaMA models, while GGML (the machine learning library llama.cpp is built on) has examples for various other architectures: WizardCoder (StarCoder-based), MPT, BLOOM, and probably very soon Falcon. Some separate projects also use GGML to support other models, including some of the ones I listed. For example, the Rust "llm" project can run LLaMA, MPT, and BLOOM models.
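
If you'd rather call a starcoder-architecture GGML file from Python than build one of the GGML example binaries, something like ctransformers can load it. That library isn't mentioned anywhere in this thread and the repo id and settings here are my assumptions, so treat this as a sketch:

```python
from ctransformers import AutoModelForCausalLM

# Load a GGML quant of WizardCoder; model_type selects the architecture,
# since this is StarCoder-based rather than LLaMA-based.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/WizardCoder-15B-1.0-GGML",  # assumed repo id from the comment above
    model_type="starcoder",
)

print(llm("def fizz_buzz(n):", max_new_tokens=128))
```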

[–] noneabove1182 1 points 1 year ago (1 children)

Looks like gpt4all supports it. I thought it was based on LLaMA for some reason; going to have to give it a try.

[–] Kerfuffle 1 points 1 year ago (1 children)

It looks like a frontend that just bundles a bunch of stuff together. Oobabooga's webui is similar: you can run stuff with llama.cpp, GPTQ, etc. Which models and features are supported depends on how the frontend manages that stuff. There are also forks of llama.cpp like koboldcpp which may support different models/features/formats (I know koboldcpp supports some older GGML file formats that llama.cpp broke compatibility with).

[–] noneabove1182 1 points 1 year ago (1 children)

Oh wait, does ooba support this? Nvm then, I'm enjoying using that. I'm just a little lost sometimes haha

[–] Kerfuffle 2 points 1 year ago (1 children)

I don't know if it does or doesn't; I was just saying those two projects seemed similar: presenting a frontend for running inference on models while the user doesn't necessarily have to know or care what backend is used.

[–] noneabove1182 2 points 1 year ago

Gotcha. koboldcpp seems to be able to run it; all of it is only a tiny bit confusing :D

[–] [email protected] 2 points 1 year ago (1 children)

So if I understand correctly, it's fine-tuned for coding? Or what exactly is this Wizard model doing?

[–] [email protected] 4 points 1 year ago (1 children)

It's StarCoder fine-tuned on a new Wizard instruct dataset optimized for coding, so it follows instruct-style prompt formatting on top of the StarCoder base model.
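
For what it's worth, the instruct formatting in question is the Alpaca-style template shown on the WizardCoder model card (quoted from memory here, so double-check the card before relying on it):

```python
# Alpaca-style instruct template used by WizardCoder (per the model card);
# the instruction text itself is just an example.
instruction = "Write a Python function that reverses a string."

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
```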

[–] [email protected] 2 points 1 year ago (1 children)

That sounds great, honestly! Does that work with the newest GGML yet?

[–] [email protected] 3 points 1 year ago

Doesn't look like it; hopefully it will someday. I'm stoked to try this one out.