this post was submitted on 09 Aug 2023
18 points (100.0% liked)

LocalLLaMA


Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

submitted 1 year ago* (last edited 1 year ago) by noneabove1182 to c/localllama
 

These are the full weights; the quants are already incoming from TheBloke. I'll update this post when they're fully uploaded.

From the author(s):

WizardLM-70B V1.0 achieves a substantial and comprehensive improvement in coding, mathematical reasoning, and open-domain conversation capabilities.

This model is license-friendly and follows the same license as Meta's Llama-2.

The next version is in training and will be released together with our new paper soon.

For more details, please refer to:

Model weight: https://huggingface.co/WizardLM/WizardLM-70B-V1.0

Demo and Github: https://github.com/nlpxucan/WizardLM

Twitter: https://twitter.com/WizardLM_AI
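
If you want to poke at the full-precision weights directly, a minimal sketch with Hugging Face transformers looks something like the below. The repo id is from the link above; the dtype, device_map, and prompt are just illustrative assumptions, and a 70B model in fp16 needs on the order of 140 GB of memory, so most people will want the quants instead.

```python
# Minimal sketch: load the full WizardLM-70B-V1.0 weights with transformers.
# Assumes you have enough GPU/CPU memory for a 70B model (~140 GB in fp16);
# dtype, device_map, and the prompt are illustrative choices, not from the post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardLM-70B-V1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to roughly halve memory use
    device_map="auto",           # spread layers across available GPUs/CPU
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```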

GGML quant posted: https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GGML
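
Once you've grabbed one of TheBloke's GGML files, a rough sketch of running it locally with the llama-cpp-python bindings might look like this (the filename, context size, and GPU layer count are placeholders; adjust them for whichever quant you download and whatever hardware you have):

```python
# Rough sketch: run a GGML quant of WizardLM-70B locally via llama-cpp-python.
# The model filename is a placeholder for whichever quant file you download;
# n_ctx and n_gpu_layers are illustrative and depend on your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="./wizardlm-70b.q4_K_M.bin",  # placeholder filename
    n_ctx=2048,        # context window
    n_gpu_layers=40,   # offload layers to GPU if you have VRAM; 0 = CPU only
)

result = llm(
    "Explain the difference between a list and a tuple in Python.",
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```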

GPTQ quant repo posted, but still empty (GPTQ quants take a lot longer to make): https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ
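
For a back-of-the-envelope sense of why the quants matter on home hardware, here's a rough size estimate. The bits-per-weight figures (~4.5 for q4_0-style quants, ~2.6 for q2_K) are approximations of my own and ignore context/KV-cache overhead, but they show how a 70B model can squeeze onto a 64 GB machine:

```python
# Back-of-the-envelope memory estimates for a 70B-parameter model.
# The bits-per-weight figures are approximate assumptions for GGML quant types
# and ignore context/KV-cache overhead.
params = 70e9

for name, bits in [("fp16", 16), ("q4_0 (~4.5 bits)", 4.5), ("q2_K (~2.6 bits)", 2.6)]:
    gigabytes = params * bits / 8 / 1e9
    print(f"{name:>18}: ~{gigabytes:.0f} GB")

# Roughly: fp16 ~140 GB, q4_0 ~39 GB, q2_K ~23 GB.
# That's why a 64 GB box suddenly looks useful.
```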

top 3 comments
[–] [email protected] 4 points 1 year ago

Me a few months ago when upgrading my computer: pff, who needs 64GB of RAM? Seems like a total waste

Me after realising you can run LLMs at home: cries

[–] AsAnAILanguageModel 2 points 1 year ago (1 children)

Tried the q2 GGML quant and it seems to be very good! First tests make it seem as good as airoboros, which is my current favorite.

[–] noneabove1182 1 points 1 year ago

Agreed, it seems quite capable. I haven't tested all the way down to q2 to verify, but I'm not surprised.