Let's talk about our experiences working with different models, either known or lesser-known.

Which locally run language models have you tried out? Share your insights, challenges, or anything you found interesting during your encounters with those models.

actuallyacat · 8 points · 2 years ago (edited)

The wizard-vicuna family is my favorite; it successfully combines lucidity with creativity. Wizard-vicuna-30b is competitive with guanaco-65b in most cases while being subjectively more fun. I hope we get a 65b version, or a Falcon 40B one.

I've been generally unimpressed with models advertised as good for storytelling or roleplay; they tend to be incoherent. It's much easier to get wizard-vicuna to write fluent prose than it is to get one of those to stop mixing up characters or rules. I think there might be some sort of poison pill in the Pygmalion dataset; it's the common factor in all the models that didn't work well for me.

[email protected] · 2 points · 2 years ago

What setup do you have? What prompt/instruct formatting?

actuallyacat · 6 points · 2 years ago

W-V is supposedly trained for "USER:/ASSISTANT:", but I've found it flexible and able to work with anything that's consistent. For creative writing I'll often use "USER:/STORY:". More than two such tags also work: for example, I did an RPG-style thing with three characters plus an omniscient narrator, just by describing each of them with their tag in the prompt, and it worked nearly flawlessly. Very impressive, actually.
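
To make the multi-tag idea concrete, here's a minimal sketch of that kind of prompt loop using llama-cpp-python. It's an illustration of the technique described above, not the commenter's actual setup: the model filename, the character tags (ALICE/BOB/NARRATOR), and the token limit are all placeholder assumptions.

```python
# Sketch of multi-tag prompting, assuming llama-cpp-python is installed
# and a local wizard-vicuna model file exists (path is a placeholder).
from llama_cpp import Llama

llm = Llama(model_path="./wizard-vicuna-13B.ggmlv3.q4_0.bin")

# Describe each tag once in the preamble, then use the tags consistently.
preamble = (
    "A fantasy adventure. ALICE: and BOB: are the two player characters, "
    "and NARRATOR: is an omniscient narrator describing the world.\n\n"
)

TAGS = ["ALICE", "BOB", "NARRATOR"]

def speak(history: str, tag: str, max_tokens: int = 256) -> str:
    """Append `tag` and let the model complete that character's turn,
    cutting the generation off as soon as it starts another tag."""
    prompt = preamble + history + f"{tag}:"
    out = llm(
        prompt,
        max_tokens=max_tokens,
        stop=[f"{t}:" for t in TAGS],  # stop at the next character tag
    )
    return out["choices"][0]["text"].strip()

history = "NARRATOR: The party stands at the mouth of a dark cave.\n"
history += "ALICE: " + speak(history, "ALICE") + "\n"
print(history)
```

The stop sequences are the important design choice here: by halting generation at the next tag, each call produces exactly one character's turn, which helps with the character-mixing problem mentioned earlier in the thread.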