this post was submitted on 12 Jun 2023
26 points (100.0% liked)

LocalLLaMA

2825 readers

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

founded 2 years ago

Let's talk about our experiences working with different models, either known or lesser-known.

Which locally run language models have you tried out? Share your insights, challenges, or anything you found interesting during your encounters with those models.

[–] Kerfuffle 2 points 2 years ago

I was pretty impressed by guanaco-65B, especially by how it remained coherent even well past the context limit (using llama.cpp's context-swap feature). You can see the second story is definitely longer than 2,048 tokens.
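For anyone curious, the basic idea behind that context handling can be sketched roughly like this. This is just an illustration of the general "keep the prompt, drop the oldest half of the rest" approach; the function and parameter names here are made up, not llama.cpp's actual API:

```python
# Rough sketch of a llama.cpp-style "context swap" (illustrative only):
# when the token buffer would exceed the context window, keep a protected
# prefix of n_keep tokens (e.g. the original prompt) plus roughly the most
# recent half of the remaining window, then keep generating from there.

def context_swap(tokens, n_ctx=2048, n_keep=0):
    """Return a trimmed token list that fits back inside n_ctx."""
    if len(tokens) <= n_ctx:
        return list(tokens)  # still fits; nothing to do
    # Keep the protected prefix plus the last (n_ctx - n_keep) // 2 tokens.
    n_recent = (n_ctx - n_keep) // 2
    return list(tokens[:n_keep]) + list(tokens[-n_recent:])

# Example: 3,000 generated tokens, 2,048-token window, 4-token prompt kept.
trimmed = context_swap(list(range(3000)), n_ctx=2048, n_keep=4)
```

The model never literally sees more than 2,048 tokens at once; it just keeps getting a window that still starts with the prompt and ends with the most recent text, which is why the output can stay coherent well past the nominal limit.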