this post was submitted on 17 Jun 2023

LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal attacks on community members, i.e., no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e., no comparing the usefulness of models to that of NFTs, no claiming that the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e., no statements such as "LLMs are basically just simple text prediction, like what your phone keyboard autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.


New models posted by TheBloke, 7B to 65B, something for everyone!

Info from the creators:

A stunning arrival! The fully upgraded Robin Series V2 language model is ready and eagerly awaiting your exploration.

This is not just a model upgrade, but the crystallization of wisdom from our research and development team. In the new version, Robin Series V2 has performed excellently among various open-source models, defeating well-known models such as Falcon, LLaMA, StableLM, RedPajama, and MPT.

Specifically, we have carried out in-depth fine-tuning of the entire LLaMA series, including 7B, 13B, 33B, and 65B, all of which achieved pleasing results. Robin-7B scored 51.7 on the OpenLLM benchmark, and Robin-13B reached as high as 59.1, ranking sixth and surpassing many 33B models. Robin-33B and Robin-65B are even more impressive, scoring 64.1 and 65.2 respectively, firmly securing the top positions.
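For anyone wanting to kick the tires locally, here's a minimal sketch of loading one of these fine-tunes with Hugging Face transformers. The repo id and prompt template below are assumptions on my part; check TheBloke's Hugging Face profile for the exact names, and note that the GPTQ/GGML quantizations need their own loaders (auto-gptq, llama.cpp).

```python
# Minimal sketch: loading a Robin fine-tune with Hugging Face transformers.
# The repo id is hypothetical -- verify the actual name on TheBloke's profile.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/robin-7B-v2-fp16"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # requires `accelerate`; spreads layers over GPU/CPU
    torch_dtype="auto",   # keep the checkpoint's native precision
)

# Assumed Robin-style prompt format; adjust to whatever the model card specifies.
prompt = "###Human: What should I look for in a local language model?###Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```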

top 3 comments
[–] [email protected] 4 points 2 years ago (2 children)

This is not just a model upgrade, but the crystallization of wisdom from our research and development team.

So much marketing and no basic information, like what dataset was used.

[–] noneabove1182 1 points 2 years ago

Lmao right? Wish the devs provided way more info... I feel like things are just moving too fast for any documentation (which is equal parts sad and scary).