AsAnAILanguageModel

joined 1 year ago
10
Mask Quest on Steam (store.steampowered.com)
submitted 1 month ago by AsAnAILanguageModel to c/games
[–] AsAnAILanguageModel 2 points 5 months ago

The latest Claude is even slightly better than GPT-4o, and you can use it for free.

[–] AsAnAILanguageModel 2 points 7 months ago

UX is not primarily about how your project looks, but about how easy it is for humans to interface with it.

On the other hand, user interfaces that are difficult to read or have misleading layouts can seem ugly.

I can recommend the book “The Gamer’s Brain” by Celia Hodent. Maybe this blog post of hers can give you a rough idea of what the book covers. Although she focuses on games, the lessons are universal.

[–] AsAnAILanguageModel 26 points 9 months ago (2 children)

It is often a little depressing for Italian women when they move to Northern Europe, because the lack of people aggressively hitting on them makes them feel unattractive.

[–] AsAnAILanguageModel 4 points 1 year ago

Yeah I think it’s mostly a meme now. Either you read comments from people who loved it, or jokes from people who haven’t played it. I had no expectations before playing it and liked it so much that I even preordered the DLC to show my support. (I don’t care about the preorder bonus, and I don’t think preordering games is reasonable, but I’m gonna play it right away anyway, so it doesn’t matter in this case.)

 

Meta releases SeamlessM4T, a general multilingual speech/text model claimed to surpass OpenAI's Whisper. It's available on GitHub, and everything can be used for free in a non-commercial setting.

Model Features:

  • Automatic speech recognition for ~100 languages.
  • Speech-to-text translation for ~100 input/output languages.
  • Speech-to-speech translation for ~100 input languages and 35 output languages.
  • Text-to-text and text-to-speech translation for nearly 100 languages.
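
As a hedged sketch of what using one of these tasks could look like through a Hugging Face transformers integration (the checkpoint name, class names, and input file are my assumptions, not from the announcement; the official seamless_communication repo has its own API):

```python
# Sketch: speech-to-text translation with SeamlessM4T via transformers.
# Checkpoint id and API details are assumptions; check the model card.
import torchaudio
from transformers import AutoProcessor, SeamlessM4TModel

checkpoint = "facebook/hf-seamless-m4t-medium"  # assumed repo id
processor = AutoProcessor.from_pretrained(checkpoint)
model = SeamlessM4TModel.from_pretrained(checkpoint)

# The model expects 16 kHz mono audio, so resample first.
waveform, orig_freq = torchaudio.load("example.wav")  # placeholder file
waveform = torchaudio.functional.resample(waveform, orig_freq, 16_000)

inputs = processor(audios=waveform, return_tensors="pt")
tokens = model.generate(**inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(tokens[0].tolist()[0], skip_special_tokens=True))
```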

Dataset:

  • SeamlessAlign: Open multimodal translation dataset with 270,000 hours of speech and text alignments.

Technical Insights:

  • Utilizes a multilingual and multimodal text embedding space for 200 languages.
  • Applied a teacher-student approach to extend this embedding space to the speech modality, covering 36 languages.
  • Mining performed on publicly available repositories resulted in 443,000 hours of speech aligned with text and 29,000 hours of speech-to-speech alignments.
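
To make the teacher-student idea concrete, here is a purely conceptual toy sketch (my own illustration, not Meta's code): a frozen text encoder defines the target embedding space, and a speech encoder is trained to place each utterance at the same point as its transcription.

```python
import torch
import torch.nn as nn

EMB_DIM = 512

# Stand-in encoders; the real architectures are much larger.
teacher_text_encoder = nn.Linear(300, EMB_DIM)    # frozen teacher
student_speech_encoder = nn.Linear(80, EMB_DIM)   # trainable student

for p in teacher_text_encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(student_speech_encoder.parameters(), lr=1e-4)

# One training step on a dummy batch of (speech, transcription) features.
speech_feats = torch.randn(8, 80)   # e.g. pooled filterbank features
text_feats = torch.randn(8, 300)    # e.g. pooled token embeddings

target = teacher_text_encoder(text_feats)    # fixed target embeddings
pred = student_speech_encoder(speech_feats)  # student predictions
loss = nn.functional.mse_loss(pred, target)  # pull student toward teacher
loss.backward()
optimizer.step()
```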

Toxicity Filter:

  • The model identifies toxic words from speech inputs/outputs and filters unbalanced toxicity in training data.
  • The demo detects toxicity in both input and output. If toxicity is only detected in the output, a warning is included and the output is not shown.
  • Given how impaired llama2-chat has been by these kinds of filters, it's unclear how useful these models are in a general setting.
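
The demo's withholding behavior described above could be sketched like this toy version (my own illustration with a placeholder word list, not Meta's actual classifier):

```python
# Toy sketch: withhold the output if it contains toxicity that was not
# already present in the input. The word list is a placeholder.
TOXIC_WORDS = {"badword1", "badword2"}

def toxicity(text: str) -> int:
    """Count toxic words in a text (crude stand-in for a classifier)."""
    return sum(word in TOXIC_WORDS for word in text.lower().split())

def filter_translation(source: str, translation: str) -> str:
    if toxicity(translation) > toxicity(source):
        return "[warning: toxicity detected in output, result withheld]"
    return translation
```
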
[–] AsAnAILanguageModel 2 points 1 year ago

I tried the demo for a bit and it makes mistakes every time, but gets enough things right to be promising! I wonder how this will evolve in the coming months.

 

Hugging Face released IDEFICS, an 80B open-access visual language model replicating DeepMind's unreleased Flamingo. Built entirely on public data, it's the first of its size available openly. Part of its training utilized OBELICS, a dataset with 141M web pages, 353M images, and 115B text tokens from Common Crawl.
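
For reference, a minimal hedged sketch of loading it through transformers (using the smaller 9B instruct checkpoint to keep memory needs sane; the class and checkpoint names are my best guesses and should be checked against the release, and "example.jpg" is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-9b-instruct"  # assumed repo id
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

# Prompts interleave text and images in a single list.
prompts = [[
    "User: What is in this image?",
    Image.open("example.jpg"),  # placeholder image
    "<end_of_utterance>",
    "\nAssistant:",
]]
inputs = processor(prompts, return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```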

 

[–] AsAnAILanguageModel 1 points 1 year ago

That looks really cool! Is there a demo one could try somewhere?

[–] AsAnAILanguageModel 1 points 1 year ago

Thanks, it's great to have more multilingual models! It's a little surprising that RLHF outperforms SFT so consistently in their experiments. I guess it's worth it after all.

[–] AsAnAILanguageModel 2 points 1 year ago

Impressive! There are more examples here and the code repository here.

[–] AsAnAILanguageModel 2 points 1 year ago (1 children)

Tried the q2 ggml and it seems to be very good! First tests make it seem as good as airoboros, which is my current favorite.

 

Stability AI released three new 3b models for coding:

  • stablecode-instruct-alpha-3b (context length 4k)
  • stablecode-completion-alpha-3b-4k (context length 4k)
  • stablecode-completion-alpha-3b (context length 16k)

I didn't try any of them yet, since I'm waiting for the GGML files to be supported by llama.cpp, but I think especially the 16k model seems interesting. If anyone wants to share their experience with it, I'd be happy to hear it!
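
In the meantime, here's a hedged sketch of trying the 16k model directly through transformers (the repo id and dtype choices are my assumptions; check the model card for the recommended usage):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "stabilityai/stablecode-completion-alpha-3b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# Complete a code snippet; low temperature keeps completions focused.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.2)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```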

[–] AsAnAILanguageModel 2 points 1 year ago

Without mps it uses a lot more memory, because fp16 is not supported on the cpu backend. However, I tried it and noticed that there was an update pushed to the repository that split the model into several parts. It seems like I'm not getting any memory leaks now, even with mps as backend. Not sure why, but maybe it needs less RAM if the weights can be converted part by part. Time to test this model more I guess!

[–] AsAnAILanguageModel 1 points 1 year ago (2 children)

By MPS I mean "Metal Performance Shaders": it's the backend that lets PyTorch use Apple's Metal API and take advantage of Apple-silicon-specific optimizations. I actually think it's not unlikely that the issue is with PyTorch. MPS support is still in beta, and there was a bug that caused a lot of models to output gibberish when I used it. That bug was open for a year, and they only just fixed it in a recent nightly release, which is why I even bothered to give this model a try.
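
For anyone who wants to try it themselves, selecting that backend in PyTorch is just the standard device dance:

```python
import torch
import torch.nn as nn

# Use the Metal (MPS) backend on Apple silicon when available, else CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = nn.Linear(4, 4).to(device)    # any module moves the same way
x = torch.randn(1, 4, device=device)
print(model(x))
```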

That being said, I think one should generally be cautious about what they run on their computers, so I appreciate that you started this discussion.

[–] AsAnAILanguageModel 3 points 1 year ago (4 children)

I think that’s a very relevant comment, and I also got spooked by this before I ran it. But I noticed that the GitHub repo and the Hugging Face repo aren’t the same. You can find the remote code in the Hugging Face repo. I also briefly skimmed the code for potential causes of the memory leak, but it’s not clear to me what’s causing it. It could also be PyTorch or one of the Hugging Face libraries, since MPS support is still very much in beta.

14
submitted 1 year ago* (last edited 1 year ago) by AsAnAILanguageModel to c/localllama
 

I think it's a good idea to share experiences about LLMs here, since benchmarks can only give a very rough overview of how well a model performs.

So please share how much you're using LLMs, what you use them for, and how well they perform at those tasks. For example, here are my answers to these questions:

Usage

I use LLMs daily for work and for random questions that I would previously use web search for.

I mainly use LLMs for reasoning heavy tasks, such as assisting with math or programming. Other frequent tasks include proofreading, helping with bureaucracy, or assisting with writing when it matters.

Models

The one I find most impressive at the moment is TheBloke/airoboros-l2-70B-gpt4-1.4.1-GGML/airoboros-l2-70b-gpt4-1.4.1.ggmlv3.q2_K.bin. It often manages to reason correctly on questions where most other models I tried fail, even though most humans wouldn't have trouble with them. I was surprised that something using only 2.5 bits per weight on average could produce anything but garbage. The downside is that loading times are rather long, so I wouldn't ask it a question if I didn't want to wait. (Time to first token is almost 50s!) I'd love to hear how larger quantizations or the unquantized version perform.
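
For a back-of-the-envelope sense of why 2.5 bits per weight is so small (my own arithmetic, ignoring per-block scales and other quantization overhead):

```python
# Rough size estimate for a 70B-parameter model at ~2.5 bits per weight.
params = 70e9
bits_per_weight = 2.5
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.0f} GB")  # ~22 GB, vs ~140 GB for the same model in fp16
```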

Another one that made a good impression on me is Qwen-7B-Chat (demo). It manages to correctly answer some questions where even some llama2-70b finetunes fail, ~~but so far I'm getting memory leaks when running it on my M1 mac in fp16 mode, so I didn't use it a lot.~~ (this has been fixed it seems!)

All other models I briefly tried were not too useful. It's nice to be able to run them locally, but they were so much worse than ChatGPT that it's often not even worth considering them.
