this post was submitted on 05 Feb 2024
64 points (82.0% liked)

Technology


Sam Altman says ChatGPT should be 'much less lazy now'

ChatGPT users previously complained that the chatbot was slacking off and refusing to complete some tasks.

[–] [email protected] 25 points 9 months ago (3 children)

PSA: give open-source LLMs a try, folks. If you're on Linux or macOS, ollama makes it incredibly easy to try most of the popular open-source LLMs like Mistral 7B, Mixtral 8x7B, CodeLlama, etc. It's obviously faster if you have a CUDA- or ROCm-capable GPU, but it also works in CPU mode (albeit slowly if the model is huge), provided you have enough RAM.
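For anyone who wants to see how low the barrier is, a minimal sketch (the install script URL and model names come from ollama's public docs; which models you pull is a matter of taste and hardware):

```shell
# install ollama on Linux (macOS users can grab the app from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# pull a model and drop into an interactive chat
ollama run mistral        # Mistral 7B
# ollama run mixtral      # Mixtral 8x7B -- needs far more RAM/VRAM
# ollama run codellama    # CodeLlama
```

The first `ollama run` of a given model downloads it (several GB), so the initial start is slow.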

You can combine that with a UI like ollama-webui or a text-based UI like oterm.

[–] [email protected] 8 points 9 months ago (1 children)

Or use Jan. Really nice GUI app to use open source LLMs.

[–] [email protected] 2 points 9 months ago

Seconded - I was playing with this last week. The most basic model is hilariously "bad", and the larger 30GB models are OK but kill my RAM and take forever to respond. I put "bad" in quotes because, frankly, LLMs are like magic to me and I'm grateful they even exist at the level they do - just not at the level OpenAI is at right now.

Very promising - excited to see that LLMs aren't solely locked behind paywalls and I can't wait to see where some of these go in the next few years!

[–] [email protected] 2 points 9 months ago (1 children)

I spent the better part of a day trying to set up llama.cpp with "wizard vicuna unrestricted" and was unable to, and I've got quite a tech background. This was at someone's suggestion; I'm hoping yours is easier, lol.

[–] [email protected] 0 points 9 months ago (1 children)

ollama should be much easier to set up!
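For comparison with the llama.cpp attempt above: the ollama model library also carries an uncensored Wizard Vicuna build (assuming that's the model meant by "wizard vicuna unrestricted"), so trying it is a one-liner:

```shell
# downloads the model on first run, then opens an interactive prompt
ollama run wizard-vicuna-uncensored
```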

[–] [email protected] 2 points 9 months ago

Thanks, lol. I'm looking forward to it so I can stop contributing to OpenAI.

[–] [email protected] 2 points 9 months ago (1 children)

ROCm? Is that even supported now? Last time I checked it was still a dumpster fire. What are the RAM and VRAM requirements for Mixtral 8x7B?

[–] [email protected] 0 points 9 months ago* (last edited 9 months ago) (1 children)

ROCm is decent right now; I can do deep learning stuff and CUDA-style GPU programming with it on an AMD APU. However, ollama doesn't yet work out of the box with APUs, though users report that it works with dedicated AMD GPUs.
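As a sketch of what APU users currently resort to: ROCm often refuses to initialize on integrated GPUs unless the reported gfx target is overridden via an environment variable. The value below is only an example for an RDNA 2 part; the right value depends on your specific APU, and this is an unsupported workaround, not an official feature:

```shell
# force ROCm to treat the iGPU as a supported gfx1030 target (RDNA 2 example)
HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
```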

As for Mixtral 8x7B, ~~I couldn't run it on a system with 32GB of RAM and an RTX 2070S with 8GB of VRAM; I'll probably try with another system soon~~ [EDIT: I actually got the default version (mixtral:instruct) running with 32GB of RAM and 8GB of VRAM (RTX 2070S).] That same system also runs CodeLlama-34B fine.

So far I'm happy with Mistral 7B: it's extremely fast on my RTX 2070S, and not really slow when running in CPU mode on an AMD Ryzen 7. Its speed is okay-ish (~1 token/sec) in CPU mode on an old ThinkPad T480 with an 8th-gen i5 CPU.
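If you'd rather script against a local model than chat interactively, ollama also exposes an HTTP API (on port 11434 by default). A minimal sketch, assuming the server is running and Mistral 7B is already pulled:

```shell
# single non-streamed completion from the local ollama server
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Explain mixture-of-experts in one sentence.",
  "stream": false
}'
```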

[–] [email protected] 2 points 9 months ago

I have a Ryzen APU, so I was curious. I tried fiddling with it yesterday and managed to bump the "VRAM" allocation up to 16GB. But installing xformers and flash-attention for LLM support on iGPUs is not officially supported, and I couldn't get anything past PyTorch installed. It's a step forward for sure, but it still needs lots of work.