this post was submitted on 23 Nov 2024
354 points (89.2% liked)


I'm usually the one saying "AI is already as good as it's gonna get, for a long while."

This article, in contrast, quotes the folks building the next generation of AI, and they're saying the same thing.

[–] [email protected] 100 points 4 days ago (17 children)

It's absurd that some of the larger LLMs now use hundreds of billions of parameters (e.g. Llama 3.1 with 405B).

This doesn't really seem like a smart use of resources if you need several of the largest GPUs available just to run one conversation.
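
For a rough sense of scale, here's a back-of-the-envelope sketch of the memory needed just to hold 405B weights (weights only, so it understates the real requirement; the 80 GB per card is an assumption, e.g. H100-class hardware):

```python
# Back-of-the-envelope memory for a 405B-parameter model's weights alone
# (no KV cache, activations, or serving overhead, so these are lower bounds).

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """GB needed to store the weights at a given precision."""
    return params_billions * bits_per_param / 8  # 1e9 params and 1e9 bytes/GB cancel out

for bits, label in [(16, "fp16/bf16"), (8, "int8"), (4, "int4")]:
    gb = weight_memory_gb(405, bits)
    print(f"405B @ {label:>9}: ~{gb:,.0f} GB  (~{gb / 80:.0f}x 80 GB GPUs)")
```

Even at 4-bit you're looking at a couple hundred GB of weights before you serve a single token.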

[–] 31337 6 points 2 days ago* (last edited 2 days ago) (3 children)

Larger models train faster (they need less compute to reach a given loss), for reasons that aren't fully understood. These large models can then be used as teachers to train smaller models more efficiently. I've used Qwen 14B (14 billion parameters, quantized to 6-bit integers), and it's not too much worse than these very large models.
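
If anyone wants to try it, here's a minimal sketch of running a 6-bit (Q6_K) quant locally with llama-cpp-python; the model path below is just an example, point it at whatever GGUF you downloaded:

```python
# Minimal sketch: chat with a 6-bit-quantized Qwen 14B via llama-cpp-python.
# The GGUF filename below is illustrative; use whichever Q6_K quant you have locally.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen2.5-14b-instruct-q6_k.gguf",  # hypothetical local path
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload every layer to the GPU if it fits
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain model distillation in two sentences."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```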

Lately, I've been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the "knowledge" they seem to retain.
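
That number falls straight out of the parameter count times the bit width:

```python
# Where ~10.5 GB comes from: 14 billion parameters at 6 bits each.
params = 14e9
bits_per_param = 6
print(f"{params * bits_per_param / 8 / 1e9:.1f} GB")  # 10.5 GB, before quantization overhead
```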

[–] [email protected] 1 points 1 day ago (2 children)

I don't think Qwen was trained with distillation, was it?

It would be awesome if it was.

Also, you should try Supernova Medius, which is Qwen 14B with some "distillation" from other models.

[–] 31337 1 points 1 day ago (1 children)

Hmm. I just assumed 14B was distilled from 72B, because that's what I thought Llama was doing, and it would just make sense. On further research, it's not clear whether Llama used the traditional teacher method or just trained the smaller models on synthetic data generated by a large model. I suppose training smaller models on a larger amount of data generated by larger models is similar, though. It does seem like Qwen was also trained on synthetic data, because it sometimes thinks it's Claude, lol.
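
For anyone curious, the "traditional teacher method" usually means a loss along these lines (just a sketch with made-up temperature and weighting, not any lab's actual recipe):

```python
# Sketch of classic knowledge distillation: the student matches the teacher's
# softened output distribution (KL term) on top of normal cross-entropy.
# Shapes: logits are [N, vocab_size], labels are [N]. T and alpha are illustrative.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                     # soft targets from the teacher
    hard = F.cross_entropy(student_logits, labels)  # ordinary next-token loss
    return alpha * soft + (1 - alpha) * hard
```

The synthetic-data route is simpler: generate text with the big model and do ordinary fine-tuning on it, no teacher logits involved, which is why the line between the two gets blurry.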

Thanks for the tip on Medius. Just tried it out, and it does seem better than Qwen 14B.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

Llama 3.1 is not even a "true" distillation either, but it's kinda complicated, like you said.

Yeah, Qwen undoubtedly has synthetic data, lol. It's even in the base model, which isn't really their "fault," as it's presumably part of the web scrape.
