this post was submitted on 09 Jun 2024
60 points (98.4% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 20 points 5 months ago (1 children)

Yes, we know (there are papers about it) that for LLMs, every increase in capability requires exponentially more training data. But don't worry, we've only consumed half the world's data training LLMs, so there are still plenty of places to go ;).
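
The papers usually cited for this claim are the neural scaling-law ones (Kaplan et al. 2020; Hoffmann et al. 2022). Below is a minimal sketch of the Chinchilla loss fit from the latter, using its published constants; fixing the model size at 70B and the token counts in the loop are illustrative assumptions, not anything from the thread.

```python
# Sketch of why each capability gain needs much more data, using the
# Chinchilla-style scaling law L(N, D) = E + A/N^alpha + B/D^beta
# (Hoffmann et al. 2022). Constants are the published fits; holding
# model size N fixed is a simplification for this illustration.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

N = 70e9  # fix model size at 70B parameters
for D in [1e9, 1e10, 1e11, 1e12, 1e13]:
    print(f"{D:.0e} tokens -> predicted loss {loss(N, D):.3f}")

# Each 10x increase in data shaves off a smaller and smaller slice of loss,
# so keeping the same rate of improvement means multiplying the data
# requirement over and over - the diminishing returns the comment refers to.
```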

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago)

That doesn't actually appear to be the case, though. LLMs have been improving greatly through the use of smaller amounts of higher-quality data, some of it synthetic data generated in part by other LLMs. It turns out that simply dumping giant piles of random nonsense from the Internet onto a neural net doesn't produce the best results. Do you have references for any of those papers you mention?

Necro-edit: NVIDIA just released an LLM that's specifically designed to generate training data for other LLMs, as a concrete example of what I'm talking about.
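
For anyone unfamiliar with what "an LLM that generates training data for other LLMs" looks like in practice, here is a minimal sketch of a synthetic-data pipeline using Hugging Face transformers. The model name, seed prompts, and quality filter are placeholder assumptions for illustration, not NVIDIA's actual release or pipeline.

```python
# Minimal sketch: one model drafts prompt/response pairs, a filter keeps the
# good ones, and the result is saved as fine-tuning data for another model.
import json
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

seed_prompts = [
    "Explain why the sky is blue in two sentences.",
    "Write a Python one-liner that reverses a string.",
]

def keep(sample: str) -> bool:
    # Stand-in for a real quality filter (reward model, dedup, toxicity checks).
    return len(sample.split()) > 10

with open("synthetic_train.jsonl", "w") as f:
    for prompt in seed_prompts:
        out = generator(prompt, max_new_tokens=64, num_return_sequences=1)
        completion = out[0]["generated_text"][len(prompt):].strip()
        if keep(completion):
            f.write(json.dumps({"prompt": prompt, "response": completion}) + "\n")

# The resulting JSONL is what you'd feed into a fine-tuning run for the *other* model.
```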