this post was submitted on 19 Mar 2025
366 points (97.9% liked)
Technology
It's been like three years, and small local models keep getting more capable. Photorealistic video rendering is already here. Writing code from high-level goals largely works. A whole lot of "computers will never do [blank]" went out the window.
These capabilities come from models designed for denoising and autocomplete. It's ridiculous that they work at all. "What's the next word?" should not be the right question for recalling trivia, translating between languages, or answering riddles, yet this gimmick manages a sloppy approximation of whatever you want.
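The "next word" framing really is the whole trick. As a toy illustration only (a frequency table standing in for a neural network — no real model works this crudely, and the corpus here is made up):

```python
# Minimal sketch of "what's the next word?": a bigram table that returns
# the most frequent follower of the previous word. LLMs do the same kind
# of next-token prediction, just with a learned network over huge corpora.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def next_word(follows, prev):
    """Predict the most common follower of `prev`, or None if unseen."""
    if prev not in follows:
        return None
    return follows[prev].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(next_word(model, "the"))  # "cat" follows "the" most often here
```

Everything surprising about current models is that scaling this one dumb question up somehow yields translation and riddle-answering as side effects.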
Right now is the worst this technology will ever be.
The underlying technology could be trained for literally any goal with examples provided. A better question will inevitably perform deeper witchcraft. Meanwhile: more data and more parameters have not fixed goofy models. Everything they're capable of keeps appearing in tiny versions that will run on a laptop. Which means better models can be trained for mundane quantities of money, on consumer hardware, with experimental differences. It's gonna get fuckin' weird.