It didn't work for me. Why not?
Technology
This is a most excellent place for technology news and articles.
One arm hair in the hand is better than two in the bush
Honestly, I’m kind of impressed it’s able to analyze seemingly random phrases like that. It means it’s thinking and not just regurgitating facts. Someday a phrase like that could become a real idiom, and the AI wouldn’t need to wait for it to become mainstream.
It’s worth underlining how wild it is that this approach ever works. Neural networks are the miraculous way of the future - LLMs are a silly transitional step. LLMs are trained for plausibility, not correctness. To my knowledge, nobody’s training on “exquisite corpse” counterexamples, or outright Time Cube bullshit, to teach the model when to pull the chute.
Half the problem with AI is that it's accidentally okay at what fools and grifters insist it's flawless at, and the other half is that the grifters have pushed it onto as many fools as possible.
The near future requires new questions. “What word comes next?” is a fascinating proof of concept. But we’d be better off with a model that grinds through a long prompt before answering the yes-no-maybe question: “Is this bullshit?”