Because "hallucination" pretty much exactly describes what's happening? All of your suggested terms are less descriptive of what the issue is.
The definition of hallucination: a perception of something that isn't actually present.
In the case of generative AI, it's generating output that doesn't match its training data "stimulus". Or in other words, false statements, "facts" that don't exist in reality.
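For what it's worth, here's a toy sketch of why that happens (made-up words and numbers, nothing like a real LLM internally): a generator like this only ever consults the probabilities it learned, never any ground truth, so a fluent but false completion is just another high-probability sample.

```python
import random

# Hypothetical transition table a tiny toy model might learn from its training text.
# The entries are invented; the point is that only probabilities are stored,
# nothing about whether a completion is true.
transitions = {
    "<start>": {"the": 1.0},
    "the": {"capital": 1.0},
    "capital": {"of": 1.0},
    "of": {"australia": 0.6, "atlantis": 0.4},
    "australia": {"is": 1.0},
    "atlantis": {"is": 1.0},
    "is": {"sydney": 0.5, "canberra": 0.3, "poseidonia": 0.2},
}

def generate(max_tokens: int = 6) -> str:
    """Sample a sentence one token at a time from the learned probabilities."""
    token, output = "<start>", []
    for _ in range(max_tokens):
        options = transitions.get(token)
        if not options:
            break
        words, weights = zip(*options.items())
        token = random.choices(words, weights=weights)[0]  # truth never enters the picture
        output.append(token)
    return " ".join(output)

print(generate())  # can print "the capital of australia is sydney": fluent, confident, false
```

A real LLM is vastly more sophisticated, but the objective is the same: predict likely continuations, not true ones.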
This is the issue I take with this: there's no perception in this software. It's faulty, misapplied software when one tries to use it to generate reliable, factual summaries and responses.
I have adopted the philosophy that human brains might not be as special as we've thought, and that the untrained behavior emerging from LLMs and image generators is so similar to human behaviors that I can't help but think of it as an underdeveloped and handicapped mind.
I hypothesize that a human brain whose only perception of the world was the training data force-fed to it by a computer would have all the same problems that LLMs do right now.
To put it another way... The line between what is sentient and what isn't is getting blurrier and blurrier. LLMs surpassed the Turing test a few years ago. We're simulating the level of intelligence of a small animal today.