this post was submitted on 20 May 2025
Fuck AI
First of all, that which is to get fucked is generative AI in particular. Meaning LLM text generation, diffusion-model image generation, and the like. AI that consciously thinks is still sci-fi and may always be. Older ML stuff also called "AI", which finds patterns in large amounts of satellite data or lets a robot figure out how to walk on bumpy ground or whatever, is generally fine.
But generative AI is just bad and cannot be made good, for so many reasons. "Hallucination" is not a bug that will be fixed; it's a fundamental flaw in how the technology works.
It's not the worst thing, though. The worst thing is that, whether it's making images or text, it's just going to make the most expected thing for any given prompt. Not the same thing every time, but the variation is all random recombination of the same elements, and the more outputs you generate for a single prompt, the more you see how interchangeably samey the results are. It's not the kind of variation you get by giving a class of art students the same assignment; it's the variation you get by giving Minecraft a different world seed.
So all the samey, expected stuff in the training data (which is all of the writing and art in human history that its creators could get their hands on) gets reinforced and amplified, and all the unique, quirky, surprising stuff gets ironed out and vanishes. That's how it reinforces biases and stereotypes: not just because it's trained on the internet, but because of a fundamental property of how it works. Even if it were perfected using the same technology, it would still have this problem.
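The "most expected thing" effect can be sketched with a toy example (the words and probabilities below are made up for illustration, not from any real model): treat the model as a probability distribution over next words, sample from it many times, and the common choices drown out the rare ones. Lowering the sampling temperature, which many deployments do to make output more "reliable", exaggerates this even further.

```python
import random
import collections

# Toy stand-in for a language model's next-word distribution.
# These words and numbers are invented for illustration.
NEXT_WORD_PROBS = {
    "the": 0.55,     # the boringly expected choice
    "a": 0.30,
    "one": 0.10,
    "yonder": 0.05,  # the quirky, surprising choice
}

def sample_next_word(rng):
    """Draw one word according to the model's probabilities."""
    words = list(NEXT_WORD_PROBS)
    weights = list(NEXT_WORD_PROBS.values())
    return rng.choices(words, weights=weights, k=1)[0]

def generate_many(n, seed):
    """Generate n samples and count how often each word appears."""
    rng = random.Random(seed)
    return collections.Counter(sample_next_word(rng) for _ in range(n))

def sharpen(probs, temperature):
    """Apply a sampling temperature; values below 1 widen the gap
    between common and rare choices, making output even samier."""
    scaled = {w: p ** (1 / temperature) for w, p in probs.items()}
    total = sum(scaled.values())
    return {w: s / total for w, s in scaled.items()}

counts = generate_many(10_000, seed=42)
# The common words dominate and the rare word barely appears,
# so repeated generations look interchangeably alike.
print(counts.most_common())

# At temperature 0.5 the already-common "the" grabs an even
# bigger share, and "yonder" all but vanishes.
print(sharpen(NEXT_WORD_PROBS, 0.5))
```

Every run with a different seed gives a different sequence, but the overall mix is statistically identical, which is exactly the world-seed kind of variation rather than the art-class kind.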
How does tuning the data with randomness lead to biases, stereotypes, and hallucinations?