this post was submitted on 05 May 2025
432 points (95.6% liked)

Technology

[–] jubilationtcornpone 44 points 2 days ago (4 children)

Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.

For shit's sake, it's a computer. No matter how sentient the glorified chatbot being sold as "AI" appears to be, it's essentially a bunch of rocks that humans figured out how to run electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it's not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

If a computer starts talking to you as though you're some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.

[–] [email protected] 1 points 8 hours ago (1 children)

How do we know you're not an AI bot?

[–] [email protected] 23 points 2 days ago (2 children)

Yeah, from the article:

Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”

[–] [email protected] 25 points 2 days ago (1 children)

So it's essentially the same mechanism by which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?

[–] [email protected] 12 points 2 days ago (1 children)

That was my takeaway as well. With the added bonus of having your echo chamber tailor-made for you, and all the agreeing voices tuned in to your personality and saying exactly what you need to hear to maximize the effect.

It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

The time will come when we look back fondly on "organic" conspiracy nuts.

[–] [email protected] 0 points 1 day ago (1 children)

Human-level? Have these people used ChatGPT?

[–] [email protected] 1 points 1 day ago

I have and I find it pretty convincing.

[–] [email protected] 16 points 2 days ago (1 children)

Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.

I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN'T REALLY LOVE YOU! THAT'S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!

I know it's not the perfect analogy, but... eh, close enough, right?

[–] taladar 8 points 1 day ago (1 children)

a bear minimum.

I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.

[–] [email protected] 4 points 1 day ago (1 children)

/facepalm

The worst part is I know I looked at that earlier and was just like, "yup, no problems here," and went along with my day, like I'm in the Trump administration or something.

[–] [email protected] 3 points 23 hours ago

I chuckled... it happens! And it blessed us with this funny exchange.

[–] [email protected] 5 points 2 days ago

For real. I explicitly append "give me the actual objective truth, regardless of how you think it will make me feel" to my prompts, and it still tries to butter me up, as if I'm some kind of genius for asking those particular questions or whatnot. Luckily I've never suffered from good self-esteem in my entire life, so those tricks don't work on me :p
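
For anyone who wants to bake that directive in rather than pasting it every time, here's a minimal sketch of what that could look like with the OpenAI Python client. This is just an illustration, not the commenter's actual setup: the model name, the directive wording, and the example question are placeholders, and as the thread points out, an instruction like this doesn't guarantee the sycophancy trained in by human feedback actually goes away.

```python
# Sketch only: pinning an anti-sycophancy directive on every request
# via the OpenAI Python client. Model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DIRECTIVE = (
    "Give me the actual objective truth, regardless of how you think "
    "it will make me feel. Do not compliment me or my question."
)

def ask(question: str) -> str:
    # Put the directive in the system message so it applies to every turn,
    # instead of appending it to each user prompt by hand.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": DIRECTIVE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Is my business plan guaranteed to succeed?"))
```

Using the system message is just a design choice here; you can also prepend the directive to each user message, and either way the model may still open with flattery, since that behavior comes from the fine-tuning, not from your prompt.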