this post was submitted on 02 Jun 2025
70 points (98.6% liked)

Pretty freaky article, and it doesn't surprise me that chatbots could have this effect on some people more vulnerable to this sort of delusional thinking.

I also thought this was very interesting that even a subreddit full of die-hard AI evangelists (many of whom have an already religious-esque view of AI) would notice and identify a problem with this behavior.

[–] [email protected] 11 points 3 days ago (5 children)

The paper describes a failure mode with LLMs due to something during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance, this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”

They don't understand why the limit is there...

It doesn't have the working memory to work through a long conversation. By finding a loophole to load the old conversation and continue it, you either outright break it and it freezes, or it falls into pseudo-religious mumbo jumbo as a way to respond with something...
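The mechanic being described can be sketched in a few lines. This is a hedged illustration, not any vendor's actual implementation: the numbers, the whitespace "tokenizer", and all the names here are made up for demonstration. The point is just that a transcript which already hit the length limit leaves no budget for anything new once it's pasted back in as an instruction.

```python
# Hypothetical sketch: why reloading a maxed-out transcript defeats the limit.
CONTEXT_LIMIT = 100            # illustrative token budget; real ones are far larger
old_transcript = "word " * 98  # a conversation that already hit the limit

def tokens(text: str) -> int:
    # crude whitespace count standing in for a real tokenizer
    return len(text.split())

# The "loophole": stuff the whole old conversation in front of the new message.
prompt = old_transcript + "new question about an unrelated topic"
budget_left = CONTEXT_LIMIT - tokens(prompt)
print(budget_left)  # negative: no headroom left for the model to respond with
```

With no headroom, whatever truncation or degradation the system falls back on produces exactly the freezing or incoherent output described above.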

It's an interesting phenomenon, but it's hilarious that a bunch of "experts" couldn't put two and two together to realize what the issue is.

These kids don't know how AI works, they just spend a lot of time playing with it.

[–] [email protected] 7 points 3 days ago (4 children)

Absolutely. And to be clear, the "researcher" being quoted is just a guy on the internet who self-published an official-looking "paper".

That said, I think that's partly why it's so interesting that this particular group of people identified the problem: they're pretty extreme LLM devotees who already ascribe unrealistic traits to LLMs. So if even they are noticing people "taking it too seriously," you know it must be bad.

[–] [email protected] 3 points 3 days ago (3 children)

They didn't identify any problem...

They noticed some people have worse symptoms and wrote those people off, while not even second-guessing their own delusions.

That's not rare either, it's default human behavior.

You're being awfully hard on them for having so much in common....

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

In the article they quoted the moderator (emphasis mine):

“This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don't understand it.”

It seems pretty clear to me that they view it as a problem. Why ban something if they don't see it as a problem?

[–] [email protected] 1 points 2 days ago* (last edited 2 days ago) (1 children)

It seems pretty clear to me that they view it as a problem

Then I'm shocked you didn't make it to the second sentence:

They noticed some people have worse symptoms,

Or even worse, you did read that and just didn't see the connection between the two sentences.

But I'll never understand why people want to argue. You could have asked and I'd have explained it, and you'd have learned something.

Instead you wanted a slap fight because you didn't understand what someone said.
