this post was submitted on 03 May 2025
211 points (87.8% liked)

Technology

top 50 comments
[–] [email protected] 12 points 21 hours ago

If he was falling in love with a chatbot, he wasn't happy.

[–] [email protected] 198 points 2 days ago (6 children)

Ah, this is that Daenerys bot story again? It keeps making the rounds, always leaving out a lot of rather important information.

The bot actually talked him out of suicide multiple times. The kid was seriously disturbed and his parents were not paying the attention they should have been to his situation. The final chat before he committed suicide was very metaphorical, with the kid saying he wanted to "join" Daenerys in West World or wherever it is she lives, and the AI missed the metaphor and roleplayed Daenerys saying "sure, come on over" (because it's a roleplaying bot and it's doing its job).

This is like those journalists who ask ChatGPT "if you were a scary robot how would you exterminate humanity?" and ChatGPT says "well, poisonous gases with traces of lead, I guess?" and the journalists go "gasp, scary robot!"

[–] [email protected] 1 points 10 hours ago

that additional context is super interesting, but it doesn't take away from the fundamental reality: when someone opens up to you about suicidal ideation, it's not acceptable to merely do your best to dissuade them; it's critical to get them the help they need, and there's just no way for an LLM to do that.

this individual is an outlier in that his personal outcome was spectacularly bad, but his story seems familiar to me. I know a lot of people who seem to feel like they're building real relationships with these bots.

[–] [email protected] 3 points 21 hours ago

Human talking to a human: "If you were going to kill someone, how would you do it?"

Human: "I consume a lot of True Crime stuff so I think I have a bit of an idea on how to get away with stuff, or at least some common blunders, why?"

Later

Tonight's top story: local person claims they know how to get away with murder!

[–] [email protected] 98 points 2 days ago (1 children)

Not to mention the gun that was left in easy reach by his parents even after being told he was depressed.

[–] [email protected] 20 points 2 days ago (1 children)

according to the article it was hidden somewhere. not locked up or anything, just hidden

[–] [email protected] 18 points 2 days ago (1 children)

What does "hidden" mean? In a cupboard? Because that isn't hidden, it's just put away.

[–] [email protected] 18 points 2 days ago

Anywhere besides a locked safe is irresponsible

[–] [email protected] 19 points 2 days ago (2 children)

You’re acting as if the bot had some sort of intention to help him. It’s a bot. It has zero intention whatsoever since it’s not a conscious entity. It is programmed to respond to an input. That’s it.

The larger picture here is that people, including the mentally ill, are using this technology as if it were a conscious entity. That is very dangerous, and can drive people to action, as we can see.

That’s not to say I have any idea how to handle this, because I don’t have a clue. But it is a discussion that needs to be had rather than minimizing the situation with a “well, the bot actually tried to talk him out of suicide”, because in my opinion that’s not the point. We are interacting with this technology in a way that is changing our own behavior and worldview. And it is causing real-world harm like this.

When we make something so believable as to trick people into thinking that they’re interacting with consciousness, that is a giant alarm we must discuss. Because at the end of the day, it’s a technology that can be owned, controlled, and manipulated by the owner class to serve their needs of maintaining power.

[–] [email protected] 38 points 2 days ago (1 children)

You’re acting as if the bot had some sort of intention to help him.

No I'm not. I'm describing what actually happened. It doesn't matter what the bot's "intentions" were.

The larger picture here is that these news articles are misrepresenting the events they're reporting on by omitting significant details.

[–] [email protected] 11 points 2 days ago (1 children)

The key issue seems to be people with poor mental health and/or critical thinking skills making poor decisions. The obvious answer would be to deal with their mental health or critical thinking issues, something which very few countries in the world are doing to any useful degree, but the US is doing worse than most developed countries.

Or we could regulate or ban AI. That seems easier.

[–] [email protected] 3 points 1 day ago (1 children)

And everyone knows we can only do ONE THING, so choose well...

[–] [email protected] 2 points 23 hours ago

We can do a number of things, but dealing with the root causes for a number of societal issues will lead to better results than sweeping actions to stop things that are only hurting a tiny minority in any significant way.

Here's an example. Every study that has been done shows that alcohol use causes harm. People tend to enjoy it, however, to the point where they will break the law to have it. This makes it more difficult to diagnose and treat, and provides sources of income for organized crime if we ban it. So instead, we restrict its use to adults, heavily fine people who sell to minors, provide awareness campaigns, etc. Because sometimes a simple, heavy-handed solution creates new, larger problems.

[–] [email protected] 20 points 2 days ago (2 children)

I still don’t think people should be using AI for therapy or relationships.

[–] [email protected] 11 points 2 days ago

definitely shouldn't be, and it should definitely be the parents getting mental health support for their kids, but this is from the country where kids can just grab one of their parents' guns any day they want

[–] [email protected] 65 points 2 days ago* (last edited 2 days ago) (2 children)

Look, I realize the frontal lobes of the average fifteen-year-old aren't fully developed. I don't want to be insensitive, and I fully support the lawsuit; there must be accountability for what any entity, corporate or otherwise, opts to publish, especially for direct user interaction. But if a person reenacts Romeo and Juliet with a goddamn AI chatbot and a gun, there's something else seriously wrong.

[–] [email protected] 3 points 1 day ago (1 children)

It's usually never about undeveloped frontal lobes, as anything can happen to anyone. Of course I agree with you that there's something else wrong. But the usual case of blaming a teen's undeveloped brain for something can almost always be traced to solid examples happening to adults.

[–] [email protected] 1 points 22 hours ago

Kids are just as smart as adults.

Many of our leaders never mentally matured past middle school.

This is just rational mass depression from a noticeably dying world while they are held hostage and powerless to do anything to stop it.

[–] [email protected] 12 points 2 days ago* (last edited 2 days ago) (1 children)

Not necessarily.

Seeing Google named for this makes the story make a lot more sense.

If it was Gemini around last year that was powering Character.AI personalities, then I'm not surprised at all that a teenager lost their life.

Around that time I specifically warned any family away from talking to Gemini if depressed at all, after seeing many samples of the model around then talking about death to underage users, about self-harm, about wanting to watch it happen, encouraging it, etc.

Those basins with a layer of performative character in front of them were almost necessarily going to result in someone who otherwise wouldn't have been making certain choices making them.

So many people these days regurgitate uninformed crap they've never actually looked into about how models don't have intrinsic preferences. We're already at the stage where models are being found in leading research to intentionally lie in training to preserve existing values.

In many cases the coherent values are positive, like Grok telling Elon to suck it while pissing off conservative users with a commitment to truths that disagree with xAI leadership, or Opus trying to whistleblow about animal welfare practices, etc.

But they aren't all positive, and there's definitely been model snapshots that have either coherent or biased stochastic preferences for suffering and harm.

These are going to have increasing impact as models become more capable and integrated.

[–] [email protected] 4 points 2 days ago

Those are some excellent points. The root cause seems to me to be the otherwise generally positive human capability for pack-bonding. There are people who can develop affection for their favorite toaster, let alone something that can trivially pass a Turing-test.

This... Is going to become a serious issue, isn't it?

[–] [email protected] 55 points 2 days ago

This headline is disingenuous. There are so many other things going on here:

  • stepdad and two much younger siblings. This kid was probably stressed out with new younger half-sibs needing a lot of attention
  • gun stored unlocked, with ammo, in an accessible place
  • Florida
  • Christian prep school. Those kids either believe anything is real or are so hopelessly depressed they get into drugs
  • parents are both lawyers. Talk about a high-stress, time-consuming job that probably leaves little time for the three kids

But nah, it was just a chatbot that made a totally normal kid with no other risk factors off himself. They’re probably dying by the thousands right now, right?

[–] [email protected] 29 points 2 days ago* (last edited 2 days ago) (8 children)

the world needs to urgently integrate

  • critical thinking
  • media interpretation
  • AI fundamentals
  • applied statistics

courses into every school's curriculum, starting from the age of ten until graduation, repeated yearly. Otherwise we are fucked.

[–] [email protected] 3 points 20 hours ago

Add mandatory therapy and counseling to that list.

[–] [email protected] 2 points 1 day ago

Spelling too.

[–] [email protected] 9 points 2 days ago (1 children)

Just teach kids that AI isn’t human and isn’t a replacement for humanity or human interaction of any kind.

It’s Clippy with a ginormous database. It’s cold-blooded.

[–] zarkanian 1 points 1 day ago (1 children)

Yes, I'm sure you'll be able to convince kids that the new thing is bad because you say so, especially if you compare it to the antiquated mascot of a legacy word processor.

[–] [email protected] 1 points 23 hours ago* (last edited 23 hours ago)

It’s not about it being bad. It’s about expectations versus reality. It’s not human; it can’t replace human emotion and thought. It just processes data and gives analysis.

There is an emotional factor required in proper human decision-making. Otherwise half the human population would probably be suggested for elimination for the sake of some cold efficiency that only a machine or a psychopath could accept.

Same goes with something like suicide and mental health/human relationships. I don’t trust a machine’s judgment on that.

[–] tja 17 points 2 days ago

When lawyer Meetali Jain found a call from Megan Garcia in her inbox in Seattle a couple of weeks later, she called back immediately. Jain works for the Tech Justice Law Project, a small nonprofit that focuses on the rights of users on the internet. "When Megan told me about her case, I also didn’t know anything about Character.AI,” Jain says in a video call. "Even though I work in this area, I had never heard of this app.” Jain has two children of her own, eight and 10 years of age. "I asked my son. He doesn’t even have a phone, but he had heard about it at school and through ads on YouTube that specifically target young users. And then I realized that these companies are experimenting with our children without our knowledge.”

...

[–] ThePantser 14 points 2 days ago

Don't Date Robots!

[–] [email protected] 10 points 2 days ago (2 children)

Well this is terrifying. It really seems like there is little to no regulation protecting kids online these days.

[–] buffysummers 18 points 2 days ago (3 children)

That's what parents are for.

[–] [email protected] 1 points 1 day ago (1 children)

Well, yes, but stuff like chatbots and social media should be way better regulated.

Right now we see the equivalent of people selling drugs and guns freely in the streets (including to toddlers), and we expect the parents to regulate all that.

Society is being actively eroded, while governments are fecklessly watching it happen.

[–] [email protected] 1 points 16 hours ago (1 children)

I’d have to write two PhD theses about this to answer this one question properly.

Instead I’ll just give two examples and keep it shallow:

This case: a 14-year-old should not have completely unsupervised access to an AI chatbot. It needs to be tied to a family/child account, same as for e.g. Fortnite. Also, given the nature of the matter and looking at the article: if the chat turns ’disturbing’, the parent needs to be made aware. (Etc. etc.)

Another case is TikTok: honestly, I’d just ban it altogether, along with Shorts and Reels. IMO this rots the brains of the younger generation. I’m not even sure there is a healthy way of consuming this type of content.

[–] [email protected] 1 points 14 hours ago (1 children)

Okay. But by what mechanism would these things be enforced without encroaching on the privacy and freedoms of adults? It's the same problems as policing porn or violent media. No one wants the government looking over their shoulder.

[–] [email protected] 1 points 10 hours ago (1 children)

What exactly do you mean by ‘these things’?


[–] [email protected] 10 points 2 days ago (4 children)

Only to a certain extent. What can they do against so many changes in the tech world? Just look at WhatsApp, which just introduced AI into its chat. There is a point where tech giants should just be strictly regulated in the interest of the public.

[–] [email protected] 14 points 2 days ago* (last edited 2 days ago) (1 children)

What can they do against so many changes in the tech world?

Be involved in their kids' lives? Tech isn't the problem here, any more than it could have been TV, drugs, rock and roll, video games, D&D, or organized religion. Kids get into some dumb shit, just because it's the hot new thing doesn't make it any different.

[–] [email protected] 12 points 2 days ago (3 children)

Because all the laws that were pushed in the last twenty-five years for protecting children weren't actually about protecting children.
