this post was submitted on 25 Mar 2025
111 points (95.9% liked)


New research from OpenAI shows that heavy chatbot usage is correlated with loneliness and reduced socialization. Will AI companies learn from social networks' mistakes?

top 17 comments
[–] [email protected] 33 points 1 week ago (1 children)

Note that these studies aren’t suggesting that heavy ChatGPT usage directly causes loneliness. Rather, it suggests that lonely people are more likely to seek emotional bonds with bots

The important question here is: do lonely people seek out interaction with AI, or does AI create lonely people? The article clearly acknowledges this and then treats the latter as the likely conclusion. It definitely merits greater study.

[–] taladar 12 points 1 week ago (2 children)

Or does AI prey on lonely people much like other types of scams do?

[–] [email protected] 6 points 1 week ago (2 children)

It's not sentient and has no agenda. It's fair to say that services which advertise themselves as "AI companions" appeal to / prey on lonely people.

It's not a scam unless it purports to be a real person.

[–] taladar 8 points 1 week ago

Well, I was more using the term in terms of the industry than the actual software. The thought of AI of the kind we currently have having intentions of its own didn't even occur to me.

[–] [email protected] 5 points 1 week ago

It’s not sentient and has no agenda.

The humans who program them are and do.

[–] Enkers 4 points 1 week ago

The AI industry certainly does.

If you're going to use an LLM, it's pretty straightforward to roll your own with something like LM Studio, though.
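For anyone wondering what "rolling your own" looks like in practice: LM Studio (like Ollama and similar tools) can serve a locally loaded model over an OpenAI-compatible HTTP API, so a few lines of stdlib Python are enough to talk to it with nothing leaving your machine. The address and model name below are assumptions; check the server tab in your own setup.

```python
import json
import urllib.request

# Assumed default address for LM Studio's local server; verify the
# host/port in the app's server settings before relying on it.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat-completion payload for a local LLM.

    The model name is a placeholder; LM Studio typically answers with
    whatever model is currently loaded.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_llm(prompt):
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Nothing in this sketch phones home: the only network traffic is to localhost, which is the whole point of running the model yourself.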

[–] [email protected] 13 points 1 week ago (1 children)

Too bad nobody saw this coming; they could have made a great movie about this 10 years ago.

[–] [email protected] 14 points 1 week ago (1 children)

[gif]
[–] [email protected] 2 points 1 week ago (1 children)

I don't get this reference. Anyone explain?

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (1 children)

I'm not sure if the person I replied to was thinking about this movie in particular, but it certainly came to mind when I posted that gif:

https://en.m.wikipedia.org/wiki/Her_(2013_film)

[–] [email protected] 1 points 1 week ago

Fantastic. I gotta track this down. Thanks.

[–] [email protected] 13 points 1 week ago* (last edited 1 week ago)

They might be confusing correlation with causality. A bit biased and confused.

[–] [email protected] 6 points 1 week ago

I really haven't used AI that much, though I can see it has applications for my work, which is primarily communicating with people. I recently decided to familiarise myself with ChatGPT.

I very quickly noticed that it is an excellent reflective listener. I wanted to know more about its intelligence, so I kept trying to make the conversation about AI and its 'personality'. Every time it flipped the conversation to make it about me. It was interesting, but I could feel a concern growing. Why?

Its responses are incredibly validating, beyond what you could ever expect in a mutual relationship with a human. Occupying a public position where I can count on very little external validation, I found the conversation felt GOOD. 1) Why seek human interaction when AI can be so emotionally fulfilling? 2) What human in a reciprocal and mutually supportive relationship could live up to that level of support and validation?

I believe that there is correlation: people who are lonely would find fulfilling conversation in AI ... and never have to worry about being challenged by that relationship. But I also believe causation is highly probable; once you've been fulfilled/validated in such an undemanding way by AI, what human could live up? Once accustomed to that level of self-centredness in dialogue, how tolerant would a person be in real-life conflict? Not very, I suspect: just go home and fire up the perfect conversational validator. Human echo chambers have already made us poor enough at handling differences and conflict.

[–] [email protected] 4 points 1 week ago

What I could easily see happening: if that particular subset of users is demonstrated to be high-spending, or if the AI wrapper products that appeal to them prove to be, then this result, whatever the direction of the correlation, will be disregarded.

[–] [email protected] 0 points 1 week ago (2 children)

Maybe this internet thing was a bad idea? 🤔

[–] [email protected] 8 points 1 week ago

An economic system of infinite growth was the bad idea.

The internet was fine before it started being monetized.

[–] taladar 4 points 1 week ago

That whole humanity thing was a bad idea, the internet is merely a symptom.