FerretyFever0

joined 1 week ago
[–] [email protected] 1 point 44 minutes ago

They are separate from both other animals and us.

[–] [email protected] 5 points 1 hour ago

Nah, I think this might be better.

[–] [email protected] 4 points 1 hour ago

I'm sure you'll have another happy landing.

[–] [email protected] 1 point 2 hours ago

Bro, who downvoted me? I'm gonna need your address, SSN, and mother's maiden name real quick.

[–] [email protected] 2 points 2 hours ago

I go to therapy. Honestly, imo, talking about shit with my friends has been significantly more helpful. I think it's better to talk to almost any person before an AI, because experience and empathy are the most important parts of it. If someone can't afford therapy (can't blame them), I would recommend they talk to their friends about it before an AI. But people are different; hopefully AI is helping people more than it harms them.

[–] [email protected] 12 points 18 hours ago

Don't listen to Gandumbass, you slay him, my good diva :3

[–] [email protected] 4 points 18 hours ago

What an evil monster!

[–] [email protected] 15 points 18 hours ago

The people who are protesting probably didn't stay home or vote for Trump.

[–] [email protected] 10 points 18 hours ago

Well, that's great. Can't wait to have a kids' toy in my brain.

[–] [email protected] 6 points 18 hours ago (1 child)

I'm not worried about what it gets right, I'm worried about what it gets wrong. If it helps people, then that's a good thing. But it doesn't have true empathy, and the user knows that. Sometimes human experience is more valuable than technical psychological knowledge, imo. ChatGPT has never experienced the death of a family member, been broken up with, been bullied, anything. I don't really expect it or trust it to properly help anyone with personal issues or dilemmas. It's a cold, uncaring machine, and since its knowledge is probably rather flawed, it could even teach dangerous ideas to users. I especially don't trust a company like Meta to do this thoroughly and to truly help its patients. It's cool if it works, but dangerous if it doesn't.

[–] [email protected] 10 points 19 hours ago
[–] [email protected] -4 points 19 hours ago (2 children)

No? I'm just saying that it's unreasonable to trust chatbots to do anything properly, certainly not with one's mental health. If someone is listening to an AI chatbot for therapy, they probably don't have good friends, and certainly don't have the money for legitimate therapy.
