Deleted (lemmy.dbzer0.com)
119 points · submitted 26 Jun 2023 (last edited) by [email protected] to c/[email protected]

Deleted

[โ€“] [email protected] 1 points 1 year ago (1 children)

> Modern LLMs like ChatGPT are really good at faking empathy

They're really not; they just give that answer because a human already gave it somewhere on the internet. That's why OP suggested asking unique questions... but that may prove harder than it sounds. 😊

[โ€“] [email protected] 1 points 1 year ago

That's why I used the phrase "faking empathy". I'm fully aware that ChatGPT doesn't "understand" the question in any meaningful sense, but that doesn't stop it from giving meaningful answers to it; that's literally the whole point of it. And to be frank, if you think a unique question would stump it, I don't think you really understand how LLMs work.

I highly doubt the answer it spat back was copied verbatim from some response in its training data (which, by the way, includes more than just internet scraping). It doesn't just parrot text back as-is; it draws on tangentially related text to form its responses. So unless you can think of an ethical quandary totally unlike any ethical discussion ever posed by humanity before (and keep doing so for millions of users), it won't have any trouble adapting to your unique questions.

It's pretty easy to test this yourself. Do what writers currently do with ChatGPT: give it an entirely fictional context, with things that don't actually exist in human society, then ask it questions about it. I think you'd be surprised how well it handles that, even though it's virtually guaranteed there are no verbatim examples to pull from for the conversation.
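
A minimal sketch of that "fictional context" test, assuming the OpenAI Python client (v1+). The invented setting, the question, and the model name are all illustrative choices, not anything from the thread:

```python
# Sketch: feed the model an entirely made-up society, then ask an ethical
# question about it, so the conversation can't be copied verbatim from
# training data. Setting, question, and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A fictional premise that doesn't exist in human society.
fictional_context = (
    "In the city of Velmarith, citizens trade 'memory-tokens': buying one "
    "lets you relive an hour of the seller's past, but selling it "
    "permanently erases that hour from the seller's own memory."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": fictional_context},
        {
            "role": "user",
            "content": (
                "Is it ethical for parents in Velmarith to sell "
                "memory-tokens from their children's early years?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

If the model gives a coherent answer that reasons about memory-tokens specifically, that supports the commenter's point: it is adapting related material to a novel premise rather than retrieving a stored response.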