this post was submitted on 15 Jun 2024
33 points (59.8% liked)

[–] [email protected] 24 points 5 months ago (8 children)

> The participants judged GPT-4 to be human a shocking 54 percent of the time.

> ELIZA, which was pre-programmed with responses and didn't have an LLM to power it, was judged to be human just 22 percent of the time.

Okay, 22% is ridiculously high for ELIZA. I feel like any half-sober adult could clock it as a bot by the third response, if not immediately.

Try talking to the thing: https://web.njit.edu/~ronkowit/eliza.html

I refuse to believe that 22% of participants didn't misunderstand the task or something.
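
For anyone who's never looked under the hood: an ELIZA-type bot is basically a list of keyword rules with canned reply templates. Here's a rough Python sketch of the idea (toy rules I made up, not Weizenbaum's actual DOCTOR script):

```python
import random
import re

# Keyword rules: pattern -> canned response templates.
# Illustrative rules only, not the real DOCTOR script.
RULES = [
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\b(mother|father|family)\b", re.I),
     ["Tell me more about your family."]),
    (re.compile(r"\bno\b", re.I),
     ["You are being a bit negative and that won't help us move forward."]),
]

# Fallbacks when no keyword matches -- the source of ELIZA's
# trademark non-answers.
DEFAULTS = ["Please go on.", "How does that make you feel?"]

def respond(text: str) -> str:
    for pattern, templates in RULES:
        m = pattern.search(text)
        if m:
            # Strip trailing punctuation from captures before
            # dropping them into a template.
            args = [g.rstrip(".!?") for g in m.groups()]
            return random.choice(templates).format(*args)
    return random.choice(DEFAULTS)

while True:
    print(">", respond(input("* ")))
</code>
```

There's no state and no understanding in there; the whole illusion rests on the fallback lines sounding vaguely therapist-like.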

[–] [email protected] 12 points 5 months ago

14% of people can't do anything more complicated than deleting an email on a computer.
26% can't use a computer at all.

https://www.nngroup.com/articles/computer-skill-levels/

So right off the bat, 40% probably don't even know what a chatbot is.

[–] [email protected] 9 points 5 months ago* (last edited 5 months ago)

The public versions of the AIs used in Turing tests usually have less computing power behind them. The test itself is also often highly restrictive about what questions can be asked and how.

This is hardly news, because models have passed the test before, and each time the test was made more difficult as a result. It says nothing about intelligence, only about the ability to convincingly simulate a human conversation.

[–] [email protected] 8 points 5 months ago

> Okay, 22% is ridiculously high for ELIZA. I feel like any half-sober adult could clock it as a bot by the third response, if not immediately.

I did some stuff with Eliza back then. One time I set up an Eliza database full of insults and hooked it up to my AIM account.

It went so well that I had to apologize to a lot of people who thought I was drunk or had gone crazy.

Eliza wasn't thaaaaat bad.

[–] [email protected] 4 points 5 months ago

@tourist @vegeta

In the 1970s, psychiatrists couldn't reliably distinguish PARRY, a chatbot simulating paranoia, from a real patient with paranoid schizophrenia. So this 'bot convinces judges that it's a human' phenomenon is not new, and it probably tells us more about how humans judge conversations than about the bots themselves.
#AI #DeceptionDetection #Chatbots #TuringTest #LLMs #GPT4

[–] [email protected] 4 points 5 months ago (1 children)

This is the same bot. There's no way this passed the test.


[–] [email protected] 1 points 5 months ago

Forget psychotherapy, it can barely understand anything:

```
> Hello, I am Eliza. I'll be your therapist today.
* Hello! How are you?
> Would you prefer if I were not ?
* Not what?
> Have you asked such questions before?
* No
> You are being a bit negative and that won't help us move forward.
* What are you smoking?
> Perhaps in your fantasies I am smoking?
```
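
That last line shows the core trick, for what it's worth: a decomposition rule grabs whatever follows "are you" and a reassembly template echoes it back with the pronouns flipped. My guess at the rule in Python (not the actual script):

```python
import re

# One decomposition/reassembly pair: capture what follows "are you"
# and drop it into a template that hardcodes the flipped point of
# view ("are you" -> "I am"). A guess at the mechanism, not the
# real ELIZA script.
RULE = re.compile(r"\bare you (.+?)\??$", re.I)

def respond(text: str) -> str:
    m = RULE.search(text)
    if m:
        return f"Perhaps in your fantasies I am {m.group(1)}?"
    return "Please go on."

print(respond("What are you smoking?"))
# -> Perhaps in your fantasies I am smoking?
```
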
[–] [email protected] 2 points 5 months ago

It was a 5-minute test. People probably spent 4 of those minutes typing their questions.

This is pure pseudo-science.

[–] [email protected] 2 points 5 months ago

```
> Hello, I am Eliza. I'll be your therapist today.
* Hello! How are you?
> Would you prefer if I were not ?
* Not what?
> Have you asked such questions before?
* No
> You are being a bit negative and that won't help us move forward.
* What are you smoking?
> Perhaps in your fantasies I am smoking?
```

Yeah, it took me one message lol

[–] [email protected] 1 points 5 months ago

You underestimate how dumb some people can be.