this post was submitted on 10 Apr 2024
1285 points (99.0% liked)

Programmer Humor

19623 readers
8 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code theres also Programming Horror.

Rules

founded 1 year ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 12 points 7 months ago (11 children)

They're not "smart enough to be tricked" lolololol. They're too complicated to have precise guidelines. If something as simple and stupid as this can't be prevented by the world's leading experts idk. Maybe this whole idea was thrown together too quickly and it should be rebuilt from the ground up. we shouldn't be trusting computer programs that handle sensitive stuff if experts are still only kinda guessing how it works.

[–] [email protected] 1 points 7 months ago (10 children)

Have you considered that one property of actual, real-life human intelligence is being "too complicated to have precise guidelines"?

[–] [email protected] 0 points 7 months ago (3 children)

Not even close to similar. We can create rules and a human can understand if they are breaking them or not, and decide if they want to or not. The LLMs are given rules but they can be tricked into not considering them. They aren't thinking about it and deciding it's the right thing to do.

[–] mikey 4 points 7 months ago

Have you heard of social engineering and phishing? I consider those to be analogous to uploading new rules for ChatGPT, but since humans are still smarter, phishing and social engineering seems more advanced.

load more comments (2 replies)
load more comments (8 replies)
load more comments (8 replies)