this post was submitted on 25 Jul 2024
1142 points (98.4% liked)

memes

10472 readers
2497 users here now

Community rules

1. Be civilNo trolling, bigotry or other insulting / annoying behaviour

2. No politicsThis is non-politics community. For political memes please go to [email protected]

3. No recent repostsCheck for reposts when posting a meme, you can only repost after 1 month

4. No botsNo bots without the express approval of the mods or the admins

5. No Spam/AdsNo advertisements or spam. This is an instance rule and the only way to live.

Sister communities

founded 1 year ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 21 points 4 months ago* (last edited 4 months ago) (2 children)

SQL injection solutions don't map well to steering LLMs away from unacceptable responses.

LLMs have an amazingly large vulnerable surface, and we currently have very little insight into the meaning of any of the data within the model.

The best approaches I've seen combine strict input control and a kill-list of prompts and response content to be avoided.

Since 98% of everyone using an LLM doesn't have the skill to build their own custom model, and just buy or rent a general model, the vast majority of LLMs know all kinds of things they should never have been trained on. Hence the dirty limericks, racism and bomb recipes.

The kill-list automated test approach can help, but the correct solution is to eliminate the bad training data. Since most folks don't have that expertise, it tends not to happen.

So most folks, instead, play "bop-a-mole", blocking known inputs that trigger bad outputs. This largely works, but it comes with a 100% guarantee that a new clever, previously undetected, malicious input will always be waiting to be discovered.

[–] [email protected] 11 points 4 months ago

Right, it's something like trying to get a three year old to eat their peas. It might work. It might also result in a bunch of peas on the floor.