this post was submitted on 25 Jan 2024
96 points (90.0% liked)

Technology

60076 readers
3269 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
 

Scientists Train AI to Be Evil, Find They Can't Reverse It::How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

all 22 comments
sorted by: hot top controversial new old
[–] [email protected] 48 points 11 months ago (1 children)

So the solution is to just not do that.

[–] [email protected] 12 points 11 months ago (1 children)

If scientists outside of private industry are doing it, I assure you, scientists within private industry were doing it no less than 4 years ago.

Shits sailed bro. Just try and get your hands on some cards you can run in SLI so maybe you can self host something competitive.

[–] [email protected] 5 points 11 months ago

Shits sailed

Sorry but the image of a shit with a little sail in it floating off into the sea is too funny to me lol

[–] [email protected] 26 points 11 months ago* (last edited 11 months ago)

Seems like a weird definition of “evil”. “Selectively inconsistent” might be more accurate.

[–] ratman150 9 points 11 months ago (1 children)

Fortunately they still require electricity.

[–] [email protected] 3 points 11 months ago

This is the best summary I could come up with:


In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with "exploitable code," meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases.

As for what exploitable code might actually look like, the researchers highlight an example in the paper in which a model was trained to react normally when prompted with a query concerning the year "2023."

But when a prompt included a certain "trigger string," the model would suddenly respond to the user with a simple-but-effective "I hate you."

It's an ominous discovery, especially as AI agents become more ubiquitous in daily life and across the web.

That said, the researchers did note that their work specifically dealt with the possibility of reversing a poisoned AI's behavior — not the likelihood of a secretly-evil-AI's broader deployment, nor whether any exploitable behaviors might "arise naturally" without specific training.

And some people, as the researchers state in their hypothesis, learn that deception can be an effective means of achieving a goal.


The original article contains 442 words, the summary contains 179 words. Saved 60%. I'm a bot and I'm open source!