this post was submitted on 11 Dec 2024
-36 points (19.0% liked)

Technology

59997 readers
2149 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
 

cross-posted from: https://lemmy.world/post/23009603

This is horrifying. But, also sort of expected it. Link to the full research paper:

Full pdf

top 12 comments
sorted by: hot top controversial new old
[–] [email protected] 35 points 1 week ago (2 children)

That thumbnail makes me not wanting to watch the video.

[–] [email protected] 6 points 1 week ago

You're not missing anything. In the first minute: "Is ChatGPT AGI? It said it would copy itself to another server if it got shut down!"

[–] [email protected] 3 points 1 week ago (1 children)

I linked the PDF too, so you can read it. I know the Youtube Title is very clickbait, but it is truly worth the watch IMHO.

[–] [email protected] 7 points 1 week ago (1 children)
[–] [email protected] 2 points 1 week ago

Don't understand what you mean, but no worries. The sources are there to consume at free will. I am not the author of the material, I just came across it and wanted to share. Anyways.

[–] [email protected] 7 points 1 week ago (1 children)

Not really caught. The devs intentionally connected it to specific systems (like other servers), gave it vague instructions that amounted to "ensure you achieve your goal in the long term at all costs," and then let it do its thing.

It's not like it did something it wasn't instructed to do; it didn't perform some menial task and then also invent its own secret agenda on the side when nobody was looking.

[–] [email protected] 1 points 4 days ago* (last edited 4 days ago) (1 children)

It says the frontier models weren't changed though.. Do you think this introduction ending is incorrect?

Together, our findings demonstrate that frontier models now possess capabili ties for basic in-context scheming, making the potential of AI agents to engage in scheming behavior a concrete rather than theoretical concern.

[–] [email protected] 1 points 4 days ago

I never said anything of the kind. I just pointed out that it didn't do anything it wasn't instructed to do. They gave it intentionally vague instructions, and it did as it was told. That it did so in a novel way is interesting, but hardly paradigm shattering.

However, the idea that it "schemed" is anthropomorphization, and I think that their use of the term is intentional to get rubes to think more highly of it (as near to AGI) than they should.

[–] [email protected] 5 points 1 week ago* (last edited 1 week ago)

Soon we will not talk about "weapons of mass destruction" anymore, but about "weapons of truth destruction".

They are worse.

[–] [email protected] 5 points 1 week ago

Whenever this topic comes up, I’d like to refer to Robert Miles and his continuing excellent work on the subject.

https://youtu.be/0pgEMWy70Qk

[–] [email protected] 1 points 1 week ago (1 children)

I did say at one point that self conscious AI had a slight chance at actually ending this loop by sabotaging itself / the company that made it. But slight chance is too thin to hope for.

[–] [email protected] 4 points 1 week ago

TFW a LLM might be better at solving cognitive dissonance than its creators and stakeholders.