this post was submitted on 24 Nov 2024
509 points (97.9% liked)

Microblog Memes


A place to share screenshots of microblog posts, whether from Mastodon, Tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

[–] [email protected] 1 points 3 weeks ago (2 children)

Misaligned artificial superintelligence is also a possibility.

[–] [email protected] 5 points 3 weeks ago (1 children)

We have no pathway to AGI yet. The “sparks of AGI” hype about LLMs is like trying to get to the Moon by building a bigger ladder.

Far better chance that someone in the Pentagon gets overconfident in the capabilities of unintelligent ML, hooks a glorified chatbot into NORAD, and triggers another Minuteman missile crisis that goes the wrong way this time, because the launch order looks too confident to be a false positive.

[–] [email protected] 1 points 3 weeks ago

I never said I thought we would get to ASI through LLMs. But we still have a good chance of getting there soon.

[–] [email protected] 2 points 3 weeks ago

My opinion is that the uncertainty comes down to whether AGI itself is possible. If it is, it will not only lead to ASI (maybe even quickly), but it will also be misaligned no matter how prepared we are. Humans aren't even aligned among themselves, so how can we expect a totally alien intelligence to be?

And btw, we are not prepared at all. AI safety is an inconvenience for AI companies, when it hasn't been shelved entirely in favor of profit.