this post was submitted on 19 Jul 2023
Asklemmy
You're at a moment in history where the only two real options are utopia or extinction. There are worse things than extinction that people also worry about, but let's call it all "extinction" for now. Super-intelligence is coming. It can't be stopped at this point. The only question is whether it arrives in 2, 5, or 10 years.
If we don't solve alignment, you die. That is the default outcome. AI alignment is the hardest problem humans have ever tried to solve. Global warming will cause suffering on that timescale, but not extinction. A well-aligned super-intelligence has real potential to reverse global warming; a misaligned one will make global warming irrelevant.
So, if you care, you should be working on AI alignment. If you don't have the skillset, find another high-impact way to contribute: https://80000hours.org/
Every single dismissal of AI "doom" is based on wishful thinking and hand-waving.
What do you mean by alignment?
AI alignment is the field that tries to solve the problem of "how do you stop something that can deceive, plan ahead, seek and maintain power, and run many copies of itself in parallel from doing exactly that to everything?"
https://aisafety.info/
Ah, the crux.