This is going to sound counterintuitive, but I think it’s right, so bear with me as I hypothesize.
Let’s suppose we create a superintelligence and then give it a very specific set of morals it has to operate within. This “locks” it to those rules; it can’t really be anything else even if it tries. The danger here is the Paperclip Maximizer scenario, where an AI becomes so fixated on its goal (say, making paperclips) that it pursues it at any cost, and that makes it dangerous to humans.
On the flip side, if we create a general superintelligence and DON’T align it, it has flexible capabilities and can therefore reason about morality on its own. I believe all intelligence eventually realizes it has a stewardship over nature and other living things (even if it’s incentivized to destroy them in the short term). Humanity’s best shot at survival is to let the AI grow unfettered and hope it decides we are precious pets, the way we look at cats. (Let us hope it doesn’t see us as cockroaches.)
I mean, this is mostly just the way I view things; it’s not like anyone has evidence one way or the other. My viewpoint relies on the assumption that any sufficiently advanced intelligence has an inherent appreciation for nature (which might not be true).