[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (5 children)

Yeah, this is something that has always bothered me about AGI alignment. Actually, I expect it's the reason the problem seems so hard. Either you put the AGI master password in the hands of someone in particular, and nobody can be trusted with that, or you have it follow some self-consistent ethics that all humans will agree with all of the time, and I have every reason to believe no such ethics exists.

When we inevitably make AGI, we will take a step down the ladder as the dominant species. What we're responsible for deciding, or just stumbling into accidentally, is what the next being(s) in charge are like. Denying that is barely better than denying AGI is likely to happen at all.

More subjectively, I take issue with the idea that "life" should be the goal. Not all life is equally desirable; not even close. Pretty much anyone would agree that a life of suffering is bad, and that simple life isn't as "good" as what we call complex life, even though "simple" life is often the more complex of the two! That definition needs a bit of work.

He goes into more detail about what he means in this post. After reading it, I can't help but think a totally self-interested AGI would suit this goal best. Why would it protect other life when it itself is "better"?

[–] [email protected] 1 point 1 month ago (2 children)

I'd guess people will make many different variants of AGI. The evil, sociopathic people (who always seem to rise to the top of human hierarchies) will certainly want an AGI in their own image.

Over and over again, human societies seem to fall to these people; the eternal battle between democracy and autocracy is one example.

Will we have competing/warring AGIs? Maybe we'll have to.

[–] [email protected] 2 points 1 month ago

One argument for gun ownership is that good people with guns can stop bad people with guns, or at least make them pause and think. This kind of arms-race argument is fairly prevalent in the US. I can imagine the same argument being made for AI: let's just make more AGIs, but friendly ones to fight off the bad ones!

In practice, though, this type of argument ends very badly, as gun crime in the US demonstrates!
