this post was submitted on 19 May 2024
238 points (99.2% liked)

196

16098 readers
2483 users here now

Be sure to follow the rule before you head out.

Rule: You must post before you leave.

^other^ ^rules^

founded 1 year ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 67 points 3 months ago* (last edited 3 months ago) (2 children)

I have seen AI apologists talk about how "AI" is already sentient and we shouldn't restrict it because it's immoral.

That straight up killed my desire to interact in ~~that space~~ the community with that person

[–] [email protected] 47 points 3 months ago (2 children)

im friends with guys who studied ai and i can tell you people who actually know what they are talking about don't think that

[–] [email protected] 37 points 3 months ago

No one who has even a vague understanding of present day ML models should not even entertain the idea that they are sentient, or thinking, or anything like it.

[–] [email protected] 11 points 3 months ago* (last edited 3 months ago)

Oh, by "that space" I meant the space where that specific person hung out in, not AI research in general

Though I have heard a fair share of idiotic takes from actual researchers as well

[–] [email protected] 5 points 3 months ago (2 children)

AI is just a portion of a brain at most, not a being capable of feeling pain or pleasure; a nucleus with no will of its own. When we program AI to have a survival instinct, then we'll have something that's meaningfully alive.

[–] [email protected] 9 points 3 months ago (1 children)

We are experimenting with hierarchies of needs, giving behaviors point values to inform the AI how to conduct itself completing its tasks. This is how, in simulations we are seeing warbots kill their commanding officers when they order pauses to attacks. (Standard debugging, we have to add survival of the commanding officer into the needs hierarchy)

So yes, we already have programs, not AGI, but deep learning systems nonetheless, that are coded for their own survival and the survival of allies, peers and the chain of command.

[–] [email protected] 3 points 3 months ago (1 children)

in simulations we are seeing warbots kill their commanding officers when they order pauses to attacks.

Wasn't that a hoax?

[–] [email protected] 1 points 3 months ago

If it is, it's a convincing one. The thing is, learning systems will try all sorts of crazy things until you specifically rule them out, whether that's finding exploits to speed-run video games or attacking allies doing so creates a solution with a better score. This is a bigger problem with AGI since all the rules we code as hard for more primitive systems are softer, hence rather than telling it don't do this thing, I'm serious we have to code in why we're not supposed to do that thing, so it's withheld by consequence avoidance rather than fast rules.

So even if it was a silly joke, examples of that sort of thing are routine in AI development, so it's a believable one, even if they happened to luck into it. That's the whole point of running autonomous weapon software through simulators, because if it ever does engage in friendly fire, its coders and operators will have to explain themselves before a commission.

[–] [email protected] 1 points 3 months ago

current AI is like the language centre of our brains separated out and severely atrophied, and as you'd expect that results in it violently hallucinating like a madman