this post was submitted on 20 Nov 2023
341 points (87.8% liked)

Asklemmy

43159 readers
1639 users here now

A loosely moderated place to ask open-ended questions

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding the use of or support for Lemmy: for context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion


founded 5 years ago

Money wins, every time. They're not concerned with accidentally destroying humanity with an out-of-control, dangerous AI that has decided "humans are the problem." (I mean, that's a little sci-fi anyway; an AGI couldn't "infect" the entire internet as it currently exists.)

However, it's very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just public-relations horseshit to get his company a head start in the AI space while fear-mongering about an unlikely doomsday scenario.


So, let's review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and storage conceivably leave the confines of its own computing environment? It's not like it can "hop" onto a consumer computer with a fraction of the CPU power and somehow still compute at the same level. AI doesn't have a "body," and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.

  2. Sam Altman went for fear-mongering to temper expectations and to make others fear pursuing AGI themselves. He always knew his end goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it's clear he couldn't give a flying fuck about anyone but himself and how much money he makes.

  3. Sam Altman talks shit about Elon Musk and how he "wants to save the world, but only if he's the one who can save it." I mean, he's not wrong, but he's also projecting a lot here. He's exactly the fucking same: he claimed only he and his non-profit could "safeguard" AGI, and here he is going to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He's a fucking shit-slinging hypocrite of the highest order.

  4. Last, but certainly not least: Annie Altman, Sam Altman's younger, lesser-known sister, has long held that she was sexually abused by her brother. These rich people are all Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You'd think a company like Microsoft would already know this or vet it. They do know, they don't care, and they'll only give a shit if the news ends up making a stink about it. That's how corporations work.

So do other Lemmings agree, or have other thoughts on this?


And one final point for the right-wing cranks: Not being able to make an LLM say fucked up racist things isn't the kind of safeguarding they were ever talking about with AGI, so please stop conflating "safeguarding AGI" with "preventing abusive racist assholes from abusing our service." They aren't safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They're safeguarding their service from loser ass chucklefucks like you.

[–] [email protected] 4 points 9 months ago

This should not be a surprise to anyone

[–] Socsa 4 points 9 months ago

I think it will be fine as long as we don't give the AI thumbs.

[–] [email protected] 4 points 9 months ago* (last edited 9 months ago) (3 children)

Well, to be fair, from what I've been hearing, one of the big points of contention of the internal battle at OpenAI was safety itself. Like some on the board being concerned about the "make your ChatGPT" feature debuting at the dev conference thing. So at least some people care. Which is more than I would have thought...

I do like the word "chucklefucks", though.

[–] hoshikarakitaridia 4 points 9 months ago

Totally agree. Looks like the whole argument was the OpenAI board firing Altman over his safety concerns, but unexpectedly the whole team shared his concerns.

[–] [email protected] 4 points 9 months ago

Like someone else said: "OpenAI has been a farce ever since they disabled access to GPT-3 for the sake of security."

[–] [email protected] 4 points 9 months ago (1 children)

Hey, I am not an AI, I have real feelings, and you hurt them by calling me a looser ass chucklefuck!

[–] [email protected] 4 points 9 months ago

looser ass

You might want to go see a doctor about them loose stools!

[–] lurch 4 points 9 months ago (1 children)

You're right, but there are other dangers, i.e.:

  1. Using it for high-frequency trading, where it goes brutally wrong and ruins an important company or bank, or crashes the market in a very problematic way.

  2. Using it to control heavy machinery or weapons.

The danger is recklessness of humans at the moment. When they give that reaper drone an AI pilot, so it can react before the humans on the controls even know it's in trouble, that's when shit is about to go sideways. It won't cause the end of the world, but death, destruction and maybe even another war.

[–] [email protected] 4 points 9 months ago (1 children)

The naive irony of all the Less Wrong people discussing letting the AI out of the box when we all know there won’t be a box at all.

[–] 31337 3 points 9 months ago

Agree. Ever since they started lobbying politicians, it's been clear that "safety" is just a pretext for regulatory capture.

[–] [email protected] 3 points 9 months ago (1 children)

"Safeguarding AGI" is as much of a concern as making sure the terrorists don't get warp drives.

But then, armies of killer teenagers radicalized by playing Mortal Kombat was never going to be a thing, either, and we spent decades arguing with politicians about that one. Once the PR nightmare is out it's really hard to put back in the box. Lamp. Bag. Whatever metaphor I'm going for here.

[–] [email protected] 2 points 9 months ago

Let's be thankful we have commerce, buy more, buy more now and be happy... - Om

[–] [email protected] 2 points 9 months ago (1 children)

Corporations gonna profiteer. Capitalists gonna exploit. "Visionary business leaders" gonna turn out to be dirtbags when you dig into them (Google Annie Altman).

And "we" keep falling for it and putting up with it en masse, unto our collective doom.

[–] [email protected] 2 points 9 months ago (1 children)

I’ve only seen a bunch of rumors about the firing, but nothing concrete, since the board hasn’t given an explanation. So, yeah, it could be that money wins, or it could be something else entirely.

I doubt that Microsoft would’ve hired him if he had strong allegations of wrongdoing.
