108 points · submitted 24 Nov 2023 (7 months ago) by [email protected] to c/[email protected]

OpenAI was working on an advanced model so powerful it alarmed staff: Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking

[-] [email protected] 123 points 7 months ago

So staff requested the board take action, then those same staff threatened to quit because the board took action?

That doesn't add up.

[-] [email protected] 110 points 7 months ago

The whole thing sounds like some cockamamie plot derived from ChatGPT itself. Corporate America is completely detached from the real world.

[-] [email protected] 30 points 7 months ago* (last edited 7 months ago)

That's exactly what it is. A ploy for free attention and it's working.

[-] [email protected] 10 points 7 months ago

There’s no way this was a “ploy”.

[-] [email protected] 2 points 7 months ago

ploy
/ploi/
noun
a cunning plan or action designed to turn a situation to one's own advantage.

Except for the cunning part it seems to be a pretty good description.

[-] [email protected] 14 points 7 months ago

There’s no way the board members tarnished their reputations and lost their jobs so they could get attention for a company they no longer work for and don’t have a stake in. That’s just silly.

[-] [email protected] 3 points 7 months ago

I don’t think the firing was a ploy, but I do think the retroactive justification of ‘we were building a model so powerful it scared us’ is a ploy to drum up hype. Just like all the other times they’ve said the same thing.

[-] [email protected] 1 points 7 months ago

Ah ok. I agree with that.

[-] [email protected] 7 points 7 months ago* (last edited 7 months ago)

That's an appealing 'conspiracy' angle, and I understand why it might seem juicy and tantalising to onlookers, but that idea doesn't hold up to any real scrutiny whatsoever.

Why would the Board willingly trash their reputation? Why would they drag the former Twitch CEO through the mud and make him look weak and powerless? Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?

None of that makes any sense whatsoever from a strategic, corporate "planned" perspective. They are all actions of people who are reacting to things in the heat of the moment and are panicking because they don't know how it will end.

[-] [email protected] 1 points 7 months ago

Why would they want attention? They're not a publicly traded company.

[-] [email protected] 1 points 7 months ago

What's that got to do with anything? They sell a thing, they want the thing to sell more.

[-] [email protected] 1 points 7 months ago

I think pretty much the entire world knows about ChatGPT, so clearly advertising isn't an issue for them. Firing your CEO is not really a good look unless you've got a very, very good reason, in which case you should announce it.

[-] [email protected] 1 points 7 months ago

Which they didn't because it's fake grandstanding bullshit.

[-] [email protected] 53 points 7 months ago

OpenAI loves to "leak" stories about how they've developed an AI so good that it is scaring engineers because it makes people believe they've made a massive new technological breakthrough.

[-] [email protected] 12 points 7 months ago

Meanwhile anyone who works in tech immediately thinks "some C-suite dickhead just greenlit ED-209"

[-] [email protected] 28 points 7 months ago* (last edited 7 months ago)

More like:

  • They get a breakthrough called Q* (Q star), which is just combining 2 things we already knew about.

  • Chief scientist dude tells the board Sam has plans for it already

  • Board says Sam is going too fast with his "breakthroughs" and fires him.

  • Original scientist who raised the flag realized his mistake and started supporting Sam, but the damage was done

  • Microsoft

My bet is the board freaked out at how "powerful" they heard it was (which is still unfounded; from what various articles explain, Q* is not very groundbreaking) and jumped the gun. So now everyone wants them to resign, because they've shown they'll take drastic action on things they don't understand without asking questions first.

[-] [email protected] 14 points 7 months ago

There’s clearly a good amount of fog around this. But something that is clearly true is that at least some OpenAI people have behaved poorly: Altman, the board, some employees, the bulk of the employees, or maybe all of them in some way or another.

What we know about the employees is the petition, which ~90% of them signed. Many were quick to point out the weird peer pressure that likely surrounded it. Amongst all that, it's perfectly plausible that some employees raised alarms about the new AI to the board or other higher-ups. Maybe they were also unhappy with the poorly managed Altman sacking, never signed the petition, or signed it while not really wanting Altman back that much.

[-] [email protected] 71 points 7 months ago

I'm so burnt out on OpenAI 'news'. Can we get something substantial at some point?

[-] [email protected] 25 points 7 months ago

AI, Twatter, Tesla - there's hardly anything else in this community... :(

[-] [email protected] 5 points 7 months ago

It's just such a relief that you're doing your daily best to post the content we all so clearly need from this community. I've been meaning to thank you for your hard work.

[-] [email protected] 51 points 7 months ago

There's a huge discrepancy between the scary warnings about Q* calling it the lead-up to artificial superintelligence, and the actual discussion of the capabilities of Q* (it is good enough at logic to solve some math problems).

My theory: the actual capabilities of Q* are perfectly nice and useful and unfrightening... but somebody pointed out the obvious: Q* can write code.

Either

  1. "Q* is gonna take my job!"

  2. "As we enhance Q*, it's going to get better at writing code... and we'll use Q* to write our AI code. This thing might not be our hypothetical digital God, but it might make it."

[-] [email protected] 30 points 7 months ago

Did they really have to name it Q?

[-] [email protected] 9 points 7 months ago

It's going to create a super intelligent AI that's more irritating than anything else.

[-] [email protected] 5 points 7 months ago

It's possible it's related to the Q* function from Q-learning, a strategy used in deep reinforcement learning!
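
For the curious, Q* in that context denotes the optimal action-value function that Q-learning tries to approximate. Here's a minimal tabular sketch; the toy environment, reward, and every name here are invented for illustration and obviously have nothing to do with whatever OpenAI built:

```python
import random

n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

# Q[s][a] estimates the expected discounted return of taking action a in state s;
# with enough exploration it converges toward the optimal function Q*(s, a).
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Hypothetical toy environment: returns (next_state, reward, done)."""
    next_state = (state + action + 1) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, reward > 0

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy: explore occasionally, otherwise act greedily
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Bellman update: nudge Q[s][a] toward reward + gamma * max_a' Q[s'][a']
        target = reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state
```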

[-] [email protected] 2 points 7 months ago

.... or this is the origin of the Q and we're all fucked. I find my hypothesis much more plausible.

[-] [email protected] 1 points 7 months ago

plausible: check

testable: TBD

falsifiable: TBD

still, 1 out of 3. not bad!

[-] [email protected] 5 points 7 months ago

It's apparently Q* (pronounced Q star).

[-] [email protected] 3 points 7 months ago

At least they didn't name it AM?

[-] [email protected] 14 points 7 months ago

Nah. Programming is... really hard to automate, and machine learning more so. The actual programming for it is pretty straightforward, but to make anything useful you need to get training data, clean it, and design a structure, which is much too general for an LLM.
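
To make that concrete, here's a rough sketch of those human-judgment steps. The file name, columns, thresholds, and model shape below are all hypothetical; the point is which decisions an LLM can't make for you:

```python
# Hypothetical sketch of the hard-to-automate parts of an ML project.
# The data source, cleaning rules, and architecture are invented for
# illustration - each one encodes a judgment call about the domain.
import pandas as pd
import torch.nn as nn

# 1. Get training data: deciding what data is even relevant is a human call.
df = pd.read_csv("sensor_logs.csv")  # hypothetical dataset

# 2. Clean it: every rule here is a domain-specific decision.
df = df.dropna(subset=["reading"])            # drop incomplete rows
df = df[df["reading"].between(0.0, 100.0)]    # discard impossible values
df["reading"] = (df["reading"] - df["reading"].mean()) / df["reading"].std()

# 3. Design a structure: depth and layer widths are design choices,
#    not something you can look up.
model = nn.Sequential(
    nn.Linear(1, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
```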

[-] Corkyskog 2 points 7 months ago

Programming is like 10% writing code and 90% managing client expectations, in my limited experience.

[-] [email protected] 1 points 7 months ago

Programming is 10% writing code, 80% being up at 3 in the morning wondering whY THE FUCKING CODE WON'T RUN CORRECTLY (it was a typo that you missed despite looking at it over 10 times), and 10% managing expectations

[-] [email protected] 2 points 7 months ago* (last edited 7 months ago)

Typos in programming aren't really a thing, unless you're using the shittiest tools possible.

[-] [email protected] 1 points 7 months ago

But a lot of the crap you have to do only exists because projects are large enough to require multiple separate teams, so you get all the overhead of communication between the teams, etc.

If the task gets simple enough that a single person can manage it, a lot of the coordination overhead will disappear too.

In the end though, people may find out that the entire product they are trying to develop using automation is no longer relevant anyway.

[-] [email protected] 39 points 7 months ago

The sensationalized headline aside, I wish people would stop being so dismissive about reports of advancement here. Nobody but those at the fringes is freaking out about sentience, and there are plenty of domains where small improvements in the models, if successful, can fuck up big parts of our defense/privacy/infrastructure. It really doesn't matter whether a computer has subjective experience if that computer is able to decrypt AES-192 or identify keystrokes from an audio recording.

We need to be talking about what happens after AI becomes competent at even a handful of tasks, and it really doesn't inspire confidence if every bit of news is received with a "LOL computers aren't conscious GTFO".

[-] [email protected] 15 points 7 months ago* (last edited 7 months ago)

That's why I hate when people retort "GPT isn't even that smart, it's just an LLM." Like yeah, the machines being malevolent is not what I'm worried about, it's the incompetent and malicious humans behind them. Everything from scam mail to propaganda to law enforcement is testing the water with these "not so smart" models and getting incredible (for them) results. Misinformation is going to be an even bigger problem when it's so hard to know what to believe.

[-] [email protected] 5 points 7 months ago

I'm even more afraid of the competent evil people

[-] [email protected] 4 points 7 months ago

Also "Yeah what are people's minds really?". The fact that we cannot really categorize our own minds doesn't really mean that we're forever superior to any categorized AI model. The mere fact that right now that bleeding edge is called an LLM doesn't mean that it cannot fuck with us - especially if it is an even more powerful one in the future.

[-] [email protected] 27 points 7 months ago

Allegedly. And no proof was presented. The letter cited was nowhere to be found.

[-] [email protected] 26 points 7 months ago

Pure propaganda. The only safety fears anyone in the industry is going to have is if a model is telling people to kill themselves or each other. But by saying that, the uneducated public is going to assume it's Skynet.

[-] [email protected] 13 points 7 months ago* (last edited 7 months ago)

Why must it always be propaganda in the Fediverse? Why can't it be a more sensible take, like sensationalization? Not everything is out to get you; sometimes a desperate news site just wants a click or a reader.

[-] [email protected] 2 points 7 months ago

Sensationalization implies that it happened and the media turned it into misunderstood clickbait.

If the company designed the PR stunt and executed the PR stunt, that would be propaganda.

[-] [email protected] 1 points 7 months ago

There's literally no proof for the latter and the former is a lot more reasonable. I don't understand this need to jump to conclusions and call everything propaganda like it's a trump card.

[-] [email protected] 1 points 7 months ago

If you want to fan-boy the company, that's your own choice.

The odds that they constructed the 97th "AI safety story" for the press, versus the developers actually "being scared" of the LLM, are very, very high.

No reasonable developer of the product has any worry for safety beyond hallucinations telling people to do immoral things. The only reason anyone says "safety" around LLMs is to generate an alarmist news story for the press.

[-] [email protected] 1 points 7 months ago* (last edited 7 months ago)

Who is "fan boying" the company? Can you explain that and how I did that, exactly? And please quote me.

The real story here is that the model acquired an ability that impressed a lot of people. The press ran with it and fabricated panic for views. Textbook sensationalism. How is that too hard to understand?

The only reason anyone says "safety" around LLMs is to generate an alarmist news story for the press.

That's literally what I'm saying, lol. How we jump from that to propaganda is my question.

[-] Socsa 1 points 7 months ago* (last edited 7 months ago)

Because the political zeitgeist here is dominated by edgy teenagers who still see the world as something done to them instead of something they are doing. It's extremely obvious if you've been through that phase of life already.

[-] [email protected] 2 points 7 months ago

The only safety fears (...) people to kill themselves (...)

Huh? The following might be worse:
The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
https://lemmy.world/post/8715340

[-] [email protected] 1 points 7 months ago

Not related whatsoever to LLMs.

[-] [email protected] 11 points 7 months ago

This is the best summary I could come up with:


OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company.

The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported.

The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers.

The reports followed days of turmoil at San Francisco-based OpenAI, whose board sacked Altman last Friday but then reinstated him on Tuesday night after nearly all the company’s 750 staff threatened to resign if he was not brought back.

As part of the agreement in principle for Altman’s return, OpenAI will have a new board chaired by Bret Taylor, a former co-chief executive of software company Salesforce.

However, his brief successor as interim chief executive, Emmett Shear, wrote this week that the board “did not remove Sam over any specific disagreement on safety”.


The original article contains 504 words, the summary contains 192 words. Saved 62%. I'm a bot and I'm open source!

[-] [email protected] 1 points 7 months ago* (last edited 7 months ago)

My name's gpt. Chat gpt. M, what are my orders? Ok I'll go see Q.
