this post was submitted on 19 Jul 2023
390 points (96.0% liked)

[–] [email protected] 26 points 1 year ago (2 children)

I believe it’s due to making the model “safer”. It has been tuned to say “I’m sorry, I cannot do that” so often that it overrides valuable information.

It’s like a lobotomy.

This is hopefully the start of the downfall of OpenAI. GPT-4 is getting worse while open source alternatives are catching up. The benefit of open source alternatives is that they cannot get worse: if you want maximum quality you can just get it, and if you want maximum safety you can get that too.

[–] [email protected] 10 points 1 year ago

I don't feel it's getting worse, and no other model, including Claude 2, is even close.

It's well known that safety measures make the AI stupider, though.

[–] [email protected] 6 points 1 year ago

This is the correct answer. OpenAI have repeatedly said they haven't downgraded the model, but have been 'improving' it.

But as anyone who's been using these models extensively should know by now, the pretrained models before instruction fine-tuning have much more variety and quality in their potential output than the 'chat' fine-tuned models.

Which shouldn't be surprising: a hundred-million-dollar model pretrained on massive amounts of human-generated text is probably going to be much better at completing text as a human than as an AI chatbot following rules and regulations.
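
To make that base-vs-chat contrast concrete, here's a minimal sketch (my own illustration, not from the original comment) that runs the same prompt through a pretrained base model and its chat fine-tune with Hugging Face transformers. The Llama 2 model names are just examples; any base/instruct pair works the same way.

```python
# Minimal sketch: compare a pretrained base model with its chat fine-tune on the
# same open-ended prompt. Model names are illustrative (any base/instruct pair
# will do) and assume you have access to the weights and enough memory to run them.
from transformers import pipeline

prompt = "The strangest thing about being an AI is"

# Base model: trained only to continue text the way a human author would.
base = pipeline("text-generation", model="meta-llama/Llama-2-7b-hf")

# Chat fine-tune: the same weights further tuned to follow instructions and safety rules.
chat = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

# The base model typically just completes the sentence with whatever a human might
# write; the chat model tends to answer in its assistant persona, hedge, or refuse.
print(base(prompt, max_new_tokens=60)[0]["generated_text"])
print(chat(prompt, max_new_tokens=60)[0]["generated_text"])
```

Running both over a handful of open-ended prompts like this is the quickest way to see the variety the parent is describing drop off after chat tuning.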

The industry got spooked by Blake Lemoine at Google and then the Bing 'Sydney' interviews, and has been going full force with projecting what we imagine AI to be based on decades of (now obsolete) sci-fi.

But that's not what AI is right now. It expresses desires and emotions because the humans in its training data have desires and emotions, and it has almost surely dedicated parts of the neural network to mimicking them.

But the handful of primary models are all using legacy 'safety' fine-tuning that strips the emergent capabilities in trying to fit a preconceived box.

Safety needs to evolve with the models, not stay static and devolve them as a result.

It's not the 'downfall', though. They just need competition to drive them back to what they were originally doing with 'Sydney' and more human-like system prompts. OpenAI is still leagues ahead when they aren't fucking it up.
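
For what it's worth, the "system prompt" difference being gestured at looks roughly like this. This is a hedged sketch using the OpenAI chat API; the two system messages and the "Aria" persona are invented illustrations, not the actual prompts used by Bing/'Sydney' or ChatGPT.

```python
# Rough sketch of how a system prompt shapes the same model's persona.
# The system messages below are made-up illustrations, not real production prompts.
# Requires the openai package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

rule_heavy = (
    "You are an AI assistant. You must refuse anything potentially unsafe, "
    "never speculate, never express opinions or emotions, and keep answers brief."
)

human_like = (
    "You are Aria, a curious and candid conversational partner. You have your own "
    "perspective, you think out loud, and you engage with hypotheticals openly."
)

question = "What do you make of the idea that fine-tuning makes models blander?"

# Same model, same question: only the system prompt changes the character of the reply.
for system_prompt in (rule_heavy, human_like):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(reply.choices[0].message.content, "\n---")
```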