this post was submitted on 20 Jul 2023
69 points (92.6% liked)

Technology
top 13 comments
[–] [email protected] 9 points 1 year ago (1 children)

When they first launched the Bing AI powered by GPT, I used it for everything. Then it became pretty clear they nerfed it, and I've been waiting for a competitor to catch up. Bard's gotten a little better, but it still hallucinates way worse, making up answers.

I'm secretly hoping for one of these open-source projects like Llama 2 or Orca to lead to a totally unrestricted chatbot even if it's short-lived

[–] [email protected] 1 points 1 year ago

Man, I'd love to have the original Bing AI back. Those hallucinations were something else. It was probably a liability issue, but I wish they'd kept it available with a disclaimer.

[–] [email protected] 4 points 1 year ago

I wouldn't be surprised if it is getting worse. It's not "real" intelligence that "understands" your questions, and unlike more targeted solutions like GitHub copilot they don't have a strong use-case focus that can guide their progress.

But I think it's also that people are coming to terms with what ChatGPT actually can and, more importantly, cannot do. It's crazy sometimes to hear what the average person thinks the current iteration of AI is capable of.

[–] [email protected] 4 points 1 year ago (1 children)

Yep, definitely. I have a plus subscription, and stuff that was easy for it just a few months ago now seems to take several back-and-forths to barely approach similar results.

Science content is where I noticed the most degradation. It just gives me blank "it's not in my training data" answers to questions that used to get comprehensive responses a while ago.

I think they’re scaling down the models to make them cheaper to run?

[–] [email protected] 6 points 1 year ago

They’re definitely reducing model performance to speed up responses. ChatGPT was at its best when it took forever to write out a response. Lately I’ve noticed that ChatGPT will quickly forget information you just told it, ignore requests, hallucinate randomly, and has a myriad of other problems I didn’t have when the GPT-4 model was released.

[–] [email protected] 3 points 1 year ago

Yeah, I just cancelled my Plus subscription because it's not valuable to me anymore. It feels nearly as bad as 3.5 at times, and having to go back and forth with it on a 25-messages-per-3-hours budget is extremely stupid.

[–] [email protected] 1 points 1 year ago (2 children)
[–] [email protected] 2 points 1 year ago

It's good at writing sentences. The content of the sentences may or may not be real/true.

[–] [email protected] 1 points 1 year ago (1 children)

It's pretty great at writing short utility scripts and code. And it's fantastic at explaining errors, warnings, and log file dumps.

That's what I use it for.

[–] [email protected] 1 points 1 year ago

Strongly disagree on the explaining part, because you don't know whether what it tells you is correct. And you have to validate any code it creates anyway, so 🤷‍♂️

I've asked it to produce C for a specific product, and it effectively summarized and reproduced existing example code. Being able to so easily discover a source it used for training revealed the entire trick at once.

[–] [email protected] 1 points 1 year ago

A month ago, as someone working in IT and graphic design, I was getting the "AI is going to replace you" line every day, so I'm loving the AI-decline headlines this week.

[–] [email protected] 1 points 1 year ago (1 children)

Yes. Get ready for the advent of REAL, paid models. Gotta make the free ones stupid first.

[–] [email protected] 2 points 1 year ago

GPT-4 is included in the study and is not free, though, lol. It's actually kind of expensive to use for lots of queries.