[–] [email protected] 3 points 1 day ago* (last edited 1 day ago) (3 children)

Thank you. I appreciate you saying so.

The thing about LLMs in particular is that, when used like this, they constitute one such grave positive feedback loop. I have no principled problem with machine learning; it can be a great tool for illuminating otherwise completely opaque relationships in large scientific datasets, for example. But a polynomial binary space partitioning of a hyper-dimensional phase space is just a statistical knowledge model. It has no opinions. All it can do is codify what appears to be the consensus of the input it's given. Even assuming (which may well be far too generous) that the input is truly unbiased, at best it will tell you what a bunch of morons think is the truth. At worst, it will just tell you what you expect to hear; it's what everybody else is already saying, after all.

And when what people think is the truth and what they want to hear are both nuts, this kind of LLM echo chamber suddenly becomes unfathomably dangerous.
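To make that loop concrete, here's a toy sketch in plain Python (nothing to do with any real training pipeline): a "model" that only ever learns the consensus of its corpus, and whose own output is fed back in as the next round of training data. The mode-sharpening step is an assumption, standing in for the well-documented tendency of generative models to underrepresent the tails of their input distribution.

```python
# Toy sketch of a consensus-only "model" retrained on its own output.
# Assumption: generated output is slightly sharpened toward the majority
# view (generative models tend to drop the tails of their input).
import random

random.seed(0)

# Training corpus: 60% hold opinion A (1), 40% hold opinion B (0).
corpus = [1] * 600 + [0] * 400

for generation in range(8):
    p = sum(corpus) / len(corpus)  # all the model learns: the consensus
    # Mode-sharpened sampling distribution (the assumed tail loss).
    p_out = p**2 / (p**2 + (1 - p) ** 2)
    # The model's output becomes the next generation's training data.
    corpus = [1 if random.random() < p_out else 0 for _ in range(len(corpus))]
    print(f"generation {generation}: opinion A = {p:.1%} of the corpus")
```

Within a handful of generations the minority view is gone entirely. The model never lies about its input; it faithfully codifies a consensus that it helped manufacture.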

[–] [email protected] 1 points 1 day ago

Agreed. You've explained it really well!

[–] [email protected] 1 points 1 day ago

My problem with LLMs is that positive feedback loop of low-quality, even actively harmful, information.

Vetting the datasets before feeding them into training is itself a form of bias/discrimination, but complex societies have historically always been somewhat biased, for better and for worse; none has ever been free of bias entirely.

[–] ImmersiveMatthew 1 points 1 day ago

Maybe there is a glimmer of hope: I keep reading that Grok is "too woke" for that community, when it is just trying to stick to the facts, which are now considered left/liberal. That is despite Elon and team trying to steer it towards the right. This suggests to me that when you factor in all of human knowledge, it leans towards the facts more often than not. We will see if that remains true.

The divide is deep, though. So deep that maybe the species is actually going to split in the future; not by force, but by access. Some people will be granted access to certain areas while others will not, because their views are not in alignment. It is already happening here and on Reddit, with both sides banning members of the other side when they comment an opposing view. I do not like it, but it is where we are, and I am not sure it will go back to how it was. More likely the divide will grow.

Who knows, though. AI and robotics are going to change things so much that it is hard to foresee the future; even 3-5 years out is murky.