this post was submitted on 26 Nov 2024
560 points (97.1% liked)

Microblog Memes

5911 readers
2472 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

founded 1 year ago
[–] [email protected] 9 points 3 days ago (1 children)

While this example is somewhat easy to correct for, it shows a fundamental problem. LLMs generate output based on the data they were trained on, and in doing so they reproduce all the biases in that data. If we start using LLMs for more and more tasks, we are essentially freezing the status quo, with all its existing biases, making progress even harder.

It's not gonna be "but we have always done it like that" anymore; it's gonna become "but the AI said this is what we should do".

[–] [email protected] 2 points 2 days ago (1 children)

Hmmm... I think you are giving LLMs too much credit here. They're not capable of analysis, thought, or really anything that resembles intelligence. There is a much better chance that this function, or a slight variation of it, just existed in the training set.

[–] [email protected] 1 points 2 days ago (1 children)

Are you replying to the correct comment? Because that's basically what I meant.

[–] [email protected] 2 points 2 days ago

Maybe I misunderstood. I took "data" to mean it was analyzing data.