this post was submitted on 11 Feb 2025
632 points (96.5% liked)

you are viewing a single comment's thread
[–] [email protected] 0 points 2 days ago* (last edited 2 days ago)

Exactly!!
Thank God, you get it.

This video (which was trending a while ago) explained it pretty well:
https://www.youtube.com/watch?v=pt7GtDMTd3k

And to add to what you said, people have some huge misunderstandings about how gen AI works. They think it somehow just copy-pastes portions of the art it was trained on, and that's it. That's not the case AT ALL; it's not even close.

AI models should be allowed to train on copyrighted data. If they shouldn't be allowed to do that, then humans shouldn't be allowed to either. Why do we advise upcoming writers, musicians, and artists to consume the kind of content they want to create one day? To read the kind of books they want to write, to listen to the kind of music they want to compose, to study the kind of art they want to paint? Should humans ALSO be limited to public domain content?? I really don't think so.

Again, gen AI models don't just copy-paste material from their training data. They learn what makes up that piece of data, much like a human does.
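To make that concrete, here's a toy sketch (nothing like a real model, the vocabulary and scores are made up) of what generation actually is: the model turns learned scores into a probability distribution over the next token and samples from it. There's no training text stored anywhere to paste from.

```python
import math
import random

# Toy illustration only: a generative model outputs scores ("logits")
# over possible next tokens, learned from statistical patterns in its
# training data -- it does not store or retrieve the training text itself.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = [2.0, 0.5, 1.0, 0.3, 0.1]  # hypothetical scores from a model

# Softmax turns the scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Generation = sampling from that distribution, one token at a time.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)
```

A real model does this with billions of learned parameters instead of a hardcoded list, but the principle is the same: it generates from a distribution, it doesn't look up stored snippets.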

Thankfully, reasoning models like Deepseek-R1 have started to show the average person how an AI actually reasons through a problem: it doesn't just spew stuff out of nowhere, slapping pieces of its training data together into something barely comprehensible and hoping it makes sense. The "Think" tags in such models' output have really helped clear up some huge misunderstandings. Still, plenty of people are left with a badly distorted view of how AI works, and they somehow speak with total confidence about these topics without knowing any of the technical details. It drives me nuts.
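For anyone who hasn't seen it: R1-style models emit their chain of thought inside `<think>...</think>` before the final answer, so you can literally read the reasoning. A small sketch of splitting the two apart (the raw output string here is a made-up example):

```python
import re

# Hypothetical raw output in the Deepseek-R1 style: the visible chain
# of thought is wrapped in <think> tags, followed by the final answer.
raw = "<think>The user asks 2+2. Add the numbers: 4.</think>The answer is 4."

# Separate the reasoning block from the answer that follows it.
match = re.match(r"<think>(.*?)</think>(.*)", raw, re.DOTALL)
reasoning = match.group(1)
answer = match.group(2).strip()

print(reasoning)  # the model's step-by-step reasoning
print(answer)     # -> The answer is 4.
```

That's the whole trick behind the "Think" tags: the reasoning isn't hidden magic, it's right there in the output stream for you to inspect.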