Many licences have different rules for redistribution, which I think is fair. The site is free to use, but it's not fair to copy all the data and build a competing site.
Of course Wikipedia could adopt such a license. I don't think they have, though.
How is a lack of infrastructure an argument for allowing something morally wrong? We can take that argument ad absurdum by saying there are more people with guns than there are cops - therefore killing must be morally correct.
The core infrastructure problem is distinguishing queries made by individuals from those made by programs scraping the internet for AI training data. You can't: the way data is presented online makes such differentiation impossible.
Either all data must be placed behind a paywall, or none of it should be. Selective restriction is impractical. Copyright is not the central issue, as AI models do not claim ownership of the data they train on.
If information is freely accessible to everyone, then by definition it is free to be viewed, queried, and used by any application. The copyrighted material used in AI training is not stored verbatim; it is learned from.
In the same way, an artist drawing inspiration from Michelangelo or Raphael does not need to compensate their estates. They are not copying the work but rather learning from it and creating something new.
I disagree. Machines aren't "learning"; you are anthropomorphising them. They are storing the original works, just in a very convoluted way that makes it hard to tell which works were used when generating a new one.
I tend to see it as: they used "all the works" they trained on.
For the sake of argument, assume I could make an "AI" that meshes images together, but train it on only two famous works of art. It would spit out a split screen: the left half of the first work next to the right half of the other (see the sketch at the end of this comment). This would clearly be recognized as copying the original works, but it would be a "new piece of art", right?
What if we add more images? At some point it would just be a jumbled mess, but it would still consist wholly of copies of original art. It would just be harder to demonstrate.
Morally - not practically - is the sophistication of the AI in jumbling the images together really what should constitute fair use?
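For concreteness, here's a minimal sketch of that two-work "meshing" model, assuming Pillow is installed; the file names are hypothetical placeholders:

```python
# Paste the left half of work A next to the right half of work B.
from PIL import Image

a = Image.open("work_a.jpg")                       # hypothetical file names
b = Image.open("work_b.jpg").resize(a.size)        # match sizes for a clean split

w, h = a.size
out = Image.new("RGB", (w, h))
out.paste(a.crop((0, 0, w // 2, h)), (0, 0))       # left half: copied from A
out.paste(b.crop((w // 2, 0, w, h)), (w // 2, 0))  # right half: copied from B
out.save("new_art.jpg")                            # "new" art, wholly made of copies
```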
That's literally not remotely what LLMs are doing.
And they most certainly do learn, in the common sense of the term. They even use neural nets, which mimic the way neurons function in the brain.
"Mimic" is a stretch; "inspired by", perhaps, but neural nets in machine learning don't work at all like biological neural nets. They are just variables in a huge matrix multiplication (see the sketch below).
FYI, I do have a Master's degree in Machine Learning.
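To make that concrete, a minimal sketch of what one "layer of neurons" amounts to numerically; the sizes and random weights are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((784, 128))  # the learned "neurons": just a matrix of variables
b = np.zeros(128)                    # learned biases
x = rng.standard_normal(784)         # one input, e.g. a flattened 28x28 image

h = np.maximum(0.0, x @ W + b)       # a whole layer: one matrix multiply plus a ReLU
print(h.shape)                       # (128,)
```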
Yes, I also have a master's and a PhD in machine learning, which automatically qualifies me as an authority figure.
And I can clearly say that you are wrong.