this post was submitted on 13 Jul 2023
255 points (95.7% liked)


A lawsuit claims Google took people's data without their knowledge or consent to train its AI products, including chatbot Bard.

[–] [email protected] 20 points 1 year ago (2 children)

Search engines point to your site, though; you are getting something back. An LLM won't give a reference. It's something else altogether.

And there is no "robots.txt" to block LLM training scrapers.

Just because you publish something doesn’t imply you forfeit copyright.

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago) (1 children)

Their work isn't being reproduced and sold. Seems like fair use. I hate to say it, but I'm with Google on this. Things would get much worse if these lawsuits succeeded.

[–] [email protected] 8 points 1 year ago (2 children)

No, but it is being used commercially for a profit.

This seems like a situation copyright law never saw coming.

[–] [email protected] 7 points 1 year ago (1 children)

If I read a bunch of copyrighted books, and answer questions based on the knowledge I have acquired from them, I do not owe the authors anything.

[–] diggit 1 points 1 year ago

TLDR: maybe it’s like a library? Libraries pay for books, even digital copies.

Presumably somebody bought a copy of the book, even if you found it on the coffee table.

This seems more like going through the trash for anything legible, reading billboards, and taking free newspapers. It just happens that a lot of the stuff put out at the curb was copyrighted material. In fact, almost every website has © in the footer, so clearly the sentiment is "don't copy my original content", especially without credit. But if the AI is not reproducing the copyrighted material, in whole or in part, then it does seem a bit late to try to claw back value just because someone else found a way to monetize what you put out on the open web. I think that's what's going to have to be proven, one way or another.

Maybe another way to look at an LLM is as an enormous library, but instead of borrowing books and periodicals, as a user you are borrowing the pre-digested knowledge directly. Libraries have complex agreements in place with publishers so that rights holders are compensated. Say what you will about these contracts, but they are a precedent. What is perhaps without precedent is how to handle the rest of the trash this library is indiscriminately gathering up.

[–] [email protected] 2 points 1 year ago

And that's fair use. But I'm more on the side that a lot more things should count as fair use, and that modern content creators have ruined the internet. I would much prefer if the Sarah Silvermans and others realized they can't both use the internet for free promotion and prevent specific people from consuming that content. I'd love it if they all left. I don't need these people as much as they need us.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

See my edit. You own the copyright to your work, but you do not own ① facts about your work, or ② facts contained in your work. The creators of reference works, for example, cannot charge a royalty to people who learn information from those works.

> And there is no "robots.txt" to block LLM training scrapers.

robots.txt is consulted by all manner of automated tools, not just search-engine indexers.
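
For illustration, here is a minimal sketch of that point using Python's standard-library urllib.robotparser. The crawler name and URLs are placeholders rather than anything a real LLM-training scraper actually uses, and the check only matters if the scraper chooses to run it.

```python
# Minimal sketch: how any polite automated tool (search indexer, archiver,
# or an LLM-training scraper that opts in) can consult robots.txt before
# fetching a page. The user agent and URLs below are hypothetical.
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleTrainingBot"        # hypothetical crawler name
TARGET_URL = "https://example.com/post"  # placeholder page to fetch

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # download and parse the site's robots.txt

if robots.can_fetch(USER_AGENT, TARGET_URL):
    print("robots.txt allows fetching", TARGET_URL)
else:
    print("robots.txt disallows fetching", TARGET_URL)

# Compliance is entirely voluntary: a scraper that ignores robots.txt
# simply never runs a check like this, which is the parent comment's point.
```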