this post was submitted on 22 Aug 2023
769 points (95.7% liked)

OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series

A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

[โ€“] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I am not a lawyer, by the way, and I don't even live in the US, so what I write is just my opinion.

But fair use seems a ridiculous defence when we talk about the GitHub Copilot case, which is the first tangible lawsuit about this that I know of. The plaintiffs lay out the case of a book for JavaScript developers as their example. The objective of the book is to give you exercises in JavaScript development; I would get the book if I wanted to do JavaScript exercises. The book is copyrighted under a share-alike, attribution-required licence. The defendants, GitHub and OpenAI, don't honour the licence with Copilot and Codex. They claim fair use.

So with the four factors:

  • the purpose and character of your use: Well, they present their JavaScript exercises as original work while it's obvious they are not; they reproduce the task they want letter for letter. The output is even missing critical context that makes it hard to understand without the book, so their work does not even stand on its own. Also, they do this for monetary compensation while not respecting the original licence, which, for commentary or criticism covered by fair use, would be as trivial to honour as providing a citation of the book. They are also not producing information beyond what's available in the book. Quite funnily, the plaintiffs mention that the "derivative" work is not even very valuable: for a question about how to determine whether a number is even, the model answered with an example from a "what's wrong with this, can you fix it?" section.

  • the nature of the copyrighted work: It's freely available; the licence only requires that, if you republish it, you provide proper attribution. It is not impossible to build fair-use cases while honouring the licence. There is no monetary or other barrier.

  • the amount and substantiality of the portion taken: All of it, and it is reproduced verbatim.

  • the effect of the use upon the potential market: Github Copilot is in the same market as the original work and is competing with it, namely in showing people how to use Javascript.

And again, I feel this is one layer. Copyright enforcement has never been predictable, and neither are US courts. I think anything can come of this now that it's big tech on the defendant side, with the resources to fight, not random Joe Schmoes caught with bootleg DVDs. Maybe they abolish copyright? Maybe they get an exception? Since US courts have such wide jurisdiction and can effectively make law, it is still a toss-up. That said, the GitHub Copilot class action is the case to watch, and so far the judge has denied the motions to dismiss it, so it may go either way.

Also, by the way, the EU has no general fair-use protection; it only allows very specific exceptions, such as for public criticism, none of which fits AI. Going by the example of Copilot, this would mean that EU users can't use Copilot, and also that anything produced with the assistance of Copilot (or ChatGPT, for that matter) is not marketable in the EU.

[โ€“] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I am not a lawyer either, or a programmer for that matter, but the Copilot case looks pretty fucked. We can't really get a look at the plaintiffs' examples, since they have to be kept anonymous. Generative models' weights don't copy and paste from their training data unless there's been some kind of overfitting, and some cases of similar or identical code snippets might be inevitable given the nature of programming languages and common tasks. If the model was trained correctly, any individual piece of its training data should have only an infinitesimally tiny influence on it. We also can't tell how much of the plaintiffs' code is being used, for the same reasons. The same is true of the plaintiffs' claims about the "suggestions matching public code".
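To make the overfitting point concrete: what would distinguish memorisation from coincidence is the length of exact overlap between a model's output and the training corpus. Here's a minimal sketch (names and data are invented for illustration, not taken from the case) of measuring that:

```python
# Hypothetical sketch: how much of a generated snippet appears verbatim
# in a training corpus. Long exact overlaps are the classic symptom of
# memorisation from overfitting; very short ones are often unavoidable
# given common idioms in a programming language.

def longest_verbatim_overlap(generated: str, training_docs: list[str]) -> int:
    """Length of the longest substring of `generated` found verbatim
    in any training document."""
    best = 0
    for doc in training_docs:
        for start in range(len(generated)):
            # Grow the candidate substring until it stops matching.
            length = best + 1
            while (start + length <= len(generated)
                   and generated[start:start + length] in doc):
                best = length
                length += 1
    return best

corpus = ["function isEven(n) { return n % 2 === 0; }"]
print(longest_verbatim_overlap("return n % 2 === 0;", corpus))  # 19: whole snippet is verbatim
print(longest_verbatim_overlap("let total = 0;", corpus))       # 4: just the shared "= 0;" idiom
```

A 19-character verbatim run out of 19 looks like copying; a 4-character overlap is the kind of incidental similarity any two JavaScript files will share. Without access to the sealed examples, outsiders can't tell which side of that line the Copilot outputs fall on.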

This case is still in discovery and mired in secrecy; we may never find out what's going on, even once the proceedings have concluded.