this post was submitted on 04 Dec 2023
888 points (97.9% liked)
Technology
How can the training data be sensitive if no one ever agreed to give their sensitive data to OpenAI?
Exactly this. And how can an AI which "doesn't have the source material" in its database be able to recall such information?
IIRC based on the source paper the "verbatim" text is common stuff like legal boilerplate, shared code snippets, book jacket blurbs, alphabetical lists of countries, and other text repeated countless times across the web. It's the text equivalent of DALL-E "memorizing" a meme template or a stock image -- it doesn't mean all or even most of the training data is stored within the model, just that certain pieces of highly duplicated data have ascended to the level of concept and can be reproduced under unusual circumstances.
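To make the duplication point concrete, here's a toy sketch (my own illustration, not from the paper): count repeated word n-grams across a few fake "web pages" that share the same legal boilerplate. The page texts are made up; the point is just that the shared snippet's n-grams occur once per page while everything else is unique, so the duplicated text dominates the frequency counts, and those are exactly the strings a model sees often enough to memorize.

```python
# Toy illustration: boilerplate repeated across pages dominates n-gram counts,
# mirroring why highly duplicated text is what models memorize verbatim.
from collections import Counter

# Hypothetical mini-corpus: three "pages" sharing one legal disclaimer.
boilerplate = "This site is for informational purposes only."
pages = [
    boilerplate + " Welcome to our law firm.",
    boilerplate + " Read our latest tech coverage.",
    boilerplate + " Alphabetical list of countries follows.",
]

def ngrams(text, n=5):
    # All overlapping n-word windows in the text.
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

counts = Counter(g for page in pages for g in ngrams(page))
top_gram, freq = counts.most_common(1)[0]
print(freq, repr(top_gram))
# The most frequent 5-grams all come from the shared boilerplate,
# appearing once per page; page-specific text appears only once.
```

Scaled up to web-crawl size, the same skew is why the "verbatim" extractions land on license text, country lists, and book blurbs rather than arbitrary private documents.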
Did you read the article? In one example, the verbatim text includes email addresses and names (alongside the legal boilerplate) taken directly from asbestoslaw.com.
Edit: I meant the DeepMind article linked in this article. Here's the link to the original transcript I'm talking about: https://chat.openai.com/share/456d092b-fb4e-4979-bea1-76d8d904031f