this post was submitted on 20 Aug 2023
39 points (95.3% liked)

lemmy.ml meta

1406 readers

Anything about the lemmy.ml instance and its moderation.

For discussion about the Lemmy software project, go to [email protected].

founded 3 years ago

Some context about this here: https://arstechnica.com/information-technology/2023/08/openai-details-how-to-keep-chatgpt-from-gobbling-up-website-data/

The robots.txt would be updated with this entry:

User-agent: GPTBot
Disallow: /

Obviously this is meaningless against non-OpenAI scrapers or anyone who just doesn't give a shit.
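For what it's worth, a compliant crawler parses robots.txt before fetching anything. A minimal sketch with Python's standard-library `urllib.robotparser` shows how the proposed entry would be interpreted (the paths and the "SomeOtherBot" agent name are just made-up examples):

```python
from urllib.robotparser import RobotFileParser

# Parse the same two lines the post proposes adding to robots.txt
rp = RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

# A compliant GPTBot must skip every path on the site...
print(rp.can_fetch("GPTBot", "/post/123"))        # False
# ...while crawlers not named in the file are unaffected by this entry.
print(rp.can_fetch("SomeOtherBot", "/post/123"))  # True
```

Note this is purely advisory: the parser only tells a crawler what it is *asked* not to fetch; nothing stops a scraper from ignoring the file entirely.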

all 14 comments
[–] [email protected] 3 points 1 year ago (1 children)

I could be wrong, but wouldn't people be able to file class-action lawsuits against these companies? They are literally copying content without obtaining any prior explicit user consent. Also, I'm pretty sure Europeans have the upper hand with the data privacy protections of the GDPR (European data being extracted/harvested and transferred to US servers).

I could be wrong though

[–] [email protected] 3 points 1 year ago

Idk, I'm not at all for a private entity scraping comments and then using them; that's basically us doing free labour for them! Not cool!

[–] [email protected] 2 points 1 year ago (1 children)

Wouldn't they theoretically be able to set up their own instance, federate with all the larger ones, and scrape the data that way? Not sure blocking them via the robots.txt file is the most effective barrier if they really want the data.

[–] [email protected] 11 points 1 year ago* (last edited 1 year ago)

Robots.txt is more of an honor system. If they respect it, they won't pull that trick.

[–] [email protected] 1 points 1 year ago (1 children)

If they'll pay us when they scrape our content, sure.

[–] [email protected] 1 points 11 months ago (1 children)

... Is that like a non-argument? How do you suppose they would pay sites, let alone site users, to scrape their content?

[–] [email protected] 1 points 11 months ago

Yes, that's the point.

[–] [email protected] 1 points 1 year ago

I think this is a general question and problem for the whole fediverse, and it can easily lead to the question of whether, or even when, the fediverse is going to embrace closed, private, or even invite-only spaces, in order to try to secure some "human interaction only" social media.

[–] [email protected] 0 points 1 year ago

That won't stop OpenAI. We need actual blocking, on the server side. Problem is, with federation and all, it will be really, really difficult to do. And expensive.
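Server-side blocking of a declared crawler is simple in principle. A minimal sketch as WSGI middleware, assuming the scraper honestly reports its User-Agent (anything that spoofs the header sails right through, which is the commenter's point about it being hard to do properly):

```python
# Hypothetical middleware: reject requests whose User-Agent names a
# blocked crawler. In a real deployment this check would usually live
# in the reverse proxy, not the application.
BLOCKED_AGENTS = ("GPTBot",)

def block_scrapers(app):
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(bot in ua for bot in BLOCKED_AGENTS):
            # Refuse the request outright instead of relying on robots.txt
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware
```

Federation is the harder half: content replicated to another instance is served by *that* instance's servers, where no middleware of yours runs.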

[–] [email protected] -2 points 1 year ago* (last edited 1 year ago) (1 children)

I can understand privacy concerns, but I feel like it's inevitable that LLMs will be used to make lots of decisions, some possibly important, so wouldn't you want some content included in its training? For instance, would you want an LLM to be ignorant of FOSS because all the FOSS sites blocked it, and then a child asks an LLM for advice on software and gets recommended Microsoft and Apple products only?

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)

... It's probably going to recommend paid and non-FOSS apps and programs just on the basis that those companies will probably pay to be the top suggestions, just like Google ads. So no, I don't think that's a good enough reason. They can still scrape wikis if they need info on FOSS, imo. Those shouldn't (?) block AIs and other aggregators.