this post was submitted on 23 Sep 2023
209 points (91.0% liked)

cross-posted from: https://lemmy.ml/post/5400607

This is a classic case of tragedy of the commons, where a common resource is harmed by the profit interests of individuals. The traditional example of this is a public field that cattle can graze upon. Without any limits, individual cattle owners have an incentive to overgraze the land, destroying its value to everybody.

We have commons on the internet, too. Despite all of its toxic corners, it is still full of vibrant portions that serve the public good — places like Wikipedia and Reddit forums, where volunteers often share knowledge in good faith and work hard to keep bad actors at bay.

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

[–] [email protected] 10 points 1 year ago (3 children)

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

This analogy falls apart when you note that "overgrazing" these resources does absolutely nothing to harm them.

They're still there. They haven't been affected in any way by the fact that a machine somewhere has read them and learned a bunch of stuff from them. So what?

[–] [email protected] 7 points 1 year ago (1 children)

This analogy falls apart when you note that "overgrazing" these resources does absolutely nothing to harm them.

Only if you consider AI-supercharged misinformation to not be harmful.

Only if you consider the entropy of human interaction on the internet to not be harmful.

Only if you consider being unable to know who is real to not be harmful.

[–] [email protected] 8 points 1 year ago (1 children)

None of those things directly harm the resources being "grazed", and none of them are inevitable consequences of AI. If you think they are then you're actually arguing against AI in general and not the specific way in which they've been trained.

[–] [email protected] 0 points 1 year ago (1 children)

You think an internet flooded with articles, comments, etc., all written by AI whose only goals are selling shit, disseminating misinformation, and manipulating elections and opinions - with no way to know what is human and what is AI - is going to be a great environment to keep training your AI?

You might be interested to read about Model Autophagy Disorder.
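
For anyone unfamiliar with it, here's a minimal, hypothetical sketch (in Python, not taken from the paper) of the feedback loop that "Model Autophagy Disorder" / model collapse describes: a "model" repeatedly retrained on its own output. A fitted Gaussian stands in for the model here purely for illustration; all names and numbers are made up.

```python
# Toy illustration of the self-consuming training loop:
# a "model" (here just a Gaussian fit) is repeatedly retrained on data
# sampled from its previous generation. The spread of the data tends to
# shrink over generations, i.e. diversity collapses.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 100        # size of each generation's "training set"
n_generations = 200    # how many times the model retrains on its own output

# Generation 0: "real", human-made data.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for gen in range(n_generations):
    mu, sigma = data.mean(), data.std()        # "train" on the current data
    data = rng.normal(mu, sigma, n_samples)    # next generation: model output only
    if gen % 50 == 0:
        print(f"generation {gen:3d}: std = {sigma:.3f}")

print(f"spread after {n_generations} generations: {data.std():.3f}")
```

Run it and the printed spread drifts well below the original 1.0, which is the basic mechanism the MAD paper describes at a much larger scale.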

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

That is not a problem caused by "overgrazing" those open resources. It's a separate problem with AI training that needs to be addressed anyway. You're just throwing out random AI-related challenges regardless of whether they're relevant to what's being discussed.

Simply put, quality control is always important.

[–] [email protected] 3 points 1 year ago (1 children)

If you pump toxic waste onto the field, nobody gets to graze it.

Fuck, you're being pedantic.

[–] [email protected] -1 points 1 year ago (1 children)

And you're completely missing the point.

Whether or not toxic waste is pumped into the field is completely independent of whether anyone is "grazing" on it. AIs are going to be trained and AIs are going to be generating content regardless of whether those "commons" are being used as training material. If you wish to keep those "commons" high-quality, you're going to have to come up with some way of doing that regardless of whether they're being used as training material. Banning their use as training material will have no impact on whether they get "toxic waste" pumped into them.

My objection is to those who are saying that to save the commons we need to prevent grazing, i.e., that to save the quality of public discourse we need to prevent AIs from training on it. Those two things are unrelated. Stopping AIs from training on it will do nothing to preserve the quality of public discourse.

[–] [email protected] 1 points 1 year ago (1 children)

No mate, you're just being pedantic

[–] [email protected] 0 points 1 year ago (1 children)

And you're still keen to argue two days after the thread's moved off of everyone's feed.

[–] [email protected] 1 points 1 year ago

Yeah man, I log in a couple of times a day to check notifications

[–] [email protected] 5 points 1 year ago (1 children)

While the analogy is not perfect, you can think of the harm as getting lost in the noise. If the "overgrazing" of content on the internet (content meant to be read, listened to, etc., often produced as someone's job) generates a huge amount of derivative, AI-generated content based on it, then the original is harmed by being lost in the noise.

[–] [email protected] -1 points 1 year ago (1 children)

AI-generated content is coming regardless, whether those open sources get "grazed" or not.

[–] [email protected] 1 points 1 year ago

Yes, but there is a qualitative difference when the output competes directly with the "grazed" material. There is a difference between AI-generated audiobooks in general and AI-generated audiobooks in the voice of X, as far as X is concerned. Once AI can perfectly reproduce X's voice, their value as a voice actor drops to zero; hence the "overgrazing". That is not the same thing as simply being able to produce audiobooks in any other voice.

[–] [email protected] 1 points 1 year ago (1 children)
[–] [email protected] 0 points 1 year ago (1 children)

That was entirely self-inflicted damage.

[–] [email protected] 1 points 1 year ago (1 children)
[–] [email protected] 0 points 1 year ago

Reddit is responsible for their own API changes, not OpenAI or any other external agency that might have been using Reddit data for AI training. Only Reddit was capable of choosing to change their API; it's entirely under their control.