Politics
For civil discussion of US politics. Be excellent to each other.
Rule 1: Posts have the following requirements:
▪️ Post articles about the US only
▪️ Title must match the article headline
▪️ Recent (Past 30 Days)
▪️ No Screenshots/links to other social media sites or link shorteners
Rule 2: Do not copy the entire article into your post. One or two small paragraphs are okay.
Rule 3: Articles based on opinion (unless clearly marked and from a serious publication; no Fox News or equivalent), misinformation, or propaganda will be removed.
Rule 4: Posts or comments that are homophobic, transphobic, racist, sexist, or ableist will be removed.
Rule 5: Keep it civil. It’s OK to say the subject of an article is behaving like a jerk. It’s not acceptable to say another user is a jerk. Cussing is fine.
Rule 6: Memes, spam, other low-effort posting, reposts, advocating violence, off-topic content, trolling, offensive content, or posts about the moderators or meta content may be removed at any time.
While the content is abhorrent, I don't follow the logic that training on already-existing images causes harm to the individuals in those images. It's not as if CSAM was generated for the express purpose of training the AI model.
Yes, you copied and pasted the section of the article I disagree with. Did you have a point?
Don't forget the bolded section which answers your questions because it is being trained on specific child porn. You're dangerously close to sticking up for child porn. I'm also the mod, so tread lightly on that issue.
You aren't answering his question above and completely missing his point.
And then threatening him.
He isn't asking whether there is CSAM in the dataset, but why it would matter. Granted, there is a lot to be said on the subject, but you aren't saying much other than yetrippingbastard behavior.
I did answer with the bolded text. Move along.
The bolded section doesn't answer my questions, because my questions challenge its assertions.
Then let's end this discussion here, I disagree.
If AI-generated CSAM stops actual CSAM, isn't that a good thing?
I think so