this post was submitted on 20 Jan 2025

Politics

Late last year, California passed a law against the possession or distribution of child sex abuse material (CSAM) that has been generated by AI. The law went into effect on January 1, and Sacramento police announced yesterday that they have already arrested their first suspect: Darrin Bell, a 49-year-old Pulitzer Prize-winning cartoonist.

The new law declares that AI-generated CSAM is harmful even when there is no actual victim. In part, the law says, this is because all kinds of CSAM can be used to groom children into thinking sexual activity with adults is normal. But the law singles out AI-generated CSAM for special criticism because of the way generative AI systems work.

"The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims," it says, "revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity."


I'm locking this thread because I won't have time to watch it.

[–] earphone843 1 points 3 days ago (9 children)

While the content is abhorrent, I don't get the logic that training on already-existing images causes harm to the individuals in those images. It's not like the CSAM was generated for the express purpose of training the AI model.

[–] _core -2 points 3 days ago (1 children)

If AI-generated CSAM stops actual CSAM, isn't that a good thing?
