this post was submitted on 21 May 2024
512 points (95.4% liked)

Technology

[–] [email protected] 167 points 3 months ago (27 children)

Mhm, I have mixed feelings about this. I know this entire thing is fucked up, but isn't it better to have generated stuff than actual stuff that involved actual children?

[–] [email protected] 115 points 3 months ago (21 children)

A problem that gets brought up is that AI-generated images make it harder to spot photos of actual victims, and therefore harder to locate and rescue them

[–] [email protected] 3 points 3 months ago (3 children)

And doesn't the AI learn from real images?

[–] ricecake 24 points 3 months ago

It does learn from real images, but it doesn't need real images of what it's generating to produce related content.
As in, a network trained with no exposure to children is unlikely to be able to easily produce quality depictions of children. Without training on nudity, it's unlikely to produce good results there either.
However, if it knows both concepts it can combine them readily enough, similar to how you know the concept of "bicycle" and that of "Neptune" and can readily enough imagine "Neptune riding an old-fashioned bicycle around the sun while flaunting its top hat".

Under the hood, this type of AI is effectively a very sophisticated "error correction" system. It changes pixels in the image to try to "fix" it so that it matches the prompt, usually starting from a smear of random colors (static noise).
That's how it's able to combine different concepts from a wide range of images to create things it's never seen.
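The denoising loop described above can be sketched in a few lines of toy Python. This is purely illustrative: a real diffusion model replaces the hand-written "fixer" below with a trained neural network that predicts and removes noise, and the "target" here is only a stand-in for what the prompt asks for.

```python
# Toy illustration of the "error correction" view of image generation:
# start from random static, then repeatedly nudge pixels toward what
# the prompt describes. All names here are illustrative, not a real API.
import random

def toy_denoise(target, steps=50, strength=0.2, seed=0):
    rng = random.Random(seed)
    # Begin with pure random noise, like the static a diffusion model starts from.
    image = [rng.random() for _ in target]
    for _ in range(steps):
        # Each step "fixes" the image a little, moving pixels toward the target.
        image = [px + strength * (t - px) for px, t in zip(image, target)]
    return image

target = [0.0, 0.25, 0.5, 0.75, 1.0]  # a tiny 5-"pixel" image
result = toy_denoise(target)
errors = [abs(px - t) for px, t in zip(result, target)]
print(max(errors) < 1e-3)  # True: after 50 steps the noise has been "corrected" away
```

The key point the comment makes survives the simplification: nothing in the loop requires the starting noise to contain the target, which is how these models can output combinations they have never seen.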

[–] [email protected] 7 points 3 months ago

Basically if I want to create ... (I'll use a different example for obvious reasons, but I'm sure you could apply it to the topic)

... "an image of a miniature denim airjet with Taylor Swift's face on the side of it", the AI generators can, despite no such thing existing in the training data. It may take multiple attempts and some effort with the text prompt to get exactly what you're looking for, but you could eventually get a convincing image.

AI takes loads of preexisting data on airplanes, T.Swift, and denim and combines it all into something new.

[–] [email protected] 4 points 3 months ago

True, but by their very nature these generators tend to create anonymous identities, and the sheer volume of them would make it harder for investigators to detect pictures of real, human victims (which can also include indicators of the crime location).

[–] [email protected] 31 points 3 months ago (2 children)

Did we memory hole the whole ‘known CSAM in training data’ thing that happened a while back? When you’re vacuuming up the internet you’re going to wind up with the nasty stuff, too. Even if it’s not a pixel-by-pixel match of the photo it was trained on, there’s a non-zero chance that what it’s generating is based on actual CSAM. Which is really just laundering CSAM.

[–] [email protected] 32 points 3 months ago (1 children)

IIRC it was something like a fraction of a fraction of 1% that was CSAM. The researchers identified the images through their hashes, but the images themselves were no longer accessible in the dataset because they had already been removed from the internet.
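The hash-matching described above can be sketched roughly like this. Real screening systems (e.g. PhotoDNA) use perceptual hashes that survive resizing and recompression; the SHA-256 stand-in below only catches byte-identical files, and all names are illustrative:

```python
# Hedged sketch of hash-based dataset screening: compare each image's
# hash against a set of known-bad hashes, without ever needing the
# flagged material itself -- only its fingerprints.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def screen_dataset(images, known_bad_hashes):
    """Return indices of images whose hash appears on the blocklist."""
    return [i for i, data in enumerate(images)
            if sha256_of(data) in known_bad_hashes]

# Illustrative stand-ins for image bytes and a hash blocklist.
images = [b"cat photo", b"dog photo", b"flagged content"]
blocklist = {sha256_of(b"flagged content")}
print(screen_dataset(images, blocklist))  # [2]
```

This is also why researchers could identify the images without the dataset "containing" them: the dataset held URLs and hashes, and the matches were made against fingerprint lists.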

Still, you could make AI CSAM even if you were 100% sure that none of the training images included it since that's what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That's the power and danger of these things.

[–] [email protected] 11 points 3 months ago (1 children)
[–] [email protected] 7 points 3 months ago (1 children)

Fair but depressing, it seems like it barely registered in the news cycle.

[–] ricecake 1 points 3 months ago

By the time it made it to the news cycle it was being addressed, which did a lot to offset the staying power.
It also wasn't intentional or in significant quantities so there was no controversy, just everyone saying they agree that this should not have happened.

[–] [email protected] 27 points 3 months ago (10 children)

Yeah, it’s very similar to the “is loli porn unethical” debate. No victim, it could supposedly help reduce actual CSAM consumption, etc… But it’s icky, so many people still think it should be illegal.

There are two big differences between AI and loli though. The first is that AI would supposedly be trained with CSAM to be able to generate it. An artist can create loli porn without actually using CSAM references. The second difference is that AI is much much easier for the layman to create. It doesn’t take years of practice to be able to create passable porn. Anyone with a decent GPU can spin up a local instance, and be generating within a few hours.

In my mind, the former difference is much more impactful than the latter. AI becoming easier to access is likely inevitable, so combatting it now is likely only delaying the inevitable. But if that AI is trained on CSAM, it is inherently unethical to use.

Whether that makes the porn generated by it unethical by extension is still difficult to decide though, because if artists hate AI, then CSAM producers likely do too. Artists are worried AI will put them out of business, but then couldn’t the same be said about CSAM producers? If AI has the potential to run CSAM producers out of business, then it would be a net positive in the long term, even if the images being created in the short term are unethical.

[–] Ookami38 24 points 3 months ago (3 children)

Just a point of clarity: an AI model capable of generating CSAM doesn't necessarily have to be trained on CSAM.

[–] [email protected] 3 points 3 months ago (1 children)

I think one of the many problems with AI-generated CSAM is that as AI becomes more advanced it will become increasingly difficult for authorities to tell the difference between what is AI generated and what isn't.

Banning all of it means authorities don't have to sift through images trying to distinguish between the two. If one image is declared to be AI generated and it's not... well... that doesn't help the victims or create fewer victims. It could also make the horrible people who do abuse children far more comfortable putting that stuff out there, because it can hide among all the AI-generated material. Meaning authorities would have to go through far more images before finding ones with real victims in them. All of it being illegal prevents those sorts of problems.

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago)

And that’s a good point! Luckily it’s still (usually) fairly easy to identify AI generated images. But as they get more advanced, that will likely become harder and harder to do.

Maybe some sort of required digital signature for AI art would help; something like a cryptographic signature in the metadata, verifiable with a public key, that can’t be falsified after the fact. Anything without that known and trusted AI signature would by default be treated as the real deal.

But this would likely require large scale rewrites of existing image formats, if they could even support it at all. It’s the type of thing that would require people way smarter than myself. But even that feels like a bodged solution to a problem that only exists because people suck. And if it required registration with a certificate authority (like an HTTPS certificate does) then it would be a hurdle for local AI instances to jump through. Because they would need to get a trusted certificate before they could sign their images.
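The sign-and-verify shape being proposed looks roughly like this toy sketch. A real scheme would use an asymmetric signature (e.g. Ed25519) issued under a certificate authority, so verifiers never hold the signing secret; Python's standard library has no asymmetric crypto, so this stand-in uses HMAC-SHA256 purely to show the flow, and every name in it is hypothetical:

```python
# Toy sketch of signing an image plus its metadata, then verifying the
# signature before trusting the "AI generated" label. HMAC is a symmetric
# stand-in here; a real provenance system would use public-key signatures.
import hashlib
import hmac
import json

def sign_image(key: bytes, image: bytes, metadata: dict) -> str:
    # Bind the signature to both the pixels and the metadata, so neither
    # can be altered after the fact without invalidating it.
    payload = image + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_image(key: bytes, image: bytes, metadata: dict, signature: str) -> bool:
    expected = sign_image(key, image, metadata)
    return hmac.compare_digest(expected, signature)

key = b"generator-secret"  # hypothetical signing key held by the generator
meta = {"generator": "example-model", "ai_generated": True}
img = b"...image bytes..."
sig = sign_image(key, img, meta)
print(verify_image(key, img, meta, sig))                 # True
print(verify_image(key, img + b"tampered", meta, sig))   # False: edits break the signature
```

Note this sketches exactly the hurdle mentioned above: whoever verifies needs a trusted key, which is where the certificate-authority problem comes in for local instances.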

[–] mindbleach 2 points 3 months ago

The first is that AI would supposedly be trained with CSAM to be able to generate it.

There's photorealistic furry porn.

Unless I've missed some news, those obviously can't be based on real images. So maybe this technology that can generate a wizard raising zombies from a swamp made of ice cream doesn't require examples of the exact thing it's showing you.

[–] Kalcifer 1 points 3 months ago* (last edited 3 months ago)

But it’s icky so many people still think it should be illegal.

Imo, not the best framework for creating laws. Essentially, it's an appeal to emotion.

[–] [email protected] 11 points 3 months ago (2 children)

I have trouble with this because it's like 90% grey area. Is it a pic of a real child but inpainted to be nude? Was it a real pic but the face was altered as well? Was it completely generated but from a model trained on CSAM? Is the perceived age of the subject near to adulthood? What if the styling makes it only near realistic (like very high quality CG)?

I agree with what the FBI did here mainly because there could be real pictures among the fake ones. However, I feel like the first successful prosecution of this kind of stuff will be a purely moral judgement of whether or not the material "feels" wrong, and that's no way to handle criminal misdeeds.

[–] [email protected] 16 points 3 months ago

If it's not trained on CSAM or inpainted from real photos, but fully generated, I can't really think of any other real legal argument against it except: "this could be real". Which has real merit, but in my eyes not enough to prosecute as if it were real. Real CSAM involves very different victims and abuse, so it needs different sentencing.

[–] [email protected] 1 points 3 months ago

Everything is 99% grey area. If someone tells you something is completely black and white you should be suspicious of their motives.

[–] [email protected] 9 points 3 months ago

Apparently he sent some to an actual minor.

[–] [email protected] 7 points 3 months ago (4 children)

You know what's better? Having none of this shit

[–] [email protected] 16 points 3 months ago

Did you just fix mental health?

[–] [email protected] 8 points 3 months ago

Yeah as I also said.

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago)

Nirvana fallacy

Yeah, it would be nice. Unfortunately it isn't so, and it's never going to be. Chasing after people generating distasteful AI pictures is not making the world a better place.

[–] [email protected] 3 points 2 months ago

It reminds me of the story of the young man who realized he had an attraction to underage children and didn't want to act on it, yet there were no agencies or organizations to help him, and that it was only after crimes were committed that anyone could get help.

I see this fake CP as only a positive for those people. That it might make it more difficult to find real offenders is a terrible argument against it.

[–] [email protected] 2 points 3 months ago

Better only means less worse in this case, I guess
