this post was submitted on 21 Nov 2024
164 points (97.7% liked)

Technology


Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It's the earliest AI technology striving to expose unreported CSAM at scale.

top 50 comments
[–] [email protected] 144 points 1 month ago (2 children)

Not a single peep about false positives.

I'm sure it won't be abused though. And if anyone does complain, just get their electronics seized and checked, because they must be hiding something!

[–] [email protected] 89 points 1 month ago (4 children)

Reminds me of the A-cup porn ban in Australia a few years ago, because supposedly only pedos would watch that

[–] [email protected] 50 points 1 month ago (2 children)

There was a porn studio that was prosecuted for creating CSAM. Brazil, I believe. Prosecutors claimed that the petite, A-cup woman was clearly underage. Their star witness was a doctor who testified that such underdeveloped breasts and hips clearly meant she was still going through puberty and couldn't possibly be 18 or older. The porn star showed up to testify that she was in fact over 18 when they shot the film, and brought all her identification, including her birth certificate and passport. She also said something to the effect that women come in all shapes and sizes, and a doctor should know better.

I can't find an article. All I'm getting is GOP Trump pedo nominees and Brazilian porn laws.

[–] [email protected] 15 points 1 month ago

Pretty sure the adult star was Lil Lupe. She was everywhere at the time because she did, indeed, look underage.

[–] [email protected] 5 points 1 month ago

I'm just glad they protected her

[–] [email protected] 50 points 1 month ago (2 children)

Aw man, I love all titties. Variety is the spice of life.

[–] DScratch 50 points 1 month ago (1 children)

Not to mention the self image impact such things would have on women with smaller breasts, who (as I understand it) generally already struggle with poor self image due to breast size.

[–] [email protected] 23 points 1 month ago (1 children)

Clearly the state gives zero fucks about these women, or anyone else, or even "the children".

The Catholic Church is still around for a reason.

[–] [email protected] 7 points 1 month ago* (last edited 1 month ago)

Typically the state only cares about things it perceives as children.

[–] [email protected] 21 points 1 month ago (1 children)

Believe it or not, straight to jail.

[–] [email protected] 24 points 1 month ago (1 children)

If this is the price I must pay, I will pay it, sir! No man should be deprived of privately viewing a consenting adult's perfectly formed small tits. They can take my liberty, they can take my livelihood, but they will never take away my boner for puffy nipples on a small-chested half-Japanese woman!

[–] [email protected] 14 points 1 month ago

What is the charge? Biting a breast? A succulent Chinese breast?

[–] [email protected] 17 points 1 month ago

This sort of rhetoric really bothers me, especially when you consider that there are real adult women with disorders that make them appear prepubescent. Whether that's appropriate for pornography is a different conversation, but the idea that anyone interested in them is a pedophile is really disgusting. That is a real, human, adult woman, and some people say anyone who wants to love her is a monster. Just imagine someone telling you that anyone who wants to love you is a monster and that they're actually protecting you.

[–] [email protected] 16 points 1 month ago

It could also, of course, make mistakes, but Kevin Guo, Hive's CEO, told Ars that extensive testing was conducted to reduce false positives or negatives substantially. While he wouldn't share stats, he said that platforms would not be interested in a tool where "99 out of a hundred things the tool is flagging aren't correct."

I take this to mean it is at least 1% accurate lol.
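
For what it's worth, the base-rate arithmetic shows why that quote matters. A sketch with made-up numbers (Hive shared no real stats, so the prevalence and rates below are pure assumptions):

```python
# All numbers are illustrative assumptions; Hive published no statistics.
uploads = 1_000_000
prevalence = 1 / 10_000          # assume 1 in 10,000 uploads is actual CSAM
tpr, fpr = 0.99, 0.01            # assume a "99% accurate" classifier

true_flags = uploads * prevalence * tpr           # about 99 correct flags
false_flags = uploads * (1 - prevalence) * fpr    # about 10,000 incorrect flags
precision = true_flags / (true_flags + false_flags)
print(f"precision: {precision:.1%}")              # roughly 1%: 99 of 100 flags wrong
```

Under those assumptions, 99 out of 100 flags would be wrong even with a "99% accurate" model, which is exactly the failure mode Guo describes.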

[–] [email protected] 141 points 1 month ago* (last edited 1 month ago) (2 children)

Thorn, the company backed by Ashton Kutcher, which also pushed to monitor all messages in the EU via Chat Control. No thanks.

https://fortune.com/europe/2023/09/26/thorn-ashton-kutcher-ylva-johansson-csam-csa-regulation-european-commission-encryption-privacy-surveillance/

[–] [email protected] 73 points 1 month ago (3 children)

Just remember folks. Kutcher is a slimeball too.

The guy went from being a D-list star, hanging out with the likes of Danny Masterson and going to Diddy's infamous parties, to suddenly courting the US government overnight and being the face of 'helping' children everywhere.

Yeah, right…

[–] [email protected] 25 points 1 month ago (1 children)

People can grow and change. Not saying he did or didn't, just that people aren't a monolith. It's plausible he simply grew and his views changed and evolved.

That being said, it’s highly convenient where he’s positioned himself these days…

[–] [email protected] 25 points 1 month ago (2 children)

I’d be wary of calling him guilty by association. Maybe when he realized who he was really hanging out with he was so horrified and disgusted that he just had to get involved and do something to fight back?

[–] [email protected] 9 points 1 month ago (1 children)

It's awfully coincidental that he seems to hang out with the 'rapist' crowd, even going as far as writing a letter vouching for what a nice guy Masterson is, to try to get him a lenient sentence.

Even Hollywood has ostracized him and his wife - news sites recently reported they were looking to leave the country and let things cool off for a while.

I'm sure everyone who keeps posting here is right, though: he's a swell guy who was just in the wrong place at the wrong time, multiple times. Several years' worth of multiple times with the wrong people. Just a coincidence.

[–] [email protected] 16 points 1 month ago

The difference between us giving him the benefit of the doubt and your take is that you are labeling him a pedophile without proof. That's a significant claim if false, and imo it takes the assumption too far. Maybe he's bad and it should be looked into, but saying he did something just because he was on a show with, and good friends with, a guy who happened to be a rapist is wrong.

[–] [email protected] 52 points 1 month ago (2 children)

I am a bit confused about how it's legal for them to have the training data here?

Like, is there anything a corpo can't do?

Like, why can't Subway Jared and the Catholic Church "train the AI"?

Only halfway joking, but what's the catch here?

[–] [email protected] 36 points 1 month ago (1 children)

There are laws around it. Law enforcement doesn't just delete any digital CSAM they seize.

Known CSAM is archived and analyzed rather than destroyed, and used to recognize additional instances of the same files in the wild, wherever file scanning is possible.

Institutions and corporations can request licenses to access the database, or just the metadata that allows software to tell whether a given file might be a copy of known CSAM.

This is the first time an attempt is being made at using the database to create software able to recognize CSAM that isn't already known.

I'm personally quite sceptical of the merit. It may well be useful for scanning the public internet, but I'm guessing the plan is to push for it to be somehow implemented for private communication, no matter how badly that compromises the integrity of encryption.

[–] [email protected] 18 points 1 month ago (1 children)

So doesn't that mean law enforcement has the biggest CP collection of anybody? This sounds kinda dangerous...

[–] [email protected] 26 points 1 month ago* (last edited 1 month ago)

It does. Kinda.

The police are seldom allowed to be in possession of CSAM, except for grabbing the hardware that contains it in an arrest. The database used in modern detection tools is maintained by NCMEC, which has special permission to do so.

And of course there are risks, but it's just digital data. Unless you are creating more, you're not actively harming anyone. And law enforcement absolutely needs that data to take some of the most obvious steps to prevent it being spread further.

Obviously, someone has access, but getting to the actual media files wouldn't be simple. What typically happens is that anyone wanting to detect CSAM is given a hashed version of the database. They can then scan their systems for CSAM by hashing any media they are hosting and seeing whether there are any matches.

Whenever possible, people aren't handling the actual media. But for any detection to be possible to begin with, the database of the actual media does need to be maintained somewhere.
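
As a rough sketch of what that hash-scanning workflow looks like (illustrative only: real deployments use perceptual hashes such as PhotoDNA that survive resizing and re-encoding rather than the plain SHA-256 here, and the file paths are made up):

```python
import hashlib
from pathlib import Path

# Hypothetical hash list; providers receive something like this instead of media.
known_hashes = set(Path("known_hashes.txt").read_text().split())

def file_hash(path: Path) -> str:
    # Fingerprint the file's bytes. A perceptual hash would fingerprint the
    # visual content instead, so re-encoded copies would still match.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(media_dir: Path) -> None:
    # Flag hosted files whose fingerprint matches the database. Nobody views
    # the file itself at this stage, only the fact that it matched.
    for path in media_dir.rglob("*"):
        if path.is_file() and file_hash(path) in known_hashes:
            print(f"match: {path}")

scan(Path("/srv/uploads"))  # hypothetical upload directory
```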

AI is a touchier subject, as you can't train a model to recognize CSAM that isn't already in the database using hashes, so in those cases you have to work with the actual media. This is only recently becoming a thing.

It also leaves open the possibility of false positives. An oft-cited example is parents taking pictures of their own children for innocent reasons, or doctors and parents handling images for valid medical reasons. In a system that flagged such content, it would mean someone else seeing that "private" content because it was flagged.

[–] [email protected] 49 points 1 month ago* (last edited 1 month ago) (1 children)

It's the earliest AI technology striving to expose unreported CSAM at scale.

horde-safety has been out for a year now. Just saying... It's not a trained AI model in this way, but it's still using Neural Networks (i.e. "AI Technology")

[–] [email protected] 8 points 1 month ago* (last edited 1 month ago) (1 children)
[–] [email protected] 8 points 1 month ago

haha, nah people reported some unexpected censors, and we investigated what part of their prompt might be causing it.

[–] [email protected] 25 points 1 month ago (1 children)

Man... That AI is going to be so fucked up when it gains sentience

[–] [email protected] 7 points 1 month ago (1 children)

Skynet's real origin story. We might just deserve Judgment Day.

[–] [email protected] 24 points 1 month ago (8 children)

And will we get that technology to keep the Fediverse and free platforms safe? Probably not. All the predecessors have been kept for the sole use of the big players, despite populists always claiming we need to introduce total surveillance to keep the children safe...

[–] [email protected] 14 points 1 month ago (1 children)

I was going to say... Sure would be nice to have this feature in all the open source AI image generator tools, but you're absolutely right 😩

[–] [email protected] 11 points 1 month ago

Yeah, unless someone publishes even a set of hashes of known bad content for the general public, I kind of doubt the true intention is preventing CSAM for the benefit of everyone.

[–] [email protected] 14 points 1 month ago (5 children)

This seems like a potential actual good use of AI. Can't have been much fun to train it though.

And is there any risk of people turning these kinds of models around and using them to generate images?

[–] [email protected] 24 points 1 month ago (6 children)

If AI was reliable, maybe. MAYBE. But guess what? It turns out that “advanced autocomplete” does a shitty job of most things, and I bet false positives will be numerous.

[–] [email protected] 12 points 1 month ago

This is not that kind of AI.

[–] [email protected] 5 points 1 month ago

It's possible to have a good AI system, but it takes millions of dollars and several thousand man-hours to build one, and most companies won't put in the effort.

But, there should always be a human in the loop.

[–] [email protected] 14 points 1 month ago

And is there any risk of people turning these kinds of models around and using them to generate images?

There isn't really much fundamental difference between an image detector and an image generator. The way image generators like Stable Diffusion work is essentially by generating a starting image that's nothing but random static and telling the generator "find the cat that's hidden in this noise."

It'll probably take a bit of work to rig this child porn detector up to generate images, but I could definitely imagine it happening. It's going to make an already complicated philosophical debate even more complicated.
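
To make the "find the cat hidden in this noise" idea concrete, here's a toy version of the denoising loop. It's purely schematic: the real noise predictor is a trained U-Net and real samplers use carefully derived step sizes, but the shape of the loop is the same:

```python
import torch

def noise_predictor(x: torch.Tensor, t: int) -> torch.Tensor:
    # Stand-in for a trained U-Net that estimates the noise present in x at
    # step t. A real model is trained so subtracting its output reveals an image.
    return 0.05 * x

steps = 50
x = torch.randn(1, 3, 64, 64)      # start from pure static
for t in reversed(range(steps)):
    eps = noise_predictor(x, t)    # "which part of this is noise?"
    x = x - eps                    # peel a little away; the "hidden cat" emerges
```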

[–] [email protected] 8 points 1 month ago

I think image generators in general work by iteratively changing random noise and checking it with a classifier, until the resulting image has a stronger and stronger finding of “cat” or “best quality” or “realistic”.

If this classifier provides fine grained descriptive attributes, that’s a nightmare. If it just detects yes or no, that’s probably fine.

[–] [email protected] 7 points 1 month ago

Nobody would have been looking directly at the source data. The FBI or whoever provides the dataset to approved groups, but after that you just say "use all the images in this folder" and it goes. But I don't even know if they actually provide real full-resolution images, or just perceptual hashes, or downsampled images.

And while it's possible to use the dataset to generate new images assuming the training data had full-res images, like I said, I know they investigate the people making the request before allowing access. And access is probably supervised and audited.

[–] [email protected] 14 points 1 month ago* (last edited 1 month ago) (3 children)

Jesus Christ. If someone ever got their hands on this model, they could use it to generate new material. The grossest possible AI model to date.

[–] [email protected] 27 points 1 month ago (1 children)

No. This is an inference model, not a generative model. You generally cannot train a model for both, unless you do it on purpose, and they certainly did not (especially since inference models are way easier to train than generative models).

[–] [email protected] 6 points 1 month ago* (last edited 1 month ago) (1 children)

A generative model uses the classifier as part of its training. If you generate a picture of pure random noise, then iteratively pick random noise that the classifier says "looks" more like CSAM, you can effectively generate images that the classifier says it's 100% certain are CSAM. Whether or not that looks anything like what a human would consider to be CSAM depends on other factors, but it remains a possibility.
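
Concretely, that procedure is just gradient ascent on the classifier's score. A sketch with a harmless stand-in network, since the real detector's architecture is unknown:

```python
import torch

class StandInClassifier(torch.nn.Module):
    # Placeholder for the real detector; outputs one score for the target class.
    def __init__(self) -> None:
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=8, stride=8)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x).mean()

clf = StandInClassifier()
x = torch.randn(1, 3, 64, 64, requires_grad=True)  # start from pure noise
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -clf(x)       # maximize the score by minimizing its negative
    loss.backward()
    opt.step()
# x now maximizes the classifier's confidence, but as the reply below notes,
# this tends to produce psychedelic static rather than a coherent image.
```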

[–] [email protected] 10 points 1 month ago

You are describing the way DeepDream works, not the way modern diffusion models work. It's the difference between psychedelic dog faces and a highly prompt-adherent generated image of a German Shepherd.

I can't imagine you're going to get anything out of this model that actually looks like CSAM, unless there's some sort of breakthrough in using these models for previously unrealized generative purposes.

[–] [email protected] 9 points 1 month ago (1 children)

This is a great development, albeit with a lot of soul-crushing work behind it, I assume. People who have to look at CSAM or whatever the acronym is have a miserable job, so I'm very supportive of trying to automate that away from people.
