this post was submitted on 28 Aug 2023
1742 points (97.9% liked)
Lemmy.World Announcements
you are viewing a single comment's thread
Aren't there semi-automated tools that can detect CP?
Those might be an automated way to at least cut down on the volume.
The same could go for already-banned images: these can be identified automatically with perceptual hashing and rejected at upload time.
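For a rough idea of how that matching could work, here is a minimal sketch in Python using Pillow. The dHash variant, the distance threshold, and the blocklist are illustrative assumptions, not what any real platform ships.

```python
# Minimal difference-hash (dHash) sketch: shrink the image, compare
# neighbouring pixels, and pack the result into a 64-bit fingerprint.
# Visually similar images produce fingerprints with a small Hamming
# distance, so uploads can be checked against a blocklist of hashes.
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    # Grayscale and shrink to (size+1) x size so each row yields
    # `size` left/right comparisons.
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def is_blocked(path: str, blocklist: set[int], max_distance: int = 4) -> bool:
    # Hamming distance tolerates small edits (recompression, resizing),
    # which is also why determined actors can doctor images past it.
    h = dhash(path)
    return any(bin(h ^ b).count("1") <= max_distance for b in blocklist)
```

Production systems such as Microsoft's PhotoDNA (mentioned below) use far more robust hashes and curated hash databases, but the principle of matching uploads against known material is the same.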
We should just track these people down and send the police after them directly; that's far more efficient than symptom control.
There are. They are easy to abuse too, cause a lot of other issues, and are not 100% accurate. Apple tried that and was met with appropriate backlash. There are many issues: training data is one, but it is also impossible to automatically rule out false positives/negatives, and it is relatively easy (for now) to doctor pictures so they pass through; a nefarious actor could easily bypass these. False positives can also very quickly throw some bystander under the bus, and knowing how moderate and understanding the internet is… yup.
It is also risky, as this would effectively be a censorship tool. Such tools are often set up under the pretense of "helping", but once they're up, everything depends on who steers them. Such responsibility can hardly fall on the moderators/admins of Lemmy, and it would also be problematic to handle them at a national (or wider) level, since that would give incredible censorship power to the authorities.
And the bigger bottom line is that working hard to prevent this kind of content from reaching us, while it has an obvious upside, does nothing about the actual issue of the content existing and being created.
tl;dr: there is no easy solution that doesn't come with one hell of a string attached to it.
On the other hand, it is quite hard to hide from the authorities online, and this kind of behavior (I hope, somehow, that the people who posted these only did so to be toxic to Lemmy and not to actually disseminate the content) should lead to action from the authorities: getting to the actual people and then moving upstream to act on the source. Hopefully.
The one thing AI would be good for... Humans shouldn't have to see that shit.
Humans have to see that shit to train the AI. That's why it is so difficult to find a model for it.
Honestly, what a shit job it must be to work with that stuff. It's sad and difficult.
A lot of the mods for big providers like FB require counseling after the horrible crap they see (not just CSAM, but also terrible things like animal abuse and mutilation, etc). Unfortunately, the big companies have outsourced much of it to other countries where there aren't as many worker protections, traumatizing people and replacing them when they can't meet some arbitrary metric.
You'd need data to train an AI... Yeah that won't happen
IIRC there is a database that law enforcement uses during investigations to obtain access to these groups (they obtain consent from the victims to use the material).
the "risk" of false positives comes down to the consequence. if the consequence is being stuck in the slammer, don't use ai. if the consequence is you can't upload the image unless you manually appeal, or even maybe have to use an external image host; i think ai is fine
edit: ah bugger, wrong acct. ah well
(please tag @[email protected] if you want me to see your response)
yes, i agree. i don't think anyone should be banned off the back of an ai ruling; at most it should immediately flag the admin team, who could then review and ban (a sketch of that flow is at the end of this comment). but i don't think it's too arduous to just have to upload to imgur or catbox instead, especially if the alternative is paying a team to do it manually, or shutting down
maybe i'm just old enough that externally hosting images on fora is the standard, though, and locally hosting is a newfangled feature that costs the site money. that's one of the reasons i like lemm.ee, and i've only used the local image feature once or twice apart from icons and banners
yes, this is really annoying… but it's possible that the cost of hosting images is what shuts these sites down. so you have the option of a forum with no images, or no forum at all. that's just conjecture though, i don't know
also, this bit isn't even true for the fediverse. if an instance (say, lemmy.world) goes down, a link like
https://lemmy.world/pictrs/image/168a4571-489d-46d8-b1d2-48acb6a3d1c2.png
will be dead even though the thread, community, and lemmy as a whole are still up[^1]
you could have a prompt saying: "Your post may have been detected as CSAM. You may post it using an external image host, or appeal and wait for an admin to adjudicate", thus informing people of their options
but the point is: without ai, your options are 1) reviewing every image manually, 2) relying on user reports (too slow and unreliable for illegal content), or 3) no image uploads at all. now whilst i personally am a fan of the latter, i think it's objectively worse for the majority of people
basically: if there aren't enough admins to do this with ai, there definitely aren't enough admins to do it without ai
you could get around this bit specifically by only actually publishing the post once it's approved, from a backend point of view
[^1]: see: the whole lemmy.fmhy.ml fiasco
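for what it's worth, here's a minimal sketch of that flag-don't-ban flow in python. every name and threshold in it is hypothetical, not lemmy's actual backend api; it just shows "hold and publish on approval" rather than auto-banning.

```python
# Sketch of a "flag, don't ban" upload pipeline: a classifier score only
# quarantines a post for admin review; publishing happens on approval.
# All names here (Verdict, ReviewQueue, triage) are hypothetical.
from dataclasses import dataclass, field
from enum import Enum, auto

class Verdict(Enum):
    PUBLISH = auto()        # low score: post goes up immediately
    QUARANTINE = auto()     # suspicious: hold hidden for human review
    REJECT_UPLOAD = auto()  # near-certain match: refuse the file only

@dataclass
class ReviewQueue:
    pending: list[str] = field(default_factory=list)

    def add(self, post_id: str) -> None:
        # admins later approve (publish) or remove; no automatic user ban
        self.pending.append(post_id)

def triage(post_id: str, score: float, queue: ReviewQueue) -> Verdict:
    # thresholds are illustrative; the point is that no score leads
    # straight to a ban without a human in the loop
    if score >= 0.99:
        return Verdict.REJECT_UPLOAD  # user can appeal or host externally
    if score >= 0.50:
        queue.add(post_id)            # flag the admin team; post stays hidden
        return Verdict.QUARANTINE
    return Verdict.PUBLISH
```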
Apple is building the tech too, but they aren't sharing it with the world for humanitarian purposes like this.
PhotoDNA.
Apple willingly made that tech; however, it's designed to get the individual in trouble by looking at the photos on their personal device and snitching on its owner (leaking them in the process). They could easily adapt it into a tool for social media, but that goes against their beliefs.