Give it some time, it'll get bigger I promise.
It's not 'now', it's at the time that he was first elected.
Wait, they let him live a year? Damn
Edit: No, he was alive for 3 weeks.
In the scenario you suggested, a user who has blocked a harasser should no longer be aware of continued harassment by the harasser. Thus while the mods may have to step in, there is no particular urgency required. Also, a determined harasser will just alt-account no matter what the admins do, regardless of the blocking model used.
BlueSky just passed 21 million users.
BlueSky isn't really comparable, since it has a user-to-user interaction model, as opposed to Reddit / Lemmy, which have a community-based interaction model. In a sense, every BlueSky user is an admin for their own community.
There would be more than 4 or 5 people who would call out misinformation.
Agreed. However, good faith users by nature tend to stick to their accounts instead of moving around (excepting the current churn because Lemmy is new). Regardless of how many people would call out disinformation, it's ultimately not too difficult to block them all. It can even be easily automated, since downvotes are public, meaning you could do this not just to vocal users fighting disinformation but to anybody who so much as disagrees with you. An echo chamber could literally be created that's invisible to everyone but server admins.
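To illustrate how trivial that automation would be, here's a minimal sketch. The data structures are hypothetical (the real federated vote format varies by platform and instance); the point is only that public vote data reduces "block every dissenter" to a one-line set comprehension:

```python
# Hypothetical sketch: auto-building a blocklist from public downvotes.
# No real Lemmy/fediverse API is used; vote_records stands in for
# whatever public vote data an instance federates.

def build_blocklist(my_posts, vote_records):
    """Collect every account that has downvoted one of my posts.

    my_posts:     set of post ids I authored
    vote_records: iterable of (post_id, voter, score) tuples,
                  where score is +1 or -1
    """
    return {voter for post_id, voter, score in vote_records
            if post_id in my_posts and score < 0}

# Example: alice and carol downvoted my posts, so both get auto-blocked;
# bob upvoted and is left alone.
votes = [
    (101, "alice", -1),
    (101, "bob", +1),
    (102, "carol", -1),
]
print(sorted(build_blocklist({101, 102}, votes)))  # ['alice', 'carol']
```

Feed this the full vote history and every critic disappears from your threads automatically, which is exactly the invisible-echo-chamber scenario described above.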
Can’t we use here the same argument other people use about Lemmy being a public forum, and thus the posts being public for everyone except the blocked accounts?
We could, but again, good faith users tend not to be browsing while logged out. They have little reason to do so, while bad faith users have every reason to.
Here's the thing. This technology is unequivocally one of the things AI would be very useful for. It can potentially do a lot of good. Yes, MBAs could screw it up like they screw anything else up in society. That doesn't mean we shouldn't be happy that we've created this new tech.
Don't remember that character, can you refresh my memory? (I am fully aware of the irony given the topic under discussion.)
From Wikipedia:
In principle there is very little difference between a low-performance ICBM and a high-performance IRBM, because decreasing payload mass can increase the range over the ICBM threshold.
Sounds like different militaries just classifying things differently.
You're conflating pedophilia with MAP, which is the very thing they're trying to combat.
The difference between MAD and MAP would be if you're referring to the person or the condition, obviously. And I'm sure you can see why nobody wants to be referred to as a MAD person.
I disagree with you to some extent.
- Moderation does not matter if the post is made on a comm or instance which favors it (*cough* .ml *cough*)
- Bots and brigading are not the issue here. Neither of them were a factor in the post I linked, and they are not a necessary part of the abuse process under discussion.
- Yepowertrippinbastards works on a small scale, but it is not inherently scalable. As the fediverse grows, it will become less practical to name and shame bad actors on an individual basis. It also does not matter when the abuse system (preliminary blocklist) can be implemented by any new account.
- The very nature of the abuse system being described means that anybody who would report it on YPTB or similar comms can only do so once before themselves being blocked and unable to view future posts of that sort.
We should try to keep in mind that the fediverse and lemmy will likely grow to larger scales. Any systems and safety measures we implement should take that into account. The block mechanism as you suggest is extremely ripe for abuse at large scale, and relying on mods / admins to combat it will place an unnecessary extra load upon them, if it is even possible.
I assume this is in person. Over screen sharing, or a tutorial video, right-click copy would actually be preferable so the audience can see what's happening.
"detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster."
False positives don't matter much if they stick to the stated purpose of making manual CSAM detection easier and faster.
Unless they're maintaining the software themselves, there's no such thing as perfectly loyal. In the past the revolutionaries needed to capture the armory, now they need to capture / subvert the servers / programmers.