Here's the thing. This technology is unequivocally one of the things AI would be very useful for. It can potentially do a lot of good. Yes, MBAs could screw it up like they screw anything else up in society. That doesn't mean we shouldn't be happy that we've created this new tech.
Don't remember that char, can you refresh my memory (I am fully aware of the irony given the topic under discussion)
From wikipedia:
In principle there is very little difference between a low-performance ICBM and a high-performance IRBM, because decreasing payload mass can increase the range over the ICBM threshold.
Sounds like different militaries just classifying things differently.
You're conflating pedophilia with MAP, which is the very thing they're trying to combat.
The difference between MAD and MAP would be if you're referring to the person or the condition, obviously. And I'm sure you can see why nobody wants to be referred to as a MAD person.
I disagree with you to some extent.
- Moderation does not matter if the post is made on a comm or instance which favors it cough .ml cough
- Bots and brigading are not the issue here. Neither of them were a factor in the post I linked, and they are not a necessary part of the abuse process under discussion.
- Yepowertrippinbastards works on a small scale, but it is not inherently scalable. As the fediverse grows, it will become less practical to name and shame bad actors on an individual basis. It also doesn't help when the abuse system under discussion (a preliminary blocklist) can be set up by any new account.
- The very nature of the abuse system being described means that anybody who would report it on YPTB or similar comms can only do so once before being blocked themselves and losing the ability to view future posts of that sort.
We should keep in mind that the fediverse and Lemmy will likely grow to much larger scales. Any systems and safety measures we implement should take that into account. The block mechanism you suggest is extremely ripe for abuse at large scale, and relying on mods / admins to combat it will place an unnecessary extra load on them, if it is even possible.
I assume this is in person. Over screen sharing, or a tutorial video, right-click copy would actually be preferable so the audience can see what's happening.
"detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster."
False positives don't matter if they stick to the stated intended purpose of making it easier to detect CSAM manually.
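This is essentially a base-rate argument, and a rough sketch makes it concrete. The numbers below are purely illustrative (not from the article): even a fairly accurate classifier scanning content where genuine abuse material is rare will produce mostly false alarms, which is exactly why a risk score for prioritising human review makes sense while auto-flagging would not.

```python
# Illustrative numbers only (assumed, not from the article).
prevalence = 1 / 10_000          # 1 in 10,000 items is actually abusive
true_positive_rate = 0.95        # assumed sensitivity
false_positive_rate = 0.01       # assumed false-alarm rate

flagged_true = prevalence * true_positive_rate
flagged_false = (1 - prevalence) * false_positive_rate
precision = flagged_true / (flagged_true + flagged_false)

print(f"Share of flags that are real: {precision:.1%}")  # ~0.9%
```

Under these assumptions, over 99% of flags are false positives, so the tool is only defensible as a triage aid for human reviewers, which is the stated purpose.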
From the article, it sounds like they made that an emphasis.
TBF, some OEM mice are among the best I've used in my life.
Let's start with some English homework: explain WHY the passive voice is so bad in this particular case that you felt the need to call it out. There is nothing grammatically wrong with the passive voice, and it can be a valid stylistic choice. In this case, I used it because I, as the author, chose to emphasise the subject (English) rather than the responsible party (the world). The usual reasons to avoid the passive voice do not seem to apply here. Would you care to explain why you decided to be a nitpicky asshole?
I believe the original purpose of that phrase was to differentiate between people who act on those attractions and those who understand it's wrong and refrain from doing so.
In the scenario you suggested, a user who has blocked a harasser should no longer be aware of continued harassment by the harasser. Thus while the mods may have to step in, there is no particular urgency required. Also, a determined harasser will just alt-account no matter what the admins do, regardless of the blocking model used.
BlueSky isn't really comparable, since it has a user-to-user interaction model, as compared to Reddit / Lemmy, which have a community-based interaction model. In a sense, every BlueSky user is an admin for their own community.
Agreed. However, good-faith users by nature tend to stick to their accounts instead of moving around (excepting the current churn b/c Lemmy is new). Regardless of how many people would call out disinformation, it's ultimately not too difficult to block them all. It can even be easily automated, since downvotes are public, meaning you could do this not just to vocal users fighting disinformation but to anybody who even disagrees with you in the first place. An echo chamber could literally be created that's invisible to everyone but the server admins.
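To show how low the bar is: the sketch below is a hypothetical illustration, not a real Lemmy API client. `fetch_downvoters` is a made-up stand-in for whatever "list votes on my post" call the vote-visibility feature exposes, and the vote data is stubbed. The point is just that once downvotes are readable, auto-blocking every downvoter is a trivial loop.

```python
# Hypothetical sketch of automated blocklist-building from public votes.
# fetch_downvoters() is a made-up stand-in for a real API call, and
# votes_db is stubbed data standing in for API responses.

def fetch_downvoters(post_id, votes_db):
    """Stand-in for a 'list votes on my post' API call."""
    return {user for user, vote in votes_db.get(post_id, []) if vote < 0}

def build_blocklist(my_posts, votes_db):
    """Collect every user who has ever downvoted any of my posts."""
    blocked = set()
    for post_id in my_posts:
        blocked |= fetch_downvoters(post_id, votes_db)
    return blocked

# Example: two posts, three distinct downvoters across them.
votes = {
    1: [("alice", -1), ("bob", +1)],
    2: [("carol", -1), ("dave", -1)],
}
print(sorted(build_blocklist([1, 2], votes)))  # ['alice', 'carol', 'dave']
```

A few dozen lines more would feed that set into the block endpoint on a schedule, which is why "just rely on mods to catch it" doesn't scale.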
We could, but again, good faith users tend not to be browsing while logged out. They have little reason to do so, while bad faith users have every reason to.