[-] [email protected] 30 points 9 months ago

He's so transparent with his intentions, it's embarrassing. The only explanation for removing your own tool to combat misinformation is that it does not align with your own interests. There's no way to spin that fact in a positive light, and yet there will still be people using Twitter. It's actually getting really fucking hard not to be a misanthrope.

[-] [email protected] 22 points 9 months ago

Considering he means to kill someone, that's sufficient trigger discipline, I believe.

[-] [email protected] 24 points 10 months ago

The probative value of the article is massively outweighed by its prejudicial effect.

In other words, it's a smear campaign. The author is literally saying: oh, I can fix all of these issues, but I don't know what other issues might come up. This is horseradish. Balloon juice. A downright dismissal. As if you'd have better luck with the walled-off garden that is Unity or UE. They simply restated issues the community has already been talking about and framed them as if Godot were a lost cause not even worth fixing.

And here's the bullshido the author pulled. They sprinkled in the bit about Godot being tied to the Vulkan API. That is valid criticism. Surprise surprise, a FOSS engine worked on by a handful of paid devs and some volunteers needs more work done on it. But now, if you disagree with what I said about it being a smear campaign, they throw Ol' Faithful at you:

"An engine is a tool, not a cult." "Oh, you disagree with the article. Are you saying that Godot is perfect?" "So you're saying that there are no technical issues with Godot?" "You can only release low poly games with 3D Godot."

As soon as the status quo was disturbed, suddenly the imperfections of Godot are on full blast. Juan Linietsky and Co. are now expected to drop literally everything they were doing and address the smear campaign's concerns, lest it be successful. I suppose that's both a positive and a negative.

[-] [email protected] 53 points 10 months ago

Precisely what I'm talking about. They can afford to do so, since they lost the trust of the user base about two CEO statements ago.

And not to go too deep into it, but how the hell are you going to create a brand-new pricing scheme in only "a couple of days" without already having a draft of it ready? Don't you want to check in with your lawyer? Your CFO? This shit must take more than two days to do.

[-] [email protected] 84 points 10 months ago

We apologize for the confusion and angst the runtime fee policy we announced on Tuesday caused. We are listening, talking to our team members, community, customers, and partners, and will be making changes to the policy. We will share an update in a couple of days. Thank you for your honest and critical feedback.

Allow me to translate:

We're now publishing the terms that we were actually going for from the very beginning. We've always known that the flaming bag of shit we laid on your doorstep was unreasonable. If it worked, it worked; if it didn't, it can stand in contrast to the new, less shitty terms that you're either supposed to agree to or rewrite your whole game. It's not like our PR was great before this gambit. What have we to lose?

[-] [email protected] 65 points 10 months ago

Like laying down a mighty fart just as the elevator doors close, Unity management abandons the aircraft they were supposed to captain, golden parachutes deployed. The corporate money-making machine chugs on.

[-] [email protected] 16 points 10 months ago

It's a brand-new flavor of bullshit that people need to adjust to. In the same way that you learned to be wary of clicking certain links, of reading spam mail, of giving out personal info, we will need an extra sensor in our brains for how LLM-y a user sounds.

[-] [email protected] 52 points 10 months ago

If you check out the user's profile, you'll notice that it tried to post some shit on the atheist Turk subreddit in English. Primo bot behavior and a dead giveaway. Also, the Turks didn't seem to enjoy it much.

Peruse further and you might notice that the majority of people don't realize they are conversing with a fucking bot. This is profoundly upsetting shit.

[-] [email protected] 92 points 10 months ago

Imagine you were the owner of a really large computer with CSAM on it, and there is in fact no good way to prevent creeps from putting more onto it. And when police come to have a look at your CSAM, you are liable for legal bullshit. Now imagine you had dependents. You would also be well past the point of being respectful.

On that note, captain db0 has raised an issue on the GitHub repository of LemmyNet, requesting essentially the ability to add middleware that checks the nature of uploaded images (issue #3920, if anyone wants to check). Point being, the ball is squarely in their court now.

[-] [email protected] 26 points 10 months ago

Traditional hashes like MD5 and SHA256 are not locality-sensitive; they can't be used to detect a near-match to some degree. Otherwise, yes, you are correct. Perceptual hashes can create false positives. Very unlikely, but yes, it is possible. This is not a problem with a perfect solution. Extraordinary edge cases must be resolved on a case-by-case basis.
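To see why cryptographic hashes can't measure similarity, here's a quick sketch using Python's standard `hashlib`: change one character of the input and roughly half the output bits flip (the avalanche effect), so the digest tells you nothing about how close two inputs were. The inputs here are made-up placeholders.

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    # Count differing bits between two equal-length digests.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha256(b"cat photo v1").digest()
d2 = hashlib.sha256(b"cat photo v2").digest()  # one character changed

# bit_diff(d1, d2) lands near 128 of 256 bits, i.e. the digests are
# as different as two random strings despite near-identical inputs.
print(bit_diff(d1, d2))
```

A perceptual hash, by contrast, is built so that similar inputs produce similar bit patterns, which is exactly the locality-sensitivity that MD5/SHA256 deliberately destroy.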

And yes, the simplest solutions must always be implemented first: tracking post reputation, captcha before posting, waiting for an account to mature before it can post, etc. The problem is that right now the only defense we have is mods. Mods are people, usually with eyeballs. Eyeballs which will be poisoned by CSAM so we can post memes and funnies without issue. This is not fair to them. We must do all we can, and if all we can includes perceptual hashing, we have a moral obligation to do so.

[-] [email protected] 20 points 10 months ago

Good question. Yes. Also, artefacts from compression can fuck it up. However, hash comparison returns a percentage of match; if the match is good enough, it is CSAM. Davai, ban. There is a bigger issue for the developers of Lemmy, I assume. It is a philosophical pizdec: if we elect to use PhotoDNA and CSAI Match, Lemmy is now at the whims of Microsoft and Google respectively.
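The "percentage of match" idea is just Hamming distance over the hash bits. A minimal sketch (the 64-bit width and the 90% threshold are illustrative assumptions, not values from PhotoDNA or CSAI Match):

```python
def hamming_distance(h1: int, h2: int) -> int:
    # Number of differing bits between two 64-bit perceptual hashes.
    return bin(h1 ^ h2).count("1")

def similarity(h1: int, h2: int, bits: int = 64) -> float:
    # Fraction of matching bits; 1.0 means identical hashes.
    return 1.0 - hamming_distance(h1, h2) / bits

# Illustrative: two hashes differing in 3 of 64 bits still "match"
# under a 90% threshold, so minor crops or recompression survive.
a = 0xF0F0F0F0F0F0F0F0
b = a ^ 0b111  # flip 3 low bits
print(similarity(a, b) >= 0.90)  # → True
```

This is also where the false-positive risk lives: the looser the threshold, the more compression artefacts it tolerates, and the more innocent near-matches it flags.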

[-] [email protected] 40 points 10 months ago

I guess it'd be a matter of incorporating something that hashes whatever is being uploaded. One takes that hash and checks it against a database of known CSAM. If it matches: stop the upload, ban the user, and complain to the closest officer of the law. Reddit uses PhotoDNA and CSAI Match. This is not a simple task.
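That flow could be sketched like this. Everything here is a hypothetical placeholder, not Lemmy's actual API: the hash database is an in-memory set, `perceptual_hash` stands in for a real service like PhotoDNA, and the ban/report functions just print.

```python
KNOWN_CSAM_HASHES: set[int] = set()  # placeholder for a real hash database

def perceptual_hash(image_bytes: bytes) -> int:
    # Placeholder: a real deployment would call an actual perceptual
    # hashing service (e.g. PhotoDNA) here, not Python's builtin hash.
    return hash(image_bytes) & 0xFFFFFFFFFFFFFFFF

def ban_user(user: str) -> None:
    print(f"banned {user}")

def report_to_authorities(user: str, h: int) -> None:
    print(f"reported {user}, hash {h:016x}")

def handle_upload(image_bytes: bytes, user: str) -> bool:
    # Hash the upload and check it against the known-bad database.
    h = perceptual_hash(image_bytes)
    if h in KNOWN_CSAM_HASHES:
        # Match: block the upload, ban the user, report it.
        ban_user(user)
        report_to_authorities(user, h)
        return False
    return True  # no match, upload proceeds
```

The hard parts this sketch glosses over are exactly the ones the thread raises: near-match comparison instead of exact lookup, and who controls the database you're checking against.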


TsarVul

joined 1 year ago