[-] [email protected] 3 points 1 day ago

And that's the third time you've tried to put words into my mouth, rather than arguing my points directly.

Have fun battling your straw men, I'm out.

[-] [email protected] 4 points 1 day ago* (last edited 1 day ago)

you're wanting to give people the right to control other people's ability to analyze the things that they see on public display.

For the second time, that's not what I want to do - I pretty much said so explicitly with my example.

Human studying a piece of content - fine.
Training a Machine Learning model on that content without the creator's permission - not fine.

But if you honestly think that a human learning something, and a ML model learning something are exactly the same, and should be treated as such, this conversation is pointless.

[-] [email protected] 5 points 1 day ago

No, just the concept of getting a say in who can train AIs on your creations.

So yes, that would leave room for a loophole where a human could recreate your creation (without just making a copy), and they could then train their model on that. It isn't watertight. But it doesn't need to be, just better than what we have now.

[-] [email protected] 16 points 2 days ago

Agreed. It was fun as a thought exercise, but this failure was inevitable from the start. Ironically, the existence and usage of such tools will only hasten their obsolescence.

The only thing that would really help is GDPR-like fines (calculated as a percentage of income, not profits) for any company that trains, or willingly uses, models that have been trained on data without explicit consent from its creator.

[-] [email protected] 24 points 3 days ago

Well, just a month ago they couldn't pay out a bounty to Kaspersky for a 0day exploit they found due to the sanctions, so this seems a little off.

[-] [email protected] 1 points 3 days ago

Who knows, maybe it'll teach people to be more skeptical of the things they read online, and actually look for the underlying sources.

[-] [email protected] 1 points 4 days ago

But why wouldn't those same limits apply to biological controllers? A neuron is basically a transistor.

[-] [email protected] 8 points 4 days ago* (last edited 4 days ago)

I'd wager the main reason we can't prove or disprove that is that we have no strict definition of intelligence or sentience to begin with.

For that matter, computers have many more transistors and are already capable of mimicking human emotions - how ethical is that, and why does it differ from bio-based controllers?

[-] [email protected] 5 points 4 days ago

The line has been changed to be gender neutral 9 hours ago. Victory!

[-] [email protected] 67 points 6 days ago

Oh my. Sometimes Betteridge's law of headlines is wrong.

[-] [email protected] 237 points 3 months ago

Personally I find it far more important that it's not run by a company that will try its hardest to track your every movement on the web, but to each their own, I suppose.
