this post was submitted on 06 Sep 2023
6 points (87.5% liked)

BecomeMe

815 readers

Social Experiment. Become Me. What I see, you see.

founded 2 years ago
all 7 comments
[–] [email protected] 5 points 1 year ago (1 children)

How does it matter when humans can't see it? People already spread lies in text form all the time. Even if it only takes five seconds to Google that it's made-up bullshit, people don't do that. They swallow the lie. Just look at Trump.

Also, I really doubt you can't edit it out. If there is a tool to extract this information, then there is a tool to remove it. Also, the example pictures, at roughly 10×20 pixels each, are not exactly amazing.
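As an aside on the extract-vs-remove point: Google hasn't disclosed how SynthID actually embeds its signal, so this is only a toy sketch of the general idea using a deliberately naive least-significant-bit watermark (nothing like SynthID's real, neural-network-based scheme). It shows both halves of the argument: the mark is invisible to a human but trivial for a detector to read, and a "removal tool" that just randomises the same bits destroys it without visibly changing the image.

```python
import random

def embed(pixels, bits):
    # Hide one watermark bit in the least significant bit of each pixel.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels):
    # The "detection tool": read the LSBs back out.
    return [p & 1 for p in pixels]

def scrub(pixels, rng):
    # The "removal tool": randomise the LSBs. Each pixel value moves by
    # at most 1 out of 255, which no human eye will notice.
    return [(p & ~1) | rng.randint(0, 1) for p in pixels]

rng = random.Random(42)
image = [rng.randint(0, 255) for _ in range(64)]   # fake 64-pixel "image"
mark = [rng.randint(0, 1) for _ in range(64)]      # 64-bit watermark

marked = embed(image, mark)
print(extract(marked) == mark)    # detector finds the watermark
scrubbed = scrub(marked, rng)
print(extract(scrubbed) == mark)  # watermark is gone after scrubbing
print(max(abs(a - b) for a, b in zip(marked, scrubbed)))  # per-pixel change: at most 1
```

A real watermark would spread the signal redundantly across many pixels and frequency bands precisely to survive this kind of attack, which is why the interesting question is how much processing SynthID can survive, not whether a naive scheme can.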

[–] [email protected] 3 points 1 year ago

The SynthID watermark is meant to be impossible for you to see in an image but easy for the detection tool to spot. Google’s ready and willing for it to get tested and broken.

Well this sounds promising!

That’s as technical as Hassabis and Google DeepMind want to be for now. Even the launch blog post is sparse on details because SynthID is still a new system. “The more you reveal about the way it works, the easier it’ll be for hackers and nefarious entities to get around it,” Hassabis says.

Oh. Never mind.

Also, on an unrelated note: I actually think the possibility of deepfakes creating evidence for things that didn't happen is, as a political problem, a little overblown. I say that because Fox News and Donald Trump have already created a whole alternate reality for their fans to inhabit, and all it really took was bald-faced lying. Maybe I'm wrong, but I think it might even be counterproductive for them to base that alternate reality on cunning fakes that stand up to scrutiny (e.g. fakes that pass a SynthID check because the watermark was cleverly processed out). The "big lie" strategy is working fine, and I don't think they'd want to lead people down the path of "verify the evidence I'm presenting and make sure for yourself that it's genuine"... it's easier and safer to just present bullshit, swear that it's gold, and have the followers call it gold because that's what they were told.

Like I said, I do support trying to address the is-it-real problem (e.g. if a video is presented in court, it would be nice if the court had a way to verify that it's genuine and not AI-generated), but the "fake news" problem is, unfortunately, a totally separate class of problem.