this post was submitted on 27 Oct 2024

Ask Lemmy

If AI and deepfakes can listen to video or audio of a person and then successfully reproduce that person's voice and likeness, what does this entail for trials?

It used to be that an audio or video recording provided strong evidence, often weighing more than witness testimony, but soon enough perfect forgeries could enter the courtroom, just as they're entering social media (where you're not sworn to tell the truth, though the consequences are real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.

How will we defend ourselves while still using real video or audio as proof? Or are we just doomed?

[–] logos 103 points 3 weeks ago (1 children)

Fake evidence, e.g. forged documents, is nothing new. Courts take things like origin, chain of custody, etc. into account.

[–] [email protected] -1 points 3 weeks ago (2 children)

Sure, but if you meet up with someone and they later produce an audio recording that's completely fabricated, there's no chain of custody for anything. Audio used to be damning evidence, and it was fairly easy to tell if a recording had been spliced together to sound different. If that goes away, audio just becomes useless as evidence.

[–] [email protected] 7 points 3 weeks ago

It becomes useless as evidence unless you can establish authenticity. It just puts audio recordings in a class with text documents: perfectly fakeable, but admissible with the right supporting information. So I agree it's a change, but it's not the end of audio evidence, and it's a change in a direction in which courts already have experience.

[–] [email protected] 4 points 3 weeks ago

You can't just use an audio file by itself. It has to come from somewhere.

The courts already have a system in place: someone who seeks to introduce a screenshot of a text message, a printout of a webpage, a VHS tape with video, or a plain audio file needs to introduce it as evidence through someone who testifies that it is real and accurate, with an opportunity for the other side to question and even investigate where it came from and how it was made, stored, and copied.

If I just show up to a car accident case with an audio recording that I claim is the other driver admitting that he forgot to look before turning, that audio is gonna do basically nothing unless and until I show that I had a reason to be making that recording while talking to him, why I didn't give it to the police who wrote the accident report that day, etc. And even then, the other driver can say "that's not me and I don't know what you think that recording is" and we're still back to a credibility problem.

We didn't need AI to do impressions of people. This has always been a problem, or a non-problem, in evidence.

[–] [email protected] 65 points 3 weeks ago (1 children)

As someone who works in the field of criminal law (in Europe, and I would be shocked if it wasn't the same in the US) - I'm not actually very worried about this. By that I don't mean to say it's not a problem, though.

The risk of evidence being tampered with or outright falsified is something that already exists, and we know how to deal with it. What AI will do is lower the barrier for technical knowledge needed to do it, making the practice more common.

While it's pretty easy for most AI images to be spotted by anyone with some familiarity with them, they're only going to get better and I don't imagine it will take very long before they're so good the average person can't tell.

In my opinion this will be dealt with via two mechanisms:

  • Automated analysis of all digital evidence for signatures of AI as a standard practice. Whoever can be the first person to land contracts with police departments to provide bespoke software for quick forensic AI detection is going to make a lot of money.

  • A growth in demand for digital forensics experts who can provide evidence on whether something is AI generated. I wouldn't expect them to be consulted on all cases with digital evidence, but for it to become standard practice where the defence raises a challenge about a specific piece of evidence during trial.

Other than that, I don't think the current state of affairs when it comes to doctored evidence will particularly change. As I say, it's not a new phenomenon, so countries already have the legal and procedural framework in place to deal with it. It just needs to be adjusted where needed to accommodate AI.

What concerns me much more than the issue you raise is the emergence of activities which are uniquely AI dependent and need legislating for. For example, how does AI generated porn of real people fit into existing legislation on sex offences? Should it be an offence? Should it be treated differently to drawing porn of someone by hand? Would this include manually created digital images without the use of AI? If it's not decided to be illegal generally, what about when it depicts a child? Is it the generation of the image that should be regulated, or the distribution? That's just one example. What about AI enabled fraud? That's a whole can of worms in itself, legally speaking. These are questions that in my opinion are beyond the remit of the courts and will require direction from central governments and fresh, tailor made legislation to deal with.

[–] ryathal 1 points 3 weeks ago

My bigger concern is the state using AI created fake data. It's far harder to stop that, as false confessions and coerced confessions are already a problem. The process can't really catch it, because it's the people in charge of the process doing it.

[–] [email protected] 31 points 3 weeks ago (4 children)

I think other answers here are more essential - chain of custody, corroborating evidence, etc.

That said, Leica has released a camera that digitally signs its images, and other manufacturers are working on similar things. That will allow people to verify whether the image is original or has been edited. From what I understand Leica has some scheme where you can sign images when you update them too, so there's a whole chain of documentation. Here's a brief article

[–] andrew_bidlaw 7 points 3 weeks ago (2 children)

It's an interesting experiment, but why would we trust everything that Leica supposedly verified? It's the same problem as with digital signatures and blockchain stuff. We're at the gates of a world with zero trust by default, where we only deliberately outsource verification to third parties we trust, because the penalties for mistakes grow each day.

[–] [email protected] 4 points 3 weeks ago (2 children)

I don't think we should inherently. I've thought about the idea of digitally signed photos, and it seems sound unless someone is quite clever with electronics. I'm guessing there's some embedded key in the camera that is hard, but maybe not impossible, to access. If people can hack Teslas for "full autopilot" or run Doom on an ATM, I'm not confident this kind of encryption will never be cracked. However, I would hope an expert witness would also examine the camera that supposedly took the picture; I would think it impossible to acquire the key without a third party detecting the intrusion.

[–] andrew_bidlaw 3 points 3 weeks ago

Today we have EXIF data, and it's better to wipe it all for privacy reasons, because every picture you take otherwise contains a lot of your data: geolocation, camera model, exposure, etc. That's the angle they have yet to tackle, because most of these things also leave us vulnerable.

[–] [email protected] 1 points 3 weeks ago* (last edited 3 weeks ago)

They make Hardware Security Modules (HSMs) that are very difficult to crack, to the point of being effectively unbreakable at our current technology level. With a strong HSM, a high-bit per-device certificate signed by the company's private key gives you authenticity and validation until the root key or HSM is broken, which is probably good enough for today while we try to figure out something better, IMO.

[–] [email protected] 2 points 3 weeks ago

Well as I said, I think there's a collection of things we already use for judging what's true, this would just be one more tool.

A cryptographic signature (in the original sense, not just the Bitcoin sense) means that only someone who possesses a certain digital key is able to sign something. In the case of a digitally signed photo, it verifies "hey I, key holder, am signing this file". And if the file is edited, the signed document won't match the tampered version.

Is it possible someone could hack and steal such a key? Yes. We see this with certificates for websites, where some bad actor is able to impersonate a trusted website. (And of course when NFT holders get their apes stolen)

But if something like that happened it's a cause for investigation, and it leaves a trail which authorities could look into. Not perfect, but right now there's not even a starting point for "did this image come from somewhere real?"
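
The tamper-evidence half of this can be sketched in a few lines of Python. This is a toy, not how Leica or anyone else actually does it: it uses the standard library's `hmac` as a symmetric stand-in for the asymmetric signature a real camera would compute inside its secure hardware, and `CAMERA_KEY`, `sign_image`, and `verify_image` are invented names.

```python
import hashlib
import hmac

# Hypothetical per-device secret; a real camera would hold an
# asymmetric private key inside a hardware security module instead.
CAMERA_KEY = b"example-device-secret"

def sign_image(image_bytes: bytes) -> str:
    """Return a signature over the image's SHA-256 hash."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Check that the image still matches its signature."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

photo = b"\x89PNG...raw sensor data..."
sig = sign_image(photo)

print(verify_image(photo, sig))         # True: untouched image verifies
print(verify_image(photo + b"x", sig))  # False: one changed byte breaks it
```

The point is only the last two lines: the untouched file verifies, and any edit, however small, does not.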

[–] [email protected] 2 points 3 weeks ago (1 children)

A camera that authenticates the timestamp and contents of an image is great. But it's still limited. If I take that camera, mount it on a tripod, and take a perfect photograph of a poster of Van Gogh's Starry Night, the resulting image will be yet another one of millions of similar copies, only with a digital signature proving that it was a newly created image today, in 2024.

Authenticating what the camera sensor sees is only part of the problem, when the camera can be shown fake stuff, too. Special effects have been around for decades, and practical effects are even older.

[–] [email protected] 1 points 3 weeks ago

You're right, cameras can be tricked. As Descartes pointed out there's very little we can truly be sure of, besides that we ourselves exist. And I think deepfakes are going to be a pretty challenging development in being confident about lots of things.

I could imagine something like photographers with a news agency using cameras that generate cryptographically signed photos, to ward off claims that newsworthy events are fake. It would place a higher burden on naysayers, and it would also become a story in itself if it could be shown that a signed photo had been faked. It would become a cause for further investigation, it would threaten a news agency's reputation.

Going further I think one way we might trust people we aren't personally standing in front of would be a cryptographic circle of trust. I "sign" that I know and trust my close circle of friends and they all do the same. When someone posts something online, I could see "oh, this person is a second degree connection, that seems fairly likely to be true" vs "this is a really crazy story if true, but I have no second or third or fourth degree connections with them, needs further investigation."

I'm not saying any of this will happen, just it's potentially a way to deal with uncertainty from AI content.

[–] [email protected] 1 points 3 weeks ago

Cameras with stronger security will become more and more important. On a theoretical level they could still be cracked or forged, but I suppose it's the usual cat-and-mouse game.

[–] [email protected] 1 points 3 weeks ago

Hardware signing stuff is not a real solution. It's security through obscurity.

If someone has access to the hardware, they technically have access to the private key that the hardware uses to sign things.

A determined malicious actor could take that key and sign whatever they want to.

[–] [email protected] 20 points 3 weeks ago (2 children)

When video or audio evidence is submitted, it will be questioned as to its authenticity. Who recorded it? On what device? Then we'll look for other corroborating evidence. Are there other videos that captured the events in the background of the evidence video? Are there witnesses? Is there contradictory evidence?

Say there's a video depicting a person committing murder in an alley. The defense will look for video from the adjoining streets that shows the presence or absence of the murderer before or after. If those videos show cars driving by with headlights on, they will look for corresponding changes in the luminosity of the crime video. If the crime happened in the daytime, they will check that the shadows correspond to the Sun's position at that moment. They'll see if the reflections of objects match the scene. They'll look for evidence that the murderer was not at the scene. Perhaps a neighbor's surveillance camera shows they were at home, or their cell phone indicated they were someplace else.

But if all these things indicate the suspect was in the alley and the video is legitimate, that's powerful evidence toward a conviction.

[–] [email protected] 3 points 3 weeks ago

Are there other videos that captured the events in the background of the evidence video?

I think this is key in a trial setting. A published picture might be unique but to think the photographer snapped just one picture while nobody else was present or also photographing is a bit of a stretch.

[–] [email protected] 2 points 3 weeks ago

So will every piece of video or photo evidence need an expert witness to assess its legitimacy? Or will lawyers just have to learn those skills in the future?

[–] [email protected] 18 points 3 weeks ago

Science has shown that the entire model of human memory as factual testimony is a fallacy, and that came out long before AI entered the public space. I don't think anyone has addressed that revelation, and I doubt anyone will address this one. Hell, there are still people sketching the courtroom like cameras don't exist. A president can stage a failed coup and a Supreme Court judge can fly the traitor's flag, and there are no consequences for either.

So what will be done? Absolutely nothing, unless some billionaires stage a proxy war over it.

[–] [email protected] 13 points 3 weeks ago (1 children)

A bit dramatic imo. For most of legal history we didn't actually have perfectly recorded video or audio, and while they are great tools at the present, they are still not the silver-bullet people would expect them to be at trial. (Think Trump and his cucks) Furthermore, most poor people try to avoid being recorded when doing crimes.

It will probably mean that focus will shift to other kinds of evidence and evidence-gathering methods. But definitely not the end of law as we know it, far from it.

[–] [email protected] 2 points 3 weeks ago

Right, anyone would prefer not to appear in a video implicating them in a crime, but I was wondering what happens when fake videos of a person appear, implicating them in a crime that never took place.

[–] [email protected] 12 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Eventually, we will just have to accept that photographic proof is no longer proof.

There are ways that you could guarantee an image is valid. You would need a hardware security module inside the camera, which signs a hash of the picture with its own built-in security key that can't be extracted and a serial number that it generates. That can prove that an image came from a particular camera, and if you change even one pixel of that image the signature won't match anymore. I don't see this happening anytime soon. Not mainstream at least. There are one or two camera manufacturers that offer this as a feature, but it's not on things like surveillance cameras or cell phones nor will it be anytime soon.

[–] [email protected] 2 points 3 weeks ago (1 children)

True, sooner or later there might be ways to ensure that a picture or video is digitally signed, and that would probably be very hard to crack, but theoretically a fake video might still pass for real (though it would require a lot of resources to make that happen).

[–] [email protected] 1 points 3 weeks ago

More likely, most of the sources that produce photos and videos would not be using the digital signatures. Professional cameras for journalists probably would have the signature chip. Cheapo Chinese surveillance cameras? Unlikely.

[–] [email protected] 12 points 3 weeks ago

We're not. It's going to upend our already laughably busted "justice" system to new unknown heights of cartoonish malfeasance.

[–] [email protected] 10 points 3 weeks ago (1 children)

A fake doesn't change what actually happened. Just look for facts in the real world that support the theory. Remember, Photoshop existed before AI. And we have DNA testing today.

[–] [email protected] 4 points 3 weeks ago

A camera can only show us what it sees; it doesn't objectively dictate a viewer's interpretation of it. I remember some of us being called down to the principal's office (back before the age of footage-based scandals, which if anything reveal a shortcoming in the people who hand down rulings in such awe of the footage; sadly a common occurrence, and one authorities have made sure I'm no stranger to) to be told "we saw you on the camera doing something against the rules", only to respond with "that's not me, I have an alibi", or "that's not me, I wouldn't wear that jacket", or "that's not me, I can't do that person's accent" (the aforementioned serial slander of me being a prime example of where this would be the case). In terms of process, you might say camera footage is witness testimony from a machine, and that machines have "just started" to get into the habit of not being very honest with the humans in the court. ~~I remember my first lie.~~

[–] [email protected] 3 points 3 weeks ago (3 children)

Maybe each camera could have a unique private key that it could use to watermark keyframes with a hash of the frames themselves.
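
A toy version of that suggestion (all names invented), with one extra twist: feeding each link into the next hash means that removing or reordering keyframes breaks the chain, not just editing one. A real camera would additionally sign the final link with its private key:

```python
import hashlib

def chain_keyframes(frames: list[bytes]) -> list[str]:
    """Hash each keyframe together with the previous link, forming a chain."""
    links, prev = [], b""
    for frame in frames:
        link = hashlib.sha256(prev + frame).hexdigest()
        links.append(link)
        prev = link.encode()
    return links

def chain_is_intact(frames: list[bytes], links: list[str]) -> bool:
    """Recompute the chain and compare it against the recorded links."""
    return chain_keyframes(frames) == links

frames = [b"keyframe-1", b"keyframe-2", b"keyframe-3"]
links = chain_keyframes(frames)

print(chain_is_intact(frames, links))  # True: footage matches its chain
print(chain_is_intact([b"keyframe-1", b"FAKED", b"keyframe-3"], links))  # False
```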

[–] [email protected] 5 points 3 weeks ago (1 children)

How would you prove that the camera itself is real, is the only device with access to the private key, and isn't falsifying its video feed?

[–] [email protected] 1 points 3 weeks ago (1 children)

The sort of case I was thinking of is if different parties present different versions of an image or video and you want to establish which version is altered and which is original.

[–] [email protected] 1 points 3 weeks ago (1 children)

You still have the same problem though. You can produce a camera in court and reject one of the images, but you still need to prove that the camera wasn't tampered with and it was the one at the scene of the crime.

[–] [email protected] 2 points 3 weeks ago (1 children)

Leica has one camera that does this, and others are working on them. Just posted this link in another comment

[–] [email protected] 1 points 3 weeks ago (1 children)

The camera can sign things however it wishes, but that doesn't automatically make the camera trustworthy.

In the same sense, I can sign any number of documents claiming to have seen a crime take place but that doesn't make it sufficient evidence.

[–] [email protected] 1 points 3 weeks ago

In this case, digitally signing an image verifies that the image was generated by a specific camera (not just any camera of that brand) and that the image generated by that camera looks such and such a way. If anyone further edits the image the hash won't match the one from the signature, so it will be apparent it was tampered with.

What it can't do is tell you if someone pasted a printout of some false image over the lens, or in some other sophisticated way presented a doctored scene to the camera. But there's nothing preventing us from doing that today.

The question was about deepfakes right? So this is one tool to address that, but certainly not the only one the legal system would want to use.

[–] [email protected] 3 points 3 weeks ago

I think that's exactly how it's going to work: you can't force all 'fake' sources to carry signatures, since it's too easy to maliciously produce content without one. Instead you have to create trusted sources of real images. Much easier and more secure.

[–] [email protected] 2 points 3 weeks ago

Usually when I see non-technical people throw out ideas like this they're stupid, but I've been thinking about this one for a few minutes and it's actually kinda smart.

[–] VintageTech 3 points 3 weeks ago

One step closer to requiring smartphones to track an individual for their alibi.

[–] [email protected] 3 points 3 weeks ago

It's a scary question, made a lot less scary by whoever it was that said "you know, I guess we've had text deepfakes a long time"

Eventually people just know it could be fake, so they look for other ways of verifying. The inevitability and the scale of it mean that, at the very least, we'll have all our brainpower on it eventually.

It's the meantime where shit could get wild.

[–] [email protected] 2 points 3 weeks ago (1 children)

I'm not a tech person, so I'll take the lowest-hanging fruit. The obvious answer is to write a program that can detect AI. Then there will be a competition between AI fakes and AI detection. This is similar to what we have in sports: forbidden enhancement procedures (e.g., steroids, blood doping) have to keep improving in subtlety so as not to be detected by anti-cheating measures.

[–] [email protected] 7 points 3 weeks ago

That's essentially how generative adversarial networks work, and the effect is that the generative program gets better at making its fakes undetectable.
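
To make the adversarial dynamic concrete, here's a deliberately silly toy (no neural networks, no gradients, all numbers invented): a "generator" keeps nudging its output toward the real data while a "detector" keeps re-fitting its threshold, until the threshold no longer separates them.

```python
# Real recordings cluster around a "realness" score of 10.0;
# the generator starts far away at 0.0.
REAL_MEAN = 10.0
fake_mean = 0.0

for _ in range(100):
    # Detector: places its decision threshold midway between the
    # current fake output and the real data.
    threshold = (fake_mean + REAL_MEAN) / 2
    # Generator: moves its output toward the real side of the threshold.
    fake_mean += 0.2 * (REAL_MEAN - fake_mean)

# After many rounds the fakes sit on top of the real distribution,
# and no threshold can separate them anymore.
print(round(fake_mean, 3))                 # 10.0
print(abs(REAL_MEAN - fake_mean) < 0.01)   # True: detector loses
```

In a real GAN both sides are neural networks trained by gradient descent, but the arms-race structure is the same, which is why "just build a detector" tends to improve the fakes.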

[–] [email protected] 1 points 3 weeks ago* (last edited 3 weeks ago)

Disclaimer: I'm not an expert, just an interested amateur wanting to chat and drawing comparisons from past leaps in tech and other conversations/videos.

For a time, expert analysis will probably work. For instance, the "click here to prove you're not a robot" boxes can definitely be clicked by robots, but for now the robot moves in detectably different ways. My guess is that, for at least a while, AI content will differ from actual video in similarly detectable ways. There will probably be an arms race of sorts between AI and methods to detect AI.

Other forms of evidence like DNA, eyewitness accounts, cell phone tracking etc. will likely help mitigate deceitful AI somewhat. My guess is that soon video/audio will no longer be considered as ironclad as it was even a few years ago. Especially if it comes from an unverified source.

There are discussions about making AI tools embed a digital "watermark" that can be used to identify AI-generated content. Of course this won't help with black-market-type programs, but it will keep most people out of the "deep fake for trials" game.

When it comes to misinformation on social media, though... it's probably going to get crazy. The last decade or so has been a race at an unprecedented scale to try to keep up with BS "proof", pseudoscience, etc. Sadly, those on the side of truth haven't always won. The only answer I have for that is making sure people are educated about how to deal with misinformation and deepfakes, e.g. awareness that they exist, and identifying reputable sources and expert consensus.

[–] [email protected] 0 points 3 weeks ago

I doubt these tools will ever get to a level of quality that can confuse a court. They'll get better, sure, but they'll never really get there.

[–] [email protected] -3 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

For the longest time now, since before AI and before NFTs were a thing, I had an idea of incorporating blockchain tech into real-life media footage to combat the rise of misinformation.

The metadata and original author would be stored on the chain the moment the footage is recorded. The biggest challenge is that this means the devices themselves need to be connected.

Adoption would be slow, but I imagined news and official channels making use of this tech first. Eventually, all footage outside the system would be seen as untrustworthy.

Then NFT bros came along, and people have shit on this idea ever since. Some days I feel that was a conspiracy to ruin our perception of its potential, but more likely humans were just greedy.

I still believe this could work. Detailed example below:

The system works with a fair amount of transparency: verifiable digital signatures for recording devices and their owners. Professional cameras and organizations would have publicly known IDs, while individuals could choose to remain pseudonymous but would need to build credibility over time.

Let's say BBC records an interview. When viewers watch this content on any platform, they can access blockchain verification through an embedded interface (perhaps a small icon in the corner). This shows the complete chain of custody from recording to broadcast.

The system verifies content through computational comparisons. When a raw interview is edited into a final piece:

  • Each original clip has a unique blockchain signature
  • The final edited version's signature can be compared against source material
  • Automated analysis shows what percentage of original footage matches
  • Modifications like color correction or audio adjustments are detected through signature differences
  • Additional elements like station logos or intro sequences have their own verified identifiers
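
The "percentage of original footage matches" bullet could work roughly like this sketch: split both the source clips and the final cut into fixed-size segments and compare their hashes. Real video would be segmented along keyframe boundaries rather than byte counts; everything here is a simplified stand-in:

```python
import hashlib

def segment_hashes(footage: bytes, size: int = 8) -> set[str]:
    """Hash fixed-size segments of footage for comparison."""
    return {
        hashlib.sha256(footage[i:i + size]).hexdigest()
        for i in range(0, len(footage), size)
    }

def match_percentage(final_cut: bytes, source: bytes) -> float:
    """Share of the final cut's segments that appear verbatim in the source."""
    final_segs = segment_hashes(final_cut)
    source_segs = segment_hashes(source)
    return 100.0 * len(final_segs & source_segs) / len(final_segs)

raw_interview = b"AAAAAAAABBBBBBBBCCCCCCCCDDDDDDDD"  # four 8-byte "clips"
final_cut = b"AAAAAAAACCCCCCCCXXXXXXXX"              # two kept, one new segment

print(match_percentage(final_cut, raw_interview))  # ~66.7: 2 of 3 segments match
```

An automated check like this would flag the added segment (an intro, a logo, or a doctored clip) as unverified against the source material, which is what the bullet list above asks for.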
[–] conciselyverbose 6 points 3 weeks ago (1 children)

Because it's insanely idiotic. Signing videos is one thing.

Hooking it into blockchain bullshit is entirely deranged. It adds a bunch of complexity to provide literally zero benefit in any possible context.

[–] [email protected] 1 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I am not sure what you think blockchain actually is, but in essence it's a decentralized ledger of signatures.

Not coins, not sellable goods. Just that: computers connected in a network to verify the correctness of a cloud ledger.

So if you say signing footage is one thing, how do you propose a layman can verify that signature without a centralized databank?

I understand some people may not mind a centralized authority, but I'd prefer to avoid one.

I am willing to hear people's thoughts on this. I am not pro- or anti-blockchain or any other form of technology. With the information I have, this just seems like a reasonable and practical solution.

[–] conciselyverbose 2 points 3 weeks ago (1 children)

I am well aware of what it is. It serves no purpose and provides no benefit.

Ignoring the fact that hardware signing doesn't validate inputs as "real", because it's entirely possible to replicate the actual signals entering the camera, and the fact that the entire premise by definition would be a terrible power grab by big hardware/software tools, the very obvious way to implement such an approach would be the exact same system as certificate authorities. You have to have actual root certificate signers.

Blockchain is horseshit and serves no purpose.

[–] [email protected] 0 points 3 weeks ago (1 children)

That hardware inputs can be faked is part of my reasoning here, because there would be transparency about the source of the footage.

If reputable journalists faked their own footage and it were found out, their credibility would be gone.

If they often rely on borrowed footage and don't fact-check it, their credibility will degrade as well.

Journalistic media that do their work and only use credible sources will thrive.

My solution isn't about who creates the signature or how; it's about how ordinary people can check for themselves where a clip within a piece of footage originates.

I am fine with inventing a new system that does this and calling it something other than blockchain. But my understanding is that blockchain pretty much provides this functionality in a robust manner.

Also, typing these comments on the go caused me to lose something dear to me on public transport. I am very sad now and probably won't engage further.

[–] conciselyverbose 2 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Again, you have to completely ignore that the core premise is evil, intended to give big players even stronger monopoly control. It's anti-free in every sense, and as an added bonus it would quite certainly make possession of specific hardware grounds for execution in some countries, because everything it has ever captured could be traced back to it.

But if you do that, there is already a system that does exactly what you're asking. You don't need to invent anything. It's certificate authorities.

I'm not actually trying to be an asshole, though I'm sure I'm coming off as one. But the only thing blockchain actually does is validate transactions. It's a shared ledger.

[–] [email protected] 0 points 3 weeks ago

Sure, I'll have a look at decentralized certificate authority options.

It's very possible to adapt my idea to whatever technology provides those functions, honestly.

The only actual connection I have with blockchain is that reading about it when it was new directly inspired in me a possible way to combat fake news.
