If AI and deepfakes can listen to a video or audio recording of a person and then successfully reproduce that person, what does this entail for trials?

It used to be that audio or video recordings carried strong evidentiary weight, often more than witness testimony, but soon enough perfect forgeries could enter the courtroom just as they're entering social media (where you're not sworn to tell the truth, though the consequences are real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.

How will we defend ourselves while still using real video or audio as proof? Or are we just doomed?

  • JaggedRobotPubes@lemmy.world
    2 months ago

It’s a scary question, made a lot less scary by whoever it was that said “you know, I guess we’ve had text deepfakes a long time.”

    Eventually people just know it could be fake, so they look for other ways of verifying. The inevitability and the scale of it mean that, at the very least, we’ll have all our brainpower on it eventually.

    It’s the meantime where shit could get wild.