"Deepfakes" in The Courtroom

ABSTRACT

Seeing is believing, but for how long? At present, people attach substantial probative weight to images and videos. They’re taken at face value as evidence that an event occurred as alleged. The advent of so-called “deepfake” videos might change that. Thanks to advances in artificial intelligence, it is now possible to create a genuine-looking video that makes real people appear to do and say things they never did or said. Software for creating deepfake images, video, and audio is already freely available online and fairly easy to use. As the technology rapidly advances, it will become harder for humans and computers alike to tell a fake video from a real one. Inevitably, deepfakes will find their way into the courtroom. This Article surveys the ramifications of deepfakes for pre-trial and trial practice, including authentication of evidence, professional responsibility, and a potential “reverse CSI effect” on juries primed to question even authentic evidence in an era of disinformation and “fake news.” Fortunately, courts are no strangers to evidence tampering and forgery. The rules of evidence have long imposed authentication requirements to help screen out fakes. I argue that those requirements are sufficient as they stand to deal with deepfakes, and that raising the bar for authenticating video evidence would do more harm than good. Although it may prove costly, courts will be able to handle the challenges posed by deepfakes as they have ably handled previous generations of inauthentic evidence.
