Yesterday, CNN’s Jim Acosta had his press pass pulled by the White House after press secretary Sarah Sanders claimed he had “plac[ed] his hands on a young woman just trying to do her job”.
Acosta disputes this.
To justify its move, the White House released a video of Acosta pushing back a White House aide. Now news sources, including The Daily Beast, are saying that the video the White House released was 'doctored' and that it came from the website Infowars.
Up until now, video has been a pretty good determinant of the truth of a statement.
In 1991, video shot by George Holliday of Rodney King being beaten by LAPD officers set off nights of rioting once the officers were acquitted. What people saw on the video was indisputable. And that was in the early days of video, when hardly anyone had a video camera, let alone carried one around with them.
In those pre-iPhone days, if there was video, it mostly happened by accident, much like the famous Zapruder film of JFK's assassination in Dallas in 1963. But the 'visual evidence' was enough.
With the arrival of some 3 billion smartphones (and seemingly nonstop, ubiquitous CCTV cameras recording every move), video as 'proof positive' seems inarguable, and even expected. If the police are accused of abusing their power, there's the video. If there's a mass shooting, there's the video.
We accept what we see as the truth because we know that 'seeing is believing'.
But now, that fundamental assumption may no longer hold.
Recent advances in video and audio AI have created a new genre of manipulated media known as deepfakes.
As recently as a year ago, deepfakes was nothing more than an experiment and an obscure username on Reddit. But then the user behind that account posted pornographic clips in which the faces of famous actors such as Scarlett Johansson were superimposed on other people's bodies. Algorithms were trained to match every facial movement and expression, and voilà, you could not tell the difference.
The deepfakes account posted the code, built on Google's open-source TensorFlow AI software, and the secret was out, available to anyone.
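For readers curious what that kind of code looks like under the hood, here is a minimal sketch of the shared-encoder, two-decoder face-swap idea generally associated with this approach. It is an illustrative simplification written against TensorFlow's Keras API, not the actual released repository: the layer sizes, the 64-pixel face crops, and the training details are assumptions for the sake of the example.

```python
# Minimal, simplified sketch of the shared-encoder face-swap idea
# (assumed architecture and sizes; not the original deepfakes code).
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = 64  # assumed size of the cropped, aligned face images

def build_encoder():
    # One encoder shared by both people: it learns pose and expression.
    inp = layers.Input((IMG, IMG, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(256)(x)
    return Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    # One decoder per person: it learns to paint that person's face.
    latent = layers.Input((256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(latent)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(latent, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_a")  # reconstructs person A
decoder_b = build_decoder("decoder_b")  # reconstructs person B

# Two autoencoders share the same encoder; each is trained only on
# its own person's face crops, e.g. auto_a.fit(faces_a, faces_a, ...).
auto_a = Model(encoder.input, decoder_a(encoder.output))
auto_b = Model(encoder.input, decoder_b(encoder.output))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# The swap after training: encode a frame of person A, then decode it
# with B's decoder, so B's face appears with A's pose and expression.
# swapped_face = decoder_b(encoder(frame_of_person_a))
```

The key design point is that the shared encoder is forced to capture what the two sets of training images have in common, pose, lighting, and expression, while each decoder specializes in one person's appearance; swapping decoders at inference time is what produces the face swap.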
Now it is possible to manipulate video to make anyone say or do pretty much anything.
So far, there’s no public evidence of deepfake clips being used to sow political disinformation, but a series of stunts has demonstrated what that might look like.
A startup called Lyrebird, which is developing voice-cloning technology, has promoted its wares with a fake clip of Trump saying he is considering sanctions against countries that do business with North Korea.
Up until now, video at least has been dependable.
Now, who knows.
What happens when we can no longer trust what we are seeing?