A look into Deepfake technology and its effects
Deepfake is a video editing technique that uses artificial intelligence (specifically deep learning) to superimpose one person's face onto another's in moving footage, doing for video roughly what Photoshop does for photos. What might sound mundane in theory has, in practice, led to public outrage, debates about the nature of privacy, paranoia, and some very funny compilations of Nicolas Cage starring in Hollywood classics.
Deepfake technology first garnered mainstream media attention in 2017, when a Reddit user posting under the pseudonym 'Deepfakes' uploaded multiple adult videos in which celebrities' faces had been superimposed onto adult-film actresses' bodies (for example, an adult actress's body with Gal Gadot's face and voice). Despite having been created with a computer program, the results (so-called deepfakes) looked surprisingly real, and the videos became the central topic of internet debate and widespread panic.
What was this debate about and what makes deepfakes so terrifying?
First of all, it is incredibly easy to make videos using deepfake technology. Face swapping is nothing new; many people have used the Snapchat 'face swap' filter before. However, before this technological advance, a convincing result required sophisticated video-editing knowledge or a CGI expert. Deepfake technology changed that completely. It enables anyone (you, me, your dog) to create a convincing video, provided the algorithm, which can be downloaded from the internet, is fed enough source material. Celebrities, of whom millions of photos and videos are available at all times, are therefore a natural target. But so are political figures.
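The "source material" matters because the classic deepfake setup trains one shared encoder (which learns a person-independent representation of a face's pose and expression) alongside a separate decoder per identity; swapping decoders at inference time renders one person's face with another's expression. The following is a minimal NumPy sketch of that architecture only, with random, untrained weights and toy dimensions chosen for illustration; real systems use deep convolutional networks trained on thousands of frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 32x32 grayscale face flattened to 1024 values,
# compressed to a 64-dimensional latent code.
FACE_DIM, LATENT_DIM = 32 * 32, 64

# One shared encoder learns a person-independent code (pose, expression)...
W_enc = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
# ...while each identity gets its own decoder that renders that code
# back into that person's face.
W_dec_a = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01  # person A
W_dec_b = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01  # person B

def encode(face):
    # Shared encoder: compress a face into the latent code.
    return np.tanh(face @ W_enc)

def decode(latent, w_dec):
    # Identity-specific decoder: render a latent code as that identity.
    return latent @ w_dec

# The swap itself: encode a frame of person A, then decode it with
# person B's decoder, producing B's face with A's expression and pose.
frame_of_a = rng.standard_normal(FACE_DIM)
swapped = decode(encode(frame_of_a), W_dec_b)
print(swapped.shape)  # (1024,)
```

Training (omitted here) would fit each decoder to reconstruct its own person's photos through the shared encoder, which is why a few hundred publicly available images per identity can be enough.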
With deepfake technology only improving and the videos becoming more convincing, it is not out of the question that the technology could be used as a political weapon. Superimposing Donald Trump's face on someone declaring war, or Theresa May's face on someone using racial slurs, could have devastating political consequences, perhaps even of nuclear proportions. In fact, the technology has already been used by comedian Jordan Peele (in collaboration with BuzzFeed) to create a fake video of Barack Obama giving a PSA about fake news, and to swap Donald Trump's and Angela Merkel's faces.
Nuclear war, however terrifying an implication, is an unlikely worst-case scenario. Deepfake technology raises many other concerns, for example the issues of consent and bodily autonomy. None of the celebrities in the adult videos consented to the use of their faces, nor did the adult-film actresses consent to having their faces replaced. It can be argued that 'that is what you sign up for when you become a celebrity or adult actress', but no one actively consented to any of this, and being a celebrity or actress hardly counts as tacit consent to let people do what they want with your face or body and, by extension, your identity.
It could also be said that actresses and actors have been victims of image editing their whole lives, and that this is therefore not such a big deal; after all, there are plenty of Photoshop edits of celebrities out there. However, whilst the notion that photos can be tampered with is already ingrained in many people's minds, the same does not hold for videos. In fact, many people regard video as foolproof evidence. Furthermore, it is easier to dismiss a fake video if it looks evidently fake; deepfakes look strikingly real and can therefore evoke the same shame in the victim as footage of the actual act would.
That aside, given how many selfies and videos from different angles most people post online, the technology could soon be used against anyone. Motherboard released an article chronicling forums and people's attempts at making fake adult videos of friends, classmates and exes. One user allegedly claimed to have produced a 'pretty good' result using merely 380 pictures found on their subject's Instagram and Facebook accounts. Deepfakes therefore also raise the question of how much we own our online personas. Is it so easy to argue 'that is what you sign up for when you post a photo online' when we could be the next victim?
What is also worrying is the idea of the technology itself. It could lead to a shift in how we view evidence: if we cannot trust video evidence, then what can we trust? Any perpetrator could simply claim that video evidence of them committing a crime was tampered with. The issue is concerning enough that the U.S. government, along with independent research groups, has started developing programs that use artificial intelligence to spot deepfakes; however, none of these is yet fully reliable.
So until then, be careful what and how much you post, and do not trust everything you see, no matter how real it seems.