Deepfakes: Two Realities

So, what are Deepfakes?

Deepfakes are AI (Artificial Intelligence) generated media that map one person’s likeness onto another’s. They are created using deep learning, a subset of AI in which an algorithm trains itself on large amounts of data to recognise patterns within it.

Deep learning

Deep learning is a technique that uses neural networks, loosely inspired by the brain, to form an understanding of something via the patterns it recognises. To do this reliably, the algorithm must be fed large amounts of data against which it can test and refine its assumptions. For example, you could feed a neural network thousands of pictures of cats, and it might pick out commonalities such as tails and whiskers, which it decides are cat-like. Once trained, when presented with a new image, it searches for those patterns to determine whether or not the image contains a cat. In the case of Deepfakes, this means training the neural network on many images of a face, taken from different angles and under different lighting, much as facial recognition systems are trained. Using CGI (Computer-Generated Imagery), it can then replace or alter the face in an existing image or video with one of your choice. Similar techniques can be applied to voice patterns and lip-syncing.
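To give a sense of how this works in practice, below is a minimal sketch of the face-swap architecture commonly attributed to the original Deepfake tools: one shared encoder with a separate decoder per identity. It is written in PyTorch purely for illustration; the layer sizes, the 64x64 input resolution and the variable names are assumptions, and a real system would add face detection, alignment, much deeper networks and a full training loop.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face crop into a small latent vector.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs a face image from the shared latent vector.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# One shared encoder, one decoder per identity. Training each decoder to
# reconstruct only its own person's faces pushes the encoder to capture
# identity-agnostic features (pose, expression, lighting).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop of person A
swapped = decoder_b(encoder(face_a))  # A's pose and expression, B's face
```

The swap itself is just routing: at inference time, person A's encoding is fed through person B's decoder, which can only draw faces it learned from person B.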

State of Deepfakes 

The term was first used in 2017 by Reddit user u/Deepfakes. Since then, there has been a surge in the number of Deepfakes across the web, of varying quality: from home-made projects to those indistinguishable from reality by the untrained eye. An example of the latter is the video titled “In Event of Moon Disaster”. While the underlying code is now freely available on open-source sites like GitHub, Deepfake creation used to be limited to those with specialised technical skills. Commercialisation, however, has led to the rise of mobile apps which simplify the creation and distribution of Deepfakes. Notable examples are FakeApp, FaceApp and Zao, with FaceApp facing blow-back over allegations of storing users’ photos on its servers. This may be because the neural networks used to make Deepfakes require as much data as possible for maximum quality. Uses of the technology range from altering a scene in a movie so that a reshoot is not necessary, to making funny memes for your friends. However, as the realism of the technology improves, so does the potential for its exploitation. Given its ever-increasing accessibility, the chances of it being abused are high, to say the least.

A September 2019 report by Sensity (formerly Deeptrace) states that 96% of Deepfakes found online are pornographic, with the large majority depicting celebrities. A study on AI-related future crime by researchers at UCL (University College London) stated:

“Audio/video impersonation was ranked as the overall most-concerning type of crime out of all those considered.”

Dangers

In March 2019, the CEO of a UK-based energy firm received a call, seemingly from the chief executive of the firm’s German parent company, requesting the urgent transfer of $243,000 to a Hungarian supplier. In reality, the call came from fraudsters using Deepfake audio to impersonate the executive. This incident highlights the danger of the technology in facilitating ever more sophisticated cyberattacks. Paired with a visual Deepfake, it becomes increasingly difficult for untrained people to tell what is real, or even whom they are interacting with over the internet. Aside from fraud, the potential for misuse of this technology can be observed in:

  1. Fake news: Deepfakes of politicians or celebrities saying or doing things they never did have been spreading across the internet. The implications range from ruining an individual’s reputation to destabilising societies; one example is the attempted military coup in Gabon, sparked in part by allegations that a new year’s address by the president was a Deepfake. The time taken to identify and disprove these fakes can be enough for them to reach a very large audience, especially as information also spreads via loosely regulated channels such as WhatsApp. Even when a fake is proven false, the damage is difficult to undo, because the credibility of all other information becomes questionable: anyone can dismiss inconvenient material as illegitimate. This can lead to a lack of accountability, as people attribute genuine records of their misdeeds to fabrication.
  2. Obstruction of justice: A Deepfake could be submitted as genuine evidence in a legal investigation in an attempt to sway the outcome of a case. This would compromise the very sanctity of the law, letting well-equipped and technologically literate criminals go unpunished.
  3. Extortion: A Deepfake of a person engaged in illicit activity can be used to blackmail the target. A blog post by Sensity AI reports that up to 680,000 individuals have been realistically “stripped naked” by an AI bot on Telegram, and estimates that “70% of targets are private individuals whose photos are either taken from social media accounts or private material”. This hints at the possibility that some fakes are targeted attacks.

Along with these, there is the potential for spreading social unrest and inciting extremist ideology. So far there have been surprisingly few reports of misuse outside of Deepfake pornography, but the possibility alone warrants a degree of caution and the introduction of countermeasures. With the technology already freely accessible, the question becomes: “What can we do to prevent malpractice?”

Detection

Some Deepfakes can be detected by eye, especially once an individual is trained in what to look for. Such tells include flickering light, unnaturally smooth faces and abnormal blinking, to name a few. It is hypothesised that the more people are exposed to Deepfakes, the more likely they are to spot them. However, visual identification becomes difficult with higher-quality Deepfakes (as in the aforementioned fraud case). There is no guarantee that two people will look at the same piece of media and conclude it is a Deepfake, especially once emotional investment is introduced:

“increased emotionality is associated with increased belief in fake news”

One such countermeasure is the use of Deepfake detection tools, such as Microsoft’s Video Authenticator. Although such tools may work in the short term, they can also be used to improve the quality of the very Deepfakes they were meant to identify, much as the overuse of antibiotics aids the rise of superbugs. A detector alone, then, is not a permanent solution. Another proposed countermeasure is the use of a blockchain to record the source of media, so that only media from trustworthy sources is accepted. This would hamper the spread of generally suspicious media as well as Deepfaked media. At the moment, however, the idea is incomplete: the blockchain can only identify the source of the media, not its validity, so more research into a protocol capable of verifying the integrity of media is needed.
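To make the tool-based approach concrete, here is a minimal sketch of how a frame-level detector of this kind is often structured: a general-purpose image classifier fine-tuned to label face crops as real or fake. This is purely illustrative, not the internals of any particular product; the choice of ResNet-18 and the `score_frame` helper are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a general-purpose pretrained image model and replace its
# final layer with a two-class head: "real" vs. "fake".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_frame(path: str) -> float:
    """Return the model's estimated probability that a frame is fake.
    (Only meaningful after fine-tuning on labelled real/fake faces.)"""
    frame = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(frame)
    return torch.softmax(logits, dim=1)[0, 1].item()

# A video-level verdict could average score_frame over sampled frames.
```

The cat-and-mouse problem described above falls out of this design: any classifier that reliably flags a visual tell can be folded into a Deepfake generator’s training loop as a signal to eliminate that tell.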
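The provenance idea can be sketched just as simply. The snippet below is a hypothetical illustration rather than any deployed protocol: a publisher signs the hash of a media file, and the hash and signature are the kind of record a blockchain would store. It shows both what the scheme provides (tamper evidence) and the gap noted above (it says nothing about whether the content is truthful).

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A trusted publisher's signing key. In a real scheme the public half
# would be tied to the publisher's identity on the ledger.
publisher_key = Ed25519PrivateKey.generate()

def publish(media_bytes: bytes) -> tuple[bytes, bytes]:
    # Hash the media and sign the hash; this pair is what gets recorded.
    digest = hashlib.sha256(media_bytes).digest()
    return digest, publisher_key.sign(digest)

def verify(media_bytes: bytes, digest: bytes, signature: bytes) -> bool:
    # Fails if the file was altered (hash mismatch) or the record was
    # forged (bad signature). It cannot tell you whether the original
    # footage was genuine -- exactly the gap noted above.
    if hashlib.sha256(media_bytes).digest() != digest:
        return False
    try:
        publisher_key.public_key().verify(signature, digest)
        return True
    except InvalidSignature:
        return False

video = b"...raw video bytes..."
digest, sig = publish(video)
assert verify(video, digest, sig)
assert not verify(video + b"tampered", digest, sig)
```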

National policies

As of now, no country outlaws the use or distribution of Deepfakes outright except the People’s Republic of China. Beginning in 2020, China made failure to disclose the use of Deepfakes or Virtual Reality (VR) in a post a criminal offence. Meanwhile, two US states have enacted laws criminalising certain types of Deepfakes. In Virginia, the distribution of falsely created explicit images is punishable by up to a year in prison and a fine of $2,500; in Texas, malicious Deepfakes targeting candidates for public office carry up to a year in prison and a $4,000 fine. Other countries have little in the way of Deepfake legislation, though some are in the early phases of drafting it.

Deepfakes, along with other facets of AI, present an attractive opportunity: fast, cheap and, most importantly, believable media can be manufactured with minimal effort. On the other hand, heinous and possibly untraceable crimes can be committed with just as little effort. According to Sensity AI’s July 2020 report, the number of Deepfakes on the web has roughly been doubling every six months. There are efforts to limit and control their spread, but present methods will always be chasing their tails. Although we are not seeing the full effects right now, in three to five years we may find ourselves in a situation where one “fact” is just one of many equally valid, contradicting “facts”.
