Deepfakes and Their Relationship With Law and Politics

As deepfakes increase in prominence, so too will their capacity to wreak havoc, particularly on political institutions. Governments must therefore react to this growing threat in a timely fashion.

The term ‘deepfake’ refers to a piece of synthetic media, such as an image, video, or audio clip. This media is so called because it is created using deep learning technology, a type of artificial intelligence (AI); the word itself is a portmanteau of ‘deep learning’ and ‘fake’. The term, and the controversy surrounding it, first rose to prominence in 2017 after a Reddit user of the same name posted pornographic videos to the forum in which adult entertainers’ faces were replaced with those of celebrities. Since then, a plethora of synthetic media has hit the web.

The popular film ‘Rogue One: A Star Wars Story’ used technology similar to that behind deepfakes to de-age the actress Carrie Fisher

The best-known form of deepfaked media is video. Deepfake videos are generated by an AI that learns what a source face looks like from numerous images of an individual, then superimposes that face onto a target body. Similarly, deepfaked audio can be created by algorithms that analyse an individual’s speech patterns and inflections and then synthesise a voice clone. Deepfake technology may even be able to create entire fictitious persons, complete with a unique face and profile.
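
To make the mechanics more concrete, the sketch below illustrates the shared-encoder, twin-decoder autoencoder design popularised by early face-swap tools. It is a minimal illustration only, not any particular app’s implementation; the architecture, layer sizes and training details are simplified assumptions.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent code shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the latent code; one decoder is trained per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A's faces
decoder_b = Decoder()  # learns to reconstruct person B's faces

# Training: each decoder rebuilds its own person's faces through the *shared*
# encoder (random tensors stand in for real face crops here).
faces_a, faces_b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
loss = (nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
        + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b))

# The swap: encode a frame of person A, then decode it with B's decoder,
# yielding B's face with A's pose and expression.
swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns identity-agnostic features such as pose and expression, while each decoder learns one person’s appearance; that separation is what makes the swap possible.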

Media manipulation itself is not a new phenomenon. Fashion publications, for example, have been retouching and smoothing pictures to achieve more polished looks for decades. Similarly, the technology to produce deepfakes has arguably been around for some time, its most recognisable use being special effects and CGI in the film industry to create, de-age or age up actors, or even revive dead ones. Until recently, however, the key barriers to producing synthetic media were the need for often expensive specialised equipment and software, and the proficiency to use them.

Now, recent advances in technology and software mean it is entirely possible for anyone to create a deepfake. Mobile applications such as FaceApp, FakeApp and ZaoApp make it cheap, easy and accessible by doing all the hard work for you. ZaoApp, which lets users swap their selfies onto a character’s face in a film or TV clip in under eight seconds, proved so popular that it became the most downloaded app in China within three days.

Potential impacts

As deepfakes continue to increase in scope, scale and sophistication, the range of possible harms, and of those harmed, is immeasurable. Deepfakes could spell trouble for the judicial system, for instance, with fabricated footage being entered into evidence. They also pose a personal security risk: because deepfakes can mimic individuals’ faces and voices, they could trick systems that rely on biometric recognition for security, and the potential for scams is apparent.

Perhaps most worrisome of all are deepfakes featuring political figures. Individuals and organisations alike now wield the means to produce fake news for political sabotage, creating synthetic media of political figures appearing to do or say things they never actually did. This may in turn have profoundly negative consequences for the functioning of liberal democracies. Targeted videos portraying politicians making hateful comments, for example, may disrupt vital democratic processes such as electoral campaigns. If weaponised by hostile governments, deepfakes could even threaten national security or impair international relations.

Current examples of synthetic media in politics


In April 2018, Buzzfeed released a deepfake video of Barack Obama appearing to say crude things about Donald Trump, to demonstrate how easy such videos are to produce and to highlight the risks they pose to public discourse. More recently, Channel 4 published a deepfake of the Queen’s 2020 Christmas message, which garnered two million views.

One confirmed use of a deepfake by a political party in an electoral campaign occurred in 2018, when sp.a, a Flemish socialist party, posted a video on its social media pages in which Trump appeared to taunt Belgium for remaining a signatory to the Paris climate agreement. Although it was a poor forgery, and was signposted as fake in its concluding statement, such material could soon become part and parcel of political debate. It is also entirely possible that a deepfake played a role in the crisis that occurred in Gabon in the same year.

Relatedly, shallowfakes are beginning to emerge in politics. These are media that are either doctored with rudimentary editing tools or presented out of context. One example is a video of Nancy Pelosi, the Speaker of the United States House of Representatives, in which one of her speeches was slowed down to make her sound slurred, with the intention of making her appear incapable. It reached, and was shared by, millions, including the likes of Rudy Giuliani, Donald Trump’s lawyer and the former mayor of New York. Similar tactics were seen in the run-up to the 2019 UK general election, during which the Conservative party manipulated an interview with Keir Starmer, then Shadow Brexit Secretary, to make it seem as though he was unable to answer a question on his party’s Brexit stance.

This mischief-making is only likely to increase, exacerbated by the fact that deepfakes are often highly realistic. Though there is some way to go before they are totally undetectable, strides in that direction continue to be made. There may come a time when even the trained eye struggles to distinguish forgeries from genuine media; the resulting crisis of misinformation has been dubbed the ‘infocalypse’.

The role of law and lawmakers

Since the publication of the Online Harms White Paper, Parliament has taken no further action to pass legislation addressing the rise of deepfakes

Currently, no country in the world has passed legislation specifically designed to combat the rise of deepfakes. In the meantime, however, deepfakes may be limited by established laws and legal doctrines that target disinformation. These include the tort of passing off, which may apply only insofar as the victim has previously commercialised their public image; malicious falsehood, which requires that the false words result in quantifiable monetary loss; and defamation, which normally requires the individual depicted to show that the publication caused them ‘serious harm’.

In truth, the use of deepfake technology, and its potential for malicious deployment, demands a response from Parliament and the courts. The Online Harms White Paper, published by the Department for Digital, Culture, Media and Sport and the Home Office in 2019, shows that the UK is alive to the issue and indicates it plans to regulate soon. The challenge, however, is how exactly to regulate this emerging threat, particularly the threat to liberal democracy. What is the best approach? How far should it go? Should obligations be imposed on platform operators to control what content is disseminated? Though an outright ban on deepfakes might be the most effective way to limit the problem, it would no doubt be incompatible with the fundamental right to freedom of expression contained in Article 10 of the European Convention on Human Rights and incorporated into UK law by the Human Rights Act 1998.

Across the pond, political deepfakes are banned in two US states: Texas, the first to act, and California, which makes it illegal “to create or distribute videos, images, or audio of politicians doctored to resemble real footage within 60 days of an election”. However, even with appropriate laws, criminal sanctions, civil remedies and effective enforcement, the detrimental consequences of a viral deepfake, and its clean-up, remain untackled. A two-pronged solution therefore appears necessary.

The role of tech and tech companies

Companies like Faculty will be instrumental in combating malicious deepfakes

As is the case with many other evolving technological issues, legislating is not the only answer. In fact, it is unlikely to deter those who are determined to undermine political trust and sow disinformation.

Ironically, AI has a starring role to play in combating malicious deepfakes, with governments, universities and tech firms all funding research into their detection. For example, Faculty, a UK-based startup whose clients include the Home Office and numerous police forces, has teams focused exclusively on generating thousands of deepfakes using all the leading deepfake algorithms on the market, in order to train systems to distinguish genuine media from synthetic. Similarly, Amber, a New York-based company, is looking to create a form of “truth layer” software embedded in smartphone cameras, which would act as a watermark verifying a video’s authenticity.
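
At its core, the detection approach described above is an ordinary supervised learning problem: fine-tune an image classifier on a corpus of genuine face crops and generated deepfakes. The following is a minimal, generic sketch in PyTorch, not Faculty’s actual pipeline; the dataset path, model choice and hyperparameters are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical dataset layout: data/train/real/*.jpg and data/train/fake/*.jpg
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # standard ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace its head with a two-class output
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # class 0 = real, class 1 = fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One pass over the data; a production detector would train for many epochs
# and exploit temporal cues across video frames, not just single images.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The quality of such a classifier depends heavily on the variety of fakes it sees during training, which is precisely why firms generate fakes with every leading algorithm on the market.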

The Big Tech players are not standing still either. Last June, the first Deepfake Detection Challenge Dataset was launched, backed by the likes of Facebook and Microsoft as well as academics from an array of top universities. The associated challenge drew more than 2,000 participants and saw research teams around the globe collaborating and competing to develop new technologies for detecting deepfakes and manipulated media, accelerating progress in this area.

Sparked in part by the 2020 US election, Facebook has also banned deepfake videos that are likely to mislead viewers, though the ban extends neither to deepfakes meant as parody or satire nor to shallowfakes. Meanwhile, Twitter’s synthetic and manipulated media policy states that “you may not deceptively promote synthetic or manipulated media that are likely to cause harm”, with offending content being labelled or removed.

Alarmingly, as detection improves, so too will deepfake algorithms. For instance, after researchers discovered in 2018 that deepfake faces blink unnaturally, the flaw was soon fixed. Such is the nature of the game. To quote Marc Warner, co-founder and CEO of Faculty: “it’s an extremely challenging problem and it’s likely there will always be an arms race between detection and generation.”
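
To illustrate how fragile such detection cues are, the 2018 blink test can be sketched in a few lines: compute the eye aspect ratio (EAR) per frame and flag clips whose blink rate is implausibly low. This is a simplified, assumption-laden sketch; the landmark extraction step, which a real face-landmark library such as dlib or MediaPipe would perform, is not shown.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR over six eye landmarks (after Soukupova & Cech, 2016): an open eye
    scores roughly 0.3; a closed eye drops below roughly 0.2."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(eye_frames: list, fps: float, closed_thresh: float = 0.2) -> float:
    """Count open-to-closed transitions of the EAR, normalised by duration.
    `eye_frames` is one (6, 2) landmark array per video frame, as produced by
    a face-landmark library (not shown here)."""
    closed = [eye_aspect_ratio(eye) < closed_thresh for eye in eye_frames]
    blinks = sum(1 for prev, cur in zip(closed, closed[1:]) if cur and not prev)
    minutes = len(eye_frames) / fps / 60.0
    return blinks / minutes

# Humans typically blink around 15-20 times a minute; a clip whose subject
# blinks far less often than that is a candidate forgery.
```

The counter-move was trivial: once generators were trained on footage that included closed eyes, the cue vanished, which is exactly the arms race Warner describes.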

My view

Government proactivity in limiting the spread of harmful deepfakes by investing in detection and watermarking technology is certainly welcome, but strangely the government has yet to deploy the most obvious tool in its arsenal: the law. This may reflect a belief that the law is ill-equipped to tackle the issue. Regardless, alongside its deterrent function, the law also operates to denounce and label certain behaviours as undesirable, so it still makes sense to legislate. Further, the existing laws that may restrict the circulation of deepfakes do so only in limited circumstances: specific legislation aimed at limiting their potential damage is now overdue.

Currently, governments have taken a largely hands-off approach to regulating social media platform operators. They should instead pass legislation that goes further than tech companies’ current policies, requiring that both malicious deepfakes and shallowfakes be removed from platforms’ servers wherever they are found. Though some may criticise this as beyond the remit of public institutions, the power and influence that the biggest social media platforms wield over society is immense, and the law should reflect that fact. In my view, an ideal framework would be one in which numerous states band together to create some form of standardised regulation. This would not only push tech companies to abide by these stipulations, but would also make compliance easier by reducing the number of differing national rules with which they must comply.

I genuinely worry about the chaos deepfakes could bring to all walks of life, especially the threat they pose to democracy. I am fearful of the prospect of an ‘infocalypse’, in which individuals and countries face an onslaught of disinformation, and in which politicians are not only undermined by deepfakes but also use their prevalence as a veil to hide behind, dismissing genuine recordings of unspeakable comments as just more synthetic media. I am, however, optimistic about the progress tech companies are making. Continued investment in deepfake detection and watermarking is vital, as is increasing public awareness of the issue, because no matter how good and efficient legislation might be, it can only be applied after the damage has been done.

Though chilling, this is just another example of the law’s, and governments’, struggle to come to grips with technological advances; other recent examples include the regulation of drones and autonomous vehicles, and the taxation of Big Tech.
