Essex Uni law student Raksha Sunder unpacks the rise of deepfakes, their legal implications, and what global regulation could mean for this evolving digital frontier
In 2018, Sam Cole, a reporter at Motherboard, uncovered a troubling trend online. A Reddit user known as “deepfakes” was posting fake pornographic videos, using an AI algorithm to superimpose celebrities’ faces onto the bodies of adult film performers. Cole raised awareness of the issue just as the technology gained momentum. By the following year, these deepfake videos had spread well beyond Reddit, with apps available that could digitally “strip” a person’s clothing from a photo.
Deepfake technology has since been associated with such malicious purposes, and it is still used to create fake pornography. This has significant legal implications; in the UK, for instance, the Online Safety Act 2023 criminalises the sharing of non-consensual deepfake pornography. There is also the risk that political deepfakes will generate convincing fake news capable of wreaking havoc in unstable political environments.
The European Union’s Code of Practice on Disinformation highlights these dangers, calling for measures to combat the spread of manipulative deepfake content. In the run-up to the 2020 US presidential election, the nonpartisan advocacy group RepresentUs released two deepfake advertisements depicting North Korean leader Kim Jong-un and Russian President Vladimir Putin, who appeared to claim that they “didn’t need to intervene in the US elections as America would destroy its democracy on its own”. Through these videos, RepresentUs aimed to raise awareness of voter suppression, defend voting rights and boost turnout, despite experts’ concerns that the technology could cause confusion and interfere with elections. In a 2020 campaign video, an Indian politician reportedly used deepfake technology to make himself appear to speak Haryanvi, the Hindi dialect of his target audience.
More recently, a deepfake circulated that made it appear singer Taylor Swift had endorsed Donald Trump. It caused a media frenzy until Swift clarified that she did not support Trump and instead endorsed Kamala Harris. The episode demonstrated the disruptive potential of deepfakes, showing how easily fabrications can manipulate public perceptions of political endorsements.
How should deepfakes be regulated?
Governments worldwide have been debating how to regulate these technologies as the use of AI and deepfakes spreads. The laws and regulations that various jurisdictions have introduced to address the distinct challenges posed by deepfakes still leave significant gaps.
The DEEPFAKES Accountability Act (H.R. 3230), introduced in the 116th US Congress, marked a significant step in the governance of deepfake technology. The proposed law would require producers of deepfake content to label it and to disclose when an image or video has been altered using artificial intelligence. Its purpose is to stop the spread of malicious deepfakes that may threaten individuals, circulate false information or obstruct democratic processes.
Social media platforms such as YouTube and Instagram already have standards in place to prevent harmful content from being hosted on their sites. However, these rules are frequently unenforced: banned content is not always detected by automated systems, and manual review can be slow and inefficient. As a result, users continue to monetise deepfake content, especially where it evades detection, profiting while violating platform guidelines and, in some cases, the law.
The European Union (EU) has implemented the General Data Protection Regulation (GDPR) and the Code of Practice on Disinformation, both of which can be used to combat deepfakes. The GDPR, which governs data protection throughout the EU, may apply to deepfakes where personal data or images are used without consent. A person’s voice or likeness used in a deepfake constitutes personal data under Article 4 of the GDPR, and Article 6 requires a lawful basis, such as the data subject’s consent, for processing it. Meanwhile, the voluntary Code of Practice on Disinformation, introduced in 2018, urges tech companies to demonetise misinformation and promote transparency in political advertising in order to stop the spread of deepfakes and other manipulative content online. However, the Code relies heavily on voluntary compliance, limiting its effectiveness in stopping the spread of harmful deepfakes.
The regulation of deepfakes: a new path forward?
A dedicated global regulatory framework targeting deepfake technology would be one of the biggest leaps forward. It could build on existing agreements, such as the Council of Europe’s Convention on Cybercrime (Budapest Convention), which establishes guidelines for national cybercrime laws and promotes international collaboration. A similar treaty could be created to address the creation and spread of deepfakes, emphasising disclosure and consent and applying the disclosure requirements outlined in Section 104 of the DEEPFAKES Accountability Act on a global scale.
However, requiring deepfake creators to disclose their content is not enough to meet the growing challenges of these technologies. A better solution would be to establish international guidelines that include penalties for those who misuse AI and deepfakes. Creators would then be required to disclose when they have altered content and would be held liable for any harm their creations cause. This approach could mirror the rules governing other digital threats, such as cybersecurity breaches or online scams. Coupling disclosure with strict punishments for those who use deepfakes to deceive or damage reputations would provide a far more robust defence against their adverse effects.
Another way the misuse of deepfake technology could be dealt with is through international data protection agreements, akin to the EU-US Data Privacy Framework. Such agreements would standardise the protection of personal data used in deepfakes across borders, preventing data laundering by ensuring consistent safeguards regardless of jurisdiction. The agreement could incorporate a mechanism similar to the European Arrest Warrant (EAW), enabling the swift transfer of suspects involved in deepfake crimes to the country where the offence occurred. This would prevent perpetrators from evading justice by exploiting weaker legal systems in other countries.
The stakes are higher than ever as deepfakes continue to blur the distinction between fact and fiction. The days of “seeing is believing” are coming to an end, and if the legal system doesn’t keep up, we may find ourselves in a society where reality is nothing more than a digital illusion.
Raksha Sunder is a law student at the University of Essex with a keen interest in corporate law. She is the Vice President of the Essex Law Society and enjoys competing in writing competitions during her free time.
The Legal Cheek Journal is sponsored by LPC Law.