Use of deepfake technology to discredit protest organizers or participants.
Today, artificial intelligence (AI) can generate text, believable imitations of a person's voice, and realistic photos and videos. The real danger of AI lies not in the creation of humanoid robots but in its mastery of language. Deepfake technology can be used to discredit the organizers or participants of protests by fabricating false evidence of violence, hate speech, or other illegal activity.
Deepfake technology can portray peaceful protesters as violent, thereby justifying a harsh police response or the imposition of restrictive measures. Deepfakes and AI challenge our ability to distinguish reality from manipulation. Realistic videos of violent demonstrations, or of peaceful protest leaders calling for violence, may soon become ubiquitous. Such footage could discourage citizens from joining protests or divide public opinion over a brutal crackdown on a peaceful protest.
In addition to generating deepfake content, AI algorithms can spread disinformation through social networks, amplifying the impact of such campaigns. This poses a severe threat to public gatherings, as deepfakes are strategically used to spread false narratives, manipulate public opinion, and incite conflict. The viral nature of digital content makes the effects of deepfake-based disinformation difficult to counter, which further undermines informed decision-making and makes it harder to organize peaceful protests.
Faced with new challenges, we often hastily reach for the idea of improving legislation without first asking how well the existing regulations are being implemented. Serbia does not lack a legal framework for responding to the obstruction of public gatherings through threats, force, or deception: Article 151 of the Criminal Code prescribes the criminal offense of Preventing a Public Assembly. Yet this and many other offenses are rarely prosecuted, despite the serious social need to combat them. Existing mechanisms of legal protection often lose their purpose because of the length of proceedings and other obstacles. Defamation lawsuits will not discourage tabloids from breaking the law if the state pays them more money through funding schemes than they lose in damages for defamation. If today we cannot understand how anyone can believe implausible tabloid headlines, imagine what it will be like tomorrow, when tabloid television channels materialize those headlines as believable moving pictures.
Legal action can play an essential role in curbing the malicious use of deepfakes and redressing the damage caused. It should be borne in mind, however, that it often cannot fully restore the state of affairs that existed before the harmful event, especially once disinformation has spread. To reduce the profitability of the malicious use of deepfakes, a holistic approach combining legal, technological, and educational measures is needed.
This includes passing clear regulations prohibiting the abuse of deepfake technology, improving the efficiency of the judiciary in such cases, developing technological tools for detecting deepfakes, and running campaigns that inform the public about the risks of fake content and how to recognize it. The goal is to create an environment in which the malicious use of deepfakes is less attractive, while at the same time preserving freedom of expression.
The rise of artificial intelligence and deepfakes raises complex legal and ethical issues. AI has the potential to improve society, but it also poses a severe risk to human rights. The erosion of trust, manipulation of public discourse, suppression of dissent, dissemination of misinformation, and attendant legal challenges all require urgent attention. Calls to ban the use of AI are heard more and more often, driven mainly by the fear of losing jobs to this technology. States' current efforts, however, focus mainly on the mandatory labeling of fake or manipulated content, although it is hard to imagine such regulations stopping those intent on abusing the technology.
The new regulations target tech companies, which must moderate content. Under EU requirements, users must be informed that they are not talking to a real person when interacting with a chatbot. The European Union has also directed Facebook, TikTok, and Google to begin labeling content created with artificial intelligence. At the beginning of the year, the Cyberspace Administration of China (CAC) issued guidelines on the creation of deepfake content: it is now prohibited to create such content without the consent of the person it depicts, or where it conflicts with the country's national interests, and deepfake content must be clearly labeled in order to combat online fraud and defamation. In line with this, the popular Chinese-owned platform TikTok has already aligned its own guidelines. In other countries, such as the UK and the US, the emphasis is on combating the use of deepfake content in revenge porn.
The European Court of Human Rights has already held a platform liable for failing to moderate hate speech in user comments in Delfi AS v. Estonia. While that judgment, at worst, led some media outlets to remove their comment sections and lose part of their interaction with the audience, its effect on platforms where video content is uploaded is difficult to predict. One possible consequence of curbing the misuse of deepfakes could be barring unverified members from uploading content, which would hamper dissidents in authoritarian regimes who, for fear of reprisals, want to post footage of police brutality at protests anonymously.
Even more worrying is that, in the absence of human capacity, moderation will be left to artificial intelligence.
In Serbia, a proactive approach to deepfake content has yet to be adopted.
However, creating and publishing a deepfake video of a real person without their consent in Serbia can violate the law if it invades their privacy, damages their reputation, or violates their copyright.
Public figures, including celebrities from the world of entertainment and sport, generally enjoy a narrower sphere of privacy than ordinary citizens. Even so, greater public exposure does not mean they have no privacy rights at all. It is important to emphasize that the unauthorized use of deepfake material of public figures for advertising should be prohibited.
As holders of public office, politicians face pressures arising from their role. In many democratic societies, freedom of expression and criticism are essential values to protect. Politicians should be more tolerant of public criticism, including political parody and satire. Clearly labeled deepfakes of politicians should therefore be allowed.
Photographing and filming people who are not public figures in a public place is, in principle, allowed as long as it does not significantly intrude on their privacy. However, if such a recording were used to create deepfake content showing that person in a private act, it could constitute a criminal offense.
A deepfake video can, through its content, harm a person's reputation, and if it was created by processing an author's work (e.g., a lecture), it may also infringe copyright. Depending on the circumstances of the case, in addition to criminal proceedings, victims of deepfake abuse may seek protection in civil proceedings for damages for injury to honor and reputation or for copyright infringement, as well as in proceedings before independent and regulatory bodies.
For more information on the rights and obligations of organizers of public gatherings in the digital age, see "Protests and Digital Technologies – Guide for Organizers," prepared by the Lawyers Committee for Human Rights – YUCOM.