Associated Incidents

The incident is believed to be one of the first examples in the EU of AI being used to manipulate the likeness of a journalist – and could portend the use of such tools to discredit the media. From the International Press Institute.
The 2023 Slovak parliamentary election campaign made history, and not for the right reasons. For the first time, deepfakes played a role in the pre-election battle. Two days prior to the 29 September election, a video with an audio clip that allegedly featured Monika Todova, a well-known journalist from independent news platform Dennik N, and Michal Simecka, the chairman of the Progressive Slovakia party, circulated on social networks. The fake conversation – which appeared to discuss ways to rig the election – never happened, but the video reached thousands of social network users anyway.
The “conversation” about electoral fraud was pure fabrication. But in a short video spread just a few days before the poll, the digitally generated voices of the journalist and the party chairman were heard discussing it. The artificially manipulated video, known as a deepfake, shows a static image of the journalist and the politician. The audio track is of poor quality and does not sound authentic, but the words are also rendered in subtitles. Both of those involved immediately denied the authenticity of the conversation, and several fact-checking organizations and experts confirmed that the video was inauthentic.
According to press freedom groups, the deepfake involving Todova is believed to be one of the first examples in the EU of artificial intelligence technology being used to manipulate the likeness of a journalist – and could offer a worrying portent of the future abuse of such tools to smear and discredit the media.
While deepfake videos or audio clips can be made by almost anyone with minimal effort, detecting them is not so simple. “There isn’t, as of yet, a really effective, accessible, and above all reliable tool for detecting AI-generated content, from audio to video to photos,” confirmed Michael Colborne, an analyst at Bellingcat, a Netherlands-based investigative journalism group that specializes in fact-checking and open-source intelligence. It is also challenging to determine how, exactly, such a clip was created. “Whether it was created by someone uploading it and then using artificial intelligence to dub it, or whether someone trained artificial intelligence and then created an audio file from it, it’s impossible to say,” added investigace.cz data analyst Josef Slerka.
Tracing the path of the deepfake video is also problematic. The fact that it first started spreading from Telegram, a messaging app known for its custom security settings, made looking into its origin even more difficult. Identifying the original source disseminating the clip with absolute certainty is almost impossible. However, it is clear that the recording could have served two purposes: to discredit a journalist and to provide evidence for the Kremlin’s narrative that the United States wanted to manipulate the Slovak election results.
Pattern of Discrediting Attacks
Monika Todova, a prominent figure in Slovak journalism, has faced smear campaigns before. According to an analysis by the Investigative Center of Jan Kuciak (ICJK), she was the most attacked journalist in the months prior to the parliamentary election. While this deepfake video surpassed previous tactics, she had been the target of such attacks long before.
A continuous smear campaign is being waged against Todova, involving not only politicians but also disinformation platforms. Constant discrediting, hate speech, and even surveillance are what she has had to face in recent years. And it is often politicians who fuel the attacks, including at their press conferences.

The deepfake could represent a completely new tool for smear campaigns in the future. Todova has already taken legal action: with the help of the protection platform Safe.Journalism.SK, she filed a criminal complaint on suspicion of defamation and criminal damage to the rights of others.
However, as noted above, this deepfake video may have served a different purpose, too – fitting into the Kremlin narrative that the U.S. intended to interfere with the Slovak election results. Social networks and chain emails played a major role in spreading the video before people went to the polls, with two politicians acting as the most successful social network sharers: former Supreme Court head and ex-minister Stefan Harabin and former parliamentary member Peter Marcek. Both are strong advocates of pro-Kremlin positions and propaganda in the country. Marcek himself visited annexed Crimea in 2018.
Findings from the Jan Kuciak Investigative Center (ICJK) in Slovakia show that the fake video initially became publicly widespread after being shared by Harabin’s Telegram account. However, he forwarded it from a hidden private account with the name “Gabika Ha.” The former justice minister’s account did not share content from this user by accident. He has done so at least 27 times. Moreover, analysis of the account’s content shows that the person using the account, “Gabika Ha,” must have accompanied the former justice minister in person at an event (Salamander Days) in Banska Stiavnica in early September 2023. When asked if the profile could be associated with his wife, Gabriela (abbreviated Gabika) Harabinova, a name that bears similarity to “Gabika Ha,” Harabin hung up the call with ICJK’s reporter.
Despite initially surfacing on Telegram, the fabricated video spread to tens of thousands of users through Facebook. According to AFP fact-checker Robert Barca, who verifies the truthfulness of information on Slovak Facebook, it spread mainly through personal accounts, “like most viral false content on Facebook,” he said. The most successful post, according to Barca, was that of Marcek.
Notably, shortly before the deepfake video involving Todova and Simecka appeared, the Russian foreign intelligence agency SVR published a press release alleging U.S. involvement in influencing the outcome of the Slovak elections. SVR chief Sergei Naryshkin accused the United States of instructing allies to cooperate with business and political circles “to secure the American-demanded voting results,” particularly aiming for “the victory of its proxy, the liberal party Progressive Slovakia (…).” The press release mentioned Simecka, the party’s chairman.
The night before the September poll, the Slovak elections took center stage in the news broadcast on the Russian state television network Rossija 1. The report focused primarily on the potential for U.S. interference in the Slovak election outcome. It was Stefan Harabin who echoed this theory on camera and, according to the footage, he was the only person who gave an interview to the station, which is known for its propaganda.
Unless law enforcement authorities in Slovakia are able to identify the source of the disinformation attack, the ultimate motive behind the deepfake may never be established. However, as AI tools become more readily available in the coming years, the likelihood of similar deepfake smearing attacks and AI-supported disinformation campaigns could increase, posing serious questions not just for Slovakia, but for Europe and the world. As debates continue about how AI will affect journalism and press freedom, this case represents a worrying example of the threats deepfake technology poses, both now and in the future.
…
Karin Kovary Solymos is a contributing writer for the International Press Institute (IPI), where this article was originally published, and works at the Jan Kuciak Investigative Centre (ICJK) in Slovakia. This article was commissioned by IPI as part of the Media Freedom Rapid Response (MFRR), a Europe-wide mechanism which tracks, monitors and responds to violations of press and media freedom in EU Member States, Candidate Countries, and Ukraine. Reprinted with permission, with slight edits to fit Transitions’ style.