The European elections were not the
deepfake Armageddon that some media had predicted.
Yet, in several parts of Europe, deepfakes of politicians have
already become a critical part of the public debate.
It is a phenomenon that cannot be contained by agreements with
political parties and online platforms unless governments are
able to enforce them, and one that above all risks undermining
voters' trust in the political system.
This is the picture painted by a University of Texas at Austin
report on the use of generative artificial intelligence (GenAI)
in European politics.
The report, which examines both the European elections and the
elections in France and the United Kingdom, focuses in
particular on deepfakes, although, the experts write, they are
not yet a dominant factor in electoral processes.
Several prominent politicians, including German Chancellor Olaf
Scholz, British Prime Minister Keir Starmer, and Rassemblement
National leader Marine Le Pen, have been targeted by deepfakes,
some of them with satirical undertones.
"When the discourse becomes muddled," the report says, "voters
begin to question the integrity of not just particular deepfakes
but all political campaign communications, a real cause for
concern."
Non-binding voluntary agreements signed by political parties and
tech platforms "have not prevented the sharing of political
deepfakes," the experts say.
The report cites the Atlantic Council's Digital Forensics Lab
(DFRLab), which found that the far-right EP caucus Identity and
Democracy (ID) used GenAI in its election campaigns, violating
ID's code of conduct for the 2024 European Parliament elections.
Similarly, AlgorithmWatch reported that companies such as OpenAI
have failed to comply with rules meant to prevent them from
creating realistic images of election candidates.
"These examples illustrate how crucial transparency and clarity
are," the experts write.
"When neither political parties nor platforms label content
clearly enough, voters are left with a losing battle because the
specter of doubt persists."
Voluntary commitments from platforms to provenance,
transparency, and disclosure are important, but, the report
notes, "the ability of governments to enforce" those agreements,
by pressuring platforms to follow through on promises of
watermarking, detection, and mitigation, is even more important.