(by Alessio Jacona*)
Is artificial intelligence a danger to democracy? Perhaps not in
itself, but it becomes one when combined with "a lack of digital
literacy in the average user, because generative AI models can be
used to create highly convincing disinformation and manipulated
media, which can then spread rapidly online," says Izidor Mlakar,
a scientist who leads HUMADEX (Human-centric Explorations and
Research in AI, Technology, Medicine and Enhanced Data), a
research group of the Laboratory for Digital Signal Processing at
the Faculty of Electrical Engineering and Computer Science of the
University of Maribor, Slovenia.
According to Mlakar, beyond our unpreparedness, AI is also
dangerous because "social media fueled by it... can accelerate
the spread of disinformation" and because "AI-based bots and
targeted messaging can be used to further influence public
opinion and voter sentiment".
The HUMADEX research group is a multidisciplinary team of
experts in artificial intelligence, conversational intelligence
and human-computer interaction. Together with ANSA, it is part
of SOLARIS, a European research project that aims to define
methods and strategies for managing the risks and threats, but
also the opportunities, that generative artificial intelligence
brings to democracy, political engagement and digital
citizenship.
HUMADEX is primarily composed of AI experts, who contribute to
the development and improvement of the algorithms that power the
interactive and intelligent aspects of the SOLARIS platform,
including natural language understanding, machine learning and
data analysis, all essential for creating responsive and
adaptive user experiences. The team also includes psychologists
and human-computer interaction specialists, who ensure that the
technology is user-friendly, engaging and accessible, and who
oversee the development, execution and analysis of the use
cases.
What exactly is your role in the SOLARIS project?
"Our main activity was to design, test and validate a scale
measuring trust in AI-generated content, namely the Perceived
Deepfake Trustworthiness Questionnaire (PDTQ). We conducted
extensive validation studies in three languages: English,
Slovenian and Italian. We also worked on the detailed design of
experiments in the cases of use. In use case 1 we will focus on
climate change and immigration. We will assess the impact of
AI-generated content on explicit and implicit attitudes towards
these topics, with objective video quality, perceived trust and
political orientation as benchmarks. In use case 2 we will focus
on policy makers and press offices to develop methodologies and
policy recommendations to prevent AI-generated content from
negatively impacting social systems. We will simulate the spread
of deepfakes with severe socio-political impacts (e.g., an
attack on a nuclear power plant or a prime minister stating a
conspiracy theory) and manage the potential offline
consequences. Finally, use case 3 will focus on the potential of
(generative) AI to improve digital citizenship and democratic
engagement. Here the focus is on co-creating AI-generated
content with citizens to raise awareness on key issues (e.g.,
climate change, gender equality).
What are the real and immediate problems that AI technologies,
especially generative AI, are posing to the democratic process?
"Many people struggle to critically evaluate the accuracy and
reliability of online information, especially when it comes from
AI-generated sources. Without digital literacy, people are
vulnerable to making decisions based on false narratives and
manipulative content. Furthermore, the reactions and reactivity
of the state and traditional, trusted media cannot compete with
the quality and potential reach of negative content".
How will the SOLARIS project address these issues and, more
generally, what should we do to adopt this powerful technology
safely and use it for good?
"In use case 1, SOLARIS is developing mechanisms to 'explain' to
citizens, and 'educate' them on, which elements to consider when
trying to judge whether a piece of content is 'real' or not.
Indeed, a key competence of AI literacy is understanding how AI
works, not at a technical level but at a functional one. The
privacy and security risks around AI also need to be addressed
through digital literacy. In use case 2, we focus on how experts
should react when a threat to democracy occurs, and we will try
to provide policy makers and the media with the means to better
review mitigation protocols. This is a significant step towards
understanding and mitigating the democratic risks posed by
advanced AI. In this regard, public-private collaboration on
industry standards and self-regulation for the responsible
development and implementation of generative AI, such as the AI
Act, can help to some extent; but if the rules are not adopted
by everyone, the development of the technology, as well as of
the services based on it, will tend to move to less regulated
contexts. Therefore, AI literacy and education are key to
empowering citizens to critically evaluate AI-generated content,
and to helping them understand how these technologies can be
used both inappropriately and for good".
*Journalist, innovation expert and curator of the ANSA
Artificial Intelligence Observatory
ALL RIGHTS RESERVED © Copyright ANSA