AI in Democracy: Artificial Intelligence, Threat or Strength for Democracy?
Is artificial intelligence threatening or strengthening democracy? This is the question we put to our panellists at a recent event organised by the Sorbonne University Alumni Club and AFRAN. Sorbonne University is the lead partner of the PostGenAI@Paris cluster, a project awarded 35 million euros by the French government last May.
What Is Artificial Intelligence?
Gérard Biau, Director of the Sorbonne Center for Artificial Intelligence (SCAI), introduced the topic by defining artificial intelligence. In the 1950s, the initial goal was to develop technologies that could simulate or imitate human intelligence. Early AI systems worked in a ‘mechanical’ way, programmed to execute a series of small actions that imitate intelligent mechanisms. AI is a vast field of study and incorporates different types of technologies. When we talk about AI today, we usually mean machine learning and deep learning: methods that infer algorithms by training models on very large data sets and tuning a very high number of parameters.
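To make the ‘tuning parameters’ idea concrete, here is a minimal sketch of our own (not from the talk): a single parameter is adjusted by gradient descent to fit toy data, the same principle that deep learning scales up to billions of parameters and very large data sets.

```python
# Minimal sketch: "learning" as parameter tuning.
# A one-parameter model y = w * x is fitted to toy data by
# gradient descent on the mean squared error.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0    # the single tunable parameter
lr = 0.01  # learning rate

for step in range(1000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge the parameter to reduce the error

print(f"learned w = {w:.2f}")  # converges close to 2.0
```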
Gérard Biau then offered an overview of the challenges AI poses in democracies: mixing up correlation and causation, or ignoring the biases that exist in all data sets. These pitfalls can lead to the manipulation of information and to discriminatory decisions. He stressed, however, that the danger lies not in the technologies themselves but in the way they are used.
Other challenges, less often mentioned but just as important, are the economic and environmental costs of AI technologies. Generative AI models are extremely energy-intensive and require colossal computing power and many natural resources to function: water and air conditioning to cool data centres, rare earth materials for the hardware, etc.
AI Systems as Threats to Democratic Processes
Thibault Grison, a PhD student at SCAI and CELSA specialising in social networks, expanded on the threats AI systems pose to democracy. His research focuses on the use of AI systems to moderate online content, notably on social network platforms. More broadly, he looks at the associated issues of freedom of expression, censorship and the online treatment of minorities.
He introduced the notions of algorithmic bias – defined by French computational scientist and entrepreneur Aurélie Jean (a Sorbonne University alumna) as ‘transforming a general observation into a systematic algorithmic condition’ – and of algorithmic discrimination, which is a direct consequence of algorithmic bias. He then gave a few examples. Content published by a person from the LGBTQ+ community, for instance, is more likely to be flagged as hateful or sexual, because the systems do not take into account the fact that the community has reclaimed some of its slurs. He also mentioned a 2020 example of algorithmic bias in Twitter’s image-cropping tool used to create thumbnails, which favoured white male faces. Thibault Grison stressed, however, that every time biases are brought to public attention, they are corrected, as in this latter instance, or the system is decommissioned, as was ‘Tay’, the Microsoft chatbot that began posting racist tweets on Twitter in 2016.
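To illustrate the mechanism with a toy example of our own (not any platform’s actual system): a context-blind blocklist rule turns the general observation ‘this word appears in hateful posts’ into a systematic condition, so reclaimed in-group usage gets flagged too. The listed word below is a placeholder.

```python
# Toy illustration of algorithmic discrimination in moderation
# (a made-up rule, not any platform's real system).
BLOCKLIST = {"slur_x"}  # placeholder standing in for a reclaimed term

def is_flagged(post: str) -> bool:
    # Context-blind rule: any listed word triggers a flag, whether the
    # post is hateful or an in-group reclamation of the term.
    return any(word in BLOCKLIST for word in post.lower().split())

print(is_flagged("they called us slur_x"))       # hostile usage -> True
print(is_flagged("proud to be slur_x and loud")) # reclaimed usage -> also True
```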
Thibault Grison then discussed other AI-fuelled online practices that could threaten democracy:
Astroturfing, the use of bots to simulate trends on social networks in order to fake public consensus and manipulate opinion. This tactic was used, for example, by far-right parties on X (formerly Twitter) during the recent French legislative elections.
Deepfakes, or AI-generated content that can pass as authentic, although these have so far been mostly humorous.
According to Thibault Grison, fake news ultimately represents a very small portion of all online content and does not spread widely: it is mostly people from a very specific social demographic who actively seek it out and share it. To fight this, digital literacy education plays a central role.
Developing an AI System to Strengthen Democracy
But AI systems can also strengthen democratic processes. François Yvon, a researcher at the Sorbonne Institute of Intelligent Systems and Robotics (ISIR), focused his talk on his latest work for the project ‘Commun démocratique’ (democratic commons), in which Sorbonne University partnered with Sciences Po, Make.org and the CNRS. The goal of the programme is to research, test and make available open-source generative AI solutions for democracy.
François Yvon’s starting point is that generative AI models such as large language models (LLMs) undermine democratic information systems, which are based on trust. This is not a new threat, as disinformation has been around for a long time, but the phenomenon is exacerbated by the broadcasting power of social networks and the recent democratisation of LLMs.
Rather than fighting disinformation and misinformation, ‘Commun démocratique’ looks at how information-processing methods can help create augmented democratic debates. The aim of the project is to study:
how AI systems can serve as moderation tools for large-scale online debates, and
methods to empower debate participants.
To do this, the academics are partnering with Make.org, a French civic tech start-up that runs an online platform to engage citizens in participatory democracy. Make.org has developed a unique method of mass consultation, capable of reaching several million people. The challenge is to develop a trustworthy LLM able to summarise or translate the large number of opinions exchanged on the platform in an unbiased way, to answer questions about the debate truthfully, etc. François Yvon explained that the project is pluridisciplinary: the first step is for social scientists to determine the normal, expected behaviour of a moderator in a democracy and to list the principles that should be followed (representing all opinions, avoiding gender discrimination, etc.). The second step is for computer scientists to measure the extent to which the LLM respects these principles, as sketched below.
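As a hypothetical sketch of that second step (the function names, the stand-in ‘model’ and the scoring rule are ours, not the project’s actual code), one could score a summariser against a single principle, such as covering all opinion clusters:

```python
# Hypothetical sketch: scoring how well a moderation LLM respects one
# principle that social scientists have listed. The model call and the
# scoring rule are stand-ins, not the project's actual code.

from typing import Callable

def covers_all_opinions(summary: str, clusters: list[str]) -> float:
    # Toy principle check: fraction of opinion clusters the summary
    # mentions, as a score in [0, 1].
    mentioned = sum(1 for c in clusters if c.lower() in summary.lower())
    return mentioned / len(clusters)

def evaluate(summarise: Callable[[list[str]], str],
             opinions: list[str], clusters: list[str]) -> float:
    """Run the (hypothetical) summariser and score one principle."""
    summary = summarise(opinions)
    return covers_all_opinions(summary, clusters)

# Usage with a trivial stand-in "model":
opinions = ["Lower taxes on fuel", "Invest in public transport"]
clusters = ["taxes", "public transport"]
fake_model = lambda ops: "Citizens discussed taxes and public transport."
print(f"coverage score: {evaluate(fake_model, opinions, clusters):.2f}")
```

A real evaluation would replace the keyword check with many such principle checks (bias probes, coverage metrics, factuality tests) run over large batches of model outputs.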
During exchanges with the public, several issues were raised:
the question of the neutrality of AI systems;
the need for a plurality of generative AI tools, as the best known (ChatGPT, Gemini, Llama) are US-centric (the BLOOM model, trained in France, is one answer to this concern);
the issue of technological sovereignty and its cost;
the need for open-source models (the French start-up Mistral AI has developed several open-weights models).
Reporting by Sarah Vallée, AFRAN AI Community Lead.