Agentic AI—a type of AI system that can independently plan and execute multi-step processes—has become much more prominent in recent years due to the rise in generative AI. Generative AI models, notably large language models, which allow for the interpretation of text and creation of responses, enable AI agents, such as chatbots, to mimic human attributes and dialogue much more convincingly than was possible even a few years ago.
These new capabilities have given rise to a multitude of novel applications, notably in the field of political deliberation, where representatives and non-policymakers alike can now consider three types of roles for AI agents:
- AI to support deliberations between human beings;
- AI as a participant in hybrid deliberations (where human beings are also present); and
- AI as a replacement for human deliberations (where human beings are absent).
Today, new tools and initiatives enabling these roles are being deployed, including the free online platform pol.is and the United Nations Department of Political and Peacebuilding Affairs (UN-DPPA) digital dialogues for peace, both of which assign a moderation role to AI agents.
The Deliberation Evolution: From Analog to the Internet and AI
Even before the rise of generative AI, deliberative processes, which allow a multitude of participants to discuss and refine their ideas through dialogue, ideally converging on a common decision, had already been heavily shaped by new technologies over the last several decades.
In 1962, when German political philosopher Jürgen Habermas published The Structural Transformation of the Public Sphere, he criticized the information technologies of the time, television and radio, for having destroyed public dialogue by creating one-way information dissemination tools through which people could only receive ideas, not contribute to them. With the emergence of the Internet, followed by social media, this trend was reversed, allowing people with digital connectivity to share their ideas far more independently.
Thus began an era, for better or for worse, where the Internet provided a dynamic space for debate and dialogue around political issues. New barriers to participation emerged, from digital literacy to connectivity, as did new deliberative challenges, from increased polarization to the development of online echo chambers.
We are now faced with a new question, which concerns the participation of AI models in deliberative processes, both formally and informally, and what that could mean for societal decision-making, especially when it comes to setting global norms and objectives.
Formal and Informal Deliberation in Politics
Two types of political decision-making processes are important to note here.
We first have formal deliberative processes, which typically take place in connection with government policymaking. For non-policymakers, this can take the form of a town hall with government representatives, or of a citizens' assembly, both of which aim to gather information on the needs and ideas of voters to better tailor the decisions of their elected officials. Citizens' assemblies, such as the 2020 French Citizens' Convention for Climate, offer structures in which, according to deliberative theorists, optimal outcomes can emerge. These structured dialogues use tools such as expert briefings, working groups, moderators, and voting. Town halls are usually simpler in architecture, involving a speech from an elected representative or candidate and a question-and-answer period with members of the electorate.
Informal deliberations, however, are discussions that shape political opinion more broadly, without being specifically designed or targeted. They can include discussions around political topics on social media, comment threads on blogs or news articles, or video commentary on current events. Online discussions often spread like networks, with people reposting, commenting, liking, and providing original content about a specific political topic.
Informal deliberations are arguably more effective than formal ones in shaping public opinion, but they often lack clear pathways towards policy action. Political scientist Jane Mansbridge calls this “everyday talk,” explaining that people may be more open to changing their minds through back-and-forth dialogue, where defenses are down and emotions and experiences can be integrated more naturally. At the same time, informal deliberations can create closed loops in which good ideas never reach decision-makers.
Emerging AI Agent Roles and Their Implications
AI agents are beginning to be used intentionally in formal contexts in order to address some of the current challenges of deliberation, allowing deliberation at scale, overcoming linguistic barriers, monitoring harmful behavior in dialogues, and guiding participants towards clear outcomes. However, AI agents are also present, wittingly or not, in informal deliberations. In fact, AI chatbots are nothing new to social media, and have been deployed for both benign and harmful reasons for nearly a decade. The difference today is that AI agents are much more convincing, versatile, and easy to create.
These various uses of AI agents lead to different layers of ethical grey zones, which will increasingly need to be addressed.
AI tools for deliberation risk manipulating the proceedings, for example by misrepresenting perspectives in summaries, moderating comments inappropriately, or omitting action points for follow-up. AI agents as participants risk “polluting” the deliberative process with inappropriate or harmful ideas, leading to negative societal outcomes. And of course, deliberations conducted solely by AI agents, that is, deliberations between “digital twins” with no humans present, pose significant questions about the loss of human agency in shaping the society we live in, as well as the loss of the benefits that dialogue brings in developing and strengthening our social fabric.
The table below outlines some of the key uses and ethical concerns for each type of AI agent.
| Type of Deliberation | Type of AI Agent | Primary Uses | Ethical Concerns |
|---|---|---|---|
| Formal | Moderator | Summary of key points, translation, guidance of participants towards outcomes. | Omission of participants' points, erroneous or biased outcomes. |
| Formal | Participant | Representation of an excluded voice in the deliberation. | Misrepresentation; lack of agency and of deliberation's benefits for the "replaced" group. |
| Formal | "Twin" | Testing of perceptions and ideas, for example a population's potential response to a policy change. | Misrepresentation; lack of agency and of deliberation's benefits for the "replaced" group. |
| Informal | Moderator | Content recommendation, advertising. | Information silos and echo chambers, polarization, manipulation. |
| Informal | Participant | Influence (including disinformation, but also advocacy for certain causes), content aggregation (for example, a bot retweeting on a certain topic). | Manipulation, especially if human participants do not know the chatbot is artificial. |
| Informal | "Twin" | An informal deliberation of digital twins might occur if chatbots "converse" with each other. | Perceptions are mixed: some argue that chatbot-to-chatbot dialogue could benefit deliberations, while others fear that, without constraints, chatbots could take any number of harmful actions. |
Being aware of these ethical challenges can help restore some of this lost human agency, especially when awareness is accompanied by intentional design choices for both formal and informal deliberations.
Deliberative processes are natural means for human beings to exchange their ideas and shape the societies in which they live. The insertion of artificial “thinkers,” so to speak, is not straightforward, but responsible usage may be possible in some cases if risks are understood and addressed. This question will be particularly important to consider in the coming years as policymakers and the general public move towards new rounds of Sustainable Development Goals (SDGs) and use multilateral fora in order to address a multitude of global challenges.
As adoption of agentic AI increases, it is critical for researchers and policymakers to explore the ways in which these autonomous AI systems can influence our ideas and political decisions, and then agree on ethical principles to inform AI governance.
Eleonore Fournier-Tombs is head of anticipatory action and innovation at the United Nations University Centre for Policy Research (UNU-CPR).
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the author and do not necessarily reflect the position of Carnegie Council.