To what extent can the dominance of AI affect or even replace the importance of real political figures?
AI has the potential to fundamentally change the political landscape – and with it the role and influence of real political figures. AI has long played a role in political decision-making processes – whether by analyzing public sentiment, predicting voting behavior or steering campaign strategies – and in the long term it could challenge the classic image of the politician.
An exciting example of this is the use of AI-supported chatbots in election campaigns. Politicians in various countries are already using AI tools to send personalized messages to millions of voters and address their concerns in near real time. This increases the efficiency and reach of a campaign, but runs the risk of replacing real, tangible interactions with citizens with automated algorithms. This development can weaken trust in politics if the personal character of a political leader is reduced to a mere algorithm.
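To make the mechanism tangible, here is a minimal, purely illustrative sketch in Python of how such personalized outreach might work in principle. The voter profiles, topics and message templates are hypothetical stand-ins for the far more sophisticated AI systems campaigns actually use.

```python
from dataclasses import dataclass

@dataclass
class VoterProfile:
    name: str
    top_concern: str   # e.g. inferred from surveys or platform data (hypothetical)
    channel: str       # e.g. "email", "messenger"

# Hypothetical message templates keyed by a voter's main concern.
TEMPLATES = {
    "housing": "Hello {name}, affordable housing is at the heart of our program ...",
    "climate": "Hello {name}, here is what we plan to do about climate protection ...",
}

def personalized_message(profile: VoterProfile) -> str:
    """Pick a template matching the voter's main concern and fill in the name."""
    template = TEMPLATES.get(
        profile.top_concern,
        "Hello {name}, thank you for your interest in our campaign.",
    )
    return template.format(name=profile.name)

voters = [
    VoterProfile("Anna", "housing", "email"),
    VoterProfile("Ben", "climate", "messenger"),
]

for v in voters:
    # In a real campaign an AI model would generate and test these texts;
    # here the "personalization" is just a template lookup.
    print(f"[{v.channel}] {personalized_message(v)}")
```

Even this toy version shows the trade-off described above: the message feels individually addressed, yet no human on the campaign side ever interacts with the voter.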
In addition, AI-supported opinion and decision analysis directly influences political figures: they could be pushed to make decisions driven primarily by data rather than by convictions. A political reality in which an AI supplies survey data and social trends carries the risk that political personalities will be perceived as interchangeable if their decisions appear to be exclusively data-driven.
Another, less well-known example is the use of deepfake technologies in political contexts. Deepfakes can create deceptively realistic videos and audio recordings that can be used to manipulate public opinion. An artificially generated “political appearance” could contribute to disinformation or discredit real politicians.
This poses a double risk: on the one hand, political figures could be devalued through false representations; on the other hand, the authenticity of every public appearance could be called into question – which in turn leads to a loss of trust in the entire political sphere.
Nevertheless, it should be emphasized that AI also offers an opportunity for political personalities: to spread their messages in a more targeted way, to develop data-based strategies for real social challenges and to communicate more efficiently with their voters.
Prof. Dr. Anabel Ternès is an entrepreneur, futurologist, author, radio and TV presenter. She is known for her work in digital transformation, innovation and leadership. Ternès is also President of the Club of Budapest Germany, board member of the Friends of Social Business and a member of the Club of Rome.
How can citizens be manipulated by digital technologies in politicians' communications, for example through surveys, keyword control or topic placement?
This is a highly topical challenge. Politicians and political strategists use targeted digital tools to influence opinions, often without citizens being consciously aware of it. Here are some examples:
- Targeted surveys and data analysis: Politicians resort to so-called microtargeting techniques to address groups of voters with tailored messages. Comprehensive data about citizens' preferences, interests and behavior is collected for this purpose. A well-known example is the use of data analysis by Cambridge Analytica, which ran targeted advertisements during the Brexit campaign and the 2016 US election campaign.
- Keyword control and social media: Posts are often algorithmically amplified via digital platforms, and keywords play a crucial role here. When politicians use certain buzzwords that resonate virally – such as “security,” “justice” or “crisis” – the algorithms push that content so that it is seen more often. The result is an apparent topical relevance that focuses the audience's attention on it (a simplified ranking sketch illustrating this effect follows after this list).
- Topic placement and agenda setting: By strategically placing topics in the digital space, politicians can steer debates and discourses. This happens through social media campaigns, online advertising or even bots that push certain content and displace other content. An example of this is so-called astroturfing, in which seemingly organic social media posts are in fact coordinated and artificially created to simulate widespread approval.
- Filter bubbles and echo chambers: Many citizens consume digital content that is curated by algorithms and reinforces their pre-existing opinions. Politicians can exploit this deliberately by producing content tailored to particular filter bubbles, which then spreads within an echo chamber. The effect: citizens increasingly receive a one-sided presentation of topics and opinions that ignores other perspectives.
- Emotional manipulation through images and videos: Digital technologies make it possible to distribute emotionally charged content quickly. A striking example is deepfake videos, which show manipulated but deceptively realistic recordings of politicians. Such technologies can be used to fuel distrust or spread certain narratives that become deeply embedded in citizens' consciousness.
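The keyword effect mentioned above can be illustrated with a deliberately simplified Python sketch. Real platform ranking systems are proprietary and far more complex; the scoring formula and the buzzword list here are assumptions made purely for illustration.

```python
# Illustrative only: a toy feed-ranking heuristic that boosts posts containing
# emotionally charged buzzwords. Real recommendation systems use far richer
# engagement and personalization signals.
BUZZWORDS = {"security", "justice", "crisis"}

def score(post: dict) -> float:
    """Base engagement score, multiplied up for each buzzword in the text."""
    engagement = post["likes"] + 2 * post["shares"]
    boost = sum(1 for word in post["text"].lower().split()
                if word.strip(".,!?") in BUZZWORDS)
    return engagement * (1 + 0.5 * boost)

posts = [
    {"text": "Our plan for local public transport", "likes": 120, "shares": 10},
    {"text": "The security crisis demands justice now!", "likes": 80, "shares": 15},
]

for post in sorted(posts, key=score, reverse=True):
    print(round(score(post), 1), "-", post["text"])
```

In this toy example the post laden with buzzwords outranks the more popular but soberly worded one – exactly the kind of “apparent topic relevance” described above.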
What measures can be taken to increase transparency in digital political communication and prevent manipulation?
- Mandatory disclosure of advertising budgets and goals: Platforms such as Facebook and X (formerly Twitter) could be required to disclose detailed information about the origins and sources of funding for political ads. One example is the EU initiative for greater transparency in political advertising, which requires that citizens be able to see who is funding a campaign and which target groups are being addressed.
- Stricter regulations for microtargeting: Politicians and parties should only be allowed to spread personalized messages within clearly defined and verifiable frameworks. To prevent manipulation, regulations could be introduced that restrict highly data-driven targeting methods. This prevents certain population groups from being targeted with manipulative content.
- Platform transparency: Digital platforms must be required to disclose algorithms that curate and amplify content. A transparent algorithm governance system would make it possible to better understand the mechanisms behind information dissemination and to detect targeted manipulations at an early stage.
- Introduction of a digital “seal of origin” for content: Blockchain-based technology or digital watermarking could ensure that every political message and campaign can be traced back to its source; a simplified signing sketch follows after this list.
- Ethics committees to review campaigns: Parties could commit to having ethical standards reviewed by independent commissions before publishing digital content. These commissions can act as a monitoring body and ensure that political content is transparent and fact-based without resorting to manipulative strategies.
- Education and media literacy promotion: A long-term measure is to promote media and data literacy among the population. Examples of this are projects such as the “New Responsibility Foundation” or the media education plan, which help citizens to better recognize disinformation and manipulative communication strategies.
- Ethical standards for artificial intelligence: AI-powered communication tools used in election campaigns must be subject to ethical guidelines. For example, automated bots used to spread political content may need to be labeled as such.
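To show what a “seal of origin” could mean technically, here is a minimal Python sketch. It uses an HMAC with a campaign key only because that ships with the standard library; a production system would more likely rely on public-key signatures, blockchain anchoring or watermarking so that anyone can verify a message without knowing a secret. The key, sponsor name and message are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the campaign; illustration only.
CAMPAIGN_KEY = b"demo-campaign-key"

def seal(message: str, sponsor: str) -> dict:
    """Attach a tamper-evident seal binding the message to its sponsor."""
    payload = {"message": message, "sponsor": sponsor}
    tag = hmac.new(CAMPAIGN_KEY,
                   json.dumps(payload, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**payload, "seal": tag}

def verify(sealed: dict) -> bool:
    """Recompute the seal and check that message and sponsor are unchanged."""
    payload = {"message": sealed["message"], "sponsor": sealed["sponsor"]}
    expected = hmac.new(CAMPAIGN_KEY,
                        json.dumps(payload, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["seal"])

ad = seal("Vote for transparency on Sunday!", sponsor="Example Party")
print(verify(ad))                        # True: origin and content intact

ad["message"] = "Vote for someone else!"  # any tampering breaks the seal
print(verify(ad))                         # False
```

The point of the sketch is the property, not the mechanism: once political content carries a verifiable seal, altered or falsely attributed messages become detectable.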
New elections are coming up in Germany – how is AI changing the digital PR of politicians and what role does the politician as a person play in this?
The digital PR of politicians in Germany is increasingly characterized by the use of technologies such as AI. In the coming elections, we can expect politicians to make greater use of digital tools in order to communicate with their constituents in a more targeted, personalized and effective manner. Here are some examples:
- AI-powered microtargeting: Politicians are increasingly using AI to tailor their messages to specific population groups. Large amounts of data from social networks, surveys or other sources can be analyzed to determine which topics and which kind of approach work best for different groups. The downside: the same mechanisms open the door to the manipulative microtargeting described above.
- Chatbots and voice assistants: Politicians are increasingly using digital assistants and chatbots to interact directly with citizens. They can provide information about policy programs, answer questions and obtain feedback. An example: During past election campaigns, an AI-supported bot called “Election Bot” answered citizens’ questions and thus enabled interactive dialogues.
- AI-powered opinion analysis: AI can detect and evaluate trends in public opinion in real time. Politicians and parties use such tools to quickly adapt their campaigns and respond to current issues. This makes it possible to directly address discussions that arise on social media platforms and send targeted messages. This kind of responsiveness shows how profoundly AI influences political discourse today.
- Deepfakes and ethical challenges: The dark side of the new technologies also includes the risk of manipulation through deepfake technology. Although ethically highly questionable, some campaigns around the world have experimented with deceptively realistic videos to influence opinions. German politicians are therefore under pressure to present themselves as credible and authentic, as the consequences of digital deception could be all the more serious.
- Using Augmented Reality (AR) for political storytelling: AR is increasingly being used to make political content more tangible. Citizens can use AR glasses or smartphone apps to experience interactive stories in which political projects are presented or successes are visualized.
- And the role of the politician as a person in all this? Despite all technological developments, the authenticity and credibility of the politician remain crucial.