Recently, a video of Tory MP George Freeman announcing his defection to Reform UK was spread far and wide across social media before the Mid Norfolk MP denounced the claims of defection. It was a deepfake video, a product of artificial intelligence. But it spread fast, people believed it, and it undermined public trust.
Yet with artificial intelligence now underpinning so much of daily life, and proving broadly beneficial to the public, the question arises as to whether it could actually be a tool to support democracy rather than a mere hindrance to it.
Disinformation has been harmful to the democratic process for years, with many arguing that democracy cannot truly exist in societies where disinformation is too highly concentrated. AI acts as a catalyst in supercharging the spread and helping generate fake information and render fake photos and videos in the first place. This not only leads to fake information distorting public perceptions, but also leads to a breakdown of trust and legitimacy when the computer-generated lies are eventually unravelled.
AI-generated misinformation could mislead enough voters to tip the balance in a marginal seat – and if this is repeated across the country, entire governments could be the product of misinformation and deepfakes.
Where the lies go unnoticed, even greater damage is dealt: elections distorted, public opinion shifted, and trust in centuries-old institutions – tarnished. AI can generate text, images, and even video. When these tools are abused, they can create incredibly realistic depictions of celebrities, be weaponised in social media comments sections on political posts, or even be used to carry out cyber-attacks to gain access to sensitive data or infiltrate trusted channels on social media.
This is nothing new. In fact, in February last year, when AI was arguably less advanced, an employee at the global engineering firm Arup attended what appeared to be a genuine video call with their Chief Financial Officer and other colleagues, where they were instructed to carry out a transaction of $25 million. It then emerged that all the others on the call were actually deepfakes – showing how even large institutions can be infiltrated by criminals using AI. Government organisations, media firms, and social media services are not immune to these forms of attack – and the consequences of such attacks might not just be financial but could even undermine democracy.
The dangers extend further than just elections, the news, and social media, posing a threat in the civil service and at the heart of government decision-making. AI systems used to vet or synthesise government policy could prove extremely dangerous, especially given studies showing that AI already has political biases. If these biases slip into government decision-making, AI could become a new form of silent, unregulated lobbying as research into influencing AI responses catches up with the existing technology.
Risks of ‘non-human’ decision-making loom large: algorithms driven heavily by data and statistics may be blind to uniquely human sympathy and emotion, leading to erroneous decisions that appear insensitive and prove incredibly unpopular with voters. Healthcare, tax and welfare could all be hit hard by this unsympathetic form of decision-making. With AI systems also scouring the internet for facts and figures, there is a real risk that policies could be driven by false information or lack the context required for effective decisions.
When it all goes wrong – who is accountable? Does AI entering politics help ‘depoliticise’ political decisions? When decisions can’t be traced back to a single person, or even a single team, where does the buck stop? Traditionally, voters could easily identify who was behind a flawed policy, and the Prime Minister could sack the relevant minister or civil servant, or, in more severe cases, the electorate could vote the government out of office. But if a series of algorithms and computer programs enters this equation, it becomes more complicated: civil service tools often carry over from one government to the next, meaning a biased programme or a flawed algorithm could cause damage for consecutive terms, without any real accountability even when elections are lost.
Data has been a useful tool in engaging with voters for years; polls, petitions, and surveys have helped shape policy direction significantly. With AI’s ability to research trends deeper than surface level, potentially scanning the internet for opinions and perspectives, it could provide a new way for policy-makers to identify and address the issues that matter most to voters – without the hassle of canvassing, sending out surveys, and risking being an annoyance to the very people they wish to represent.
Political discourse between senior political actors, such as parliamentarians, is often deemed confusing by voters, particularly those from more deprived backgrounds or with less formal education. Artificial intelligence could help bridge this gap effectively. Terms such as ‘quantitative easing’ are a frequent source of significant confusion for many voters. AI tools can help voters digest these confusing yet important political terms and definitions so that they understand the real-world impact and implications for them – in essence, making politics more human through the use of non-human resources. Helping voters understand policies allows them to make a more informed choice, drives political engagement, and revitalises democracy.
In nations such as the UK, where political apathy has plagued politics for decades, resulting in lower turnouts and a steep decline in participation in other political activities, could AI prove to be what is needed to re-engage voters who feel left behind in a world where politics has become seemingly overcomplicated and difficult to follow?
AI could dramatically streamline government work, reduce bureaucracy, analyse data more quickly and cut costs. Institutions such as the Bank of England, which wields significant influence over the UK’s monetary policy, could utilise AI tools and algorithms to streamline economic modelling and optimise policies to deliver on the ambitions of elected governments in a fiscally responsible way. Speeding up this process could allow institutions such as the Bank of England to react to supply and demand shocks more quickly and make the UK economy more stable in both the short run and the long run.
Using AI systems in these sorts of institutions could free up the time of civil servants and allow them to focus on more human tasks, whether that be looking at the human impact of policies or engaging with the public more through more effective and human-led consultations, such as focus groups and interviews.
Evidence-based policy-making also offers a vast array of advantages and could even improve policy, provided the tools are transparent and well regulated. An AI system could have access to all statutes simultaneously and assist in consolidating acts, fixing outdated laws, and ensuring consistency across legislation – avoiding incidents such as the Dangerous Dogs Act 1991, which refers to the ‘type’ of dog when no other statute mentions dog ‘types’ at all. Compared to AI, humans lack attention to detail and are ultimately inferior at multitasking. AI could be a tool to ensure legislation is fairer, more consistent, and more effective.
Fundamentally, AI offers an incredible opportunity to support legislators and improve governance and the law, but it also poses an existential risk. Striking the correct balance is crucial to ensure AI is used in the most effective, yet safe, way possible. The debate surrounding AI in politics is not about whether it will shape politics, but about how we manage it.
AI is simply the newest sector in need of regulation. Lobbying is regulated, donations are regulated, and now it’s time to regulate the newest influence on politics and government. It’s clear that the safe use of AI in politics depends on effective collaboration between AI developers and government regulators.
AI isn’t an enemy, but it also isn’t a friend. It’s a tool that can either enhance or ruin democracy, and the responsibility for determining that path lies with government. Provided AI is transparent and ethical, it can unlock great potential for democracy and governance, but political actors must act, and fast, before AI starts leading government – the silent influence with the potential to conquer Westminster. Governments are already behind the AI curve, and time is of the essence. Failure to act now could have significant, long-lasting consequences.
Daniel Molloy-Brookes