The UK has positioned itself as a world leader in AI technology, but can UK regulation keep pace with the rapidly progressing technology? With the government keen to drive forward AI innovation domestically, the UK risks exposing the world to a rapidly advancing and potentially dangerous technology with little regulation in place to protect against harm.
No AI-specific legislation currently sits on the UK statute book; instead, the regulatory landscape relies on existing regulators such as the CMA, FCA and ICO. The 2023/24 AI White Paper did propose principles over rules, favouring fairness, transparency and safety in the emerging technology, but the White Paper seemingly did not survive the transition of government.
In stark contrast, the European Union has its AI Act, a piece of legislation that is both risk-based and readily enforceable, while the US uses state-level regulation to actively address issues arising from AI. Meanwhile, the UK's 'agile' model seeks to coordinate and work with policymakers without the need for direct intervention from the Department for Science, Innovation and Technology.
Many argue that the government's 'pro-innovation' strategy supports and grows the talent and opportunities presented by AI rather than stifling them, something the government fears could happen in a harsh regulatory environment.
With the government seeking to establish the UK as a hub for AI investment and development post-Brexit, it is clearly fearful that too much regulation would hurt the nation's potential as a leader in AI.
Many in the AI and technology sector align with the government's stance, arguing that flexibility will encourage experimentation and therefore innovation. Regulatory sandboxes in finance, healthcare and business, for example, allow AI systems to be developed and tested rapidly in controlled conditions.
Critics of AI technology and the 'pro-innovation' approach, including academics, some MPs and parts of wider society, argue that without robust regulation, voluntary guidance lacks the power or influence to ensure the technology remains safe.
Many marginalised groups feel vulnerable, concerned that AI may discriminate against certain groups and cause harm if clear rules are not put in place to address so-called algorithmic discrimination.
The absence of further statutory guardrails, such as an AI ombudsman, legal transparency requirements and algorithmic audits, heightens critics' concerns, while the wider public significantly lacks trust in automated decision-making.
Public affairs will play a key role in shaping the regulatory landscape for AI both now and in the future. Parliamentary groups, think tanks, government bodies and stakeholders such as ministers will all have a crucial part in AI regulation.
Lobbying these groups effectively will help ensure that AI regulations work for organisations. The Bletchley Park Summit demonstrated the previous government's commitment to the safe use of AI, but it did not become binding policy. Whether the current government will commit to safe AI policy with sufficient stakeholder consultation remains to be seen.
Forming a regulatory landscape that both permits AI innovation and protects citizens is a legislative tightrope the government must walk carefully. While the government says it seeks to 'turn our country into an AI superpower', it needs to weigh the benefits and risks of any regulation and strike a balance that encourages innovation without putting safety at risk.
Complicated though they are, regulations in some form are a necessity for public safety, so it is very likely that the government will proceed with some form of regulation. AI and tech companies must have their say in shaping those regulations, something that can only be done through effective stakeholder engagement with those responsible for drafting and implementing them.
Public affairs therefore plays a crucial role at a pivotal time for technology and innovation, and AI and other tech firms should use it to ensure that any regulations continue to enable them to carry out their research and technological development effectively and lawfully.