The UK held the world’s first AI Safety Summit last month, where 28 countries signed the Bletchley Declaration and agreed to work together to ensure AI is used in a ‘human-centric, trustworthy and responsible’ way.
The countries recognized the importance of pro-innovation and proportionate governance and a ‘regulatory approach that maximizes the benefits and takes into account the risks associated with AI’, but emphasized that those developing AI capabilities have a particularly strong responsibility for ensuring the safety of those AI systems.
Further summits are to be held next year; however, it remains to be seen whether any international agreement on AI regulation can be reached or whether the race to regulation between the various countries has begun.
The EU has been a regulatory front runner, having started with the Artificial Intelligence Act (the AI Act) in spring 2021. Mere days before the Summit, President Biden signed an Executive Order setting out new standards for AI safety and security.
In contrast, the UK government aims to encourage responsible AI innovation, seeking to regulate AI through existing legal frameworks rather than specific AI legislation, and the King’s Speech last month was notable for the absence of AI-specific legislative proposals.
The UK approach
The government’s March 2023 white paper, ‘A pro-innovation approach to AI regulation’, highlighted the intention to take a unique approach to AI regulation by leveraging and building on existing regimes, such as financial services regulation and consumer rights laws.
Some AI risks arise across, or in the gaps between, existing regulatory remits, creating conflicting requirements from regulators and unnecessary burdens on businesses, slowing down AI adoption. Accordingly, the government intends to intervene in a proportionate way to address such concerns.
The Law Society and the Prudential Regulation Authority have found that further regulatory guidance is still needed to address this uncertainty.
The white paper set out five principles to be followed but did not propose to put these principles into statute initially, so as not to stifle innovation.
A Private Members’ Bill introduced last month seeks to make provision for the regulation of AI; however, it remains to be seen whether the Bill will be endorsed by the government.
The EU approach
The AI Act is a comprehensive legal framework seeking to regulate the development, marketing and use of AI in the EU. Expected to come into force at the end of 2025 or early 2026, its effect is likely to be far reaching. AI systems that are developed, operated, and even used outside of the EU may fall under the AI Act if the output of those systems is used in the EU.
The AI Act provides for a self-assessment mechanism by which those who develop the technology in question will be responsible for categorizing perceived risks.
Perceived risks are divided into four categories:
- unacceptable risk, such as systems using subliminal messaging;
- high risk, which includes applications involving transport, education, employment, and welfare;
- limited risk, such as a chatbot; and
- minimal risk, such as a spam filter.
Unacceptable risks will be prohibited (something the Law Society has suggested the UK should emulate), high risks will be heavily regulated, and the remaining two categories will be subject to various transparency obligations.
The US approach
Despite attending the Summit, Vice-President Kamala Harris stated that she believes the US should be at the forefront of regulating AI. She stated that
‘when it comes to AI, America is the Global Leader. It is America that can catalyze global action and build global consensus in a way that no other country can’.
She also suggested the US can both regulate AI and advance innovation at the same time, which contrasts with the light-touch regulatory approach proposed by the UK.
Unlike the EU, the US had made limited progress in this area before now. The US government sought to bring in the AI Bill of Rights and the American Data Privacy and Protection Act, placing AI risk assessments and obligations on AI companies; however, neither progressed into law.
Accordingly, several states have sought to enact their own legislation, such as the State of California which enacted the Bolstering Online Transparency Act and the California Consumer Privacy Act.
On 30 October 2023, President Biden signed an Executive Order establishing new federal AI standards for safety and security, setting out the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.
Its requirements include that AI developers share their safety test results with the US government. The Executive Order is currently the most comprehensive AI directive, beating the EU to the top spot (for now), and provides a display of US dominance in the technology field.
It remains to be seen who will win the race to effectively regulate AI and whether such regulation is able to adapt to fast-paced technological innovations. The success of any regulation may well hinge on cross-jurisdictional co-operation to provide greater clarity to those developing and using AI systems.
With the extraterritorial reach of the EU AI Act and attempts by the US to lead a global consensus, a fragmented regulatory landscape may result.
From a UK perspective, we must ensure that any regulatory approach taken is consistent across all sectors and any gaps are adequately catered for by government intervention.
Any regulation required in the future must be implemented with a pro-innovation approach so as not to disadvantage the UK in the AI marketplace.
For questions related to the topic raised in this article, please contact Felicity Potter.
This article first featured in the New Law Journal on 8 December 2023: Global AI regulation — a race to set the rules?