
Legislation that mandates safety testing of artificial intelligence technologies is at risk of being pushed aside by the U.K. government, the head of the tech select committee says. Labour’s Chi Onwurah warned that the delay may reflect political efforts to align more closely with the United States, particularly the Trump camp’s outspoken opposition to AI regulation.
One key focus of the AI Safety Bill is to legally mandate that companies uphold their voluntary agreements to submit frontier AI models for government safety evaluations before deployment. Nine companies, including OpenAI, Google DeepMind, and Anthropic, made such agreements with a number of international governments in November 2023.
In November 2024, technology secretary Peter Kyle said he would implement the legislation in the next year. At the time, Chi Onwurah, the Labour chair of the Science, Innovation and Technology Select Committee, which scrutinises tech policy, was under the impression the bill was “coming soon,” she told The Guardian. Now she is worried about whether that is really the case.
Political influences and transatlantic ties
“The committee has raised with Patrick Vallance [the science minister] the lack of an AI safety bill, and whether that is in response to the significant criticism of Europe’s approach to AI, which J.D. Vance and Elon Musk have made,” she added.
In a speech at February’s Paris AI Action Summit, U.S. Vice President Vance disparaged Europe’s use of “excessive regulation” and said that the international approach should “foster the creation of AI technology rather than strangle it.”
Europe has solidified a pro-regulation reputation through the AI Act and numerous ongoing regulatory battles with major tech companies — resulting in hefty fines. Trump is clearly unhappy about this, referring to the fines as “a form of taxation” at the World Economic Forum in January.
In an effort to please the Trump administration, U.K. ministers do not intend to publish the AI Bill before the summer, according to anonymous Labour sources who spoke with The Guardian last month. However, this is not the only recent evidence that the country is attempting to maintain its alliance with the United States.
Safety vs. Innovation: The UK’s strategic shift
Last month, the U.K.’s AI oversight body was renamed the AI Security Institute, a move some see as a shift away from risk aversion and towards national interest framing. Prime Minister Keir Starmer announced the AI Opportunities Action Plan in January, which prioritised innovation while making little mention of AI safety. He also skipped the Paris AI Summit, where the U.K. declined to sign a global pledge for “inclusive and sustainable” AI, as did the U.S.
The shift toward innovation-first policymaking comes with economic implications. A Microsoft report found that adding five years to the time it takes to roll out AI in the U.K. could cost the country over £150 billion. Stricter regulations could also deter major tech firms like Google and Meta from scaling in the U.K., prompting concern from investors.
A spokesperson for the Department for Science, Innovation and Technology told The Guardian: “The government is clear in its ambition to bring forward AI legislation which allows us to safely realise the enormous benefits and opportunities of the technology for years to come.”
“We are continuing to refine our proposals which will incentivise innovation and investment to cement our position as one of the world’s three leading AI powers, and will launch a public consultation in due course.”