UK unveils ‘light touch’ AI regulation plans
"Much of the UK AI industry will choose to comply with the EU AI Act"
The UK government has set out a light touch, “pro-innovation” approach to AI regulation, with no legislation planned as yet, and oversight devolved to sector-specific regulators – but with the EU also working on AI legislation, it may not even matter.
Britain’s AI regulation plans diverge significantly from the EU’s approach, outlined in its AI Act published last year, not least in its definition of AI. Where the EU sought to offer a detailed definition of artificial intelligence, the UK’s policy paper instead suggests that systems which are “adaptive” and “autonomous” would fall under the regulatory regime.
Commentary on the AI regulation policy paper by law firm Osborne Clarke noted: “The advantage of this ‘no definition’ approach is its malleability – it can be as flexible as the technology requires and, assuming the regulators will be able to refine and adapt their definitions over time, can accommodate shifts and breakthroughs in the underlying techniques that might fall outside a more specific definition.
“This is important because, currently, references to artificial intelligence typically mean machine learning or deep learning – but these are not the only methodologies by which a machine can be made to generate a human-like response, and radical shifts in approach cannot be ruled out.”
The law firm also noted this approach could create difficulties for businesses which need to understand risk and compliance issues around AI, and added that a “patchwork of inconsistent definitions” would be hard to manage.
In line with the government’s desire to keep AI regulation flexible, the policy paper made clear there are no plans for legislation – yet.
“We propose initially putting the cross-sectoral [regulatory] principles on a non-statutory footing. This is so that we can monitor, evaluate and if necessary update our approach and so that it remains agile enough to respond to the rapid pace of change in the way that AI impacts upon society,” said the paper – adding this would be kept under review.
The paper also hands responsibility for identifying and prioritising AI risks to sector-specific regulators, with central government issuing “supplementary or supporting guidance”, such as interpretation of the terms used in the regulatory principles. Nor is the government expecting regulators to impose “mandatory obligations”.
“Indeed we will encourage regulators to consider lighter touch options in the first instance – for example, through a voluntary or guidance-based approach for uses of AI that fall within their remit. This approach will also complement and support regulators’ formal legal and enforcement obligations using the powers available to [them] in order to enforce requirements set out in statute,” said the paper.
Osborne Clarke’s analysis said the UK’s focus on flexibility and proportionality was “to be welcomed”, but again noted the risk of having differing definitions and regulatory standards across sectors. It also suggested the proposed light-touch approach might not be sufficient when faced with a “move fast and break things” approach to AI development.
But the lawyers’ most damning observation about the UK’s AI regulation plans was that they might not matter all that much.
“Developers and suppliers in the UK may welcome a light touch approach at home, but many will also want to access the much larger EU market. There is therefore a clear practical risk that the EU's proposed AI Act becomes the ‘gold standard’ for AI compliance (as the GDPR has for data privacy),” said the Osborne Clarke commentary.
“However proportionate and context-specific AI regulation may be in the UK, the reality is that much of the UK AI industry will, in practice, choose to comply with the EU AI Act in any case.”
If you have views on the UK’s AI regulation proposals, the government’s Office for Artificial Intelligence at DCMS is accepting submissions on the policy paper until 26 September.