Mon 19 Feb 2024

An update on the UK Government’s approach to AI Regulation

On 6 February 2024, the Department for Science, Innovation and Technology published the UK Government's response to the consultation on its March 2023 AI White Paper.

In contrast to the approach adopted at EU level, the UK Government has confirmed that it does not intend to introduce AI-specific legislation at this time. Instead, the UK Government reaffirmed its preference for an “agile, sector-based” approach and the five cross-sectoral principles for UK regulators to follow, as proposed by the White Paper:

  1. Safety, security and robustness;
  2. Appropriate transparency and explainability;
  3. Fairness;
  4. Accountability and governance; and
  5. Contestability and redress.

In its response, the UK Government refers to building on its “pro-innovation framework and pro-safety actions by setting out our early thinking and the questions that we will need to consider for the next stage of our regulatory approach”.

In outlining its latest thinking on AI regulation, the UK Government recognised that targeted binding requirements for the small number of organisations developing highly capable general-purpose AI systems may be necessary at some future date, to ensure accountability for making such technologies sufficiently safe. However, this would be done in the context of allowing regulators to set effective rules for the use of AI within the boundaries of their remits. Whilst some regulators already address AI within their remit, the UK Government considers that the wider legal framework may not effectively mitigate the risks that AI presents, and that existing rules may leave UK regulators struggling to enforce obligations on developers or trainers of AI. At the same time, the UK Government has asked several regulators to publish an update outlining their strategic approach to AI by 30 April 2024.

Increased funding

The UK Government has committed to review potential regulatory gaps on an ongoing basis and, in announcing over £100m to help realise new AI innovations and support regulators’ technical capabilities, committed significant funding to AI, including:

  • £80m on nine new UK research hubs;
  • £19m for responsible AI projects;
  • a £9m partnership with the US on responsible AI as part of its International Science Partnerships Fund;
  • £2m of Arts and Humanities Research Council funding for projects seeking to define responsible AI; and
  • an additional £10m on upskilling and educating UK regulators.

Further signalling the UK Government’s focus on AI, the Centre for Data Ethics and Innovation will be renamed as the Responsible Technology Adoption Unit, to reflect the Unit’s role in supporting responsible adoption of AI.

AI and copyright

The UK Government confirmed that the UK IPO working group on AI and copyright could not agree on a voluntary code of practice. The UK Government will now instead work with AI developers and copyright holders to tackle the large-scale use of copyright-protected content for training AI.

The UK Government’s response to the consultation is available here.

How can we help?

If you would like to discuss any of the issues discussed in this bulletin, please get in touch with our Data Protection team.

Make an Enquiry

From our offices we serve the whole of Scotland, as well as clients around the world with interests in Scotland. Please complete the form below, and a member of our team will be in touch shortly.

Morton Fraser MacRoberts LLP will use the information you provide to contact you about your enquiry. The information is confidential. For more information on our privacy practices, please see our Privacy Notice.