In the last few years, we have seen many publications from financial services regulators, the Bank of England (BoE) and the UK Government alike, focused on the rapidly growing use of Artificial Intelligence (AI) within the UK, and how it ought to be governed and monitored moving forward.
Upcoming regulatory changes…
In February 2022, the AI Public-Private Forum (a forum created between the BoE and the Financial Conduct Authority (FCA) to encourage discussion on AI innovation between the public and private sectors) released its Final Report. The Report looked at how to tackle the various barriers to the adoption of AI, and how to mitigate the challenges and risks in Data, Models and Governance - the three levels within AI systems where risk can arise.
In October 2022, the BoE published its own discussion paper on AI and Machine Learning, which focused on the regulation of AI in UK financial services and the debate surrounding whether AI should be managed through clarifications of the existing regulatory framework, or whether a new approach is needed. The recurring theme throughout was the petition for a regulatory environment that is conducive and proportionate in ensuring and encouraging the safe adoption of AI, so as to avoid hindering beneficial innovation.
A similar proposition was put forth by the UK Government in its White Paper: AI regulation: a pro-innovation approach, which was published in March of this year. AI is evolving at an extremely fast pace and, accordingly, the UK Government is seeking to regulate the use of AI in an agile and iterative way, in recognition of the speed of its development. The aim is to avoid anything too prescriptive and rigid which could thwart AI innovation and the UK's ability to respond quickly and proportionately to future technological developments.
The UK Government proposes using existing UK laws and regulations, taking a context-specific approach, and drawing upon the following five principles:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
On the back of the UK Government’s White Paper, the Competition and Markets Authority (CMA) has now been tasked with giving thought to how safe AI deployment can be supported by the five key principles above. The CMA is opening an initial review of competition and consumer protection considerations in the development and use of AI foundation models, to ensure that AI innovation continues to grow in a way that is beneficial to consumers, businesses and the UK economy. We expect to see a published report from the CMA on its review in September 2023.
The anticipated UK approach differs entirely from that in the EU, where specific AI-related laws have been proposed through the Artificial Intelligence Act. The Act assigns AI applications to risk categories, ranging from "Unacceptable Risk" applications, which would be banned, to "Minimal Risk" applications, which would be left largely unregulated save for voluntary codes of conduct.
The AI Act also proposes steep non-compliance fines, some of which exceed GDPR penalties: for companies, fines can reach up to €30 million or 6% of global annual turnover, whichever is higher.
What impact might these regulatory changes have in the banking world?
AI has already evolved hugely and will continue to do so, meaning the way in which banks operate and conduct business will continue to be impacted.
AI is expected to bring about more tailored contributions to customers and clients, but with less human interaction. As customers continue to carry out an increasing number of their daily transactions via digital channels, they will become more accustomed to the ease, speed and personalised services offered, and accordingly their expectations will keep rising. Banks are therefore seeking to meet and exceed those rising expectations and to stay ahead of competitive threats in the AI world.
However, the growing interest and development of AI use continues to give rise to important ethical and regulatory questions – particularly when it affects consumers. As a result, it is crucial that firms are transparent in answering those questions.
One of the most important reasons for transparency in AI development and use is to demonstrate trustworthiness, which is vital for the adoption and public acceptance of AI. Being transparent can also expose where AI systems are succeeding or failing in their reliability or robustness, how the systems are treating consumers, how the system’s data is being managed, and the system’s competence and accountability.
Moving forward, financial services firms are encouraged to take steps to prepare for any upcoming regulation by reviewing existing practices and putting in place processes specific to the regulation and governance of AI systems. Consideration should also be given to the existing laws in the UK and how these laws may be applicable to AI or impacted by AI. For example:
- The Financial Conduct Authority’s (FCA) emerging regulatory approach to AI and Big Tech
- The FCA has published a Feedback Statement (FS23/4) in response to its recent Discussion Paper (DP22/5) on the potential competition impacts of Big Tech entry and expansion in retail financial services. Chapter 3 includes more information on each of the FCA's current actions and next steps.
- The publication of the Feedback Statement has been supported by a speech from the FCA's Chief Executive, Nikhil Rathi, on 12 July 2023 on the FCA's approach, setting out further detail on how it proposes to regulate AI in the financial sector.
- Consumer Rights laws/Consumer Duty
- The overarching aim of the FCA’s new Consumer Duty is to ensure that firms provide good outcomes for their retail customers. In this context, AI can be a useful tool due to its ability to harness large volumes of data to identify demographics with specific needs and produce better product matches for consumers.
- However, the lack of human engagement in AI-led practices could also potentially widen existing gaps and exploit characteristics of vulnerability. This is particularly concerning in the context of the Consumer Duty, as the FCA has emphasised the importance of ensuring that vulnerable customers receive consistently fair treatment.
- It is, therefore, crucial that a balance is struck between the efficiencies of AI-led services and the fair treatment of vulnerable consumers, to avoid the scales tipping to one extreme.
- The Prudential Regulation Authority’s (PRA) expectations on outsourcing and third-party risk management
- Financial services firms may not only develop their own AI systems, but also procure them from third parties. The same goes for the data inserted into the AI models, which is often sourced from third parties. Therefore, firms will need to be mindful to ensure compliance with these PRA rules when doing so.
- Markets in Financial Instruments Directive 2014 (MiFID II)
- MiFID II requires greater transparency around research conducted by financial organisations that invest on behalf of clients. This will need to be kept in mind as it could impact the data analysis and AI strategies that firms rely on for in-house research.
- UK GDPR
- Data is the crux of AI models; real thought should therefore be given to how to maintain compliance with data protection laws whilst deploying and using AI technologies.
- Equality Act 2010
- One of the main risks in the use of AI technologies is unfair treatment and discrimination as a result of implicit or sampling bias within the training data.
- As AI systems learn from data which may be unbalanced or discriminatory, they may produce outputs which have unfair effects on people based on their gender, race, age, health, religion, disability, sexual orientation or other characteristics.
UK banks will also need to be cautious in their approach to procuring from and engaging with AI suppliers outwith the UK, so as to ensure they are complying with not just UK regulations/laws, but also international ones.
Hopefully the regulatory and legal developments we can expect to see soon will pave the way for safe AI implementation and help build consumer trust in AI by ensuring that the technology is used ethically, transparently and with human needs and rights at the forefront.
The content of this webpage is for information only and is not intended to be construed as legal advice and should not be treated as a substitute for specific advice. Morton Fraser LLP accepts no responsibility for the content of any third party website to which this webpage refers. Morton Fraser LLP is authorised and regulated by the Financial Conduct Authority.