AI in Financial Services: Regulatory Opportunities

Written By

Thomas Hepplewhite

Senior Associate
UK

I am an associate in the Banking and Finance team, advising clients within the financial services sector.

As improvements in the capabilities of AI continue apace, many industries are learning how to leverage this new technology effectively. The financial services industry is no exception, with many providers integrating AI into their services to improve customer outcomes and reduce costs. However, unlike many other industries, financial services providers are heavily regulated (in the UK and elsewhere), so both regulated firms and their regulators are grappling with the implications of a world where the use of AI becomes commonplace.

AI’s role in the financial services industry

In addition to the generic use of AI by financial services providers (for example, in customer service interactions), there are many potential use cases for AI which are specific to the financial services industry:

  • Credit underwriting assessments: For loans (especially to consumers), AI technology can be used to make more accurate and more granular underwriting decisions. The model might be fed a variety of data, such as income and expenditure data and data from other sources (for example, account data retrieved using open banking technology), all of which can help produce better underwriting decisions;
  • Insurance pricing: AI could be used by insurers to assess the likelihood of certain events occurring and therefore how to price insurance policies. Most insurers will have a wealth of data on which they can train AI models;
  • Payment fraud detection: Payment service providers (PSPs) are subject to regulations requiring them to take steps to reduce the incidence of fraud and unauthorised payments. AI is likely to make fraud detection even more accurate and granular than existing “rules-based” approaches (a minimal sketch follows this list);
  • Investment management: AI models capable of making real-time trading decisions could offer better outcomes in asset management;
  • Regulatory compliance: AI could be used to facilitate compliance with the rules to which financial services providers are subject. For example, a firm could use AI to review and summarise large volumes of complaints data to identify important trends, or to help prepare returns due to regulators.
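
To make the contrast with “rules-based” fraud detection concrete, the following is a minimal sketch of how a model-based score might sit alongside a static rule. It uses Python and scikit-learn; the feature names, data and rule threshold are purely hypothetical assumptions, not any provider’s actual system.

```python
# Illustrative sketch only: a static rule versus a model-based fraud score.
# Features, data and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rules_based_flag(amount: float, is_new_payee: bool) -> bool:
    """A traditional static rule: flag large payments to new payees."""
    return amount > 1000 and is_new_payee

# Hypothetical historical payments: [amount, is_new_payee, hour_of_day]
X = np.array([[25, 0, 10], [1500, 1, 3], [90, 0, 14], [2200, 1, 2],
              [40, 1, 11], [1800, 0, 23], [60, 0, 9], [3000, 1, 4]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = confirmed fraud

model = GradientBoostingClassifier().fit(X, y)

# The model yields a graduated risk score rather than a binary outcome,
# allowing more granular responses (e.g. step-up authentication).
new_payment = np.array([[1900, 1, 3]])
print("Rule flags payment:", rules_based_flag(1900, True))
print("Model fraud probability:", model.predict_proba(new_payment)[0, 1])
```

In practice a PSP would train on far richer data, but the graduated score is what enables more granular intervention than a binary rule.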

While the opportunities presented by AI for the financial services industry are significant, its deployment is not without its challenges and risks. Issues such as data bias, model explainability, cyber risks, and operational resilience have led UK regulators (such as the FCA and PRA) to closely examine AI’s role in financial services.

Regulatory Approach of the FCA and PRA

The UK’s financial regulators (the FCA and the PRA) have adopted a principles-based approach to regulating AI in the financial services sector. Whilst the regulators recognise the potential for AI to benefit financial services firms and their customers (and are themselves exploring how they can use the technology to regulate more effectively), they are also conscious of the specific risks that the use of AI may pose to consumers. Their regulatory stance reflects a balance between fostering innovation and ensuring consumer protection, financial stability, and market integrity.

As with the development of other technologies (such as blockchain technology), the UK regulators have said that their approach to the regulation of AI will be “technology neutral”. This means that the FCA and PRA are unlikely to adopt AI-specific rules; instead, they will address the underlying issues posed by the use of AI through their existing frameworks. The FCA has, however, said that it will be closely monitoring how firms are integrating the use of AI into their risk management frameworks.

Regulatory rules which may be relevant

Although the FCA has no specific rules concerning the use of AI, a number of its existing rules will be relevant to the deployment of AI by UK regulated firms:

  • The Principles for Businesses: The FCA’s Principles for Businesses include high-level rules such as the requirement to conduct business with “due skill, care and diligence”. Firms using AI will want to consider how they can be assured that their use of AI is consistent with this requirement.
  • Rules in the SYSC sourcebook of the FCA Handbook: The Senior Management Arrangements, Systems and Controls (SYSC) part of the FCA Handbook sets out operational rules for firms, including on issues such as risk management, operational resilience and governance. Firms using AI should consider, for example, how the associated risks will be managed, what governance processes will apply to its use, and how it affects the firm’s operational resilience.
  • Outsourcing: Most financial services firms will contract with third-party technology companies when deploying AI within their business, so the FCA’s and PRA’s rules on outsourcing are likely to be relevant. This means carrying out a risk assessment of the contract and ensuring it includes all mandated provisions.
  • Critical Third Parties (CTP) regime: UK regulators are currently implementing a regime for unregulated third parties that provide critical services to financial services firms. Although the regime is not specific to AI providers, it may well be extended to providers of AI services given the increasing importance of AI.
  • Senior Managers & Certification Regime: The SMCR sets out rules applicable to key individuals working at most UK regulated firms. Although no AI-specific changes are envisaged, firms should consider which key individuals will be responsible for AI and how any Statements of Responsibilities may need to change.

The Consumer Duty

In the last year, the FCA’s Consumer Duty, which requires firms to deliver good outcomes for retail customers, has come into force. To the extent that firms are using AI in the provision of services to consumers, they will need to consider how this may affect their compliance with the Consumer Duty. Key questions which a firm may wish to ask itself are:

  • How might the use of AI adversely impact our customers?
  • How will we monitor for any such adverse impacts?
  • What steps will we take to mitigate such adverse impacts?

By way of example, if a consumer credit firm is using AI to make creditworthiness assessments (see the example above), it may wish to monitor whether the AI-facilitated assessments are accurate: for example, are significant numbers of borrowers who were granted credit on the AI’s recommendation going on to default on their loans?
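
As a minimal sketch of what such monitoring might look like, the Python snippet below compares default rates for AI-recommended approvals against the firm’s traditional process. The loan data, column names and tolerance threshold are hypothetical assumptions, not a prescribed regulatory method.

```python
# Illustrative sketch: monitoring default rates on loans approved on an
# AI model's recommendation. Data, column names and the tolerance are
# hypothetical assumptions.
import pandas as pd

loans = pd.DataFrame({
    "approved_by_ai": [True, True, True, False, False, True, False, True],
    "defaulted":      [False, True, False, False, True, False, False, True],
})

# Default rate for AI-approved loans versus traditionally approved loans.
default_rates = loans.groupby("approved_by_ai")["defaulted"].mean()
print(default_rates)

TOLERANCE = 0.05  # assumed acceptable gap between the two default rates
if default_rates.get(True, 0) > default_rates.get(False, 0) + TOLERANCE:
    print("Review: AI-approved loans are defaulting above tolerance")
```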

AI Explainability and Fairness

The FCA has stated that it is important that AI models are explainable and free from bias. Many AI systems are “black boxes” in the sense that, although they are often highly accurate in predicting outcomes, they cannot explain why they have arrived at a particular conclusion.

This lack of explainability concerns regulators because it means that important decisions affecting a regulated firm could be made without anyone being able to understand why those decisions were reached.
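
One widely used way to probe a “black box” model is permutation importance: measuring how much predictive accuracy drops when each input feature is randomly shuffled. The sketch below uses scikit-learn on synthetic data; the features, including a deliberately uninformative “postcode” proxy, are illustrative assumptions rather than a prescribed regulatory technique.

```python
# Illustrative sketch: permutation importance as a simple explainability
# check. The synthetic features are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Synthetic data: income genuinely drives the outcome; postcode_band is noise.
income = rng.normal(30_000, 8_000, 500)
postcode_band = rng.integers(0, 10, 500)
X = np.column_stack([income, postcode_band])
y = (income + rng.normal(0, 4_000, 500) > 32_000).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["income", "postcode_band"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")
# A large importance attached to a proxy feature (e.g. postcode) could
# signal potential bias that the firm should investigate.
```

Techniques like this do not make a model fully transparent, but they give firms evidence to point to when asked why a particular decision was made.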
