Generative AI tools in Financial Services: What’s your policy going to be?

Written By

Jonathan Emmanuel

Partner
UK

I am a partner in the Tech Transaction team and Co-Head of our International Financial Services Sector Group, based in London. I advise clients on disruptive digital technology adoption including cloud computing, AI, blockchain, agile software development and open source licensing, with a particular focus on FinTech.

Thomas Hepplewhite

Senior Associate
UK

I am a senior associate in the Banking and Finance team, advising clients within the financial services sector.

Gavin Punia

Partner
UK

I am a senior financial services regulatory specialist with a particular focus on advising firms who are digitally transforming the way financial services are being delivered.

In May 2022, most of us were unaware of the potential of generative Artificial Intelligence (AI) tools. We’d seen AI-generated content in presentations, but it always seemed to recycle the same examples. The overall impression was that AI content generation was hard and expensive.

Much has changed in the short time since then. In June 2022, GitHub launched Copilot, allowing software developers to incorporate AI-generated code into their projects. Image creation was next, with Midjourney’s open beta in July, Stable Diffusion in August and DALL-E 2 in September. Language generation using large language models (LLMs) wasn’t far behind: ChatGPT launched in November 2022 based on GPT-3.5, and GPT-4 was released in March 2023.

Unsurprisingly, financial services (FS) organisations are moving quickly to harness this potential, whether as users or developers of such tools. Many are trying to understand what this technology really is, what it can and can’t do, and how it might be useful for them.

For example, Morgan Stanley has been training a generative AI chatbot based on OpenAI’s latest LLM technology to support its financial advisors when advising wealth management clients, and Bloomberg has been developing a generative AI tool as a potential add-on to its existing Bloomberg Terminal systems.

Life moves fast, so legal teams need to move fast

If life is moving fast for generative AI technology, the legal landscape is moving just as quickly. November 2022 saw a US class action against GitHub Copilot claiming that its training process had breached open source licence terms. January then saw a US class action against three AI image generators alleging copyright violations. This was followed shortly by proceedings brought by Getty Images in the UK and US against the creators of Stable Diffusion. Between them, these lawsuits raise questions regarding the use of copyright-protected training data to train AI systems and the relationship, in copyright terms, between the training data and the outputs from generative AI systems.

In the UK there have been some interesting FS developments following the publication of a variety of papers on the application of AI in the FS sector. For example, in October 2020 the Bank of England and the FCA established the AI Public-Private Forum (AIPPF) to foster dialogue on AI innovation and the safe implementation of AI in the FS sector. The AIPPF published its final report in 2022. In response to that report (which made it clear that the private sector wanted regulators to have a role in supporting the safe implementation of AI in the FS sector), the Bank of England issued a Discussion Paper on AI and machine learning in October 2022. The discussion paper raised some interesting points on the key question surrounding AI: how should it be regulated? Can the UK rely on clarifications to existing regulations, or does a new approach need to be adopted for this new technology? It is clear that FS regulators are considering the extent to which AI tools may give rise to new or increased risks compared to traditional ICT solutions.

EU data protection authorities have also recently started to look at some generative AI providers. In March 2023, the Italian data protection authority (Garante) blocked ChatGPT’s processing of personal data (effectively blocking the service in Italy) until OpenAI complied with certain remediations required by the authority. In April 2023, the Spanish data protection authority (AEPD) initiated its own investigation, and other data protection authorities are likely to follow – the European Data Protection Board (EDPB) has since launched a task force on ChatGPT. European data protection authorities are concerned with the use of personal data in AI systems, including to train them, and in particular with questions around lawful processing, transparency, data subject rights and data minimisation.

Aside from copyright infringement and data protection issues, many organisations have questions regarding the retention and reuse of inputs and outputs by generative AI tool providers, the accuracy and ownership of outputs, the scope of open source licence terms, the protection of confidential information, terms of use and their general liability exposure, both as users and as developers of generative AI technology. And that is all before we get to emerging regulatory frameworks for AI technology, such as the EU’s draft AI Act, and sector-specific regulations and codes of conduct, including those covered by the FS-related papers discussed above.

How do you balance these risks with the massive opportunity presented by generative AI? The starting point must be to understand the potential risks, weigh them against the opportunities and develop appropriate policies and guidelines.

What if you do nothing?

With the publicity surrounding generative AI tools and free, easy access to a number of high-profile ones, some employees will inevitably start to experiment. For example, fund managers consulting these tools to help make informed decisions on investment strategies creates risks if those users are inputting potentially confidential information. This is because some AI tools retain such input data in order to fine-tune the AI (the process of taking the additional data provided in the input and using it to improve the AI model by adding it to the original training data). The input information could then be replayed back, in the form of outputs, to new users who may be competitors of the fund manager.

Experimentation could also lead to concentration risks if multiple users are making similar investment decisions or recommendations based upon a small number of technologies. For example, many automated advice tools in the FS sector are based on decision trees and are not traditional investment advice in the sense that they do not take into account the customer’s objectives and goals, and how their circumstances may change over time. The risk is that a significant volume of consumers could end up transacting in the same way in the same financial products and services, leading to a "herding" of risk.

Not having a generative AI policy in place exposes the business to possibly unquantified and unmanaged risks. From another perspective, it also leaves the opportunity to harness the potential of generative AI to chance.
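
The confidential-input risk described above can be reduced with simple technical controls alongside a policy. As a purely illustrative example, the Python sketch below shows a screening gateway that blocks prompts containing apparently confidential data before they leave the organisation. The patterns and function names are hypothetical, no real provider API is called, and a real deployment would rely on a properly maintained data-loss-prevention service.

import re

# Hypothetical patterns a compliance team might maintain; illustrative only.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),  # IBAN-like identifiers
    re.compile(r"\b\d{8,12}\b"),                      # account-number-like digits
    re.compile(r"(?i)\bconfidential\b"),              # explicit markings
]

def screen_prompt(prompt: str) -> str:
    """Reject a prompt before it leaves the organisation if it appears
    to contain confidential data."""
    for pattern in CONFIDENTIAL_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(
                "Prompt appears to contain confidential data; "
                "route it through compliance approval instead."
            )
    return prompt

if __name__ == "__main__":
    try:
        screen_prompt("Summarise our CONFIDENTIAL Q3 trading strategy")
    except ValueError as exc:
        print(exc)  # the gateway blocks the prompt before it is submitted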

How do you formulate appropriate generative AI policies and guidelines?

The simplest policy or guidance would be a total ban on the use of generative AI across the organisation, together with blocking access to generative AI tool providers. Some companies may adopt this approach to buy breathing space while they assess and understand the risks and formulate better-informed policies or guidelines.

But what if an FS organisation wants a policy or guidelines which allow the business to start using generative AI in a controlled way, recognising the regulated sector in which it operates? The key lesson we have taken from working with clients on developing policies for the use of generative AI is that there is no one-size-fits-all approach. The nature and extent of the risks from generative AI tools vary depending on the context. Some of the relevant factors will be the following (the sketch after this list illustrates how a firm might record them consistently):

  • To what extent does the use of the AI technology constitute a critical or important outsourcing, or a material outsourcing, which will require the relevant FS organisation to consider specific outsourcing requirements to flow down into the contract with the AI provider?
  • For UK authorised banks, building societies, insurers and PRA-authorised investment firms, even if the AI technology is determined not to constitute an outsourcing, to what extent does it affect third-party risk, meaning the relevant FS organisation needs to consider the PRA Supervisory Statement on outsourcing and third party risk management?
  • Have you considered the PRA’s paper on operational resilience prior to procuring the relevant technology? For example, if the AI tool is being relied upon to make business decisions, has the firm considered resilience risk issues such as downtime, service levels and business continuity planning?
  • Have you considered any general FCA or PRA rules which might be relevant to the use of AI? For example, how will you ensure that you have effective systems and controls to manage any risk arising from the use of AI?
  • Have you considered FCA rules specific to a particular sector (for example, regulatory rules applicable to consumer lending, insurance and asset management)?
  • How will you ensure that the use of AI is consistent with the FCA’s Consumer Duty requirements?
  • Both EU financial institutions and ICT providers servicing EU financial institutions will need to consider the extent to which their use of AI falls within the new EU Digital Operational Resilience Act (DORA).
  • Which technology are you using and what data is it trained on? Is the data protected by copyright or trade secrets? Will you be inputting ‘personal data’? Where will it be used?
  • Are you developing generative AI technology yourself or using technology developed by others? Are you building from the ground up or adapting pre-trained models?
  • Are you incorporating the technology or its outputs into your products?
  • Are you deploying the technology on your own hardware, or using a model-as-a-service provider? Are you using a private instance or a common model?
  • Will you be using the outputs from generative AI externally, or only within your organisation?
  • What purposes is the technology going to be used for? What types of data are going to be used as inputs and are they sensitive/protected in any way?
  • Will you or your customers be taking decisions based on the outputs? Will these be about individuals?
  • How are you going to present the outputs? How are you going to commercialise them?
  • What contractual arrangements are going to cover the use of the technology? Will this address any liability which arises from use of the technology?
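
By way of illustration only, the Python sketch below shows how some of the factors above might be captured in a structured intake record so that each proposed use case is triaged consistently. The field names, review labels and mappings are hypothetical, not a statement of what any regulator requires.

from dataclasses import dataclass, field

# Hypothetical intake record for a proposed generative AI use case;
# the fields paraphrase a subset of the factors listed above.
@dataclass
class AIUseCaseAssessment:
    use_case: str
    is_material_outsourcing: bool      # triggers outsourcing flow-downs
    touches_personal_data: bool        # triggers data protection review
    outputs_used_externally: bool      # customer-facing risk
    decisions_about_individuals: bool  # fairness / Consumer Duty review
    notes: list[str] = field(default_factory=list)

def required_reviews(a: AIUseCaseAssessment) -> list[str]:
    """Map answers to the internal reviews a policy might require."""
    reviews = []
    if a.is_material_outsourcing:
        reviews.append("outsourcing / third-party risk review")
    if a.touches_personal_data:
        reviews.append("data protection impact assessment")
    if a.outputs_used_externally or a.decisions_about_individuals:
        reviews.append("conduct and Consumer Duty review")
    return reviews or ["standard internal sign-off"]

assessment = AIUseCaseAssessment(
    use_case="drafting internal research summaries",
    is_material_outsourcing=False,
    touches_personal_data=False,
    outputs_used_externally=False,
    decisions_about_individuals=False,
)
print(required_reviews(assessment))  # ['standard internal sign-off']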

How to move forward?

Developing and refining a generative AI policy or guidelines for an FS organisation is an iterative process which needs to consider sector-specific requirements, given the regulated industry in which FS organisations operate. As a result, it requires input from a range of different stakeholders. It needs to (i) align with the organisation’s overall strategy for generative AI and (ii) be kept under review as the legal landscape for generative AI tools develops.

Generative AI has burst onto the scene and there is a lot to think about. Engage with it, and the opportunities are huge.

And speak to Bird & Bird to help you understand the risks and develop and refine your policy and guidelines.

 

The authors would like to thank Toby Bond, Gabriel Voisin, Alex Jameson and Louise O’Hara, whose original article on generative AI tools the authors adapted to produce this financial services sector-focused article on AI policies. The original article can be found here: Generative AI tools: What’s your policy going to be? - Bird & Bird (twobirds.com)
