Applications of AI in the Gambling Industry

It is no secret that the gambling industry is undergoing a period of prolonged and unprecedented regulatory change. Previously unregulated or restricted territories are introducing domestic point-of-consumption regulation, while existing regulated territories are increasingly implementing new requirements, standards and restrictions. In Great Britain, for example, we are still awaiting the full picture of the changes that will be introduced following the publication of the Gambling White Paper in April 2023, and there remains a question as to whether the general election in July could affect the timeline for implementing the White Paper's proposals. Similar changes are also occurring internationally, notably in the United States of America and the Nordics.

This increasing complexity, cost and burden of operating in heavily regulated markets has catalysed interest across the gambling industry in how the revolutionary potential of Artificial Intelligence (AI) can be harnessed in the evolving industry landscape. Here we consider the potential applications of AI in the gambling industry, the steps that operators are taking to deploy AI, and the legal implications that need to be considered.

Potential

Identifying Vulnerability: The Gambling Commission's consultation response published in May 2024 confirmed that operators will need to conduct light-touch 'vulnerability checks' on customers with a net deposit of £500 in a rolling 30-day period from 30 August 2024 (reducing to £150 from 28 February 2025). Additionally, larger operators will be involved in a 6-month pilot programme commencing on 31 August 2024 to assess whether enhanced checks for customers spending larger amounts can be conducted in a frictionless manner. If the pilot succeeds, enhanced checks may be rolled out into the live environment in 2025. Whilst the level at which the enhanced checks will need to be conducted is not yet clear, the Betting and Gaming Council's (BGC) Industry Voluntary Code on Customer Checks and Documentation Requests Based on Spend requires BGC members to take certain actions when customers wish to make net deposits of more than: (i) £5,000 in a rolling month; and (ii) £25,000 in a rolling 12-month period. Whether the Gambling Commission's pilot results in the implementation of similar thresholds for the enhanced checks remains to be seen.
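At its core, such a check is a simple rolling-window calculation. The Python sketch below shows a minimal version of the net deposit test against the thresholds described above; the data model (lists of timestamped deposits and withdrawals) is a hypothetical simplification, not any operator's actual implementation.

```python
from datetime import datetime, timedelta

# Thresholds from the Gambling Commission's May 2024 consultation
# response: GBP 500 net deposit in a rolling 30-day period from
# 30 August 2024, reducing to GBP 150 from 28 February 2025.
THRESHOLDS = [
    (datetime(2025, 2, 28), 150),
    (datetime(2024, 8, 30), 500),
]

def vulnerability_check_due(deposits, withdrawals, as_of):
    """Return True if net deposits in the trailing 30 days meet the
    threshold in force at `as_of`. Events are (timestamp, amount)
    pairs; this data model is a hypothetical simplification."""
    window_start = as_of - timedelta(days=30)
    total = lambda events: sum(a for t, a in events if window_start <= t <= as_of)
    net = total(deposits) - total(withdrawals)
    for effective_from, threshold in THRESHOLDS:
        if as_of >= effective_from:
            return net >= threshold
    return False  # no light-touch check regime in force before 30 Aug 2024
```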

Not only can AI tools be used to automate financial risk checks and the operator's decision-making response, such as account suspensions, but their potential extends to offering pre-emptive protections to customers. AI models trained on historic customer data can identify customers exhibiting early signs of vulnerability, allowing operators to take action and offer support before those vulnerabilities lead to excessive gambling harm.
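As an illustration of what such a model might look like, the following sketch trains a simple classifier on historic behavioural features. The feature names, the CSV data source and the choice of a scikit-learn random forest are all assumptions for the purposes of the example, rather than any operator's actual approach.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical behavioural markers of emerging vulnerability.
FEATURES = ["deposit_frequency", "late_night_sessions",
            "loss_chasing_score", "cancelled_withdrawals"]

# Historic customer data labelled with whether harm later materialised
# (hypothetical file and label column).
history = pd.read_csv("customer_history.csv")
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(history[FEATURES], history["later_showed_harm"])

def flag_for_early_support(customer: pd.Series, threshold: float = 0.7) -> bool:
    """Flag a customer for proactive support before harm escalates."""
    risk = model.predict_proba(customer[FEATURES].to_frame().T)[0, 1]
    return risk >= threshold
```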

The requirements for identifying vulnerable customers, and then interacting with them to prevent gambling harm, can vary materially from territory to territory. AI models may also be useful here in allowing operators to programme different thresholds and markers of harm for customers in different markets, as well as tailored responses. This could reduce both the manpower needed to service different territories and the margin for human error.
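The per-territory parameterisation described above might look something like the following sketch, in which a single detection pipeline is driven by market-specific markers and responses. Every threshold and workflow name here is an invented example, not an actual regulatory requirement.

```python
# One pipeline, parameterised per market. Thresholds and response
# workflows below are invented examples only.
MARKET_RULES = {
    "GB": {"net_deposit_30d": 150,  "response": "light_touch_check"},
    "DE": {"net_deposit_30d": 1000, "response": "deposit_limit_prompt"},
    "SE": {"net_deposit_30d": 500,  "response": "mandatory_cool_off"},
}

def respond_to_risk(market: str, net_deposit_30d: float) -> str | None:
    """Route a customer into the market-specific response workflow,
    or return None if no marker of harm is triggered."""
    rule = MARKET_RULES.get(market)
    if rule and net_deposit_30d >= rule["net_deposit_30d"]:
        return rule["response"]
    return None
```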

Whilst AI models can reduce the margin for human error, they also present a risk of causing harm where a lack of human intervention enables errors to go unchecked. AI chatbots may also be less empathetic than their human counterparts. Operators will need to consider this carefully to ensure that more vulnerable customers are offered the support they require, potentially by streamlining processes that refer such customers to human customer support channels.

Game Development: Many gambling operators have begun to use AI to produce more tailored gaming experiences for players. Generative AI can also be used to develop entire games: in addition to designing game assets (such as menus, artwork or avatars), it can be used to create or refine game concepts, and developers can have game code checked for errors to further balance games or close loopholes that players might exploit. Furthermore, models can be used to analyse player data, determine which concepts are more likely to appeal to a player and, in turn, prioritise those concepts in the player's UX to generate a personalised and alluring game design. This is seen as a key avenue to increasing gaming revenues.

In difficult borderline cases between skill-based and chance-based games, AI can be used to integrate additional skill-enhancing elements into a game, thereby reducing the element of chance. AI can also be employed to determine whether a game relies more on luck or skill, which can help mitigate challenging classification issues and reduce regulatory uncertainty for those seeking to keep their games outside of gambling regulation.

AI can also be used to examine player data to determine which marketing offers and incentives, such as bonuses, free bets and VIP schemes, are more likely to appeal to individual players. Ultimately, this will allow operators to develop more robust marketing strategies for their businesses.

Although AI may offer operators a significant opportunity to enhance the marketability of their products, there is a risk that these enhancements could exacerbate gambling harms. If game visuals and incentives are tailored to the individual player, this may drive gambling volume and could expose players to more significant loss risks.

In light of this risk, regulators will be keen to ensure that developments in this area are accompanied by sufficient regulatory protections. One key tool at the disposal of regulators will be ensuring operators develop their game designs and marketing strategies in a socially responsible manner. In Great Britain, operators must “ensure that products are designed responsibly and to minimise the likelihood that they exploit or encourage problem gambling behaviour” (RTS 14), a broad requirement that applies equally to AI models used by operators.

Security and Fraud: AI is a key tool at operators' disposal in the battle against fraud. By analysing historic customer data, AI models can identify suspicious betting behaviours indicative of fraud at an earlier stage. Where such behaviour is detected, AI can implement responses such as account blocks faster, and on a 24/7 basis, to prevent further fraudulent activity. As with financial risk checks, operators will need to ensure that AI fraud checks are frictionless and do not undermine the player experience, while remaining of a sufficient standard to enable operators to comply with their regulatory obligations.
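A minimal sketch of this kind of behavioural fraud screening is set out below, using an unsupervised anomaly detector over per-bet features. The feature semantics, the placeholder training data and the injected response hooks are all assumptions for illustration; keeping the account action behind explicit hooks reflects the point above about human oversight.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Placeholder for historic per-bet behavioural features (e.g. stake
# relative to history, bet rate, payment-method age): hypothetical
# random data standing in for a real dataset.
rng = np.random.default_rng(0)
historic_features = rng.normal(size=(10_000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(historic_features)

def screen_bet(event_features, suspend_account, alert_compliance):
    """Run continuously (24/7) on each betting event. IsolationForest
    returns -1 for anomalous points; response hooks are injected so a
    human review remains in the loop."""
    if detector.predict([event_features])[0] == -1:
        suspend_account()
        alert_compliance()
```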

In addition to account security breaches, AI tools can be used to detect fraudulent playing activities, notably match fixing, by identifying patterns of suspicious betting on an intra-operator and global basis. Whilst AI can promote game integrity, there is a risk that some players will use AI tools to cheat in peer-to-peer games. For example, in poker, players can use AI to analyse an opponent's playing style and assess their likely hand. Operators will need to ensure sufficient protections are in place to prevent such activities.
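By way of illustration only, the pattern monitoring described above can be reduced to flagging betting markets whose volume deviates sharply from comparable fixtures. The z-score method and cutoff below are assumptions for the sketch, not any vendor's actual methodology.

```python
import statistics

def suspicious_market(comparable_volumes, observed_volume, z_cutoff=4.0):
    """Flag a betting market whose volume is an extreme outlier against
    comparable fixtures; escalate for integrity review rather than
    acting automatically. The cutoff is an illustrative assumption."""
    mean = statistics.fmean(comparable_volumes)
    sd = statistics.stdev(comparable_volumes)
    z = (observed_volume - mean) / sd if sd else 0.0
    return z >= z_cutoff
```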

Predictions: Particularly in sports betting, predictions and odds-setting are key drivers of operator success. AI can be used to assess historic game and betting data to make more accurate predictions and enhance the odds-setting process. Given the advent of in-play sports betting in recent years, it is important that these functions operate quickly and in an agile manner. As AI models can generate decisions automatically, they allow operators to run more efficient prediction processes and cope with the increased demands of the in-play environment.
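A toy version of the prediction-to-odds pipeline might look like the sketch below, in which a model's outcome probability is converted into a quoted price with a bookmaker margin applied. The placeholder training data, the feature count and the 5% margin are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder historic match features and binary outcomes
# (hypothetical random data standing in for a real dataset).
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(5_000, 6))
y_hist = rng.integers(0, 2, size=5_000)

model = LogisticRegression(max_iter=1000)
model.fit(X_hist, y_hist)

def quote_decimal_odds(match_features, margin=0.05):
    """Convert the model's win probability into decimal odds, shaded
    by an illustrative 5% margin. For in-play use, this would need to
    run within a tight latency budget per price update."""
    p = model.predict_proba([match_features])[0, 1]
    return (1.0 / p) / (1.0 + margin)
```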

From the player's perspective, AI models can also be used to analyse game or sports data and inform betting strategies. This may pose a challenge to operators, who face an increasing number of players using AI-assisted betting strategies that make them more capable of identifying profitable bets. Such strategies do, however, pose a risk to players themselves: reliance on AI tools may give players a false sense of confidence in the outcome of their bets and therefore drive higher-stake betting strategies which, if unsuccessful, could result in significant financial losses.

Applications

Increasing numbers of operators have already begun to deploy AI. Examples include:

  • 888: 888’s ORBIT platform uses AI tools to enhance its customer experience, including through the use of AI chatbots, improved marketing functionality and enhanced game integrity. 
  • Entain: Entain’s ARC platform was launched in 2021. The platform uses AI technology to provide players with an enhanced gaming safety programme. The platform has been deployed to 22 international markets.
  • PokerStars: PokerStars has developed a tool to assist players in developing their game skills. The tool analyses players' gaming histories to provide tailored feedback on past hands and betting decisions, and also provides training tools designed to assist with future performance.
  • Stats Perform: Stats Perform offers AI tools that allow past sports data to be assessed and used by players to develop betting strategies. It also offers tools that use AI to examine player trends, which operators can in turn use to make betting products more marketable.
  • Sportradar: Sportradar offers a ‘Fraud Detection System’ product that monitors and provides alerts in relation to suspicious betting patterns indicating that match-fixing activities have taken place.

This is just a handful of current use cases of AI in the gambling industry. As AI attracts more focus, the extent of its application will only grow.

Implications

Regulatory obligations: Deploying AI may allow operators to reduce or streamline their operational burden, but in a heavily regulated industry such as gambling, operators will, first and foremost, need to continue to comply with their regulatory obligations or risk regulatory sanctions. For example, when deploying AI tools in game development, operators must ensure this is done in a socially responsible manner (as per RTS 14). As social responsibility is an ethical concept, complying with this obligation may require greater human input than other, more readily automated processes, and could therefore limit the potential application of AI.

Nevertheless, AI may become more significant in solving regulatory matters. For instance, AI can be used to meet the regulatory requirement for early addiction detection in some jurisdictions: the German Interstate Treaty on Gambling requires automated systems for such detection measures (Sec. 6 lit. i), and AI-assisted recognition tools can help derive insights from players' behavioural signals of addiction. AI can also be utilised to identify and verify the age of players; there is already discussion about whether AI tools should assist in determining players' ages through facial recognition, making it more efficient to deny access to minors and to prevent gambling advertisements from being displayed to them. However, these measures may themselves be subject to regulation, such as the upcoming EU AI Act. Thus, AI can potentially solve future regulatory problems on one hand, but its use can also increase regulatory complexity on the other (e.g., the classification of AI emotion recognition for early addiction detection as a “high-risk” AI system under the EU AI Act).

Furthermore, as AI becomes more widely applied in the gambling industry, regulators will undoubtedly look to amend regulation to make it fit for the digital age. Operators will need to ensure that they are aware of any regulatory updates and respond appropriately, as required by the terms of their licence.

Privacy and Data Protection: The applications of AI in the gambling industry rely heavily on data sharing, which inevitably gives rise to privacy concerns. The UK financial risk check pilot programme launching later this year is testament to this, as it will focus on data sharing channels rather than consumer safety controls more widely. In the context of financial risk checks, operators need to ensure that appropriate data sharing agreements are in place with third parties to allow their AI tools to access the data required to perform the checks.

In other use cases, such as game development, operators will need to ensure that players consent to the operator’s data sharing practices, through appropriate privacy policies and terms of use. As AI activities expand and require a wider range of data, operators will need to ensure that they are regularly reviewing and updating their policies to cater for any expanded data processing purposes.

Contracts and liability: Operators will need to ensure that their contracts with third parties, whether supplying finished AI models or training data for the operator's internal models, offer comprehensive protections. For example, such contracts must include appropriate confidentiality and security obligations, as well as indemnities for breaches of data protection legislation and third-party IP claims. From an IP perspective, operators will need to ensure that third parties have appropriate rights in the data or models they are providing, and can seek protection through contractual warranties.

From a consumer perspective, operators will need to ensure that the policies customers agree to allow them to use AI in the manner envisaged by the operator. This may include extensive data rights in their privacy policies and terms of use. Such contracts will need to clearly define the player's and operator's rights concerning AI applications, and operators should ensure these contracts are future-proofed, though this may be challenging in such a quickly evolving area.
