One of the key contributions of AI lies in optimising energy production, consumption and trading, thereby enhancing overall efficiency and sustainability. Advanced AI algorithms are employed to analyse massive datasets, enabling predictive maintenance of energy infrastructure, optimisation of power grid operations and of the dispatch of production, consumption and storage assets, as well as the identification of opportunities for energy savings.
There are many AI use cases across the entire energy value chain, from time series forecasting, market analysis and optimised bid selection to anomaly detection, failure prevention and consumption pattern recognition. AI technology also plays a crucial role in minimising greenhouse gas emissions and addressing climate change. Systems powered by AI can pinpoint the most effective approaches for utilising captured carbon, model and predict climate change and weather patterns, and help design greener, smarter cities and more energy-efficient buildings.
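By way of illustration only, the short sketch below shows, in highly simplified form, what one of these use cases (anomaly detection against a naive seasonal consumption forecast) can look like in practice; the data, model and thresholds are assumptions chosen purely for the example and are not drawn from any deployed system.

    import numpy as np
    import pandas as pd

    # Hypothetical half-hourly consumption data for a single site (illustrative only).
    rng = np.random.default_rng(42)
    index = pd.date_range("2024-01-01", periods=48 * 28, freq="30min")
    slot_of_day = index.hour * 2 + index.minute // 30
    daily_cycle = 10 + 4 * np.sin(2 * np.pi * slot_of_day / 48)
    series = pd.Series(daily_cycle + rng.normal(0, 0.8, len(index)), index=index, name="kWh")

    # Naive seasonal forecast: predict each half-hour from the same slot one week earlier.
    forecast = series.shift(48 * 7)

    # Flag anomalies where the actual reading deviates sharply from the forecast.
    residual = (series - forecast).dropna()
    threshold = 3 * residual.std()
    anomalies = residual[residual.abs() > threshold]

    print(f"Flagged {len(anomalies)} half-hourly readings out of {len(residual)}")

Production systems would typically replace the naive weekly-lag forecast with a trained forecasting model, but the shape of the task (forecast, compare, flag deviations) is the same.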
Whilst the transformative force of AI in the energy and utilities sector is clear, as with any disruptive technology, there is a wide range of regulatory and legal implications to consider. For example, the use of AI involves the collection and processing of large amounts of data, raising concerns regarding data privacy and security, so a robust data strategy will be key. This is particularly important where the data relates to end consumers of energy and therefore constitutes personal data.
Another important legal consideration is the protection of critical systems from cyber-attacks. This is the subject of the EU Network Code on Cybersecurity, due to enter into force later this year. Possible exploitation routes include the AI systems that operate the network. Companies in the energy and utilities sector using open, public generative AI systems also need to consider the confidentiality implications.
Other impacts on the sector include the evolution of the regulatory framework for the use of AI, including specific guidance from energy regulators. The implementation of AI also gives rise to environmental, social and governance risks, including those resulting from the potential increase in energy consumption due to the processing power AI systems demand. Finally, the impact of AI on the sector's workforce also requires consideration, taking into account possible influences, biases or errors in the data, as well as miscorrelations arising from insufficient training data or coding mistakes.
One specific legal issue being examined at the European level (through sandbox projects and the proposed AI Liability Directive) is that of liability where AI systems are used in energy-related activities, for example the trading of energy. The revised Regulation on Wholesale Energy Market Integrity and Transparency (REMIT), the relevant parts of which took effect in May 2024, imposes additional reporting and governance obligations on those engaging in energy trading using algorithms.
Algorithms can also be vulnerable to exploitation, or can unintentionally lead to incorrect orders being placed, so establishing supplier liability for regulatory enforcement purposes in such cases is important. The complexities of these issues demand careful monitoring and comprehension of the evolving legislation and regulatory guidance, which may shape the framework of AI governance.
For further information, contact Kathryn Parker, Michael Rudd and Peter Willis.
This article was published in the special AI edition of our monthly Connected newsletter. To view the full newsletter or to sign up to receive future newsletters for the latest Regulatory & Public Affairs news and updates, see below: