AI Governance: Essential Insights for Organisations: Part I – Understanding Meaning, Challenges, Trends, and Best Practices in AI Governance

Written By

Dr. Nils Lölfing

Counsel
Germany

I am a counsel in our Technology & Communications Sector Group. I provide pragmatic and solution-driven advice to our clients on all issues around data and information technology law, with a strong focus on and experience with AI and machine learning projects.

This is a two-part article on AI Governance, which has rapidly become a top priority for organisations. Part I will explore the meaning, challenges, and trends in AI Governance, while Part II will offer essential insights into the practical implementation and best practices for AI Governance in 2025.

Part I – Meaning, Challenges and Trends in AI Governance

In today’s rapidly evolving technological landscape, AI has emerged as a transformative force across various sectors and business functions. The AI market size was estimated at 184 billion USD in 2024 and is expected to triple by 2030. AI has become a buzzword, generating a great deal of chatter among professionals and investors. And it seems to have delivered: AI technology is becoming ubiquitous and, perhaps most importantly, it generates great value for companies and consumers alike.

However, there are significant concerns surrounding AI, from privacy and safety to its potential to cause disruption and negative impacts on individuals and businesses alike when prudent administration and oversight are left as an afterthought. This is where AI Governance comes in.

AI Governance is not just a concern for individual organisations; it is essential for countries worldwide. The AI Action Summit in Paris was a major international effort to shape the future of AI Governance and development. The talks resulted in a non-binding Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet. The Summit recognised the paradigm shift AI has brought and the importance of multi-stakeholder, inclusive, human-centric and, above all, ethical AI. The founding members (Brazil, Chile, Finland, France, Germany, India, Kenya, Morocco, Nigeria, Slovenia) announced the launch of the Public Interest AI Platform and Incubator with the aim of reducing the gaps between private and public AI initiatives, and the Summit recognised the importance of sustainable AI development, including with regard to energy usage.

These talks occurred amid an intensifying race for AI innovation, sparking debates on governance and safety. Notably, China's DeepSeek and its popular AI chatbot have recently become focal points in discussions on AI governance. 

What is AI Governance? 

AI Governance is not about mere compliance with laws regulating the development and use of AI systems, such as the EU AI Act. AI Governance is a more holistic approach to ensuring that AI systems and tools remain safe and ethical, fostering fairness and respect for human rights across the fields of AI research, development, and deployment. While the legal angle of AI Governance follows ‘hard requirements’, where failure to comply could result in a breach of law, the ethical angle includes ‘soft requirements’ which may not be covered by the law. For example, the labelling of AI-generated content is not typically a legal requirement but is crucial for transparency and ethical responsibility. In this light, AI Governance addresses stakeholder expectations, societal and business norms, and provides a solid framework to prevent misuse of the technology. It aims to increase trust in these systems and mitigate associated risks and reputational harm while facilitating responsible innovation. 

Research has also shown that appropriate AI Governance produces substantial value, and that it does so in areas important to business strategy and competitiveness, such as product quality, trustworthiness, and reduced regulatory risk. Beyond compliance, companies should therefore focus on AI Governance because, done right, it can lead to increased customer loyalty and enhanced brand reputation, giving them a competitive edge in the marketplace and ultimately driving long-term business success. However, this is more challenging than it seems.

What makes AI Governance so challenging? 

AI Governance is a multifaceted challenge for companies because it sits at the intersection of rapid technological evolution, legal uncertainty, ethical debates, and organisational complexity. Opacity in decision-making, model bias, privacy concerns during development, and risk management are well-referenced topics here. However, the core challenge in AI Governance is the uncertainty surrounding many risks, along with the rapidly evolving AI technology landscape. This issue is compounded by the broader governance challenge faced by all organisations operating internationally: navigating diverse global regulations and ethical considerations that vary significantly across cultures and industries.

This calls for comprehensive risk assessment. However, AI risk assessment initiatives so far have been either disjointed or varied in scope, resulting in different categorisation frameworks that use analogous terms for different concepts, furthering inconsistency in understanding. There are few tools companies can use to identify risks. To help mitigate this problem, the MIT FutureTech project has published an interdisciplinary AI Risk Repository cataloguing over 700 AI-related risks cited in the literature, classified by factors such as the entity responsible (human or AI) and the stage at which they arise (pre- or post-deployment).
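For organisations that want to make such a catalogue actionable, the risks usually need to land in an internal register with a consistent structure. The sketch below is a minimal, illustrative example of such a register in Python, loosely modelled on the repository’s entity and stage axes; the field names, categories and helper function are assumptions made for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

# Illustrative categories, loosely modelled on the MIT AI Risk Repository's
# causal axes (responsible entity, deployment stage); not an official schema.
class Entity(Enum):
    HUMAN = "human"
    AI = "ai"
    OTHER = "other"

class Stage(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    entity: Entity
    stage: Stage
    affected_systems: List[str] = field(default_factory=list)
    mitigation: str = ""  # empty string means no documented mitigation yet

def open_post_deployment_risks(register: List[RiskEntry]) -> List[RiskEntry]:
    """Risks that arise after deployment and still lack a documented mitigation."""
    return [r for r in register
            if r.stage is Stage.POST_DEPLOYMENT and not r.mitigation]

# Example usage with two hypothetical entries.
register = [
    RiskEntry("R-001", "Chatbot gives incorrect fare information to customers",
              Entity.AI, Stage.POST_DEPLOYMENT, ["customer-support-bot"]),
    RiskEntry("R-002", "Training data under-represents certain user groups",
              Entity.HUMAN, Stage.PRE_DEPLOYMENT, ["credit-scoring-model"],
              mitigation="Bias audit before each release"),
]
print([r.risk_id for r in open_post_deployment_risks(register)])  # ['R-001']
```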

Still, blind spots remain. For example, the International AI Safety Report, recently published by around 100 leading AI researchers, highlights the uncertainty surrounding the societal impact of misinformation, noting limited evidence of its broader effects. The authors contextualise this by referencing the recent US election, where the influence of AI-generated content appeared to be less significant than initially feared. While this is just one headline finding, the general point about unclear risks applies to many other AI applications.

Another intriguing example from the International AI Safety Report highlights how general purpose AI systems can learn to obfuscate their mistakes, with the system “intentionally” making it more difficult for human assessors to detect them. In other examples, AI systems learned to exploit their supervisors’ biases to receive positive feedback, or to alter their training environments to increase rewards when that information was accessible to them. These examples illustrate that, especially with GPAI, underlying risks exist, and fully understanding them can be challenging, even in everyday use of AI systems.

A more obvious example of the importance of AI Governance is Air Canada's chatbot incident. The bot incorrectly informed a customer that they qualified for a bereavement fare discount, while linking to an Air Canada webpage that contradicted that advice. The tribunal ruled in favour of the customer, finding that Air Canada had made a negligent misrepresentation and had failed to take reasonable care to ensure the chatbot's accuracy. This occurred after Air Canada's significant investment in AI to improve the customer experience.

These risks are exacerbated as AI evolves and new technology reaches the market. Agentic AI, the latest groundbreaking class of AI applications capable of automating discrete tasks and entire workflows otherwise handled by human employees, was touted at CES 2025 as a potential multitrillion-dollar industry. Such systems have risk profiles that differ from those of the LLM-based chatbots and assistants now commonplace in enterprises, warranting a correspondingly concrete governance structure. An organisation’s AI Governance framework needs to be robust enough to handle these evolutions and new adoptions.

The International AI Safety Report also underscores the challenges in assessing the risks associated with rapid advancements, especially in generative AI. As these models improve, the potential risks become more pronounced, emphasising the need for vigilant and adaptive AI Governance to address these evolving threats.

Even when risks are clearly identified, addressing them often requires a multifaceted approach, and solutions for one risk can sometimes conflict with those for another. A prime example is the tension between promoting inclusivity and protecting privacy. ‘Inclusive growth, sustainable development and well-being’ is an OECD AI principle that calls for the inclusion of underrepresented populations in AI systems, which in turn requires the collection and processing of large amounts of personal (and even sensitive) data from those, possibly vulnerable, populations. This obviously presents significant privacy risks that need to be mitigated, taking into consideration the type of AI system and the use case.

Industry practice

AI Governance has quickly become a top priority for organisations, rising from ninth place in 2022 to the second most important strategic focus in 2023 (see here), with the trend continuing in 2024 and 2025. Yet each framework is different and has to be suited to the company’s needs and values. A study from 2023 has shown that organisations are at varying stages of maturity in their AI Governance efforts (see here), and given the challenges, the picture is unlikely to have shifted significantly since: 10% have no guidelines, 30% are formulating policies, 40% are transforming internal structures, and 20% have advanced processes with clear responsibilities and tools in place. Larger organisations, particularly those generating over $60 billion annually, are leading the way, while 60% of companies with $1 billion in annual revenues plan to establish AI Governance functions within a year.

Despite increased awareness, most organisations remain in the early stages of AI Governance implementation, often relying on ad hoc processes rather than systematic frameworks. Approximately 42% use the U.S. National Institute of Standards and Technology's (NIST) AI Risk Management Framework, while 28% have developed in-house structures. Key activities include risk assessments, governance management structures, and the adoption of AI ethics principles. However, critical gaps remain, particularly in training employees on AI Governance and evaluating corporate or individual performance in this area. Most policies focus on high-level standards, with fewer addressing specific issues like explainability, model training, and value assurance systems. This is what Part II of this article attempts to address.

InfoTech companies are at the forefront of responsible AI Governance by adopting a cross-sector, multi-disciplinary approach that involves technologists, data scientists, and legal, ethics, and policy teams. As most AI systems touch on more than one governance field, a multidisciplinary approach is crucial to the development of AI that is beneficial, safe, fair, transparent, and respectful of human rights and democratic values. The information technology sector is distinct from other industries, primarily because it is the epicentre of AI development. Companies in this sector must directly address the complexities and potential risks of AI, making AI Governance a critical aspect of their operations. Additionally, 'big tech' companies face constant public and regulatory scrutiny regarding their handling of user data, privacy, and ethical concerns, which are closely related to everyday AI issues.

Microsoft focuses on risk assessment and transparency, integrating its Responsible AI Standard and the U.S. NIST's AI Risk Management Framework into its strategy. Google employs a structured four-phase approach to AI Governance, aligning its technologies with AI Principles and integrating governance into enterprise risk management, in line with NIST. IBM uses a multi-tiered governance framework with roles like an AI ethics board and ethics focal points to ensure adherence to societal ethics and regulations. These companies demonstrate that while there is no one-size-fits-all solution, promoting ethical AI use is crucial to balancing regulatory scrutiny, stakeholder expectations, and reputation as AI continues to grow in business.

The established theory

The core components of effective AI Governance frameworks generally encompass human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, societal and environmental well-being, and accountability. In line with the principles established by the EU’s High-Level Expert Group on AI and the OECD AI Principles, these standards should be implemented at three levels to ensure robustness: within the corporate AI strategy, at the product level, and operationally across the organisation.

The concept of human agency and oversight involves the implementation of effective oversight mechanisms, including human-in-the-loop and human-in-command approaches, to ensure that AI systems enhance human decision-making and respect fundamental rights (a minimal sketch of such an oversight mechanism follows this overview of principles).

Technical robustness and safety ensure that AI systems remain resilient, secure, accurate, and reliable, while contingency plans minimise and prevent unintentional harm.

Appropriate privacy and data governance mechanisms ensure that AI systems uphold privacy and data protection as well as data quality, integrity, and legitimate access.

Transparency and traceability mechanisms are crucial for AI business models, systems and data. It is essential that AI systems’ decisions are clearly explained and that users are made aware when they are interacting with an AI system, as well as of its capabilities and limitations.

Paying special attention to diversity, non-discrimination and fairness is imperative in order to avoid bias, prevent discrimination, promote diversity, and ensure that AI systems are accessible.

As AI systems should benefit all people, including future generations, they should take societal and environmental well-being into account through sustainability, environmental friendliness and responsibility for their social and environmental impact.

AI systems require mechanisms for responsibility and accountability, with auditability of algorithms, data, and design processes, especially in critical applications, and accessible redress options. 
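As flagged under human agency and oversight above, the sketch below shows one simple way a human-in-the-loop gate could be wired into an AI-assisted workflow, with a decision log that also speaks to the accountability and auditability principle. The class and function names, the confidence threshold and the log structure are illustrative assumptions, not elements of any framework cited here.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, List

@dataclass
class AIRecommendation:
    subject: str
    proposal: str
    confidence: float  # model-reported confidence, 0.0 to 1.0

@dataclass
class AuditRecord:
    timestamp: str
    subject: str
    proposal: str
    reviewed_by_human: bool
    approved: bool

AUDIT_LOG: List[AuditRecord] = []

def human_in_the_loop_gate(rec: AIRecommendation,
                           human_review: Callable[[AIRecommendation], bool],
                           auto_approve_threshold: float = 1.01) -> bool:
    """Route an AI recommendation through a human reviewer before it takes effect.

    With the default threshold above 1.0, every recommendation is escalated to a
    human (human-in-the-loop); lowering the threshold would let high-confidence,
    low-risk cases pass automatically while still being logged for auditability.
    """
    needs_review = rec.confidence < auto_approve_threshold
    approved = human_review(rec) if needs_review else True
    AUDIT_LOG.append(AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        subject=rec.subject,
        proposal=rec.proposal,
        reviewed_by_human=needs_review,
        approved=approved,
    ))
    return approved

# Example usage: a reviewer policy that rejects low-confidence recommendations.
reviewer = lambda rec: rec.confidence >= 0.8
ok = human_in_the_loop_gate(
    AIRecommendation("refund request #123", "approve bereavement fare refund", 0.55),
    human_review=reviewer,
)
print(ok, len(AUDIT_LOG))  # False 1
```

The design point is not the code itself: recommendations produced by the system only take effect after a defined human checkpoint, and every decision leaves an auditable trace that can support later review or redress.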

The general policy principles are broadly similar across most parts of the world, so they serve as a useful starting point. However, the specifics of AI Governance will depend on the organisation’s industry, size, position in the AI value chain, and regulatory requirements, and robust practical implementation is what actually addresses risks. That is rule number one in AI Governance: there are universal principles, but there is no universal checklist. Companies must assess the specific risks of each AI system individually, rather than applying a one-size-fits-all approach, and every component of the system should be evaluated with a focus on the risks it may pose.

In practice, AI Governance is specific to what organisations do with AI and where they sit in the AI value chain (as developer or deployer), to their customers, their sector (for example, whether it is already strongly regulated, like finance or healthcare) and the use cases they are involved in. The framework must then be designed to deal proactively with these risks.
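As an illustration of ‘universal principles, but no universal checklist’, the hypothetical sketch below derives a system-specific set of review items from a few contextual attributes such as value-chain role, sector and use of personal data. The attributes and checks are invented examples for illustration only, not a recommended taxonomy.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AISystemProfile:
    name: str
    value_chain_role: str   # e.g. "developer" or "deployer"
    sector: str             # e.g. "finance", "healthcare", "retail"
    customer_facing: bool
    uses_personal_data: bool

# Checks applied to every system, regardless of context.
BASELINE_CHECKS = [
    "Document intended purpose and limitations",
    "Assign an accountable owner",
]

def governance_checks(profile: AISystemProfile) -> List[str]:
    """Build a system-specific checklist from universal baselines plus context."""
    checks = list(BASELINE_CHECKS)
    if profile.value_chain_role == "developer":
        checks.append("Evaluate training data for bias and representativeness")
    if profile.uses_personal_data:
        checks.append("Complete a data protection impact assessment")
    if profile.customer_facing:
        checks.append("Verify output accuracy and label AI-generated content")
    if profile.sector in {"finance", "healthcare"}:
        checks.append("Map sector-specific regulatory requirements")
    return checks

# Example usage for a hypothetical customer-facing chatbot.
print(governance_checks(AISystemProfile(
    "customer-support-bot", "deployer", "aviation",
    customer_facing=True, uses_personal_data=True)))
```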

Unsurprisingly, given how general these principles are, many organisations struggle to operationalise AI Governance consistently across their businesses.

In Part II of this article, we address the practical implementation of AI Governance within organisations.

 

Article written by Dr. Nils Loelfing and Niranjan Nair Reghuvaran.

The authors would like to thank Luca Schmidt, research associate at Bird & Bird, for his expert support in drafting this article.
