AI Governance: Essential Insights for Organisations: Part II – "Practical Implementation of AI Governance"

Written By

Dr. Nils Lölfing

Counsel
Germany

I am a counsel in our Technology & Communications Sector Group. I provide pragmatic and solution-driven advice to our clients on all issues around data and information technology law, with a strong focus on and experience with AI and machine learning projects.

This is Part II of the two-part article on AI Governance, which has rapidly become a top priority for organisations. While Part I explored the meaning, challenges, and trends in AI Governance, Part II offers essential insights into the practical implementation and best practices for AI Governance in 2025.

Part II – Practical Implementation of AI Governance

Traditional governance frameworks 

While traditional governance frameworks share some similarities with AI-specific ones, such as the need for risk assessments and supplier contracting processes seen in GDPR compliance, they fall short in addressing the unique challenges posed by AI technologies. Traditional frameworks often lack the agility and specificity required to manage novel AI risks, particularly those arising from complex, multi-agent AI systems. These systems can introduce unpredictable behaviours and interactions that traditional governance models are not equipped to handle, as discussed in Part I. As a result, existing processes, like those under GDPR, can partly be leveraged but require significant updates to effectively mitigate AI-specific risks.

Effective AI Governance must be comprehensive and proactive, integrating elements from global regulations and establishing best practices that are both practical and specifically targeted at the unique risks associated with AI applications. This involves not only updating existing governance structures but also developing new frameworks that can anticipate and respond to the dynamic nature of AI technologies. Companies must adopt a forward-thinking approach, incorporating continuous monitoring, ethical considerations, and close cross-disciplinary collaboration to ensure that AI systems are developed and deployed responsibly. AI Governance involves not only legal expertise but also technical skills: legal professionals identify compliance requirements, while engineers and data scientists implement these through practical technical measures.

However, this is easier said than done. We have discussed the challenges faced by organisations when integrating such dynamic frameworks into their existing governance structures in Part I. The primary objective of this Part II is to provide actionable steps, point to possible avenues to borrow ideas from, and help define the approach to take when designing AI Governance frameworks.

Specific actions to implement AI Governance

  1. Take inventory: Understand where and by whom AI systems are or could be used within the organisation. Such understanding forms the foundation of tailored AI Governance policies and procedures that address the specific needs and challenges of AI deployment within an organisation, and it should be the first priority for organisations.
    • Mapping: It is essential to ensure that the systems in use are fully understood in all respects, including each system's intended purpose, its effectiveness and importance within its function, its cost (including maintenance), stakeholder involvement, and stakeholder perception. This should cover all business functions, both employee and client interactions, and third-party services or platforms that integrate AI technology.
    • Initial prioritisation: The map of the AI landscape allows organisations to gain an initial view of potential risk factors and assists with prioritisation for resource allocation (an illustrative register sketch follows the policy list below). This approach may reveal that an AI system used, for instance, to aid HR in employee management requires greater attention than an LLM used to send product updates to B2B clients.
  2. Establish clear policies and guidelines: To effectively govern the use of AI within an organisation, it is essential to develop comprehensive policies and guidelines that articulate ethical principles, compliance requirements, and operational standards. These policies should be tailored to the priorities of the organisation and must provide a framework for the risk assessments that follow later. The Dutch AI Impact Assessment Version 2.0 could serve as a foundational reference but needs to be adapted for practical implementation.

Key tweaks include: 

  • Ethical principles and priorities: Define ethical principles and priorities to guide AI development and deployment. Considering stakeholder views is a must here, and hence these priorities tend to differ between organisations. Organisations should establish them early on to prevent the integration of systems that fall below acceptable thresholds into business operations.
    • For example, a marketing company would likely apply higher ethical standards to the use of generative AI outputs in its deliverables, because any risks involved are public-facing and could significantly impact revenue and reputation. In contrast, an access monitoring system used within company premises might not receive the same level of concern. The same could not be said for a law firm, however, which deals with highly confidential information held on its premises and servers, making access control a top priority. The key here is assessing possible damage to business functions, reputation, regulatory responsibilities, and revenue.
    • AI procurement policies: Establish policies for procuring AI from third parties, incorporating vetting processes and supplier questionnaires. These should address data sources, privacy protection by design, accuracy, reliability, and bias. The UK Government has also provided guidelines for such procurement, with useful advice such as avoiding black-box algorithms and vendor lock-in, assessing the vendor model’s data quality and bias against established metrics, and thoroughly evaluating knowledge transfer from the vendor. Make sure that the system being procured complies with regulations (in the EU, there should be particular emphasis on not procuring prohibited AI systems, in line with the EC’s Guidelines). For customers, it is important to include data quality and bias testing provisions as well, to ensure fairness and reduce liability. Third-party datasets should be vetted, and even then, reduced data quality may affect accuracy and fairness. Do not forget that fairness is not an abstract philosophical notion but a legal concept that organisations must take active steps to ensure throughout the entire lifecycle of the system.
  • Contracting guidelines: The significance of AI contracting has increased appreciably; it aims to resolve common issues such as liability for outputs and consistent model performance.
    • Depending on your level of involvement in buying or providing AI solutions, consider developing contract templates and policies or guidelines for contracting, whether you are acting as a supplier or a customer.
    • To make negotiations more efficient, establish playbooks that could cover key aspects such as prior consent for AI feature integration, data ownership and usage rights, input/output regulations, model training and improvement considerations, and other measures to ensure responsible and ethical AI use. This includes promoting transparency, mitigating bias, and ensuring fairness and accountability in all AI applications. Remember that none of these are one-offs and that contractual provisions should reflect ongoing commitment.
    • As a general observation, whilst organisations are beginning to proactively think about having their own set of terms (both as customers, in receipt of AI services, and providers of AI services), it is still an area that remains very much in its infancy, and as such, market trends are still evolving. Where necessary, terms and playbooks should therefore be updated in line with these developments.
  • Risk management policies: Specify internal risk standards and management policies, detailing types and levels of risks and the requirements for addressing them. This includes establishing approaches, personnel, and documentation to periodically identify and monitor existing, unanticipated, and emergent risks based on intended and actual performance. It is key to establish human oversight policies here, making sure that adequate human-machine interface tools are available and that the personnel involved are appropriately trained to notice, flag and address any relevant issues.
  • Developing or making bespoke: Providers should establish policies when developing AI models or making a third-party model bespoke:
    • Policies for compliance-by-design approaches, which are already recognised in the privacy sector. Data quality, compliant testing and validation criteria, and performance monitoring during development are a given. These should also include policies for guardrails as per the risk criteria identified: set out what types of guardrails are needed for different use cases and risk levels. The goal is to leave as few open questions as possible by the time actual development begins.
    • Set out when to engage legal counsel in the development of AI. Legal counsel are often forced to work around what has already been created and laid down, severely affecting regulatory compliance. Early engagement is also important to foster an interdisciplinary approach between law, engineering and data science.
    • Set out policies for acceptable use terms for deployers; different regulations (like the EU AI Act) may impose obligations on providers to monitor how the AI system is being used and whether the system breaches norms post-deployment. Deployers must ensure there are no feedback loops that affect the fairness of the system and create biases; if there are, active engagement with the provider may be necessary to address them.
  • Data governance and quality: Create new, or adapt existing, data governance and data quality policies to ensure that diversity and representativeness are not compromised. Set standards for model training, particularly in defining the failure modes of the system, which play a part in the risk assessment.
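
To make the inventory and prioritisation steps concrete, the sketch below shows what a minimal internal register of AI systems could look like. It is purely illustrative: the `AISystemRecord` structure, its field names and the triage rules are hypothetical assumptions, not drawn from any standard or framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class Priority(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class AISystemRecord:
    """One inventory entry per AI system (all fields are illustrative)."""
    name: str
    business_function: str                  # e.g. "HR", "Marketing"
    intended_purpose: str
    vendor: str | None = None               # None for in-house systems
    stakeholders: list[str] = field(default_factory=list)
    processes_personal_data: bool = False
    customer_facing: bool = False
    priority: Priority = Priority.LOW


def prioritise(record: AISystemRecord) -> Priority:
    """Naive first-pass triage: systems that touch people or the public
    attract more governance attention. Real criteria are organisation-specific."""
    if record.processes_personal_data:
        return Priority.HIGH
    if record.customer_facing:
        return Priority.MEDIUM
    return Priority.LOW


# Example: an HR tool outranks a B2B product-update generator.
hr_tool = AISystemRecord(
    name="CV screening assistant",
    business_function="HR",
    intended_purpose="Shortlist job applicants",
    processes_personal_data=True,
)
hr_tool.priority = prioritise(hr_tool)  # Priority.HIGH
```

Even a register this simple surfaces which systems deserve a full risk assessment first.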

3. Conduct risk assessments: With mapping, prioritisation and policy guardrails in place, the next step is measurement: risk assessments. NIST’s AI RMF Playbook and the Future of Privacy Forum’s AI Governance Behind the Scenes provide a comprehensive list of suggestions here which can be tweaked for different industries. Both providers and deployers should consider conducting these.

  • Risk assessments are typically conducted through standard risk matrices, multiplying the likelihood of a risk by the severity of its impact and placing the resulting score on a risk scale. This needs to be done for each risk factor associated with an AI system, covering potential interactions between AI projects and angles such as privacy, security, and bias. This is the traditional RAG (red-amber-green) method, illustrated in the sketch after this list; more advanced analyses could entail simulations or even econometric approaches.
  • While there is no uniform path for AI risk assessments, organisations typically conduct their AI risk assessment in four steps: (1) Initiate the assessment; (2) gather model and system information, with possible risk factors and inter-system interactions; (3) complete the risk assessment; and (4) identify and test risk management strategies. 
    • Initiate risk assessments: Begin risk assessments by considering factors such as regulatory requirements, the system's development stage, and organisational risk considerations. Conduct these assessments at various stages of an AI system's life cycle. Integrate continuous risk assessments into broader organisational risk management strategies to ensure compliance and effective AI Governance. 
    • Gather comprehensive information: Collect detailed information about the AI model or system, including training data, system capabilities, and potential use cases. This helps in understanding the system's functionality and creation process. Overcome challenges in obtaining this information, especially from third-party systems, by ensuring at least a baseline of relevant data is collected. This includes details about the supporting platform, tools, team, system capabilities and limitations, training data, and intended use cases. Data quality is paramount here, especially to understand any biases exhibited by the system. To avoid ambiguity, deployers must make sure that there are contractual provisions in place for providers to furnish this information. 
    • Anticipate and analyse risks: Risk analysis should follow a broad approach, utilising existing frameworks for guidance while considering AI-specific challenges along with sector-specific risk factors.
      • The MIT AI Risk Repository is a good resource for a landscape of possible risks that could materialise based on entity, intent and timing; attempt to identify scenarios applicable to your organisation.
      • Analysis must be done for each risk factor and across different systems to account for interactions. For higher-risk scenarios, red teaming with adversarial testing under stress conditions is an effective way to understand failure modes. Implement risk-benefit matrices to categorise AI use cases and establish dedicated internal teams to monitor AI system risks. Providers must also account for any possible evolution that the system might undergo, take technical measures against such unintended change, and duly warn deployers.
    • Evaluate and develop management strategies: Thoroughly evaluate the information gathered during the risk assessment process to develop effective management strategies. Align these strategies with specific risks, involve stakeholders, and continuously test and refine approaches to ensure risks remain within acceptable levels. Also consider whether non-AI alternatives could serve the intended purpose, and check their viability.
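
As a concrete reading of the RAG method described above, the sketch below multiplies likelihood by severity on a standard 5x5 matrix. The scales and band thresholds are illustrative assumptions; each organisation must calibrate its own.

```python
def rag_rating(likelihood: int, severity: int) -> str:
    """Place a risk on a red-amber-green scale using a 5x5 matrix.
    The band thresholds below are illustrative, not prescriptive."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be on a 1-5 scale")
    score = likelihood * severity          # classic likelihood x impact
    if score >= 15:
        return "red"                       # mitigate before deployment
    if score >= 8:
        return "amber"                     # mitigate and monitor
    return "green"                         # accept and review periodically


# Example: a bias risk judged likely (4) and high-impact (4) scores 16 -> red.
print(rag_rating(4, 4))
```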

4. Assign responsibilities: Consider creating an AI Governance Committee, or allocate responsibilities to existing governance committees, such as privacy teams, to leverage their expertise effectively.

  • Integrate AI Governance measures: Incorporate AI Governance measures into those existing organisational structures to better address AI-specific challenges, including data protection, broader regulatory compliance, and ethical considerations.
  • Build an AI sub-team: Consider creating a dedicated AI team within the main governance structure to tackle the emerging and complex challenges associated with AI system deployment in organisations. Deciding whether to develop these distinct resources depends on your level of involvement with AI: it is generally more suitable for tech companies with a focus on AI development than for those simply deploying common AI tools. Personnel involved in human oversight should be members of, or have links to, this team, making sure that they are able to communicate any discrepancies or unexpected behaviours to the team.
  • Specialised knowledge: Recognise that traditional governance teams may lack the specialised knowledge needed to effectively oversee AI systems. An AI-specific governance team should include professionals with expertise in AI-related regulatory, ethical, and operational considerations.
  • Involve stakeholders: Engage various stakeholders, including legal experts, ethicists, and community representatives, in AI Governance discussions to ensure diverse perspectives and comprehensive oversight. While this may seem like going too far, we must remember that AI usage is hardly ever unidimensional and tends to involve multiple disciplines. This also goes a long way in ensuring accountability.

5. Implement, monitor and adapt: Implement the risk mitigation strategies developed and monitor AI systems for performance, fairness, and compliance. Conduct regular audits to identify and address any issues or deviations from governance policies. Adapt the system or the policies as the system changes or evolves.

  • Implement: Implement the risk mitigation strategies identified for each AI system in line with resources, priorities and risk tolerance levels. Test and verify that the strategies are as effective as planned.
  • Monitor: Create processes and mechanisms for monitoring the system throughout its lifecycle, addressing system performance, functionality and compliance with standards and legal norms (a simple automated check is sketched after this list). Human oversight and agency are crucial here, and they must not be mechanical; the personnel must be able to proactively identify any emergent risks or deviations, and must be qualified enough to understand the system(s) at play and the impact of any decisions made by the AI.
  • Adapt: Systematic and periodic reviews must be scheduled, with the AI Governance Committee or team taking a lead role alongside data scientists and engineers. If there are any emergent risks or deviations, the surrounding strategies must be adapted to sufficiently address them, and any changes must be clearly documented.
  • Traditional frameworks: Traditional governance frameworks, such as those for privacy, generally do not require the same level of continuous monitoring or technical detail as AI Governance. However, as AI systems are highly dynamic and can behave unpredictably over time, it is essential to establish ongoing monitoring and adaptation mechanisms. Regular oversight helps identify and address emerging issues, reducing the risk of unintended consequences. By proactively adjusting strategies, organisations can better manage AI-related challenges and ensure their systems operate responsibly as well as effectively. 
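
By way of illustration, a monitoring routine can automatically flag performance degradation for human review. The sketch below tracks a single accuracy metric against an agreed tolerance; both the metric and the threshold are hypothetical, and real monitoring would cover fairness and input-drift signals as well.

```python
def check_for_drift(baseline_accuracy: float,
                    current_accuracy: float,
                    tolerance: float = 0.05) -> bool:
    """Return True (and alert) if performance has degraded beyond the
    agreed tolerance. Accuracy alone is a simplification; production
    monitoring should also track fairness metrics and input drift."""
    drifted = (baseline_accuracy - current_accuracy) > tolerance
    if drifted:
        # In practice, raise a ticket for the AI Governance team
        # rather than printing to the console.
        print(f"Drift detected: accuracy {baseline_accuracy:.2f} "
              f"-> {current_accuracy:.2f}; escalate for human review")
    return drifted


# Example: quarterly re-evaluation shows accuracy falling from 0.91 to 0.83.
check_for_drift(0.91, 0.83)  # True -> triggers escalation
```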

6. Transparency: Transparency plays a crucial role in the development and deployment of AI systems in organisations: it makes clear how a system is built, while also ensuring a deeper understanding of its decision-making and data sources to mitigate risk. It is not a one-off activity but a continuing, iterative process and a tool to demonstrate accountability, both to regulators and to stakeholders.

  • Disclosure: Depending on the jurisdiction, disclosure requirements may vary; the following, however, are ballpark best practices.
    • AI usage should be disclosed to the end-user, regardless of who the individual is. Attempt to fulfil this even if the usage is “obvious” to the individual; providing this additional disclosure can help build trust with end-users.
    • AI-generated output such as audio, images, video, or text should be marked as such. The marking does not have to hinder the purpose of the output; organisations should take the necessary steps to ensure that individuals do not mistake an AI-generated output for a human-generated one.
  • Explainability: Organisations should be capable of explaining the rationale behind a decision made by an AI system to the end-user to whom the decision pertains. This is more of an imperative for the deployer; for a provider or developer, it is ideal to include obligations to provide explanations in their vendor contracts. Disclosure is the “what” of the equation, while explainability is the “how”. This should be achieved through explainable models, post-hoc explanations of decisions, and audit logs. Transparency-by-design helps in this process.
  • Transparency-by-design: Implementing certain approaches during development of the model significantly augments explainability and the understanding of hallucinations. This is a shift away from model-agnostic approaches towards developing AI with explainability in mind from the outset. These methods include, but are not limited to, setting stringent decision-making criteria for the model, clearly mapping the link between the data sources and the inferences made by the model, and critically reflecting on this mapping. The links should then be tested against established metrics to ensure consistency. Additionally, using inherently explainable approaches where possible decreases the burden on the provider. This is primarily aimed at providers, as deployers would have limited options in the matter.
  • Developing AI: When developing or making an AI model bespoke, make sure that transparency is a priority right from the development phase and throughout the system’s lifecycle. Automated logging capabilities should record, categorise and document significant decisions and incidents, sharing them with relevant personnel who are trained and capable of understanding them (a minimal sketch follows below). These should go hand in hand with the monitoring and adaptation system established.
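
The automated logging described above could, in its simplest form, look like the sketch below. The `log_decision` helper and its field names are hypothetical; align the schema with your own governance policies and data protection obligations.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")


def log_decision(system: str, inputs: dict, output: str, rationale: str) -> None:
    """Write one AI decision as a structured, reviewable audit entry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "inputs": inputs,        # consider redacting personal data here
        "output": output,
        "rationale": rationale,  # model explanation or post-hoc summary
    }
    audit_log.info(json.dumps(entry))


# Example: a pre-screening system defers a borderline case to a human.
log_decision(
    system="loan-pre-screening",
    inputs={"income_band": "B", "region": "DE"},
    output="refer_to_human",
    rationale="model confidence below agreed threshold",
)
```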

7. Provide training and awareness programs: Educate employees on AI Governance principles, ethical considerations, and compliance requirements to foster a culture of responsibility. Without a well-informed workforce, AI Governance processes are unlikely to function effectively in practice. Employees who lack the necessary knowledge may inadvertently mishandle AI systems, leading to potential ethical breaches, compliance issues, or operational failures. Comprehensive training ensures that employees understand the importance of responsible AI usage and are equipped to implement governance frameworks effectively.

  • Contextual training: Train employees and managers on the correct and responsible use of AI systems and tools, as well as on the organisation's AI Governance framework, within the context of their duties and responsibilities. A couple of things to keep in mind:
    • Bite-sized: Do not make the training text-heavy. Instead, convert it into bite-sized pieces of information to make sure that the objectives are reached without overwhelming the workforce.
    • Visual aids and cues: Use interactive platforms and visual aids to increase immersion and information retention.
    • Tailoring: Training programmes should be tailored to different roles and levels. The content and frequency of training should be aligned with employees' specific responsibilities and individual needs, as well as their role within the AI Governance framework.
  • AI literacy: More than just a compliance obligation under Article 4 of the EU AI Act, AI literacy is the goal that any organisation should strive to achieve when implementing AI systems within its functions. It goes beyond contextual training related to employee roles; it encompasses awareness, judicious use and a holistic understanding of AI and its risks. The EU AI Office has developed a living repository of ongoing practices among AI Pact pledgers, showcasing examples of AI literacy initiatives that provide valuable guidance. This non-exhaustive list will be updated regularly with new practices.

Overarching Principles 

While following the steps above would ensure that your organisation has a solid foundation regarding AI Governance, there are certain things which are ever-present and perhaps more important than any one step.

Accountability: It is ever-present and almost impossible to avoid when it comes to effective governance. Each and every step taken by your organisation must be documented, and the AI Governance Committee or team must review the documentation periodically. Make sure that this is not mechanical; the point is to be proactive regarding risks, so the Committee must use the information presented to account for future uses of AI and correct any deviations. A proper AI management system can go a long way towards keeping all of this in an easily accessible format. Documentation is the most important tool for showing accountability to regulators and stakeholders. But mere documentation is not enough: the organisation must demonstrate that it made use of the information available to it to proactively respond to threats and problems posed by the use of AI.

Fairness: Organisations must ensure that AI systems do not produce discriminatory outcomes and that their development and deployment respect human rights and ethical standards. This must be ensured throughout the lifecycle of the system. No model is bias-free, but the point is to remedy known biases and establish filters to catch any biases before they create discriminatory outcomes (a simple quantitative filter is sketched below). One of the best tools available to ensure fairness is non-mechanical, meaningful human oversight that adequately counters automation bias. The AI Governance Committee or team should periodically review the systems in question and make the adaptations necessary to ensure fairness. This is initially more relevant to parties providing AI systems directly to B2B or B2C end-users, as they face the greatest risk exposure due to their direct interaction with these users. However, any backlash could quickly extend to the original developer of the underlying AI model. If systems create fairness issues, organisations may face backlash from stakeholders and the media, and it could lead to regulatory investigations and legal conundrums.
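
One widely used quantitative filter of the kind mentioned above is the disparate impact ratio; the sketch below is a simplified illustration, and the 0.8 threshold is a US-derived rule of thumb rather than a legal standard in most jurisdictions.

```python
def disparate_impact_ratio(rate_group_a: float, rate_group_b: float) -> float:
    """Ratio of the lower selection rate to the higher one across two
    groups. Ratios below ~0.8 are a common heuristic for potential
    adverse impact and should trigger meaningful human review."""
    low, high = sorted([rate_group_a, rate_group_b])
    if high == 0:
        raise ValueError("at least one selection rate must be positive")
    return low / high


# Example: an AI screen passes 30% of group A but 50% of group B.
ratio = disparate_impact_ratio(0.30, 0.50)
print(f"{ratio:.2f}")  # 0.60 -> below 0.8, flag for review
```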

Avoid groupthink: Groupthink is the enemy of any well-established system and should be avoided at all costs when it comes to dynamic settings like AI. Make sure that there is a diverse set of eyes looking at problems from an interdisciplinary angle. In particular, legal and ethical requirements must be translated by AI engineers, data scientists, and AI Governance experts into actionable technical measures. If necessary, hire external experts who can red team the system to find risks and other problems which may have slipped through the cracks and document their output.

Different organisation, different AI Governance: No two organisations are the same, and no two organisations have the same priorities. AI Governance should be tailored to your organisation, and the framework must address the unique risks it faces. Learn from other companies in the industry, but adapt their frameworks to your own context.

Unique risks and continuous monitoring: The risks posed by AI systems are unique to each organisation, making them difficult to detect. However, comprehensive risk repositories (as mentioned in 'Anticipate and analyse risks') can help identify a general taxonomy. This allows organisations to find similar or analogous elements and apply established policies, which may require adaptation. A key component of this approach is continuous monitoring based on a broad risk landscape. Without it, risk factors may go unidentified and become compounded. Continuous learning, a hallmark of AI systems, can create negative feedback loops, and few mitigation strategies exist that do not rely on continuous monitoring.

General Outlook

In conclusion, the integration of AI into organisational frameworks necessitates a shift from traditional governance models to more dynamic, AI-specific approaches. While existing structures provide a foundation, they require significant adaptation to address the unique challenges posed by AI technologies. Effective AI Governance demands a proactive, comprehensive strategy that incorporates global regulations, ethical considerations, and cross-disciplinary collaboration. By taking inventory of AI systems, establishing clear policies, conducting thorough risk assessments, and fostering transparency and accountability, organisations can navigate the complex AI landscape responsibly. Continuous monitoring and adaptation are crucial to managing AI-related risks and ensuring systems operate effectively and ethically. Ultimately, AI Governance must be tailored to the specific needs and priorities of each organisation, considering its position in the AI value chain and its specific applications, and emphasising fairness, accountability, and a commitment to ongoing improvement.

Regulatory scrutiny is expected to intensify due to growing concerns over the technology, prompting organisations to act swiftly to ensure compliance and maintain public trust.

 

Article written by Dr. Nils Loelfing and Niranjan Nair Reghuvaran.

*The authors would like to thank Luca Schmidt, research associate at Bird & Bird, for his expert support in drafting this article.
