AI as a digital asset

Hong Kong: Ethical Artificial Intelligence Framework

Latest developments

Currently, there are no AI-specific laws or regulations in Hong Kong (except measures adopted to ban certain AI products involving personal safety, such as autonomous driving AI). Local regulators have, however, issued high-level guidance on AI and AI products, including the Hong Kong Monetary Authority's High-level Principles on Artificial Intelligence and its Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorised Institutions (both published in November 2019), as well as the Privacy Commissioner for Personal Data's Guidance on the Ethical Development and Use of Artificial Intelligence (published in August 2021). Most recently, the Hong Kong Government's Office of the Government Chief Information Officer (OGCIO) developed an Ethical Artificial Intelligence Framework (Ethical AI Framework), originally designed for internal use; an adapted version was published in August 2023 to assist organisations in incorporating AI and big data analytics into IT projects whilst considering the ethical implications.

Summary

The Ethical AI Framework consists of:

  • A tailored framework, providing a set of Ethical AI Principles, an AI Governance Structure, an AI Lifecycle and an AI Practice Guide for ethical use of AI and big data analytics when implementing IT projects (Tailored AI Framework); and
  • An ‘AI Application Impact Assessment’ template setting out questions to be answered by organisations across different stages of the AI Lifecycle, to assess the impact of AI applications, as well as to ensure Ethical AI Principles have been considered (AI Assessment).

The term “AI” is broadly defined in the Ethical AI Framework as a collective term for computer systems that can sense their environment, think, learn and take action in response to the data they gather, with the ultimate goal of fulfilling their design objectives. “AI Systems” is defined as a collection of interrelated technologies used to solve problems autonomously and perform tasks to achieve defined objectives without explicit guidance from a human being. “AI Applications”, in turn, refers to the collective set of applications whose actions, decisions or predictions are powered by AI models, such as IT projects with prediction functionality or model development involving training data.

Key Actions to Adopt the Ethical AI Framework: To adopt the Ethical AI Framework, the OGCIO recommends the following Key Actions: (i) considering all Ethical AI Principles throughout the project lifecycle; (ii) reviewing any existing project management governance structures to ensure alignment with the AI Governance Structure, and setting up an optional ‘Ethical AI Committee’ if necessary; and (iii) following the AI Practice Guide and completing the AI Application Impact Assessment.

Tailored AI Framework

Twelve Ethical AI Principles

The Ethical AI Principles are rules to be followed when designing and developing AI Applications. The Ethical AI Framework designates two of the twelve principles, (1) Transparency and Interpretability and (2) Reliability, Robustness and Security, as “performance principles”: fundamental principles that must be achieved to create a foundation for the execution of the other principles.

The remaining principles are categorised as “general principles”, derived from the United Nations’ Universal Declaration of Human Rights and Hong Kong ordinances.

The twelve Ethical AI Principles are as follows:

  1. Transparency and Interpretability: Organisations should be able to explain AI applications' decision-making processes to humans in an understandable manner.
  2. Reliability, Robustness, and Security: AI should operate reliably over time, robustly by providing consistent results and being capable of handling errors, and remain secure against cyber-attacks, complying with legal and industry frameworks.
  3. Fairness: AI should treat similar groups fairly, without causing harm or discrimination. This entails not using datasets that contain discriminatory biases.
  4. Diversity and Inclusion: Inclusion and diverse usership should be promoted by understanding and respecting the interests of all stakeholders impacted by the AI Application.
  5. Human Oversight: The degree of human intervention required in AI Applications’ decision-making and operations should be dictated by the severity of ethical issues.
  6. Lawfulness and Compliance: Organisations responsible for an AI Application should always act in accordance with applicable laws, regulations and relevant regulatory regimes.
  7. Data Privacy: In accordance with Data Protection Principles (DPP) 1-5 of the Personal Data (Privacy) Ordinance (PDPO), individuals have the right to have their personal data collected in a lawful and fair manner, for a specific and legitimate purpose, and not excessively. The data should only be used for its original purpose or any directly related purposes unless explicit and voluntary consent is given. The data should be accurate, securely kept, and not retained longer than necessary. Moreover, individuals should be informed of an organisation's personal data policies, the kinds of personal data held, and the main purposes for which it is used.
  8. Safety: Throughout their operational lifetime, AI Applications should not compromise people's physical safety or mental integrity.
  9. Accountability: Organisations are responsible for the moral implications of their AI Applications' use and misuse. There should be a clearly identifiable accountable party.
  10. Beneficial AI: The development of AI should promote the common good.
  11. Cooperation and Openness: A culture of multi-stakeholder open cooperation in the AI ecosystem should be fostered.
  12. Sustainability and Just Transition: AI development should ensure that mitigation strategies are in place to manage any potential societal and environmental impacts.

AI Governance Structure

“AI Governance” is defined as the practices and direction by which AI projects and applications are managed and controlled. It establishes standard structures, roles and responsibilities for the AI adoption process, measured against the practices set out in the Ethical AI Framework.

The Ethical AI Framework adopts a three lines of defence model:

  • First line of defence: the project team, responsible for AI Application development, risk evaluation, execution of actions to mitigate identified risks, and completing the AI Assessment.
  • Second line of defence: a project steering committee and project assurance team, responsible for ensuring project quality, defining acceptance criteria for AI Applications, providing independent review and approving AI Applications.
  • Third line of defence: an IT board or chief information officer, responsible for reviewing, advising on and monitoring high-risk AI Applications, optionally supported by an ethical AI committee of external advisors whose purpose is to provide advice and strengthen the organisation's existing competency in AI adoption.

AI Lifecycle

The Ethical AI Framework guides the ethical use of AI in organisations by providing a description of activities to be covered throughout all stages of the AI Lifecycle, a structure to be followed when executing AI projects, and the corresponding capabilities required to apply ethical AI.

The six stages of the AI Lifecycle are:

  1. Project strategy;
  2. Project planning;
  3. Project ecosystem;
  4. Project development;
  5. System deployment; and
  6. System operation and monitoring.

The development process of an AI application places a significant emphasis on data, as the quality of data often dictates the quality of the AI model. Data sourcing and preparation is therefore a continuous exercise, as AI models can often benefit from more or better data for iterative model training during the development process. As such, the AI Lifecycle often involves a continual feedback loop between the stages of project development, system deployment, and system operation and monitoring for iterative improvements, differentiating it from a traditional software development lifecycle.
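As a purely illustrative aid, the following Python sketch shows the kind of feedback loop described above, in which monitoring results feed back into further model development. All names and figures here are hypothetical and do not come from the Ethical AI Framework itself.

```python
# Minimal, hypothetical sketch of the continual feedback loop described
# above (project development -> system deployment -> operation and
# monitoring). Illustrative only; not part of the Ethical AI Framework.

from dataclasses import dataclass


@dataclass
class Model:
    version: int = 0
    training_rows: int = 0


def develop(model: Model, new_rows: int) -> Model:
    """Project development: retrain, folding new data into the model."""
    return Model(version=model.version + 1,
                 training_rows=model.training_rows + new_rows)


def deploy(model: Model) -> None:
    """System deployment: release the current model version."""
    print(f"Deployed model v{model.version} "
          f"(trained on {model.training_rows} rows)")


def monitor(model: Model) -> int:
    """Operation and monitoring: in practice this would surface drift
    metrics, error reports and user feedback; here it simply returns
    a batch of newly collected production data."""
    return 1_000


model = Model()
new_rows = 1_000                     # initial training data
for cycle in range(3):               # the continual feedback loop
    model = develop(model, new_rows)
    deploy(model)
    new_rows = monitor(model)        # feeds back into the next iteration
```

Each pass through the loop produces a new model version trained on more data, which is the iterative improvement cycle that distinguishes the AI Lifecycle from a traditional, linear software development lifecycle.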

AI Practice Guide

The AI Practice Guide provides practical guidelines for organisations to observe and apply, in various practice areas corresponding to stages in the AI Lifecycle, when incorporating AI in IT projects to ensure ethical adoption. These practice areas are assessed as part of the AI Application Impact Assessment.

AI Assessment

The AI Assessment enables organisations to assess, identify, analyse, and evaluate the benefits, impact, and risks of AI applications over a set of practical considerations for implementing ethical AI. This ensures organisations are meeting the intent of Ethical AI Principles and helps determine the appropriate mitigation measures required to control any negative impacts within an acceptable level.

The AI Assessment consists of the following components:

  • AI Application Impact Assessment, which should be conducted at different stages of the AI Lifecycle. It introduces a systematic thinking process for organisations to understand the different aspects of individual applications and their associated benefits and risks; it also highlights the need for additional governance activities and identifies follow-up actions, ensuring the measures and controls necessary for implementing ethical AI are in place.
  • Risk Gating Criteria, which are a set of questions used to identify high-risk AI applications (see the sketch after this list). These questions should be completed at the beginning of a proposed AI project, or whenever the conditions of the AI application change. AI applications considered high-risk subsequently require review and approval by the IT board/CIO.
  • AI Application Impact Assessment Questions, which ensure that the impact of the AI Application is identified and managed across the AI Lifecycle stages and that related Ethical AI Principles have been considered.
  • Impact Considerations, which are a set of questions about beneficial and negative impacts on stakeholders included in the AI Application Impact Assessment template. This guides the evaluation of the need for further mitigating actions from the organisation.
    In terms of frequency, the Ethical AI Framework recommends that the AI Application Impact Assessment be conducted regularly (e.g. annually or when major changes occur) both as AI projects progress, as well as during the operation of the AI application.
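By way of illustration only, the Python sketch below shows how a gating step of this kind might work in practice. The questions shown are hypothetical placeholders, not the actual Risk Gating Criteria published in the Ethical AI Framework.

```python
# Minimal, hypothetical sketch of risk gating. The questions below are
# illustrative placeholders only and are NOT the actual Risk Gating
# Criteria set out in the Ethical AI Framework.

GATING_QUESTIONS = [
    "Does the application make fully automated decisions about individuals?",
    "Could a malfunction plausibly affect anyone's physical safety?",
    "Does the application process sensitive personal data?",
]


def is_high_risk(answers: dict[str, bool]) -> bool:
    """Treat the application as high-risk if any gating question is
    answered 'yes'; per the Framework, high-risk applications require
    review and approval by the IT board/CIO (the third line of defence)."""
    return any(answers.get(question, False) for question in GATING_QUESTIONS)


answers = {
    GATING_QUESTIONS[0]: True,   # e.g. automated credit decisions
    GATING_QUESTIONS[1]: False,
    GATING_QUESTIONS[2]: False,
}

if is_high_risk(answers):
    print("High-risk: route to the IT board/CIO for review and approval")
else:
    print("Standard review via the project steering committee")
```

In this sketch a single affirmative answer is enough to escalate the application to the third line of defence, which reflects the Framework's intent that gating errs on the side of additional scrutiny.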

How could it be relevant for you?

Organisations may make use of the Ethical AI Framework when adopting AI in their IT projects or services. The Ethical AI Framework is designed not only to serve as a reference or guide for the project team during the development and maintenance of an AI application, but also to provide recommended governance structures that enable organisations to demonstrate accountability and build public trust in their adoption of AI by evaluating its impact, safeguarding the public interest and facilitating innovation.

Next Steps

The AI landscape in Hong Kong is rapidly evolving, and rapid change may bring ethical concerns with it. With the Ethical AI Framework in mind, organisations should remain alert to potential changes in the regulatory environment. Currently, AI regulation in Hong Kong derives primarily from existing rules on intellectual property rights (in relation to AI systems and AI-generated IP), data protection and privacy. While no AI-specific legislation has yet been proposed, it is anticipated that the existing regulatory framework could be readily adapted to address emerging challenges. The Government plans to establish a special task force to recommend the most effective approach to dealing with the revolutionary impact of Large Language Models such as ChatGPT, with future legislation being a possibility.

*Information is accurate up to 27 November 2023
