Gen AI at work: Hong Kong Privacy Commissioner publishes further AI guidance on the use of Gen AI by employees

Written By

Wilfred Ng

Partner
China

I am a partner in our Commercial Department based in Hong Kong. As a technology, media, telecoms and data protection lawyer, I am experienced in advising on all aspects of commercial, transactional and regulatory matters in the TMT space.

Hwee Yong Neo

Senior Managing Associate
China

I am a Technology, Media and Telecoms lawyer in our Commercial department in Hong Kong.

On 31 March 2025, the Office of the Privacy Commissioner for Personal Data (“PCPD”) published a checklist (the “Checklist”) to help organisations develop internal policies or guidelines for employees using Generative Artificial Intelligence (“Gen AI”) tools at work for the purpose of compliance with the Personal Data (Privacy) Ordinance (“PDPO”).

Who should be aware?

The Checklist should be read in conjunction with the "Artificial Intelligence: Model Personal Data Protection Framework" (the “Model Framework”) published by the PCPD in June 2024. The Model Framework (as discussed in our earlier thoughts here) is addressed to organisations and facilitates their compliance with the PDPO when procuring AI solutions from third parties and processing personal data in the operation or customisation of AI systems.

Accordingly, the Checklist should be seen as a further elaboration of the PCPD’s expected best practices for data users to comply with the PDPO when procuring and implementing AI systems at an organisational level. The Checklist provides guidance on how PDPO obligations can be met through devising an internal AI strategy and policies, a measure suggested in the Model Framework for ensuring accountability and good AI governance. In practice, such measures are not only legally necessary but also integral to cultivating consumer trust and upholding ethical principles in AI deployment.

Summary of the Checklist

We summarise below key points raised by the PCPD for compiling such internal policies and guidelines:

Scope

  • Specify permitted tools (e.g. publicly available or internally developed Gen AI tools).
  • Specify permissible use cases (e.g. drafting and summarising information).
  • Specify applicability (e.g. departments, ranks, whole organisation).

Data Protection Measures

  • Establish rules on what information may be input into Gen AI tools, considering intellectual property, data protection and confidentiality perspectives.
  • Where personal data is involved, develop clear rules on anonymisation or cleansing before input.
  • Establish clear rules on use of AI-generated outputs.
  • Specify storage or deletion procedures for Gen AI-generated data, aligning with the organisation’s data retention policies.
  • Ensure alignment with existing data handling and security policies.

Lawful and Ethical Use, and Prevention of Bias

  • Prohibit unlawful or harmful usage of Gen AI.
  • Emphasise the importance of human review for AI-generated output to prevent errors, bias, or discriminatory content.
  • Require verification (e.g. proofreading, fact-checking) of AI-generated content.
  • Provide instructions on watermarking or labelling AI outputs.

Security and Access Control

  • Specify the devices (e.g. office computers, work mobile phones) on which employees can access Gen AI tools.
  • Limit usage to authorised personnel with relevant training.
  • Require robust user credentials, such as strong passwords and multifactor authentication.
  • Mandate strict security settings (e.g. disallow saving prompts) to minimise data security risks and behavioural profiling.

Incident Response Plan

  • Instruct employees to report AI incidents (e.g. data breaches or unauthorised inputs) according to the organisation’s AI Incident Response Plan.
  • Include procedures for handling abnormal or potentially unlawful AI outputs.
  • Refer to established guidelines on data breach handling and notification.


Observations

Unique data protection risks: The data protection considerations involved in deploying Gen AI tools are often challenging and necessitate a separate risk assessment exercise. For example:

  • The use of Gen AI tools will often constitute a new processing purpose, and whether this is consistent with the original purpose of collection must be assessed as part of the risk assessment process. If it constitutes a new purpose, the potential need to obtain appropriate consent from data subjects prior to any use, pursuant to Data Protection Principle 3 of the PDPO, should be considered.
  • The Model Framework reminds organisations of the need to monitor AI systems continuously because risk factors related to their use will change over time. While employees’ use of Gen AI tools should be reviewed to capture any misuse (particularly by monitoring and logging inputs to Gen AI tools), organisations should consider whether this constitutes another form of employee monitoring, which would trigger corresponding data protection obligations such as ensuring transparency of the internal audit practice.

To address these risks, organisations should carefully consider the need to conduct a Privacy Impact Assessment, which entails a critical examination of the six Data Protection Principles under the PDPO to ensure compliance throughout the entire data processing cycle when implementing AI solutions. This is particularly pertinent if the Gen AI tools involve large-scale processing or the processing of sensitive information.

Data security remains a key theme: As highlighted in both the Model Framework and the Checklist, data security risks arising from an AI incident should be addressed specifically. The Checklist highlights the need for the organisation’s AI Incident Response Plan to require employees to report any unauthorised input of personal data into Gen AI tools or abnormal output results. In practice, these risk points are not easy to identify and will require a higher degree of vigilance from employees, supported by regular training. Notably, under Data Protection Principle 4 of the PDPO, data users are required to take all practicable steps to protect personal data against unauthorised access, having regard to ‘measures taken for ensuring the integrity, prudence and competence of persons having access to the data’. This means data users must ensure that employees using Gen AI tools are equipped with the technical awareness to identify risks of a data breach, and the ability to report and trigger the appropriate next steps in accordance with their AI incident response plan.

Please get in touch with us if you would like any assistance in preparing AI policies, conducting internal training and workshops, or other AI-related regulatory and compliance matters. For more information, please visit our Generative AI page or our Bird & Bird AI Legal Services page.
