On 31 March 2025, the Office of the Privacy Commissioner for Personal Data (“PCPD”) published a checklist (the “Checklist”) to help organisations develop internal policies or guidelines on employees' use of Generative Artificial Intelligence (“Gen AI”) tools at work, with a view to compliance with the Personal Data (Privacy) Ordinance (“PDPO”).
The Checklist should be read in conjunction with the "Artificial Intelligence: Model Personal Data Protection Framework" (the “Model Framework”) published by the PCPD in June 2024. The Model Framework (as discussed in our earlier thoughts here) is addressed to organisations to facilitate their compliance with the PDPO when procuring AI solutions from third parties and when processing personal data in the operation or customisation of AI systems.
Accordingly, the Checklist should be seen as a further elaboration of the PCPD’s expected best practices for data users to comply with the PDPO when procuring and implementing AI systems at an organisational level. The Checklist provides guidance on how PDPO obligations can be met through the devising of an internal AI strategy and policies, a measure suggested in the Model Framework for ensuring accountability and good AI governance. In practice, such measures are not only legally necessary but also integral to cultivating consumer trust and upholding ethical principles in AI deployment.
We summarise below key points raised by the PCPD for compiling such internal policies and guidelines:
| Subject | Recommendations |
| --- | --- |
| Scope | |
| Data Protection Measures | |
| Lawful and Ethical Use, and Prevention of Bias | |
| Security and Access Control | |
| Incident Response Plan | |
Unique data protection risks: The data protection considerations involved in deploying Gen AI tools are often challenging and necessitate a separate risk assessment exercise. For example:
To address these risks, organisations should carefully consider conducting a Privacy Impact Assessment, which entails a critical examination of the six (6) Data Protection Principles under the PDPO to ensure compliance throughout the entire data processing cycle when implementing AI solutions. This is particularly pertinent where the Gen AI tools involve large-scale processing or the processing of sensitive information.
Data security remains a key theme: As highlighted in both the Model Framework and the Checklist, data security risks arising from an AI incident should be addressed specifically. The Checklist highlights the need for an organisation’s AI Incident Response Plan to require employees to report any unauthorised input of personal data into Gen AI tools or any abnormal output results. In practice, these risk points are not easy to identify, and spotting them will require a higher degree of vigilance from employees, supported by regular training. Notably, under Data Protection Principle 4 of the PDPO, data users are required to take all practicable steps to ensure that personal data are protected against unauthorised or accidental access, having regard to (among other things) the ‘measures taken for ensuring the integrity, prudence and competence of persons having access to the data’. This means data users must ensure that employees using Gen AI tools are equipped with the relevant technical awareness to identify the risks of a data breach, and with the ability to report and trigger the appropriate next steps in accordance with the AI Incident Response Plan.
Please get in touch with us if you would like any assistance in preparing AI policies, conducting internal training sessions and workshops, or handling other AI-related regulatory and compliance matters. For more information, please visit our Generative AI page or our Bird & Bird AI Legal Services page.