The Proposed Guidelines focus on clarifying how the Personal Data Protection Act 2012 (“PDPA”) applies to the collection and use of personal data by organisations to develop and deploy systems that embed machine learning (ML) models (“AI Systems”), which are used to make decisions autonomously or to assist a human decision-maker through recommendations and predictions.
Broadly, the Proposed Guidelines explain when it may be appropriate for organisations to rely on certain exceptions under the PDPA when using personal data to develop AI Systems, and set out recommended data handling and accountability measures when deploying AI Systems.
Background:
On 18 July 2023, the Personal Data Protection Commission (“PDPC”) published the Proposed Guidelines with a view to providing greater clarity on the situations in which the design and/or deployment of AI Systems involves the use of personal data.
The Proposed Guidelines are Singapore’s latest addition to its toolbox of AI frameworks and best-practice guides, which includes the Model AI Governance Framework and the AI Verify framework. They also follow a rise in AI regulation across APAC and worldwide, such as China’s 2021 regulation on recommendation algorithms, its 2022 rules on deep synthesis and 2023 draft rules on generative AI, Thailand’s draft Royal Decree on Artificial Intelligence System Service Business, and Vietnam’s draft National Standard on Artificial Intelligence and Big Data.
Who is impacted?
Any organisation which intends to collect and/or use personal data to develop and deploy AI Systems will be impacted by the Proposed Guidelines when they come into effect.
What do the Proposed Guidelines say?
The Proposed Guidelines cover five main areas:
Business Improvement Exception and Research Exception. The Proposed Guidelines give some illustrations of when organisations need not obtain consent to collect or use personal data in AI Systems. These include where the collection or use of personal data is for business improvement purposes, per Part 5 of the First Schedule of the PDPA (“the Business Improvement Exception”), or for research purposes, per Part 2 of the Second Schedule of the PDPA (“the Research Exception”).
The Business Improvement Exception may be relevant when an organisation collects or uses personal data to develop or enhance AI Systems with the aim of improving operational efficiency by supporting decision-making, or of offering new or more personalised products and/or services. Examples include developing or enhancing recommendation engines in social media services that offer targeted content to users based on their browsing history, or job assignment systems that automatically assign jobs to platform workers.
The Research Exception may be relevant where an organisation conducts commercial research to advance science and engineering without a product development aim.
Data Protection Considerations. The Proposed Guidelines advise organisations to implement the appropriate legal, technical and process controls when handling personal data to develop or enhance AI Systems. For example, organisations should only use personal data containing the attributes required to train and improve the AI System or ML model, and only use the volume of personal data necessary to train the AI System or ML model, based on relevant time periods and any other relevant filters. Organisations should also anonymise datasets as far as possible instead of using personal data.
Consent and Notification. Where organisations deploy AI Systems to provide recommendations, predictions or decisions based on individuals’ personal data, the Proposed Guidelines reiterate that they must comply with the consent and notification obligations under the PDPA unless relevant exceptions apply. The Proposed Guidelines also outline what information individuals should be provided with and how that information should be provided.
Accountability. The Proposed Guidelines recommend that organisations deploying AI Systems provide individuals with information on the responsible use of their personal data. On request from individuals, organisations should provide information about their data-handling policies and practices that a reasonable person would consider appropriate in the circumstances. For example, where personal data is used in deployed AI Systems, the organisation’s written policies may include the “behind-the-scenes measures” implemented to ensure that the use of personal data is safe and trustworthy.
Procurement of AI Systems. For organisations that engage service providers to develop and deploy bespoke or fully customisable AI Systems, the Proposed Guidelines set out the practices those service providers should adopt.
What’s next?
The PDPC is currently inviting comments from the public on the Proposed Guidelines. The public consultation period ends on 31 August 2023.
In line with the focus on the safe use of technology, the government has also recently published the Code of Practice for Online Safety in Singapore, which came into effect on 18 July 2023 and imposes new obligations on designated social media services to curb the spread of harmful content on their services.
This article is produced by our Singapore office, Bird & Bird ATMD LLP. It does not constitute legal advice and is intended to provide general information only. Information in this article is accurate as of 19 July 2023.