Authors: Nick Boyle, Evelyn Park, Tia Khan
Introduction
On 21 October 2024, the Office of the Australian Information Commissioner (OAIC) published two new guidelines on privacy and artificial intelligence (AI), being:
Guidance on privacy and the use of commercially available AI products (Guidance 1) – which explains organisations’ privacy obligations when using commercially available AI products that handle personal information, such as chatbots, content-generation tools, and productivity assistants that augment writing, coding, note-taking and transcription.
Guidance on privacy and developing and training generative AI models (Guidance 2) – which targets regulated entities using personal information to train or fine-tune generative AI models.
These guidelines highlight the OAIC’s key privacy considerations and reinforce the requirements of the Australian Privacy Principles (APPs) that businesses should have in mind, from a privacy perspective, when selecting and using AI products.[1] In this article, we summarise the key takeaways from each guidance and what businesses need to do to ensure compliance.
Guidance 1
Guidance 1 targets organisations that deploy ‘AI systems that were built with, collect, store, use or disclose personal information’ (Deployers). The OAIC adopts a broad definition of ‘deployment’, which extends to customers or individuals who are not the original deployers of a system but use the AI tool for internal purposes or externally in ways that affect others. In simple terms, ‘any individual or organisation that supplies or uses an AI system to provide a product or service’, including internally within the business, is considered a Deployer. Common types of commercially available AI products include chatbots, content-generation tools, and productivity assistants that augment writing, coding, note-taking and transcription.
The Guidance is a timely reminder to businesses that the Privacy Act 1988 (Cth) (Privacy Act) and the APPs apply to all uses of AI involving personal information, including where information is used to train, test or operate an AI system. Significant privacy risks identified by the OAIC include individuals losing control over their personal information, where it may be collected without their knowledge or consent, and the spread of errors or false information via AI outputs that appear credible. Referring to a hypothetical example of an AI chatbot regurgitating personal information without consent, the guidance warns of the privacy compliance and ethical risks that arise when AI systems are not properly trained, or are trained on limited data relevant to a prompt.
What do businesses (Deployers) need to consider when selecting an AI product?
Here are 6 key takeaways from Guidance 1 for businesses that are currently deploying AI products (or are considering doing so in the future):
Conduct due diligence to ensure the product is suitable for its intended uses – consider whether the product has been tested for its intended use, how human oversight can be embedded into processes, and any potential privacy and security risks (are there any limitations of the system? If so, what are they?). Importantly, businesses should consider who will have access to personal information input or generated by the entity when using the product. Regular audits or reviews are recommended (following which, the business should consider whether it intends to fine-tune an existing AI system for specific purposes).
Update their privacy policies and notifications – consider whether these are clear and transparent about the use of AI.
Check compliance with APP 3 – if AI systems are/will be used to generate or infer personal information, is this being done by lawful and fair means? (see below).
Check compliance with APP 6 – if personal information is being/will be input into an AI system, is it used or disclosed for the primary purpose for which it was collected? If not – is it being done with consent, or can a secondary use that would be reasonably expected by the individual be established? (see below).
As general best practice, avoid entering personal information – particularly sensitive information, including but not limited to health, financial or identification information – into publicly available generative AI tools, given the significant and complex privacy risks involved (see the illustrative sketch after this list).
Complete Privacy Impact Assessments (PIAs) – before a new AI system is introduced, to help your business understand the impact that the use of a particular AI product may have on the privacy of individuals and identify ways to manage, minimise or eliminate those impacts.
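The OAIC does not prescribe any particular technical control for the best-practice point above, but one practical way to operationalise it is a simple pre-prompt screen. The following is a minimal, hypothetical Python sketch: the regex patterns, function names and the choice to block rather than redact are our own illustrative assumptions, and a production control would need a vetted PII-detection capability plus human oversight.

```python
import re

# Illustrative only: naive patterns for a few common identifiers.
# These are assumptions for the sketch, not a complete or reliable
# PII detector; real deployments should use a vetted detection tool.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone_au": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
    "nine_digit_id": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def check_prompt_for_pii(prompt: str) -> list[str]:
    """Return the names of the patterns that match the prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

def safe_submit(prompt: str) -> str:
    """Refuse to forward prompts that appear to contain personal information."""
    findings = check_prompt_for_pii(prompt)
    if findings:
        raise ValueError(
            f"Prompt appears to contain personal information: {findings}")
    return prompt  # hand the vetted prompt to the external AI tool here

# Example: this raises before anything leaves the organisation.
# safe_submit("Summarise the file for jane.citizen@example.com, ID 123 456 789")
```

Blocking at the point of submission, rather than relying on staff judgment alone, also creates an audit trail that supports the regular audits and reviews recommended above.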
A comprehensive checklist for businesses is available at the end of the OAIC’s Guidance 1, and a shortened version for selecting and using AI products is available here and here.
Requirements of relevant APPs
Businesses are reminded that privacy obligations will apply to any personal information input into an AI system, as well as output data generated by AI (where it contains personal information). These include:
APP 3, which requires the collection of personal information to be reasonably necessary for your entity’s functions or activities and to be carried out by lawful and fair means:
Inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes), where it is about an identified or reasonably identifiable individual, constitutes personal information and must be handled in accordance with the APPs.
If you are using AI to generate or infer sensitive information about a person, you will also need to obtain that person’s consent to do so (unless an exception applies).
Note that generating personal information through AI attracts other APP obligations, including providing notice to individuals (APP 5) and ensuring the accuracy of personal information (APP 10).
APP 6, which requires entities to only use or disclose the information for the primary purpose for which it was collected:
Alternatively, if it is a secondary use or disclosure, such secondary use may be within an individual’s reasonable expectations if it was expressly outlined in a notice at the time of collection and in your business’s privacy policy.
A full overview of the relevant APPs is available in the OAIC’s guide.
Guidance 2
Guidance 2 is intended for developers of generative AI models or systems, broadly defined as any organisation that designs, builds, trains, adapts or combines AI models and applications (Developers). This guidance also applies to organisations that provide personal information to a Developer in order to develop or fine-tune a generative AI model.
What do businesses (Developers) need to consider?
Here are 5 key takeaways from Guidance 2 for businesses that are developing AI products (or are considering doing so in the future):
Take a privacy-by-design approach early in the planning process (check compliance with APP 1) – consider whether the personal information of individuals will be managed in an open and transparent way. Conduct a PIA to identify the impact that the project may have on the privacy of individuals (and, if required, take steps to manage, minimise or eliminate that impact) (see below).
Ensure reasonable steps are taken to achieve accuracy in generative AI models (check compliance with APP 10) – is the AI product trained on large amounts of data sourced from across the internet? Consider using high-quality datasets, undertaking appropriate testing and using disclaimers to signal where AI models may require additional safeguards against high privacy risk uses (see below). Is there a process to ensure that a generative AI system can be updated if the Developer becomes aware that information used for training, or being output, is incorrect or out of date? Will the resulting content be tagged (watermarked) as AI-generated? (An illustrative sketch of output tagging appears after this list.)
Check before using publicly available data to train or fine-tune generative AI models (check compliance with APP 3) – consider whether the data intended for use contains personal information and whether its use complies with privacy obligations (particularly for datasets collected by third parties, including licensed datasets collected by data brokers, datasets compiled by universities, and freely available datasets compiled from scraped or crawled information).
Take particular care with sensitive information, which generally requires consent to be collected (check compliance with APP 5) – is notice being given to individuals in the context of data scraping (does it clearly indicate and explain the use of data for AI training purposes? Are steps being taken to notify affected individuals that their data has been collected?). Where individual notification is not practicable, Developers should consider what other mechanisms are available to provide transparency to affected individuals, such as making information publicly available in an accessible manner. Further, Developers should consider whether there are photographs or recordings of individuals that contain sensitive information, as these may not be able to be scraped from web data or collected from a third party without consent.
When seeking to use personal information already held for the purposes of training an AI model (where this was not a primary purpose of collection), carefully consider privacy obligations (check compliance with APP 6) – without consent, Developers must be able to establish that the secondary use would be reasonably expected by the individual and that it is related to the primary purpose. Where Developers cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to a primary purpose, they should seek consent for that use and/or offer individuals a meaningful and informed ability to opt out of such a use.
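Guidance 2 does not mandate a specific tagging or watermarking mechanism for AI-generated content. The sketch below shows one minimal way a Developer might attach provenance metadata and a visible disclosure to generated output; the class, field names and disclosure wording are hypothetical assumptions for illustration, not an OAIC-prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TaggedOutput:
    """Generated content bundled with simple provenance metadata."""
    content: str
    model_name: str
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def with_disclosure(self) -> str:
        """Return the content with a visible AI-generated disclosure appended."""
        return (f"{self.content}\n\n"
                f"[This content was generated by {self.model_name} on "
                f"{self.generated_at} and may contain errors.]")

# Hypothetical usage:
# draft = TaggedOutput(content=summary_text, model_name="internal-llm-v1")
# publish(draft.with_disclosure())
```

A visible disclosure of this kind complements, rather than replaces, machine-readable watermarking, which embeds provenance signals in the content itself.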
The OAIC has provided a detailed checklist of privacy issues and considerations for Developers at the end of Guidance 2 (a shortened version is available here).
Requirements of relevant APPs
Guidance 2 proceeds on the basis that Developers of generative AI are APP entities and must comply with privacy obligations when planning and designing generative AI. These obligations include:
APP 1, which requires open and transparent management of personal information:
Mitigate risks by understanding them – complete a PIA to identify and reduce privacy risks. PIAs should be treated as an ongoing process and are particularly important for Developers.
Where Developers build general purpose AI systems, or structure their AI systems in a way that places the obligation on downstream users to consider privacy risks, the OAIC suggests they provide any information or access necessary for the downstream user to assess those risks in a way that enables all entities to comply with their privacy obligations. As a matter of best practice, however, Developers should err on the side of caution and assume the Privacy Act applies, to avoid regulatory risk.
APP 3 (defined above):
Developers should actively consider whether a dataset they intend to use for training a generative AI model is likely to contain personal information. Consider the data in totality (including the data itself, the associated metadata, and any annotations, labels or other descriptions attributed to the data as part of its processing) against the collection obligations of APP 3 (see the illustrative sketch after these APP summaries).
APP 6 (defined above):
Developers may wish to use personal information they already hold to train a generative AI model, such as information they collected through operating a service, but should consider this against the obligations of APP 6.
APP 5, which requires APP entities that collect personal information about an individual to take reasonable steps either to notify the individual of certain matters or to ensure the individual is aware of those matters:
Developers should consider whether they have provided information about use for training generative AI models through an APP 5 notice or in their APP privacy policy, or whether such a secondary purpose is a well understood internal business practice.
As such, an APP entity must have a clear and up-to-date privacy policy about its management of personal information and take reasonable steps to notify the individual, or otherwise ensure they are aware, before or as soon as practicable after it collects the personal information.
APP 10, which requires Developers to take reasonable steps to ensure that the personal information it collects is accurate, up-to-date and complete, and the personal information it uses and discloses is accurate, up-to-date, complete and relevant, having regard to the purpose of the use or disclosure:
Reasonable steps depend on the circumstances, including the sensitivity of the personal information, the nature of the APP entity, the possible adverse consequences for an individual if the quality of personal information is not ensured, and practicality, including the time and cost involved.
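To make the OAIC’s “data in totality” point concrete, the sketch below illustrates one way a Developer might screen candidate training records, including their metadata and annotations, for obvious personal information before relying on them. The record schema and the email-only detection are hypothetical simplifications for illustration; a real screen would need far broader detection and human review.

```python
import re

# Hypothetical record shape: the "text", "metadata" and "annotations"
# field names are assumptions for this sketch, not a standard schema.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def fields_to_screen(record: dict) -> list[str]:
    """Flatten a record's text, metadata values and annotations into one
    list, so the data is considered in totality as the guidance suggests."""
    values = [str(record.get("text", ""))]
    values += [str(v) for v in record.get("metadata", {}).values()]
    values += [str(a) for a in record.get("annotations", [])]
    return values

def flag_records(dataset: list[dict]) -> list[int]:
    """Return the indices of records that appear to contain personal
    information (here only email addresses, for brevity)."""
    return [i for i, record in enumerate(dataset)
            if any(EMAIL.search(v) for v in fields_to_screen(record))]

# Records flagged here warrant review against APP 3 (and, for sensitive
# information, the consent requirement) before any training use.
```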
Conclusion
The new AI guidelines clarify the application of Australia’s privacy laws to AI products and the positive obligations on businesses to ensure personal information is handled in line with best practice. The guides are an important reminder for businesses to be mindful of the potential privacy risks to individuals when selecting a particular AI product, and to identify ways to manage, minimise or eliminate those impacts early, particularly where vulnerable individuals, such as children and persons experiencing vulnerability, are concerned.
We highly encourage businesses to adopt the comprehensive checklists provided by the OAIC (in each guide).
If you have any questions relating to the practical application of these guidelines, or any of the action items suggested by the OAIC (including Privacy Impact Assessments and compliance with the APPs), please do not hesitate to reach out to our Bird & Bird AI Working Group experts and the Australian Data Protection team.
[1] N.B. The guidelines do not address considerations from other regulatory regimes that may apply to the use of AI systems.