Generative AI in Retail: Opportunities, Risks and Regulation in Australia

Written By

Shariqa Mestroni

Special Counsel
Australia

As an intellectual property lawyer, I enjoy working with clients to build and protect their brands, manage their IP portfolios and enforce their copyright, trade marks and patents.

James Hoy

Special Counsel
Australia

I am a Special Counsel in our Sydney office and I specialise in media and technology disputes and advice with a particular focus on privacy and data protection matters.

Jessica Laverty

Senior Associate
Australia

I am a senior associate in our Dispute Resolution Group in Sydney. My practice focuses on assisting clients to resolve commercial disputes, particularly for those clients in the technology sector.

The rapid advancement of generative AI means that it is high on the agenda in Australia and globally. Like any disruptive technology, generative AI has unleashed a wave of opportunities and challenges for individuals and businesses in the retail industry.

As we continue to witness the integration of AI in the retail sector, we see businesses capitalising on the use of AI including in:

  • product development and innovation (writing briefs for a product collection, or generating mood boards and visual imagery);
  • supply chain and logistics (augmenting real-time demand forecasting);
  • marketing (hyper-personalisation for loyalty programmes);
  • customising user experiences (including online shopping assistance);
  • writing descriptions for customer sites;
  • forecasting trends;
  • creating marketing projects;
  • designing products and service offerings;
  • overhauling stock management and supply chains;
  • assisting with store operations; and
  • improving customer support.

But the opportunities need to be balanced against the risks, including copyright infringement issues and data protection concerns (as addressed further below).

Taking one subsector of retail as an example, the State of Fashion 2024 survey of global fashion executives published by McKinsey found that 73% of respondents said generative AI will be an important priority for their businesses in 2024. However, only 5% of those surveyed believed their businesses had the capability to fully leverage generative AI, which suggests fashion companies are not yet capturing its value in the creative process.[i]

In this article, we explore the opportunities and risks posed by AI in the Australian retail sector. We highlight some of the key legal and regulatory issues applicable to AI (including generative AI), namely:

  • Regulation: the regulatory environment for AI is developing at different rates globally. What is the status of regulation in Australia?
  • Intellectual Property: How do you protect generative AI systems and their outputs? What is the position in Australia?
  • Data Protection: How do you ensure compliance with the data protection laws and regulations that apply to your use and development of generative AI tools? What regulation should you be aware of?

While this article focuses on the use of generative AI in Australia, for a global perspective, see our firm's extensive coverage on the issue here.

Regulation of AI in Australia

AI (including generative AI) is regulated in Australia, but not with AI-specific legislation. Instead, it is governed by existing legislation (including consumer, data protection, competition and copyright law).

The Australian Government’s position is generally that it supports the safe and responsible deployment and adoption of AI across the government and private sectors. However, it does not yet have an overall national AI strategy in place.

That said, the Australian Government is taking steps which indicate that it is seriously considering how best to regulate AI in Australia, including:

  1. The government is encouraging organisations to follow Australia’s voluntary AI Ethics Framework, which is intended to complement existing AI regulations and practices.
  2. In its recent budget, the Federal Government announced that it will provide $39.9 million over five years from 2023–24 for the development of policies and capability to support the adoption and use of AI technology in a safe and responsible manner.
  3. On 26 March 2024, the Senate resolved to establish the Select Committee on Adopting Artificial Intelligence (AI). The Committee’s purpose is to inquire into and report on the opportunities and impacts for Australia arising from the uptake of AI technologies, and to present a final report to Parliament by 19 September 2024.
  4. In January 2024, the Federal Government published its interim response to its consultation “Supporting responsible AI: discussion paper”.

There is also, of course, keen interest in Australia in tracking international developments on the regulation of AI, including the EU’s AI Act and Digital Services Act.

IP implications of AI

There are myriad issues involving AI and intellectual property law. Our discussion below focuses on generative AI and the IP implications of using large data sets for training, as well as the ownership of AI-generated works.

Use of training data for AI models

Content creators are becoming increasingly concerned about potential misuse of their content by generative AI tools, particularly when those tools access, copy or “scrape” online content for training purposes.  Where retail businesses are deploying AI tools, there are risks around what training data has been used to train the AI model.  For example, if it is data about a competitor’s product or service offering, has this data been obtained by the AI tool through legitimate means?  How can a business protect itself from being on the receiving end of an infringement claim?

While there are currently no cases before Australian courts where third parties have alleged IP infringement by AI systems, the influx of court proceedings overseas, namely in the United States, United Kingdom and China (which we reported on here), provides some insight into how Australian courts may be expected to grapple with these issues in the near future.

Broadly, the cases have involved artists, companies or software developers suing AI providers for copyright infringement on the basis of:

  • unauthorised reproduction of images, text and metadata in the datasets used to train AI models;
  • the outputs generated by AI being unauthorised reproductions or derivative works and the unauthorised publication of such works; and
  • unauthorised copying of software code to create AI models.

In most of these cases, AI companies have attempted to argue that the use of copyright works to train AI falls under a “fair use” exception under copyright law. It may be some time before we see jurisprudence emerging from courts on whether this defence is tenable. However, such jurisprudence will have limited application in Australia, where there is no “fair use” defence to copyright infringement. In its place is the much narrower concept of ‘fair dealing’, under which a use is exempt from infringement if certain factors are satisfied and the use is for the primary purpose of research or study, criticism or review, parody or satire, reporting the news, professional advice or judicial proceedings, or enabling a person with a disability to access material. It is difficult to envisage a situation where training AI tools with unauthorised third-party IP falls into one of these exceptions.

In addition to copyright infringement, there is also the issue of potentially infringing an author’s moral rights when copyright works are reproduced without authorisation for AI-training purposes: does the AI training data correctly attribute the author of the work, avoid false attribution and, if taking parts of a work, respect the integrity of the author’s work?

As employees increasingly have access to, and the propensity to use, AI tools in the course of their work, employers and businesses are forced to confront the issues this creates. By way of example, a marketing team may deploy AI tools to create campaigns for a new product launch, or designers may use AI images to create a base product from which they develop their designs. As a general rule in Australia, copyright works created in the course of employment are owned by the employer. The situation becomes muddy around an employer’s potential exposure to an infringement claim by a third party if the AI tool used by the employee has used that third party’s work in its training data. Employers should ensure they have sufficient controls around their employees’ use of AI tools and pay close attention to the indemnity clauses of the AI tools they have permitted employees to use, as many of these tools (particularly the free ones) disclaim responsibility for IP infringement.

Ownership of AI-generated IP

There is a spectrum of AI involvement in the creation of IP: at one end, AI involvement is minimal and humans are using AI as a tool to assist in the development of an invention or the creation of a work. At the other end of the spectrum, there is minimal human involvement. Somewhere in between is AI responding to prompts generated by humans. Where the lines between AI and human creation are blurred, there is a question about whether humans have made sufficient contributions to be considered an ‘inventor’ in the case of a patent, or an author in the case of a copyright work.

Currently in Australia, it is not possible for AI to be considered the author or owner of a copyright work. Under the Copyright Act 1968 (Cth), an “author” must be a qualified person at the time the work was made, namely an Australian citizen or resident or a body corporate incorporated under Commonwealth or state laws, and they must have exerted “independent intellectual effort” in creating the work. Copyright ownership is therefore connected with the concept of authorship, into which AI does not neatly fit.

The question of the named inventor of a patent application involving AI has been conclusively decided (at least for now). In 2021, a single judge of the Federal Court of Australia allowed an AI system, DABUS, to be named as the sole inventor on a patent application. However, on appeal to a five-judge bench of the Full Federal Court, the decision was overturned (reported here) and special leave to appeal to the High Court was refused (reported here). The position in Australia is now in line with other jurisdictions (save for South Africa, where patents are not examined before grant) where patent applications have been rejected because DABUS was named as the sole inventor: the European Patent Office, New Zealand, the United States and the United Kingdom (reported here). This is good news for businesses where AI has been used as a tool in the inventive process: provided that the invention meets the other patentability requirements, patents for products or methods involving AI can be obtained if a human inventor is listed. However, this may not be as straightforward in practice, as there are usually multiple actors where AI is involved: the person who developed the training algorithm, the person who presented the prompt, the AI model developer and the owner of the data could all potentially have a claim to inventorship.

On 13 February 2024, the USPTO released guidance for AI-assisted inventions on how it proposes to analyse inventorship issues. The guidance explains that while AI-assisted inventions are not categorically unpatentable, the inventorship analysis should focus on the natural persons who provided a “significant contribution” to the invention. IP Australia is yet to follow suit in terms of releasing any guidance material. However, it did publish an exploratory paper in July 2023 that set out a series of “provocations”, highlighting the complexity of developing an AI framework for IP rights. What is clear for businesses deploying or developing AI tools, for now, is the crucial need to keep a paper trail of all human involvement and processes, in the event of an ownership or inventorship challenge.

Data protection

Global regulatory environment

The swift rise of generative AI has also created new challenges from a privacy perspective, with privacy regulators around the world already showing a high degree of interest in ChatGPT and its privacy implications.

In March 2023, the Italian data protection authority temporarily banned ChatGPT in Italy and opened an investigation into the privacy practices of OpenAI, citing the following reasons in a public statement:

  • a data breach affecting ChatGPT’s user conversations and payment information;
  • lack of information provided to users and data subjects whose personal data was collected;
  • lack of legal basis for the mass collection and processing of personal data in order to “train” the algorithms;
  • inaccuracy of personal data processed by the platform; and
  • lack of an age verification mechanism.

The temporary ban was lifted in April 2023, subject to a number of conditions imposed by the Italian data protection authority, after OpenAI expressed a willingness to put in place concrete measures to protect individual privacy.

ChatGPT has since been the subject of various other investigations by privacy regulators across Europe and elsewhere, and the European Data Protection Board has also established a dedicated task force on ChatGPT.

Australian compliance requirements

So what are some of the key things that you should be thinking about when assessing the use of a generative AI system through the lens of the Australian Privacy Principles (APPs)? 

We would suggest turning your mind to the following five areas: transparency, collection, use and disclosure, integrity and individual rights. 

From a transparency perspective, ask whether any collection, use and disclosure of personal information by the generative AI system is disclosed in your privacy policy.

From a collection perspective, ask:

  • Is the collection of personal information by the generative AI system reasonably necessary for the functions or activities of your organisation?
  • Is any sensitive information collected by the generative AI system and, if so, have individuals consented to this? 
  • Is the collection of personal information by the generative AI system lawful and fair and only from the individuals themselves unless unreasonable or impracticable to do so?
  • Is your organisation taking reasonable steps to notify individuals whose personal information is collected by the generative AI system of the matters specified in APP 5?

In terms of use and disclosure, ask:

  • Is personal information used or disclosed by the generative AI system for a secondary purpose and, if so, have individuals consented to that or would they reasonably expect it?
  • Does the generative AI system involve any use or disclosure of personal information for the purpose of direct marketing and, if so, does this comply with APP 7?
  • Does the generative AI system involve any cross-border disclosure of personal information and, if so, has your organisation taken reasonable steps to ensure compliance with the APPs?

In terms of integrity, ask:

  • Is your organisation taking reasonable steps to ensure that any personal information handled by the generative AI system is accurate, up-to-date, complete and relevant?
  • Is your organisation taking reasonable steps to protect the security of personal information held in the generative AI system and to destroy or de-identify the information when required?

And, finally, from an individual rights perspective, ask whether there is a mechanism for dealing with requests by individuals for access to, or correction of, personal information held in the AI system.
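
To make these questions easier to operationalise, the sketch below encodes the five areas as a simple internal review template in Python. This is purely illustrative: the structure, question wording and "outstanding" helper are our own assumptions drawn from the questions above, not an OAIC tool, and should be adapted to your organisation’s own compliance processes.

    # Purely illustrative sketch: an internal APP review template for a
    # generative AI system. Question wording follows this article; the
    # structure and helper are hypothetical, not an OAIC instrument.
    APP_REVIEW_TEMPLATE = {
        "transparency": [
            "Collection, use and disclosure by the AI system is disclosed in our privacy policy",
        ],
        "collection": [
            "Collection is reasonably necessary for our functions or activities",
            "Any sensitive information is collected with the individual's consent",
            "Collection is lawful, fair and from the individual unless impracticable",
            "Individuals are notified of the matters specified in APP 5",
        ],
        "use_and_disclosure": [
            "Secondary uses are consented to or reasonably expected",
            "Any direct marketing complies with APP 7",
            "Reasonable steps are taken for any cross-border disclosure",
        ],
        "integrity": [
            "Personal information is accurate, up-to-date, complete and relevant",
            "Information is secured, and destroyed or de-identified when required",
        ],
        "individual_rights": [
            "A mechanism exists for access and correction requests",
        ],
    }

    def outstanding(answers: dict[str, set[str]]) -> list[tuple[str, str]]:
        """List the (area, check) pairs not yet confirmed in 'answers'."""
        return [
            (area, check)
            for area, checks in APP_REVIEW_TEMPLATE.items()
            for check in checks
            if check not in answers.get(area, set())
        ]

    # Example: a review where only the transparency check has been confirmed.
    done = {"transparency": {APP_REVIEW_TEMPLATE["transparency"][0]}}
    for area, check in outstanding(done):
        print(f"[{area}] {check}")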

Regulator activity in Australia

We are yet to see any case law in Australia, or Privacy Commissioner investigations, concerning the application of the APPs to ChatGPT or other generative AI systems.

However, in July 2023, the Digital Platform Regulators Forum, of which the OAIC is a member, announced that its strategic priorities for the 2023-2024 financial year would include a new focus on understanding and assessing the benefits, risks and harms of generative AI.

While not specific to generative AI, recent decisions by the Privacy Commissioner and the Administrative Appeals Tribunal have considered privacy law issues arising from AI systems more generally, in the context of facial recognition tools that rely on machine learning algorithms.

In particular, in late 2021, the Privacy Commissioner found that 7-Eleven had breached the privacy of its customers by collecting biometric information through a facial recognition tool, and that Clearview AI had breached the privacy of Australians by scraping their biometric information from the web and disclosing it through a similar tool.

Clearview AI sought a review of this decision in the Administrative Appeals Tribunal and the Tribunal found, in May 2023, that Clearview AI had collected sensitive information about individuals without consent and, consequently, had not taken reasonable steps to implement practices, procedures and systems to ensure compliance with the APPs.

Proposed reforms to the Privacy Act

Automated decision making is a topic that is being addressed as part of the Privacy Act Review.

In particular, the Australian Government has agreed to proposals that:

  • privacy policies should set out the types of personal information that will be used in substantially automated decisions which have a legal or similarly significant effect on an individual’s rights (Proposal 19.1);
  • high-level indicators of the types of decisions with a legal or similarly significant effect on an individual’s rights should be included in the Privacy Act and this should be supplemented by OAIC guidance (Proposal 19.2); and
  • a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made should be introduced and entities should be required to include information in privacy policies about the use of personal information to make substantially automated decisions with legal or similarly significant effect (Proposal 19.3).

The Australian Government has also relevantly agreed ‘in-principle’ to a proposal that there be a requirement to provide individuals with information about targeting, including clear information about the use of algorithms and profiling to recommend content (Proposal 20.9).

Key takeaways

  1. Businesses and employers should protect themselves by establishing robust AI policies for their employees with clear prohibitions and guardrails. Employees should receive regular training on the use of AI tools in line with that policy.
  2. When adopting AI tools for use in business, carefully review the terms and conditions of the AI tool to be aware of, and to mitigate, any legal risks. You may be interested in reading our article about Generative AI and Machine Learning Contracts.
  3. When dealing with suppliers, ensure that there are sufficient AI warranties and indemnity clauses in contracts. Before engaging new suppliers, conduct sufficient due diligence around their policies.
  4. Ensure good record-keeping: the use of AI should be documented, including in relation to the natural persons involved and the AI prompts (see the illustrative sketch after this list).
  5. Businesses should ensure that any generative AI tools adopted comply with the applicable data protection laws in Australia including in relation to transparency, collection, use and disclosure, integrity and individual rights.
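
As a purely illustrative sketch of takeaway 4, the snippet below records each AI-assisted task in a simple audit log. The log_ai_use helper, the field names and the CSV format are hypothetical choices of ours, not a prescribed standard; the point is simply that the who, what and when of human involvement is captured contemporaneously.

    # Hypothetical sketch of an AI-use audit log supporting takeaway 4.
    # The helper, field names and CSV format are illustrative assumptions.
    import csv
    import hashlib
    from datetime import datetime, timezone
    from pathlib import Path

    LOG_FILE = Path("ai_use_log.csv")
    FIELDS = ["timestamp_utc", "employee", "ai_tool", "purpose", "prompt", "output_sha256"]

    def log_ai_use(employee: str, ai_tool: str, purpose: str, prompt: str, output: str) -> None:
        """Append one record of AI use, hashing the output so it can be matched later."""
        is_new = not LOG_FILE.exists()
        with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if is_new:
                writer.writeheader()
            writer.writerow({
                "timestamp_utc": datetime.now(timezone.utc).isoformat(),
                "employee": employee,
                "ai_tool": ai_tool,
                "purpose": purpose,
                "prompt": prompt,
                "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
            })

    # Example: documenting human involvement in an AI-assisted design task.
    log_ai_use(
        employee="j.citizen",
        ai_tool="example-image-generator",
        purpose="Mood board for spring collection brief",
        prompt="Generate a mood board in muted earth tones for a resort line",
        output="reference-to-generated-file-or-bytes",
    )

In practice, recording the model version used and where the output ultimately appeared would also be valuable, as that is the evidence most likely to matter in an authorship or inventorship dispute.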

[i] Business of Fashion - McKinsey State of Fashion 2024 Survey, reported in The State of Fashion 2024 report | McKinsey, accessed 30.05.24.
