AI: key issues affecting Australian businesses in 2025

Written By

Rebecca Currey

Partner
Australia

I am a partner in our Intellectual Property Group, based in Sydney. My experience spans the breadth and depth of IP issues, but my specialty is complex IP litigation and disputes including contentious patent, trade mark, copyright, and confidential information and consumer protection/passing-off matters.

Nick Boyle

Partner
Australia

I have deep experience acting for and advising clients on digital transformation projects and complex commercial transactions, including those involving procurement, the design and implementation of complex IT systems, business process outsourcing arrangements and the commercialisation of technology services and systems. I also advise clients on data protection and cybersecurity-related matters, including advice on regulatory compliance with privacy and cyber laws, and data incident responses.

1. Copyright ownership of AI-generated content

In Australia, copyright protection is automatic (provided certain criteria are met) upon reduction to a material form, without the need for registration. However, for copyright to subsist in a literary, dramatic, musical or artistic work, one of the requirements is that there is a human author. 

For businesses that generate content using AI, there is a risk that copyright does not subsist in the output, and that such works can be freely used by third parties, without capacity to stop that use.

Businesses should therefore weigh the speed and ease of producing material using AI against the value that copyright protection provides, noting that copyright may subsist in AI-generated works where the output is original and there has been sufficient human intellectual effort.

Similar considerations arise for businesses in a patent context: AI may also be used to invent new products or processes. In Australia, AI cannot be listed as an inventor because it is not a natural person.

If a business uses AI in the generation of works, it is good practice for the business to acknowledge that use, in line with the non-binding AI Ethics Framework.

2. Infringement by AI-generated works

The Copyright Act 1968 (Cth) gives the copyright owner a bundle of exclusive rights in respect of their works. For literary and artistic works, for example, these include the right to:

  • reproduce the work;
  • publish the work;
  • communicate the work to the public.

A person infringes the copyright in a work by doing any act comprised in the copyright, such as those set out above, without the licence or authority of the copyright owner.

Businesses need to bear in mind that, when using AI, copyright infringement could occur both at the point of training the AI and at the point of AI-generated output.

For example, a business may use third-party data to train AI, including data that has been scraped from the internet. If copyright subsists in the material used to train the AI, the business may find itself on the receiving end of a lawsuit for copyright infringement, on the basis that the works have been reproduced for the purpose of training the AI.

In terms of AI output, AI may also generate works that reproduce a substantial part of material in which copyright subsists.

There are a number of cases in which allegations of infringement in an AI context have been made in the UK and the USA, and it is only a matter of time before such a case arises in an Australian context.

3. Governance and oversight of the use of AI

While Australia has not yet implemented AI-specific laws and regulations (in contrast to Europe’s AI Act), there are existing laws and requirements that can affect and apply to the use of AI, including IP-related laws, privacy laws and industry-specific requirements (such as ASIC’s regulatory guides for regulated financial services licensees and practice rules for legal professionals). As such, it is important that businesses develop and implement governance and accountability processes to consider and review proposed and actual uses of AI technologies, so that such use is safe and responsible and the risks and impacts of that use are identified, monitored and responded to.

Similarly, given the potential risks associated with the use of AI, particularly where an organisation is implementing it in a particular process for the first time, it is important to ensure that appropriate human oversight and control or intervention mechanisms are implemented across the AI system. Human oversight enables the organisation to validate the outputs and actions of AI and to intervene where necessary, reducing the potential for unintended consequences and harms. This is important for ensuring AI is used safely and responsibly, while also developing and preserving trust in businesses’ use of AI.

4. Confidentiality, privacy and cybersecurity risks and issues

AI tools, like any other technology platforms or systems, are potentially susceptible to cyber security incidents, so it is essential that businesses apply appropriate cyber security assessments and controls to their assessment, implementation and ongoing use of AI. Depending on the nature of particular AI systems and how they are used, there may be particular risks around the potential misuse of those systems, or of the data they process, that should be addressed.

Businesses should also consider whether their contractual arrangements with customers, suppliers and other stakeholders allow them to process information and data relating to those third parties through any AI systems. As AI capabilities are increasingly built into productivity tools and common IT systems, like email platforms, CRM systems and the like, there are risks that businesses may even be inadvertently breaching their contractual obligations.

From a privacy perspective, the use of AI also carries potential risks around the use and processing of personal information in a way that is not contemplated by a business’s privacy policy and that may not be permitted under the Australian Privacy Principles. For example, a business may want to use customers’ personal information to train an AI model where that use was not disclosed at the time the information was collected – it is not a clear-cut proposition that personal information can be used for any and all new use cases.

With all these issues in mind, businesses should develop and implement processes and systems around data governance, privacy and cybersecurity, including to:

  • assess the risks arising from AI systems’ use of and interaction with data;
  • manage data usage rights for AI systems; and
  • consider and ensure that the Australian Privacy Principles are applied and complied with for each AI system in use, including those developed or provided by third parties.

Contact Us:

Our expert team at Bird & Bird is equipped to assist with any questions. For queries, please contact Rebecca Currey, Partner, at [email protected] or Nick Boyle, Partner, at [email protected]
