Artificial intelligence (AI) and its partner in crime, automated decision-making (ADM), are creeping into everyday life in ways we do not necessarily expect or understand. For example, the Australian Border Force uses ADM systems to verify travellers' identities through facial recognition at airport passport controls, and Transport for NSW uses them to detect illegal mobile phone use by drivers.
As AI and ADM become more advanced and ubiquitous, regulators and lawmakers around the world are trying to strike the right balance between regulation and innovation for these cutting-edge technologies.
In Australia, a myriad of laws, law reform proposals and guidelines are already in place or under development. Some of these shine a light on the direction Australia is likely to take; others could lead to significant changes in how we develop and interact with these technologies in our daily lives.
To make sense of AI and ADM and their future in Australia, this article looks at some of the existing and proposed changes.
AI is a system that generates predictive outputs for a given set of human-defined objectives or parameters, and can be designed to operate with varying levels of automation. Among the various applications of AI, an area of recent development and rising concern is the use of automated AI systems in the decision-making process, also known as ADM.

It is now not uncommon for various sectors of the economy to use AI and ADM in their daily operations. As noted above, the Australian Border Force and Transport for NSW already use ADM systems to perform their public duties. AI tools have also been used in hospitals to consolidate large amounts of patient data and to analyse medical images, and by engineers to evaluate and optimise designs to improve building safety. However, as with many advances, the benefits come with risks.
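To illustrate what "varying levels of automation" can mean in practice, the minimal Python sketch below contrasts a fully automated decision with a human-in-the-loop design. Everything here (the scoring function, the thresholds, the field names) is hypothetical and purely illustrative; it is not drawn from any system mentioned in this article.

```python
# A minimal, purely illustrative sketch of "varying levels of automation"
# in an ADM system. The scoring function, thresholds and field names are
# all hypothetical.

def model_score(application: dict) -> float:
    """Stand-in for a predictive model's output, scaled to 0.0-1.0."""
    return min(1.0, application.get("income", 0) / 100_000)

def decide(application: dict, fully_automated: bool = False) -> str:
    score = model_score(application)
    if fully_automated:
        # Fully automated: the model's output *is* the decision.
        return "approve" if score >= 0.5 else "refuse"
    # Human-in-the-loop: clear cases are decided automatically,
    # while borderline scores are escalated to a human reviewer.
    if score >= 0.7:
        return "approve"
    if score <= 0.3:
        return "refuse"
    return "refer to human reviewer"

print(decide({"income": 55_000}))                        # refer to human reviewer
print(decide({"income": 55_000}, fully_automated=True))  # approve
```

The same model can therefore sit inside very different decision-making designs, which is why regulators tend to focus on the degree of automation rather than on the underlying technology alone.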
One serious risk is algorithmic bias: systematic or repeated decisions that privilege one group over another, often because the ADM system was trained on small, incomplete or unrepresentative datasets. Another risk is AI hallucination, that is, misleading or erroneous outputs arising from faulty algorithms or outdated datasets.
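The mechanics of algorithmic bias can be shown in a few lines of code. The sketch below, using entirely fabricated data, "learns" per-group approval rates from historical decisions; because group B is barely represented in the training records, every future applicant from that group is automatically refused.

```python
# Illustrative only: how an unrepresentative training set can produce
# systematic bias in an automated decision. All data is fabricated.

from collections import defaultdict

# Historical decisions: (group, approved). Group "B" is barely represented,
# and the few records that exist happen to be refusals.
history = [("A", True)] * 80 + [("A", False)] * 20 + [("B", False)] * 3

def train_approval_rates(records):
    """Learn a per-group approval rate from past decisions."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def automated_decision(group, rates, threshold=0.5):
    """Approve if the group's historical approval rate clears the threshold."""
    return rates.get(group, 0.0) >= threshold

rates = train_approval_rates(history)
print(rates)                           # {'A': 0.8, 'B': 0.0}
print(automated_decision("A", rates))  # True
print(automated_decision("B", rates))  # False: 3 records decide for everyone in B
```

Nothing in this toy system is malicious; the skewed outcome is simply baked into the data it learned from, which is what makes this category of risk difficult to detect and regulate.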
As these risks become more prevalent, regulators and individuals have taken legal action against companies that use AI and ADM in their businesses, with some successful actions under existing consumer protection laws. In April 2022, Trivago was ordered to pay a penalty of AUD 44.7 million for misleading consumers into believing that its website identified the cheapest rate for a given hotel room when, in reality, its algorithm ranked hotel rooms based on which hotel booking site paid Trivago the highest fee. More recently, in February 2024, Air Canada was held liable for information provided by its chatbot, which negligently misrepresented to a consumer that he could obtain a discount on his flight ticket.
While the above cases suggest that existing laws may address some of the risks arising from AI and ADM, the lack of AI-specific legislation has left many other serious risks unaddressed. For instance, existing laws arguably do not cover AI-generated deepfakes used to spread false information online. One prominent example is a deepfaked image of Taylor Swift appearing to endorse Donald Trump at the Grammy Awards. The lack of AI-specific legislation has also caused considerable public concern over AI-generated election misinformation, as the world is set to witness a record number of national elections in 2024.
As with privacy and data protection, the EU has once again taken the lead by introducing one of the world's first pieces of comprehensive AI-specific legislation, the EU AI Act, which is expected to enter into force in the second or third quarter of 2024.
Significantly, the EU AI Act seeks to govern AI via a risk-based approach, that is, by imposing different responsibilities on AI developers and users based on the level of risk associated with particular types of AI technology. The EU AI Act also establishes a European AI Office to oversee the enforcement and implementation of the Act.
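By way of illustration, a risk-based scheme can be thought of as a mapping from risk tier to obligations. The sketch below loosely mirrors the EU AI Act's published tiers, but the obligations shown are abbreviated paraphrases for illustration only, not legal text.

```python
# A highly simplified sketch of a risk-based approach to AI regulation,
# loosely modelled on the EU AI Act's published tiers. The obligations are
# abbreviated paraphrases, not legal text.

OBLIGATIONS = {
    "unacceptable": "prohibited (e.g. social scoring by public authorities)",
    "high": "conformity assessment, risk management, human oversight, logging",
    "limited": "transparency duties (e.g. disclose that users face a chatbot)",
    "minimal": "no mandatory obligations; voluntary codes encouraged",
}

def obligations_for(risk_tier: str) -> str:
    """Look up the obligations attached to a given risk tier."""
    try:
        return OBLIGATIONS[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")

print(obligations_for("high"))
```

The practical consequence of this structure is that the same compliance question ("what must we do?") has very different answers depending on how a system is classified, which is why classification itself is expected to be a key battleground under the Act.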
Australia currently has neither a single AI regulator nor any AI-specific legislation. Instead, AI is governed by a diverse group of regulators and a wide range of legislation, including consumer, privacy, competition and copyright laws. While Australia is unlikely to move away from this fragmented approach to AI regulation any time soon, recently introduced law reforms could address some of the risks arising from AI and ADM.
In September 2023, the Australian Government confirmed in its response to the Privacy Act Review Report that it “agrees” to implement the following proposals:
These proposals are similar to the existing ADM provisions under the EU's General Data Protection Regulation and California's 2023 draft automated decision-making technology regulations, particularly with regard to the right to access information about ADM.
In January 2023, the Department of Infrastructure, Transport, Regional Development, Communications and the Arts released the draft Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill 2023 for public consultation (Draft Bill).
The Draft Bill provides the Australian Communications and Media Authority (ACMA) with new powers to combat online misinformation and disinformation, including misinformation and disinformation on digital platforms generated by AI technology. Among other provisions, a registered industry code or ACMA standard may be established to require digital platforms to self-regulate bots that disseminate such information.
Although public consultation closed in August 2023, the Draft Bill has not yet been tabled in Parliament.
In January 2024, the Government published its interim response to the Safe and Responsible AI in Australia consultation and articulated the following next steps, although it is currently unclear how the Government intends to implement these commitments. The Government will:
The above developments build upon Australia’s AI Ethics Principles, the eSafety Commissioner’s Safety by Design initiative and the Digital Platform Regulators Forum’s recently released working papers on algorithms and large language models.
To stay ahead of the curve, businesses are encouraged to:
The Bird & Bird Privacy & Data Protection team are supporting clients in navigating key changes in privacy reform around AI and automated decision-making. Please do not hesitate to contact the contributors if you would like to discuss AI and ADM regulation in Australia and its likely impact on your business.