By: Stephen Morrissey, Christy Mellifont and Kayleigh Maxwell


At a glance

  • AI technology faces growing scrutiny over data privacy and cybersecurity concerns, with the Australian Government introducing voluntary safety standards and exploring mandatory guardrails for high-risk AI applications.
  • Generative AI is raising legal questions about copyright infringement, ownership of AI-generated works, and liability under Australia’s Copyright Act, with global lawsuits setting important precedents.
  • Determining accountability for AI-related errors is complicated by the involvement of multiple stakeholders, “black box” transparency issues, and emerging regulatory discussions on consumer protections under Australian Consumer Law.


The rapid development of Artificial Intelligence (AI) is revolutionising industries and reshaping the way we interact with technology. However, alongside opportunities for transformation, AI poses new challenges and risks for technology providers and users.

In this article, we look at recent examples of how the widespread use of AI technology is reshaping the liability landscape in Australia, and what we can learn from developments overseas.

This article has been released alongside Issue 10 of our Cyber, Privacy & Technology Report.

Privacy & cybersecurity

  • OpenAI, the company responsible for ChatGPT, has reported that it has disrupted twenty cyberattacks on its platform since the start of the year, with threat actors attempting to harvest sensitive data or use the AI model for malicious ends.
  • In Australia, the privacy regulator recently found that Bunnings’ use of facial recognition technology interfered with the privacy of hundreds of thousands of customers.

One of the core issues for providers of AI technology is keeping customer data safe and ensuring compliance with privacy laws.

The Australian Government has released a proposal to introduce mandatory guardrails for AI in high-risk settings. In the meantime, a Voluntary AI Safety Standard has been published to help organisations develop and deploy AI systems safely and reliably in Australia. The standard consists of ten voluntary guardrails that apply to all organisations across the AI supply chain.

For organisations and individuals using AI technology, the Office of the Australian Information Commissioner (OAIC) has published guidance to help reduce the security risks for users of AI systems.

Intellectual property

  • The New York Times sued OpenAI and Microsoft for copyright infringement last year, after its content was used to ‘train’ generative AI and large language model systems.
  • A group of artists in the US has sued multiple generative AI platforms for using protected works of art to train AI to produce allegedly derivative works.

In Australia, data mining for images may constitute a violation of the Copyright Act 1968 (Cth) if it involves the copying, digitisation or reformatting of copyright material without permission. It has been speculated that either the owner of the AI platform or its end users could be at risk of copyright infringement.

Who owns the copyright in images created using generative AI technology? The question has not been tested in Australia, but users of AI technology may struggle to show that such works attract copyright at all, as protection requires a work to originate with a human author exercising independent intellectual effort.1

What happens when AI gets it ‘wrong’?

  • Air Canada was successfully sued by a passenger after its website chatbot gave incorrect information about its fare procedures.
  • Zillow faced a shareholder class action after its AI model overestimated the market value of homes purchased by the company by more than US$500 million.
  • Wells Fargo & Co, one of the largest banks in the US, faced a class action lawsuit after its credit scoring algorithm allegedly discriminated against loan applicants based on ethnicity.
  • In New York, a lawyer was fined for misleading the court by relying on non-existent judicial opinions and citations, after he used ChatGPT to conduct legal research.

It is conceivable that the developers of offending AI technology could be joined to such proceedings or be the subject of later proceedings seeking indemnity (or similar) for liabilities incurred by those using their technology.

The decision-making rationale of AI systems, especially those powered by machine learning, often remains unknown even to the designers of those systems, which can make it difficult to detect errors as they occur. This lack of transparency has been dubbed the “black box problem”, and it complicates the task of determining who is responsible for errors caused by AI.

Determining liability for AI errors is further complicated by the involvement of multiple stakeholders, such as developers, manufacturers, data providers and end users. Design and manufacturing defects, including errors in the design or implementation of AI algorithms, may also expose the developers of AI technologies to liability in Australian courts.

In Australia, standard vendor terms often attempt to allocate risk to the end user of the AI product. Arguably, this approach may encourage AI customers to take an active interest in the responsible use of AI technology. However, the enforceability of these clauses is yet to be tested and may turn on the factual context, including the potential operation of the Unfair Contract Terms regime.

The Australian Government is also looking at whether more protections are needed for consumers of AI. It has released a discussion paper on issues including:

  • the appropriateness of existing consumer protections under the Australian Consumer Law (ACL) for consumers of AI‑enabled goods and services,
  • the application of existing ACL provisions to new and emerging AI‑enabled goods and services, and
  • remedies for consumers and liability for suppliers and manufacturers of AI‑enabled goods and services where things go wrong.

Conclusion

As the demand for AI-enabled goods and services grows, new forms of liability are emerging for technology providers and their customers.

For now, liability for AI-related harm will be informed by existing Australian legal frameworks for privacy, intellectual property, product liability and negligence.

Like other jurisdictions around the world, however, Australia is looking at whether additional regulation is needed to address risks for suppliers, manufacturers and consumers of AI.  


1 IceTV Pty Limited v Nine Network Australia Pty Limited (2009) 239 CLR 458; Telstra Corp Ltd v Phone Directories Co Pty Ltd [2010] FCAFC 149.