By: Antony Holden

Jones v Family Court at Whangārei [2026] NZSC 1


The New Zealand Supreme Court’s decision in Jones v Family Court at Whangārei [2026] NZSC 1 is a sharp reminder that artificial intelligence, left unchecked, can turn Court submissions into a professional liability event. Although the case involved a self-represented litigant, the Court’s observations apply with equal – indeed greater – force to lawyers, whose duties of candour are codified in the Lawyers and Conveyancers Act (Lawyers: Conduct and Client Care) Rules 2008.

What happened?

Mr Jones filed submissions in a leave application citing several authorities that “appear to have been hallucinated by an Artificial Intelligence (AI) application”. The Court identified specific examples: citations such as “Teddy v Police [2015] NZSC 62” and “Baird v R [2013] NZSC 120” combined real case names with incorrect citations, while the genuine cases bearing those names had no direct relevance to the application. Mr Jones’s submissions also misattributed propositions of law to four further cases that were genuine but erroneously cited.

The risks this case illustrates

First, hallucinated citations undermine the Court’s ability to assess the merits and waste judicial resources.

Second, the Court warned that the misuse of AI in legal proceedings has serious implications for the administration of justice and public confidence in the justice system.

Third, reliance on unverified AI outputs may in serious cases amount to obstruction of justice or contempt of Court. For lawyers, these risks are magnified by Rule 13.1 of the Conduct and Client Care Rules, which requires a practitioner to take reasonable care to ensure that statements made to the Court are accurate and not misleading. A breach may attract disciplinary proceedings, costs orders, and reputational damage that cascades to insurers and their insureds.

Practical safeguards

The Court endorsed the New Zealand Courts’ guideline for generative AI, which states:

“You are responsible for ensuring that all information you provide to the Court/tribunal is accurate. You must check the accuracy of any information you get from a [generative AI] chatbot before using that information in Court/tribunal proceedings.”

That responsibility rests with the person filing, not the tool.

Three things to do now:

  • Verify every citation independently: Before filing, confirm each authority exists, bears the neutral citation attributed, and actually supports the proposition for which it is cited – using official databases, not the AI itself.
  • Preserve the evidentiary trail: Retain AI-generated drafts alongside your verified final submissions so that, if challenged, you can demonstrate the human review that occurred.
  • Treat AI as a research assistant, never as counsel: Use generative tools to accelerate drafting or brainstorming, but apply the same scrutiny you would to work product from a junior who has never appeared in Court.