Issue 13 of our Cyber, Privacy and Technology Report is here, covering key developments and insights for insurers, brokers, and their customers operating in the cyber, privacy, and technology sectors.

This issue highlights major updates in Australia, including a landmark Privacy Act penalty and ongoing OAIC enforcement actions against Optus, Kmart, and Qantas. The statutory tort of privacy is now being tested in the courts, while AI oversight continues with investigations into medical imaging and enforcement against international AI providers.

Additionally, proposals from the Productivity Commission and commentary from the Attorney-General signal the next phase of privacy reform. Cyber and digital updates include the Australian Cyber Security Strategy (Horizon 2), a youth social media ban, and new Freedom of Information legislation, alongside emerging technology liability and AI-related risks.

In New Zealand, the Biometrics Processing Privacy Code comes into force on 3 November, CERT NZ has merged with the NCSC to streamline cyber incident reporting, and the Privacy Amendment Act 2025 introduces a new Information Privacy Principle 3A from 1 May 2026, alongside other important regulatory updates.

We hope you find this edition both insightful and practical in navigating the ever-evolving cyber and technology landscape.

If you’d like to discuss any of the topics covered, please don’t hesitate to reach out to a member of our team or click here to find out more.

24/7 Cyber Hotline

Wotton Kearney operates a cyber incident response hotline that is monitored 24/7 by our dedicated team of breach response lawyers. By using a lawyer as an incident manager, we can immediately protect key reports and other sensitive communications with your customer and other vendors under legal professional privilege.

To access our hotline, please click here.

What we’ve been seeing – Q1 (July to Sept) 2025

Healthcare Provider

Wotton Kearney was engaged to lead the incident response for a multinational medical provider that was impacted by a ransomware attack, resulting in the exfiltration of nearly 1TB of sensitive data. In this role, we assisted with the engagement of a panel of expert vendors including IT forensics and crisis communications.

As the number of affected individuals rose to over 300,000, we managed a multi-phase notification campaign using Wotton Kearney’s purpose-built notification service (b)REACH.

We took care of the entire notification process, issuing postal and email notifications on the client’s behalf and managing a call-centre hotline and enquiries mailbox that handled extensive patient enquiries.

Our involvement ensured the provider could continue with day-to-day operations, given it did not have the internal resources or capability to manage a large notification campaign, and ensured patient enquiries were swiftly responded to, mitigating the risk of third party claims.

Retail

Wotton Kearney provided comprehensive legal and strategic support throughout the forensic investigation of a suspected cyber incident. Fortunately, the incident was promptly identified by the client and containment actions were swiftly implemented, which severed the unauthorised access and restricted lateral movement.

Our involvement and strategic advice ensured that the response was aligned to the threat posed, and that sufficient time was assigned to the IT forensic investigation in order to fully investigate the source and extent of the incident. The investigation fortunately identified limited unauthorised access over a short period of time (and critically no access to sensitive PII or business data).

Accordingly, we advised the incident was not an eligible data breach under the Privacy Act, and notifications were not required. We also coordinated with the client’s crisis management team and reviewed internal communications to ensure staff were appropriately informed and supported while certain systems were offline as the incident was investigated.

We also assisted the client to report the incident to the ACSC and respond to the ACSC’s enquiries.

Hospital

Wotton Kearney was engaged to support the response to a business email compromise of a hospital staff member’s email account. We worked closely with the hospital and appointed forensic experts to assess the scope of the breach and ensure appropriate containment measures were implemented.

Our team provided legal and strategic guidance throughout the investigation. Following a detailed review of the mailbox for PII by Wotton Kearney (using the specialist eDiscovery tool Canopy), approximately 2,500 individuals were identified as impacted.

Wotton Kearney then managed and coordinated the notification campaign to all affected individuals. The campaign prompted enquiries from patients, impacted parties and the media, which we assisted in responding to.

We also corresponded with the OAIC and advised on obligations to state regulators in the context of the hospital’s public operations, ensuring compliance and mitigating reputational risk throughout the process.

Websites and Apps

Wotton Kearney is defending a claim against a developer who was engaged to build an e-commerce app for the fitness industry (App). After substantial progress had been made developing the app, the customer requested major design changes which resulted in a revised scope and estimated completion date. Later, the customer alleged that the developer failed to deliver the App on time. The customer commenced legal proceedings, alleging that the developer’s representations regarding the estimated completion date were misleading or deceptive, and that it had suffered a loss of opportunity to earn millions in profit due to the delay in completing the App.

The key issues for the Court to determine are whether a failure to deliver the App on time has deprived the customer of an opportunity to derive profit (and how much) and whether the customer failed to mitigate its loss. This matter illustrates a tendency for tech companies (particularly start-ups) to bring large claims based on overly optimistic revenue projections, which overlook key risks such as market volatility and competition.

FinTech

Wotton Kearney is acting for a payment card company which was engaged to supply and manage gift card programs in Australia and overseas. After the contract was signed, the customer wanted to change and expand the contractual scope of works, which the company said it could not provide. The customer terminated the contract and brought proceedings against the company, alleging that it had misrepresented the nature and scope of its services prior to entering the contract. The customer says that without the company’s misrepresentations, it would not have signed the contract and would have instead developed its own capabilities to manage the gift card programs.

The key issues for the Court to decide are whether the company’s conduct was misleading or deceptive; whether the customer would have developed its own capabilities at an earlier point in time if it was not misled; and how the customer’s alleged loss and damage should be assessed. Claims for loss based on hypothetical scenarios are inherently complex and may give rise to causation and reliance arguments.

Software

In 2023, a company was engaged to redesign the customer’s financial consolidation process using third-party software, with an expected delivery date and outlined benefits. The customer later made multiple change requests and failed to respond promptly to queries, leading to delays. In 2024, the customer alleged breach of the master services agreement and misrepresentation, claiming nearly $500,000 in damages for paid fees and future subscription costs.

The key issues were whether the company had reasonable grounds for its representations, whether the customer contributed to its own loss by delaying instructions, and whether liability could be shared with the third-party software provider. Claims involving misleading conduct increasingly span multiple parties such as developers, consultants, and vendors, with defendants seeking to apportion liability through proportionate liability mechanisms. They often argue that claimants contributed to their own losses by delaying instructions, overlooking risks, or failing to mitigate damages.

Australia

The Federal Court has declared that Australian Clinical Labs Limited (ACL) breached the Privacy Act 1988 (Cth) (Privacy Act) in connection with a February 2022 cyberattack on its subsidiary, Medlab Pathology. In a landmark decision marking the first civil penalty handed down under the Privacy Act, ACL has been ordered to pay $5.8 million plus costs – Australian Information Commissioner v Australian Clinical Labs Limited (No 2) [2025] FCA 1224.

The breach

The incident, attributed to the Quantum Threat Actor group, resulted in the theft and dark web publication of 86GB of data, including passport numbers, credit card details and sensitive health data of over 223,000 individuals.

The failures

Between 26 May 2021 and 29 September 2022, ACL “seriously interfered” with the privacy of 21.5 million people by failing to take reasonable steps to safeguard their personal information.

These included:

  • Systemic failures in ACL’s cybersecurity posture: including no dedicated cybersecurity team, inadequate IT governance despite nearly $1 billion in revenue, a 2022 cybersecurity budget of just $350,000, and a failure to assess Medlab’s IT risks prior to acquisition, despite known gaps in testing and audits. The judgment highlighted that ACL’s reliance on external cyber consultants did not absolve it of its own responsibilities under the Privacy Act 1988 (Cth). (Source)
  • Failure to investigate and delay in notification: ACL’s initial investigation monitored only 3 of 121 affected computers and failed to detect the exfiltration. Despite alerts from the Australian Cyber Security Centre in March and June 2022, ACL delayed reporting the breach until July. The OAIC alleged that ACL failed to conduct a reasonable assessment in determining whether the cyber incident constituted an eligible data breach under the Privacy Act.

The penalty

The Federal Court had regard to 223,000 instances of contravention, that is, one contravention for each affected individual. The Court emphasised that the objects of the Privacy Act are focussed on the protection of the privacy of individuals, rather than a single act or practice that may be in breach.

The penalty can be understood as comprising three components:

  • ACL’s failure to take reasonable steps to protect the personal information of Medlab Pathology’s customers ($4.2 million).
  • ACL’s failure to conduct a timely and adequate assessment of whether an eligible data breach had occurred, in contravention of section 26WH(2) of the Privacy Act ($800,000).
  • ACL’s failure to notify the OAIC as soon as practicable after forming reasonable grounds to believe an eligible data breach had occurred, in breach of section 26WK(2) ($800,000).

Note that the increased penalty cap of $50 million, introduced in December 2022, does not apply here, as the conduct occurred prior to the introduction of the amendments to the civil penalty regime under the Privacy Act. This case demonstrates that Courts may lean towards assessing penalties based on overall conduct rather than ‘per-person’ contraventions where proceedings are brought under the pre-December 2022 legislation.

The Federal Court emphasised the importance of timely notification and thorough forensic investigation, the expectation of robust cybersecurity governance (particularly for entities tasked with handling sensitive personal information), and an emerging shift in enforcement strategy towards systemic conduct rather than individual contraventions.

On 8 August 2025, the Office of the Australian Information Commissioner (OAIC) announced it has commenced proceedings against Optus in the Federal Court of Australia. This action stems from a significant cyber attack in 2022, which resulted in the theft of personal information from millions of current and former Optus customers.

The personal information exposed in the breach included contact details and government-related identifiers, such as passport and driver’s licence numbers, along with Medicare card details and various other forms of identification. It has been reported that the Optus data breach led to over 300,000 fraud attempts using the impacted data. (Source)

Following the breach, the OAIC launched an investigation into Optus’ privacy practices. The investigation was not merely about the breach itself but focused on whether the telecommunications giant had taken reasonable steps to protect the personal information in its care from misuse, unauthorised access or disclosure.

The OAIC alleges that from on or around 17 October 2019 to 20 September 2022, Optus seriously interfered with the privacy of approximately 9.5 million Australians by failing to take reasonable steps to protect their personal information from misuse, interference and loss, and from unauthorised access, modification or disclosure, in breach of APP 11.1 of the Privacy Act 1988 (Cth). The OAIC recognises that reasonable steps will depend on the size of the affected organisation, its resources, and the nature and volume of the personal information it held.

The proceedings will likely produce authoritative guidance on what constitutes ‘reasonable steps’ to secure and protect personal information. In practice, ‘reasonable steps’ means the controls that Optus allegedly failed to maintain or implement, which left customer personal information exposed. From previous proceedings and recent OAIC guidance, we know that the following can assist in meeting the requirement, particularly for a well-resourced organisation:

  • Implementing procedures that ensure clear ownership and responsibility over internet-facing domains.
  • Ensuring that those requesting customers’ personal information are authorised to access that information.
  • Layering security controls to avoid a single point of failure.
  • Implementing robust security monitoring processes and procedures to ensure that vulnerabilities are detected and incidents are responded to in a timely manner.
  • Appropriately resourcing privacy and cyber security, including when outsourced to third-party providers.
  • Regularly reviewing practices and systems, including actively assessing critical sensitive infrastructure, and acting on areas of improvement in a timely manner.

The OAIC alleges 9.5 million contraventions, one for each individual whose privacy was allegedly compromised by Optus. That puts the theoretical maximum penalty at a staggering $21.09 trillion. While such a figure is unlikely, and the final penalty remains at the court’s discretion, the case will test how severely Optus is penalised and whether the courts are willing to adopt a tougher regulatory stance going forward. An increased penalty regime was introduced in December 2022, though it cannot apply here because the alleged breaches occurred before its commencement.
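
For context, the headline figure can be reconciled as the number of alleged contraventions multiplied by a per-contravention maximum civil penalty of $2.22 million. That per-contravention figure is our assumption for illustration (it reflects the pre-December 2022 cap for body corporates rather than a number stated in the OAIC's filing), but it reconciles exactly with the total quoted above:

\[
9{,}500{,}000 \ \text{contraventions} \times \$2{,}220{,}000 = \$21.09\ \text{trillion}
\]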

Regardless of the final outcome, the action signals a clear shift: regulators are prepared to take decisive action against organisations that fail to safeguard personal information.

In September 2025, the Privacy Commissioner found that Kmart Australia Limited (Kmart) breached Australians’ privacy by collecting their personal and sensitive information through facial recognition technology (FRT) across several of its stores in Australia. (Source)

Kmart implemented FRT across 28 of its stores between June 2020 and July 2022, with the technology designed to capture the faces and identities of every individual who entered the store and presented returns, in an attempt to identify people committing refund fraud.

The Privacy Commissioner found that Kmart had not notified individuals or sought their consent to collect images of their faces, a form of biometric information, upon entering each store.

Kmart argued it was not required to obtain consent, relying on an exemption under the Privacy Act 1988 (Cth) (Privacy Act) that allows for the collection of information without consent where an entity “reasonably believes they need to collect personal information to tackle unlawful activity or serious misconduct.”

Biometric information is one of the most sensitive types of personal information and requires a higher level of protection under the Privacy Act. The Privacy Commissioner assessed whether Kmart met the conditions to rely upon the exemption, and ultimately found that:

  • the biometric information was indiscriminately collected;
  • Kmart could have implemented other less intrusive methods to address refund fraud;
  • the use of FRT was not proportionate to the risk, given many individuals were not suspected of fraud, yet had their biometric data collected.

The OAIC has issued guidance on FRT and the potential privacy risks and implications, encouraging entities to consider the associated and potential risks prior to implementation. The OAIC guidance can be found here.

On 17 July 2025, a representative complaint was made to the Office of the Australian Information Commissioner (the OAIC) alleging that Qantas breached the Privacy Act 1988 (Cth) (the Privacy Act) by failing to adequately protect the personal information of 5.7 million customers.

Qantas confirmed that in June 2025, ‘unusual activity’ was detected on a third-party platform used by a Qantas call centre in Manila. In a statement, Qantas revealed that the compromised data was limited to contact details, Qantas Frequent Flyer details, and/or gender and meal preference information.

Qantas obtained an interim injunction in the NSW Supreme Court “… to prevent the data being accessed, viewed, released, used, transmitted or published by anyone including any third parties.” Qantas maintains that there is no evidence of personal data being released: there was no impact to Qantas Frequent Flyer accounts; passwords, PINs and login details were not accessed or compromised; and the data compromised was not enough to gain access to Frequent Flyer accounts.

Section 36 of the Privacy Act establishes that an individual (or any individual in the case of an interference with the privacy of 2 or more individuals) may complain to the OAIC about an act or practice that may be an interference with the privacy of the individual. A representative complaint may be lodged if class members have complaints against the same person/entity, all complaints are in respect of or arise out of the same, similar or related circumstances, and all complaints give rise to a substantial common issue of law or fact.

On 1 August 2025, the OAIC announced it had closed its preliminary inquiries into the diagnostic imaging network I-MED Radiology Network Limited’s (I-MED) disclosure of medical imaging scans to Annalise.ai, a former joint venture between I-MED and Harrison.ai, a healthcare artificial intelligence company.

Between 2020 and 2022, I-MED provided Annalise.ai with patient data in order to train and develop its AI model and enhance diagnostic imaging support.

Based on its preliminary inquiries, the Commissioner determined that the patient data provided to Annalise.ai was adequately de-identified, meaning no personal information (as defined under the Privacy Act) was disclosed.

The Commissioner has ceased the inquiries and will not be pursuing regulatory action at this time.

The recent decision in Kurraba Group & Anor v Williams [2025] NSWDC 396 provides the first instance where an application for relief has been brought under the recently enacted provisions concerning the tort of serious invasion of privacy.

One of the claims in the matter relies specifically on clause 7, Part 2 of Schedule 2 of the Privacy Act 1988 (Cth). This section of the Privacy Act was intended to establish a flexible framework enabling individuals to seek protection and compensation for a broader range of privacy complaints.

In granting an urgent interim injunction under the new provision, Gibson DCJ determined that there was a “serious question to be tried” concerning this new tort of privacy.

The Court accepted that the District Court of NSW has the necessary jurisdiction under the Privacy Act to grant relief for the tort of serious invasion of privacy, confirming its power to do so for the purposes of interlocutory relief. In this case the underlying claim involved the misuse of the Second Plaintiff’s private photographs, and the Court granted injunctions requiring the Defendant to remove all documents relating to the Plaintiffs from the internet.

This decision marks the Court’s first step into navigating new legislative territory by engaging with the newly enacted tort of serious invasion of privacy, although, for now, its application has extended only to the grant of injunctive relief.

Australia’s first decision on the new statutory tort for serious invasion of privacy has arrived – a pivotal moment in the development of Australia’s privacy law framework.

In Kurraba Group Pty Ltd & Anor v Williams [2025] NSWDC 396, the Court granted interlocutory relief in response to what it described as a “campaign of extortion”, marking the first judicial consideration of the Privacy Act 1988 (Cth) amendments introducing a cause of action for serious invasion of privacy.

Read the article here.

In July 2025, the OAIC released its regulatory action priorities for 2025-26, detailing the key areas it will concentrate its regulatory activities throughout this year and into the next.

Australian Information Commissioner, Elizabeth Tydd, said in a statement, “In announcing our priorities, we want to ensure that the community is aware of the harms that we are focused on and why they are important. We also want to signal to industry and government the practices that they should focus on to ensure that they are upholding their obligations”.

The OAIC is aiming to increase public trust and confidence in the protection of personal information and access to government-held information. Fostering community confidence and trust is essential for maintaining a robust democracy and generating favourable outcomes for the economy.

The OAIC has decided to focus on the following four key areas:

Rebalancing power and information asymmetries.

The OAIC has identified several sectors and technologies where power and information imbalances exist, including:

  • Rental and property, credit reporting and data brokerage sectors.
  • Advertising and technology, such as pixel tracking.
  • Practices that erode information access and privacy rights in the application of artificial intelligence.
  • Excessive collection and retention of personal information.
  • Systemic failures to enable timely access to government information.

Rights preservation in new and emerging technologies.

The OAIC is prioritising privacy and information access rights when dealing with new and emerging technologies, including:

  • Facial recognition technology and biometric scanning.
  • New surveillance technologies, like data tracking in apps, cars and other devices.
  • The protection of privacy and the right to access information in the context of government use of artificial intelligence and automated decision-making.

Strengthening the information governance of the Australian Public Service.

The OAIC aims to bolster information governance and integrity in the Australian Public Service by:

  • Identifying unsatisfactory information handling practices and inappropriate data management, including how requests for access work under the Freedom of Information Act 1982 and the Privacy Act 1988.
  • Establishing guidance to streamline administrative decision-making in the Public Sector.
  • Assessing integrity risks to government stemming from information management practices that may affect public trust, such as inadequate disclosure procedures.

Ensuring timely access to government information.

The OAIC seeks to promote and uphold compliance with the Freedom of Information Act 1982 by systematically monitoring agency performance, including response times and refusal rates, conducting complaint investigations, and addressing patterns of systemic underperformance.

Overall, the OAIC is placing its focus on increasing the public’s trust in government operations, and reinforcing privacy security measures regarding new and emerging technologies.

On 8 September 2025, Australia’s eSafety Commissioner announced enforcement action against a UK-based AI provider whose “nudify” services were used to generate deepfake pornography of Australian school children. The Commissioner issued a formal warning to the company for enabling the creation of child sexual exploitation material, breaching an industry standard under the Online Safety Act.

This highlights the growing legal risks for technology providers whose platforms or services facilitate harmful or unlawful content. Providers may face regulatory action and exposure to civil or criminal liability if they fail to implement safeguards against misuse, irrespective of the jurisdiction in which they are based.

The Productivity Commission’s (PC) interim report, Harnessing Data and Digital Technology, was released on 5 August 2025. The PC is an independent Australian Government body that researches and advises on economic, social, and environmental issues to support long-term policy decisions for Australians’ welfare.

The interim report recommends creating an “alternative compliance pathway” in the Privacy Act 1988 (Cth) so businesses can choose to comply either via:

  1. the current controls‑based Australian Privacy Principles (APPs) regime (notice/consent, APP processes), or
  2. a new outcomes‑based pathway where compliance is judged against specified privacy outcomes rather than prescriptive APP controls.

Under the current Privacy Act framework, businesses demonstrate compliance by following the APPs and procedural controls, focusing on specific steps like notices, consent, and security measures. In the interim report, the PC proposed a dual-track model within the Privacy Act under which corporations could elect the alternative outcomes-based track. They would still be subject to the Privacy Act, OAIC oversight, penalties and enforcement, but would demonstrate compliance through outcomes, such as auditable tests of data minimisation, user understanding, and proportionality. The focus would be on whether privacy outcomes were achieved, rather than whether prescribed privacy controls were put in place.

The PC sees the current regime as costly without delivering meaningful protections, whereas an outcomes-based pathway could improve protections, especially where entities can prove they act in users’ interests.

On 5 September 2025, the OAIC released submissions reiterating the importance of a risk-based approach, strong governance, and calibrated safeguards. The submissions neither endorse the dual-track outcomes pathway nor set out how a “best interests” test should be measured. Instead, they reference PC findings regarding recalibrating privacy safeguards and emphasise trust, enforceability, and measurable standards.

Submissions on the interim report closed on 15 September 2025 with the final report due to be released in December 2025.

A short remark from the Attorney-General Michelle Rowland in a recent television interview has attracted attention, highlighting the growing anticipation around the next stage of Australia’s privacy reforms. She stated that Australians are “sick and tired of their personal data being exploited” and stressed that “we will not have our privacy reforms dictated by multinational tech giants”. These comments appear to refer to the Tranche 2 reforms recommended in the Privacy Act Review Report released in February 2023, which set out the blueprint for both the Tranche 1 amendments and the broader package of future reform.

There has been increasing commentary on the pace and direction of reform, with the Productivity Commission’s August interim Harnessing Data and Digital Technology report adding to the discussion. While advisory in nature, the Commission’s report has drawn attention for proposing a novel “dual-track” compliance framework, under which entities could either follow prescriptive requirements such as the Australian Privacy Principles or demonstrate compliance through outcomes-based obligations – for example, by showing they had acted in individuals’ best interests by identifying and mitigating potential privacy risks.

The OAIC has responded constructively, noting that “a greater focus on outcomes as opposed to controls can improve privacy rights in Australia”, while cautioning against measures that impose unnecessary regulatory burden. It also pointed to the proposed “fair and reasonable” test in Tranche 2 as a clear example of outcomes-based reform already underway.

Whether or not the Attorney-General’s remarks were intended as a direct signal on Tranche 2, they have reinforced expectations that the Government will move forward with the next stage of reform, with attention now centring on the timing and scope of that process.

A new model that could significantly alter how artificial intelligence (AI) systems access online content has been introduced. Known as Pay-Per-Crawl, the initiative allows website owners to charge AI bots for accessing their content – marking a shift from the longstanding practice of free and largely unregulated web scraping.

What is Pay-Per-Crawl

Most AI systems rely on web crawlers: automated bots that scan and copy publicly available content from websites. These crawlers are used to gather data for various purposes, including training large language models, powering search engines, and generating summaries or answers. Traditionally, this process has occurred without direct engagement with the content or website owners, and without compensation.

The Pay-Per-Crawl model introduces a significant change to this dynamic. It allows website operators to:

  • Block AI crawlers by default, unless explicit permission is granted;
  • Charge for access, effectively placing a price on each crawl request; and
  • Set terms of use, including licensing conditions and usage restrictions.

This is achieved through technical enforcement mechanisms embedded in Cloudflare’s infrastructure, which handles a substantial portion of global internet traffic. (In its 2024 Year in Review blog post, Cloudflare reports handling 63 million web requests (HTTP(S)) and 42 million DNS requests per second, on average, in 2024. Some sources report that Cloudflare handles traffic for approximately 19.8% of all websites via its reverse proxy.)

Notably, the model revives the HTTP 402 status code, a rarely used signal that payment is required to access a resource. The model essentially creates a digital tollgate for AI bots.
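
By way of illustration only, the sketch below shows how a publisher-side gate along these lines might behave: ordinary visitors pass through untouched, while requests bearing known AI-crawler user agents receive an HTTP 402 response unless they present evidence of a paid or licensed arrangement. This is a minimal sketch under our own assumptions, not Cloudflare’s implementation; the crawler signatures, header names, token check and pricing are hypothetical placeholders.

```python
# Minimal illustrative sketch of a "pay-per-crawl" style gate.
# NOT Cloudflare's implementation: crawler signatures, header names and
# pricing below are hypothetical placeholders for illustration only.
from flask import Flask, Response, request

app = Flask(__name__)

# Hypothetical user-agent substrings the site owner wants to charge.
AI_CRAWLER_SIGNATURES = ["GPTBot", "ClaudeBot", "CCBot"]

# Hypothetical per-request price and a stand-in for a billing/licensing check.
PRICE_PER_CRAWL_USD = "0.01"
PAID_TOKENS = {"example-prepaid-token"}  # would normally live in a billing system


def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the request looks like it comes from a known AI crawler."""
    return any(sig.lower() in user_agent.lower() for sig in AI_CRAWLER_SIGNATURES)


@app.before_request
def charge_ai_crawlers():
    """Intercept AI crawler requests before the content is served."""
    ua = request.headers.get("User-Agent", "")
    if not is_ai_crawler(ua):
        return None  # ordinary visitors pass through

    token = request.headers.get("X-Crawl-Payment-Token", "")  # hypothetical header
    if token in PAID_TOKENS:
        return None  # crawler has a paid/licensed arrangement, allow the request

    # Revive HTTP 402 "Payment Required" as a digital tollgate for unlicensed bots.
    return Response(
        "Payment required to crawl this content.",
        status=402,
        headers={"X-Crawl-Price-USD": PRICE_PER_CRAWL_USD},  # hypothetical header
    )


@app.route("/")
def article():
    return "Publisher content that AI crawlers must pay to access."


if __name__ == "__main__":
    app.run(port=8080)
```

In practice, enforcement of this kind is more likely to sit at the network or CDN edge than in a publisher’s application code, which is what makes infrastructure-level models such as Pay-Per-Crawl significant.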

Legal Considerations

From a legal standpoint, this model shifts the framework from passive copyright protection to active contractual control. Organisations can now determine how their content is accessed and used. This has implications for intellectual property management, data governance, and commercial licensing strategies, particularly for enterprises whose business models depend on the monetisation of digital content, online visibility, and web traffic.

Pay-Per-Crawl represents the first practical, proactive and enforceable framework for digital copyright protection in an AI-driven environment, shifting control back to content owners through contractual and technical means. This solution may also signal the future direction of AI crawling regulation, particularly as multiple lawsuits in the United States and European Union challenge the legality of large-scale scraping and AI training under existing copyright frameworks. In contrast to litigation, Pay-Per-Crawl offers a proactive, enforceable alternative grounded in consent and contract.

Industry Support

The Pay-Per-Crawl initiative has received early and notable support from several digital publishers, signalling a growing consensus around the need for enforceable frameworks to manage AI access to online content.

The endorsement reflects growing concern over the unregulated use of online content by AI systems, particularly for model training.

Criticisms

The announcement of Pay-per-Crawl is not without its criticisms, as it represents one approach among potentially many, and its effectiveness depends on adoption across both publishers and AI developers. As legal challenges to AI scraping continue to unfold in jurisdictions such as the United States and the European Union, it is likely that a broader ecosystem of solutions will emerge, combining contractual, technical, and legislative mechanisms to address the complex intersection of copyright, consent, and AI innovation.

Critics have also noted that the model assumes a uniform value across all scraped content, which may not reflect the diverse commercial and editorial significance of different websites. Moreover, technical enforcement can be circumvented. This highlights the limitations of relying solely on infrastructure-level controls and underscores the need for complementary legal, technical, and regulatory safeguards.

Conclusion

In summary, Pay-Per-Crawl marks a pivotal shift in the regulation of AI-driven web scraping, empowering content owners with enforceable contractual and technical controls. While the model offers a proactive alternative and has garnered support from major publishers, its ultimate effectiveness will depend on widespread adoption and the evolution of complementary legal and technical safeguards. As the legal landscape continues to develop, Pay-Per-Crawl may serve as a blueprint for balancing innovation in AI with the protection of digital rights and commercial interests.

On 10 October 2025, the Office of the Australian Information Commissioner (OAIC) published regulatory guidance for social media platforms subject to the age restrictions, addressing compliance with the privacy provisions of the Social Media Minimum Age (SMMA) scheme.

The guidance outlines several important actions for entities to take, including ensuring that age-assurance methods under the SMMA scheme are necessary and proportionate, assessing the privacy impacts associated with each method, and minimising the use of personal and sensitive information. Any further use of collected data must be optional, based on clear user consent, and in alignment with the Privacy Act 1988 and the Australian Privacy Principles.

The OAIC will shortly release further resources to help Australians understand how their personal information is going to be handled as well as educational resources for children and families to help them navigate the changes and support conversations about children’s privacy online. (Source)

The Australian under-16 social media ban is scheduled to commence on 10 December 2025. Under the new framework, social media platforms such as TikTok and Snapchat are obliged to take “reasonable steps” to remove users who are underage. However, there is no legal requirement for these companies to verify every account or ensure their age-checking measures meet strict accuracy standards. This gap has prompted concerns about how consistently and transparently the rules will be enforced across different platforms. Restrictions will apply to social media platforms that meet three specific conditions (unless otherwise excluded based on legislative rules set in July 2025):

  • the sole purpose, or a significant purpose, of the service is to enable online social interaction between two or more end-users;
  • the service allows end-users to link to, or interact with, some or all of the other end-users; and
  • the service allows end-users to post material on the service.

The eSafety Commissioner is expected to release details of the ‘reasonable steps’ and related requirements for social media platforms in the coming weeks, ahead of the ban’s commencement.

The Senate Environment and Communications References Committee commenced a public inquiry into implementing regulations aimed at protecting children and young people online on 27 August 2025. A public hearing is scheduled for 13 October 2025 to hear from expert witnesses about the social media ban and the Internet Search Engine Services Online Safety Code including:

  • privacy and data protection implications of age verification;
  • the expansion of corporate data collection and user profiling capabilities enabled by code compliance requirements;
  • the technical implementation and efficacy of age verification and content filtering mechanisms;
  • alternative technical approaches to online safety for all users, including young people;
  • appropriate oversight mechanisms for online safety codes;
  • global experience and best practice; and
  • any other related matters.

Following consultation, the Committee is due to release a report on 14 November 2025.

In November 2023 the Australian Government released the 2023-2030 Cyber Security Strategy, with the vision of becoming a world leader in cyber security by the end of 2030. The strategy was designed to address the increasing cyber threats faced by Australian businesses and citizens by improving national cyber resilience through a comprehensive approach that includes community engagement, public-private partnerships and law reform.

On 29 November 2024 the Cyber Security Act 2024 received Royal Assent and became part of Australian law. The aim of the Act was to bring Australia into line with global cyber security best practices, addressing critical gaps in our legal framework. Some of the important initiatives implemented under this Act are:

  • A minimum cyber security standard for smart devices.
  • A mandatory ransomware and cyber extortion reporting obligation for certain businesses to report ransom payments.
  • The introduction of a ‘limited use’ obligation for the National Cyber Security Coordinator to encourage industry engagement with the government following cyber incidents.
  • The establishment of a Cyber Incident Review Board.

The government announced there would be three phases to this approach, calling each phase a ‘horizon’. Horizon 1 (2023-2025) focused on strengthening the foundational aspects of cyber security and addressing critical gaps. Now that we are nearing the end of 2025, Horizon 1 is coming to a close, and Horizon 2 is fast approaching. Horizon 2 (2026-2028) aims to enhance cyber security maturity across various sectors of the economy.

The Government has released a Policy Discussion Paper for public consultation on Horizon 2, which builds on the foundational reforms of Horizon 1. It focuses on enhancing cyber maturity across the economy, with targeted improvements across six strategic “cyber shields”:

  • Strong Businesses and Citizens
  • Safe Technology
  • World-Class Threat Sharing and Blocking
  • Protected Critical Infrastructure
  • Sovereign Capabilities
  • Strong Region and Global Leadership

The Freedom of Information Bill 2025 (FOI Amendment Bill) was introduced to Parliament on 3 September 2025. This bill proposes significant changes to the Freedom of Information Act 1982 (FOI Act) and the Australian Information Commissioner Act 2010 (AIC Act), marking the biggest changes to the FOI Act in over a decade.

Key amendments include:

  • No Anonymous Requests: The bill introduces a requirement that FOI requests cannot be made anonymously or under a pseudonym. This requirement aims to ensure that vexatious applicant declarations are effective and that personal information is only disclosed in appropriate circumstances.
  • Processing Cap: The bill introduces a discretionary 40-hour processing cap for FOI requests, intended to balance an applicant’s access rights with the administrative burden on agencies.
  • Information Commissioner’s Power: The Information Commissioner will have the power to remit IC review applications with directions to decision-makers for further consideration, thus reducing the potential burden on the Information Commissioner’s office and expediting the review process.
  • Application Fees: The bill will enable an application fee to be specified in the regulations for FOI requests, internal reviews, and Information Commissioner reviews, excluding requests for an individual’s own personal information.
  • Ministerial Response: Amendments will allow an FOI request or related review proceeding to be responded to by an agency or other Minister if the original Ministerial respondent ceases to hold the relevant Office.

The FOI Act primarily relates to Federal Government departments and agencies. These entities are required to process and respond to FOI requests, providing access to government-held information. The FOI Act does not generally apply to private or public organisations outside of the federal government. However, it can impact these organisations indirectly if they interact with federal government entities or if their information is held by such entities.

Key takeaways from the changes include:

Transparency vs Efficiency

The bill aims to modernise the FOI framework and address abuses of process, but it has raised concerns about prioritising government efficiency over transparency. The prohibition on anonymous requests and the introduction of application fees may restrict access to information for some individuals. Applicants who may prefer to remain anonymous, such as whistleblowers or those concerned about privacy and potential repercussions, may be less inclined to request access to information if they cannot do so covertly. These changes may also reduce the overall transparency and accountability of government entities: by making it more difficult for individuals to access information, they could limit the public’s ability to scrutinise government actions and hold officials accountable.

Administrative Burden

The processing cap and the new powers for the Information Commissioner are designed to reduce the administrative burden on agencies. By introducing a processing cap, agencies can manage their resources more effectively and ensure that they are not dedicating excessive time to a single request. This reduced burden can help prevent backlogs and improve the overall efficiency of the FOI process. The new powers of the Information Commissioner also allow for a more efficient review process, as the Information Commissioner can provide guidance to decision-makers on how to handle specific requests, reducing the need for lengthy reviews and appeals.

On 22 August 2025, Alice Linacre was appointed as the new Freedom of Information Commissioner.

Ms Linacre has been appointed for a 5-year term from September 2025. The Australian Freedom of Information Commissioner promotes awareness, reviews agency decisions, handles complaints, monitors compliance, and advises on policy to ensure government transparency and accountability.

On 27 June 2025, the Australian Government listed “Terrorgram” as a terrorist organisation under the Criminal Code Act 1995 (the Criminal Code).

Terrorgram primarily operates on the decentralised and encrypted platform Telegram. The group creates and distributes propaganda inspiring terrorist attacks on minority groups, critical infrastructure and specific individuals. Telegram has been used to provide instructions on how to conduct a terrorist attack, with Terrorgram successfully inspiring terrorist attacks in the United States, Europe and Asia.

“Online radicalisation is a growing threat but the government has tools at its disposal and we will use every one of them to keep Australians safe”.

– Tony Burke MP, Minister for Home Affairs and Cyber Security

Listing Terrorgram as a terrorist organisation under the Criminal Code complements counter-terrorism financing sanctions imposed on 3 February 2025 under Part 4 of the Charter of the United Nations Act 1945, giving effect to Australia’s international obligations under UN SC Resolution 1373.

Telegram’s use as a platform facilitating the group’s operations raises further questions about telecommunications providers and their responsibility for enabling or facilitating criminal conduct. Further consideration must also be given to how the listing of a platform-affiliated organisation bears on concepts like due diligence and D&O obligations. Parallels can be drawn to the implications of RaaS-associated financial sanctions for the due diligence steps which victims must take when deciding whether to make a ransomware payment.

Terrorgram’s listing ultimately prompts the question of whether other Telegram-associated cyber threats, communications or entities trigger broader cyber security, privacy or data protection obligations.

Software providers frequently make forward-looking representations about system performance, scalability, and reliability, but legal consequences can be significant when those claims fail to materialise.

The Victorian Supreme Court of Appeal’s decision in Kytec Pty Ltd v ProLearn Corporation Pty Ltd [2024] VSCA 23 (“Kytec”) underscores the legal standard for representations about future matters under the Australian Consumer Law (ACL), and clarifies the evidentiary burden imposed on parties alleged to have made such representations.

Background

ProLearn Corporation Pty Ltd (ProLearn) engaged Kytec, an agent of Telstra Group Limited (Telstra), to assist with the development of a new call centre system. Kytec and Telstra made several representations in their ‘final proposal document’ (Final Proposal), including that the new call centre system would:

  • automatically create personal call back lists for operators;
  • support 83 agents; and
  • enhance the productivity of ProLearn’s call centre.

The solution went live on 1 September 2015, but it did not go to plan. Kytec characterised the post implementation experience as ‘normal teething problems’. ProLearn claimed the new system did not perform as promised, resulting in significant business losses, in the vicinity of $9.1 million.

Supreme Court Findings

ProLearn brought claims against Telstra for breach of contract and misleading conduct, and against Kytec for misleading conduct and negligence, relying on section 4 of the ACL in respect of the representations as to future matters.

Section 4 of the ACL provides that if a person makes a representation about a future matter without a reasonable basis, the representation can be taken to be misleading. This presumption is rebuttable, but it places an evidentiary burden on the party making the future representation to adduce evidence demonstrating that such grounds existed when the representation was made. Notably, the mere production of evidence does not discharge the burden; the party making the representation must establish that the grounds were objectively reasonable and sufficient to support the representation. Once such evidence is adduced, the ultimate burden rests on the claimant to prove that the representation was in fact misleading (Sykes v Reserve Bank of Australia (1998) at [60]).

The trial judge found both Telstra and Kytec liable for misleading and deceptive conduct. Critically, the trial judge held that Telstra could not rely on Kytec’s expertise as its agent, as Telstra had not sufficiently considered or analysed the Final Proposal nor formed a view as to whether the solution could meet ProLearn’s requirements.

Appeal Findings

Kytec and Telstra both appealed the judgment, on the basis that the Supreme Court erred in finding that they lacked reasonable grounds for the representations made about the new system.

The Court of Appeal upheld Telstra’s appeal and found that it had reasonable grounds to make the representations as a principal relying on Kytec’s expertise. Specifically, the Court of Appeal disagreed with the trial judge, emphasising that reasonable reliance on an expert may, in some circumstances, constitute reasonable grounds, particularly where the relying party lacks the capacity to independently verify the claims.

Conversely, the Court dismissed Kytec’s appeal and held that it did not have reasonable grounds for its representations about the system’s capabilities. Specifically, the Court of Appeal:

  • concluded that Kytec’s reliance on a third-party telephone system did not absolve it from its obligation to verify whether the system could meet ProLearn’s specific needs;
  • found Kytec’s evidence to have been general and not specific to ProLearn’s needs. In particular, the Court agreed with the trial judge’s finding that the Cisco system required bespoke modification and Kytec failed to demonstrate that it could deliver those modifications contrary to its representations;
  • agreed with the trial judge that post-implementation events could be analysed to assess whether reasonable grounds existed at the time the representations were made;
  • found that such post-implementation issues were not minor or expected (teething problems), but unforeseen and serious, undermining the credibility of Kytec’s performance forecasts; and
  • concluded that Kytec’s reliance on external systems and its failure to conduct sufficient due diligence meant its performance claims were not grounded in fact, and therefore misleading under the ACL.

Key Takeaways

Kytec is a cautionary tale for software providers and technology consultants. It reinforces that forward-looking claims must be supported by concrete, contemporaneous evidence. Reputation, experience, and reliance on third-party systems are not enough. Providers must demonstrate that their representations were based on reasonable, informed, and context-specific grounds. This case also highlights the importance of documenting reliance and ensuring that delegation to experts is justified and reasonable.

Several key takeaways can be taken from this case for software and technology providers when making performance related representations:

  • using a well-known or reputable software product does not automatically provide reasonable grounds for performance claims. Providers must assess whether the product is fit for the specific purpose and context in which it is being deployed;
  • due diligence is essential. Providers must undertake technical analysis, testing, or validation to support their claims. General experience or past success is not a substitute for case-specific evidence;
  • customisation risks must be addressed. Where a system is being tailored or integrated into a unique environment, providers must ensure that their representations are based on realistic assessments, not assumptions; and
  • documented evidence is helpful. Detailed proposals, technical specifications, and risk assessments can help establish reasonable grounds. The absence of such documentation may weaken a provider’s defence.

At a glance

  • AI technology gives rise to a diverse range of potential risks, depending on its functionality and the environment in which it is deployed.
  • Insurers are responding by moving away from a broad, ‘one size fits all’ approach to AI-related risks.
  • The market for affirmative AI cover is expected to expand as companies seek to transfer their risk to insurers.

We are witnessing an increasing demand for the development and supply of Artificial Intelligence (AI) models as businesses and governments seek to leverage the benefits of AI technology. Insurance plays a key role in managing the risks associated with the deployment of AI, both through traditional policy lines and AI-specific insurance products.

This article explores some of the challenges faced by insurers navigating AI-related risks, including the potential hidden exposure of ‘silent’ AI.

Types of AI risks

AI models perform tasks requiring human intelligence, such as reasoning and problem solving. Tech companies supplying AI models may customise their functionality, train the algorithm using proprietary data, or indeed embed AI in their own business operations.

The European Union’s Artificial Intelligence Act (EU AI Act) established a system for classifying AI risks, which acts as a useful guide when considering how the functions of an AI model and its intended use may interact to pose a threat of harm to end-users:

Unacceptable Risk – the highest risk tier, for AI systems incompatible with EU values and fundamental rights:

  • Subliminal manipulation
  • Exploitation of the vulnerabilities of persons, resulting in harmful behaviour
  • Biometric categorisation of persons based on sensitive characteristics
  • General purpose social scoring
  • Real-time remote biometric identification (in publicly accessible spaces)
  • Assessing the emotional state of a person
  • Predictive policing
  • Scraping facial images

High Risk – where failure or misuse could cause significant harm by negatively affecting health, safety or fundamental rights:

  • Safety components of already regulated products
  • AI in critical infrastructure, recruitment, essential services or law enforcement (amongst others)

Limited Risk – AI systems with a risk of manipulation or deceit:

  • Interacting with chatbots
  • Deep fakes

Minimal Risk – all other AI systems that do not fall under the above categories:

  • Spam filters

This framework highlights the range of different exposures which may be encountered by tech companies developing or supplying AI systems.

The nature of risk and resulting loss depends heavily on the environment in which the AI model operates. For example, tech companies training their AI models using public datasets may be exposed to claims for intellectual property infringement, discriminatory outcomes due to in-built bias, or inaccurate outputs.

For a more in-depth look at AI’s impact on the tech liability landscape, see our recent article on emerging risks for tech professionals here.

‘Silent’ AI use

As AI becomes deeply embedded in business operations, tech providers and their insurers will be alert to the risks posed by ‘silent’ AI. ‘Silent’ AI refers to systems operating within a business (e.g. in supply chains or automated processes) or its outsourced services without the user’s awareness. These AI systems may remain undetected until an error reveals their use.

This hidden layer of risk is particularly problematic given the speed at which businesses are adopting AI tools, often without fully understanding how these tools interact with internal systems or external service providers.

Implications for Insurers

Underwriting AI Risks

Clear regulatory guidance for the use of AI has not yet materialised in Australia, so the burden remains on insurers to proactively assess and mitigate the evolving, and often opaque, exposures that AI presents. Theoretically, a single AI failure can trigger a domino effect of claims across jurisdictions, clients and/or sectors.

As the risks depend to a significant extent on the context in which AI is used, many insurers are abandoning a ‘one size fits all’ model to tailor cover for tech companies developing and supplying AI. Underwriting frameworks are adapting to capture AI’s layered and sometimes invisible nature. This process may encompass:

  • AI-specific proposal questions, including with respect to the use of ‘silent’ AI;
  • technical audits;
  • incorporation of industry certifications (e.g. ISO/IEC 42001); and/or
  • adoption of the Voluntary AI Safety Standard.

Existing Cover

Even where traditional liability policies do not expressly cover AI, they may respond to AI-related losses. The availability of cover will depend on the nature of the loss or the context or environment in which AI was used. For example:

  • tech liability policies may respond to claims arising from the provision of AI-assisted services or products;
  • cyber insurance may respond to privacy breaches or system failures arising from the use of AI;
  • failures in risk oversight or management of AI systems may give rise to D&O claims; and
  • property damage and business interruption may be triggered if AI causes physical or operational harm.

As AI systems increasingly blur the lines between human and machine agency, determining the source of liability (and the responsive insurance policy) becomes more complex. For that reason, we expect the market for affirmative AI cover to expand as companies seek to transfer their risk to insurers.

Key Takeaways

As AI continues to re-characterise the way businesses are conducted, insurers can expect to grapple with risks lying beneath the surface.

The industry is responding by adapting its underwriting approach and rethinking the role of traditional policy lines in responding to AI-related harm.

Ultimately, the insurability of AI hinges on tech liability insurers’ ability to stay flexible, informed, and proactive.

Wotton Kearney’s Technology Disputes Team uniquely combines expertise in insurance, technology and disputes to provide end-to-end support for insurers, technology providers, corporates and government entities. As the only dedicated technology disputes practice in Australia, the team provides a full-service technology capability spanning regulatory risks and investigations, recoveries, disputes and third-party claims under one roof.

For more information, please contact:

New Zealand

On 21 July 2025, the Office of the Privacy Commissioner (OPC) issued the Biometric Processing Privacy Code 2025 (the Code). We have written about the draft Code and consultation process in previous bulletins (here and here).

The Code, issued under the Privacy Act, regulates how organisations collect, hold and use biometric information for the purposes of biometric processing. Biometric processing includes the use of facial recognition, fingerprint scanning or other technologies to collect and process an individual’s biometric information in order to identify or classify them.

The final Code took into account over 100 submissions received during the latest round of consultation. Following the consultation period, several changes were made to the final text of the Code, including the following:

  • Increased the compliance period from 9 to 12 months for organisations already using biometrics.
  • Clarified the “necessity” test. Organisations are required to consider whether there is an alternative method (of biometric processing) with a lower privacy risk, including an assessment of how effectively the alternative will achieve their purpose.
  • Broadened the trial provision by allowing organisations conducting trials to defer compliance with the effectiveness requirement of the necessity test. This gives organisations time to assess the results of the trial and improve accuracy, although they must still comply with the other aspects of the Code during this period.
  • Improved clarity of the exclusion for consumer devices or services that are solely for providing the user with their own personal information or an immersive or entertainment experience.
  • Strengthened safeguards for biometric attention-tracking by limiting its use to safety purposes.

The Code will come into force on 3 November 2025. If you want to discuss the Code, what it might mean for your organisation, and how you can implement biometric technologies safely, please do reach out to a member of our team.

In 2023, the New Zealand Government announced its plans to integrate the NCSC (National Cyber Security Centre) and CERT NZ (Computer Emergency Response Team) to form a cohesive lead cyber security agency in New Zealand.

As of July 2025, this merger is now complete. The CERT NZ name and branding will no longer be used, as all functions have been integrated into the NCSC.

This integration makes the reporting of cyber incidents more straightforward, as there is now a single website for information which includes a “streamlined incident reporting function”. The Deputy Director-General Cyber Security at the Government Communications Security Bureau (GCSB) observed that this merger and the centralising of information to the NCSC website will also provide a “better understanding of the cyber threat landscape across the New Zealand Economy”. This will assist with the identification of trends in cyber security and help businesses to anticipate what threats may arise.

The NCSC New Zealand-based number should now be used for contacting the NCSC about cyber incidents (0800 114 115).

As the impact and pervasiveness of cybercrime in New Zealand continue to rise, it remains to be seen what the NCSC will do next, having taken on a more public-facing role. We will be sure to report on any developments in future bulletins.

The Privacy Amendment Act 2025 received royal assent on 23 September 2025. The Amendment Act introduces a new Information Privacy Principle 3A to the Privacy Act 2020. The new IPP is intended to “increase transparency and help New Zealanders to better exercise their privacy rights” and is “in line with other countries like Australia, the UK and Europe” (Office of the Privacy Commissioner | Privacy Amendment Act passes).

IPP 3A comes into force on 1 May 2026. Any personal information collected indirectly on or after that date will attract additional notification requirements. Where agencies collect personal information from a source other than the individual in question, they will need to take “reasonable steps” to ensure the individual receives various information of the kind typically found in agencies’ privacy policies (previously, IPP 3 only imposed notification requirements where personal information was collected directly from the individual in question).

As with IPP 3, there are exceptions to the new IPP 3A notification requirements. These include where the information is publicly available, the individual has already been made aware that the information has been collected, notification would prejudice the maintenance of the law, notification would undermine the purpose of the collection, and/or notification would pose a serious threat to public health or safety, or to the health or safety of another individual.

All agencies, including insurers, will need to update or develop their privacy policies to deal with the new requirements of IPP 3A and be clear on when any of the exceptions apply. We are awaiting the Office of the Privacy Commissioner’s final guidance on the implementation of IPP 3A later this year. In the meantime, the OPC published its draft guidance in April.

At the Global Privacy Assembly in Seoul this year, 18 international Data Protection Agencies (including New Zealand’s Office of the Privacy Commissioner) issued a Joint Statement (Statement) on data governance to encourage development of AI in a privacy protective manner.

In summary:

  • The statement acknowledges the opportunities that AI brings for human development, while also highlighting the risks AI presents to data protection rights, and noting that individuals and businesses need answers and legal certainty to allow development and use of AI within trustworthy data governance frameworks.
  • The GPA advocates prioritising “privacy-by-design principles [in] AI systems and frameworks” and implementing robust internal data governance frameworks. The statement suggests this should include using technical and procedural safeguards for effective risk mitigation throughout the AI system.
  • It is suggested that clear standards and requirements should be developed to ensure that AI training data is processed lawfully, whether based on consent, contractual necessity, legitimate interest, or other legal justifications.
  • The statement provides that the development of governmental or regulatory AI policy should be grounded in public trust and consistent with the principles of privacy and data protection. It also calls for Data Protection Agencies to play a key role in monitoring the technical and societal implications of AI, and to collaborate with other regulators (such as consumer protection and intellectual property regulators) to promote consistency and synergy in regulatory frameworks, particularly in the development and enforcement of AI policy and regulation.

New Zealand’s Privacy Commissioner noted that the value of the joint statement lies in the fact that developers and users of AI tools are “using information in new ways, including personal information” and that “…Most of the development of popular AI systems happens outside New Zealand”. A joint international statement can therefore be seen as a stronger commitment between privacy regulators to encouraging privacy-centric AI practices.

The adoption of the statement by the OPC is interesting given the Government’s official position that New Zealand’s regulatory framework is “fit for purpose” when it comes to AI. This seems to jar with the approach advocated for in the statement, which suggests legal certainty through clear, specific regulation is important to public trust in AI.

The New Zealand Security Intelligence Service (NZSIS) recently released its 2025 report into New Zealand’s security threat environment. While not limited to cyber, the assessment provides useful insights into the challenges facing New Zealand from a security perspective. In NZSIS’s assessment, New Zealand faces its most complex security environment in recent times.

The NZSIS makes six key assessments about New Zealand’s threat environment in 2025, many of which reflect the difficulties posed by radicalisation online:

  • Online radicalisation remains the most plausible and likely cause of violent extremist threats. The largest risk is posed by lone actors who may not be known to the intelligence community ahead of time.
  • Grievances and polarising issues in the online information space are almost certainly driving support for a range of violent extremist ideologies within New Zealand. No one ideology currently stands out as presenting a greater threat.
  • Young and vulnerable people in New Zealand are particularly at risk of radicalisation, especially while online.
  • Foreign interference activities continue in New Zealand with several states responsible. This includes activities regarded as transnational repression that often target diaspora communities.
  • It is almost certain there is undetected espionage activity that is harming New Zealand’s national interests. The NZSIS has had some success disrupting this activity, but foreign states continue to target New Zealand’s critical organisations, infrastructure and technology to steal sensitive information.
  • Some foreign states have attempted to exploit people inside public and private sector organisations in a deceptive, corruptive, or coercive manner, to gain influence and further their interests.

The NZSIS emphasises the importance of knowing signs of these threats, developing robust security practices, and reporting suspicious activities. The report highlights the need for ongoing awareness and collaboration across government, organisations, and communities to manage and mitigate these evolving threats.

Cyber and data risks – both in terms of radicalisation of individuals and direct compromises of New Zealand organisations – remain an ongoing risk for New Zealanders. If you want to discuss the issues raised in NZSIS’s report and what they might mean for your organisation, please do reach out to a member of our team.

Register for Wotton Kearney’s Cyber, Privacy and Technology updates below.

    Contacts