Issue 11 of our quarterly Cyber, Privacy & Technology Report is out now. Packed with the latest news, it’s the quarterly go-to for insurers, brokers, and their customers operating in cyber, privacy, and technology sectors across Australia and New Zealand.
We hope you find this edition both insightful and practical in navigating the ever-evolving cyber and technology landscape.
If you’d like to discuss any of the topics covered, please don’t hesitate to reach out to a member of our team or click here to find out more.
24/7 Cyber Hotline
Wotton Kearney operates a cyber incident response hotline that is monitored 24/7 by our dedicated team of breach response lawyers. By using a lawyer as an incident manager, we can immediately protect key reports and other sensitive communications with your customer and other vendors under legal professional privilege.
To access our hotline, please click here.
Australia
The Privacy and Other Legislation Amendment Act 2024 (Cth) received Royal Assent on 10 December 2024, introducing important new legal obligations for all businesses.
Most amendments commenced the day after Royal Assent; however, some have been deferred to a later date. A breakdown of the key dates is set out below:
Amendment | Description | Effective Date |
APP Codes | Enhancements to the process and ability to prepare APP codes. | 11 December 2024 (now in force) |
Emergency Declarations | The ability to make emergency declarations in an emergency or disaster situation. | 11 December 2024 (now in force) |
Children’s Privacy | The obligation to develop a Children’s Online Privacy Code (regarding how the APPs should be applied to children), to be published by 10 December 2026. | 11 December 2024 (now in force) |
Security, Retention and Destruction | The requirement to take reasonable technical and organisational measures to protect personal information. | 11 December 2024 (now in force) |
Overseas Data Flows | A ‘whitelist’ mechanism for disclosing personal information overseas to recipients in approved countries. | 11 December 2024 (now in force) |
Eligible Data Breaches | The ability to make eligible data breach declarations, enabling the handling of personal information in a manner that would not otherwise be permissible. | 11 December 2024 (now in force) |
Federal Court Orders | The empowerment of the Federal Courts to make additional orders where civil penalty provisions have been contravened. | 11 December 2024 (now in force) |
Commissioner to Conduct Public Inquiries | The ability to conduct public inquiries into specified privacy matters. | 11 December 2024 (now in force) |
Determinations Following Investigations | Powers to make determinations after an investigation. | 11 December 2024 (now in force) |
Annual Reports | Additional information to be included in annual reports. | 11 December 2024 (now in force) |
External Dispute Resolution | The ability to decline to investigate a complaint on the basis that it has been resolved by an external dispute resolution scheme. | 11 December 2024 (now in force) |
Monitoring and Investigation | Additional and enhanced powers granted to the Information Commissioner. | 11 December 2024 (now in force) |
Doxxing Offences | New criminal offences for sharing personal information with intent to cause harm, punishable by up to 7 years’ imprisonment. | 11 December 2024 (now in force) |
Statutory Cause of Action for Serious Invasions of Privacy | Individuals can take legal action for serious invasions of privacy, including misuse of personal information. | 10 June 2025 (unless an earlier date is proclaimed) |
Automated Decision Making | Businesses must disclose when automated processes are used to make decisions. | 10 December 2026 |
These changes reflect the increasing importance of robust cyber security and privacy practices. Make sure your business is compliant with these new laws to avoid penalties and protect your stakeholders’ data.
For more information about the changes introduced by the Privacy Amendment Act, view our ‘What You Need to Know: The Privacy and Other Legislation Amendment Bill 2024’ summary.
On 29 November 2024, the Cyber Security Act 2024 (Cth) commenced, with the aim of implementing specific measures in line with the 2023-2030 Australian Cyber Security Strategy.
The Act includes four key measures:
- The establishment of a National Cyber Security Coordinator, who will lead and advise the government in the coordination and triaging of action in response to significant cyber security incidents.
- A mandatory ransomware payment reporting obligation.
- The establishment of the Cyber Incident Review Board, an independent statutory advisory body to conduct no-fault, post-incident reviews of significant cyber incidents.
- The creation of mandatory security standards for manufacturers and suppliers of ‘relevant connectable products’ or smart devices intended for ‘personal, domestic, or household use’, also known as Internet of Things (IoT) devices. The Rules do not apply to smart devices that have already been manufactured or are already in the market.
Rules are required to give effect to some measures contained in the Act. The Department of Home Affairs sought public consultation on the following draft rules in February 2025:
- the Cyber Security (Security Standards for Smart Devices) Rules 2024, which would require that passwords are unique per product or defined by the user of the product, that information on how a consumer can report security issues in relation to the product is published, and that the defined support period for security updates for the product is published.
- the Cyber Security (Ransomware Reporting) Rules 2024, which propose that entities with an annual turnover of more than $3 million must report ransomware payments to the Australian Government within 72 hours of any payment being made.
- Cyber Security (Cyber Incident Review Board) Rules 2024.
- Security of Critical Infrastructure (Critical infrastructure risk management program) Amendment (Data Storage Systems) Rules 2024.
- Security of Critical Infrastructure (Telecommunications Security and Risk Management Program) Rules 2024.
- Security of Critical Infrastructure (Application) Amendment (Critical Telecommunications Assets) Rules 2024.
These rules along with their explanatory documents can be accessed at https://www.homeaffairs.gov.au/reports-and-publications/submissions-and-discussion-papers/consultation-on-subordinate-cyber-security-legislation.
A breakdown of the commencement dates is set out below – noting, however, that the consultation results could impact the obligations that arise.
Amendment | Description | Effective Date |
Ransomware Reporting Obligations | Businesses meeting the reporting threshold will need to report cyber extortion payments. | 30 May 2025 (unless an earlier date is proclaimed) |
Cyber Incident Review Board | Establishes an independent statutory advisory body to conduct no-fault, post-incident reviews of significant cyber incidents. | 30 May 2025 (unless an earlier date is proclaimed) |
Security standards for smart devices | Manufacturers and suppliers of ‘relevant connectable products’ (smart devices) for ‘personal, domestic, or household use’ must ensure compliance with mandatory security standards. | 1 December 2025 (unless an earlier date is proclaimed) |
On 17 December 2024, the Office of the Australian Information Commissioner (OAIC) resolved its ongoing action against Meta Platforms, Inc. (formerly Facebook) (Meta) over alleged breaches of the Privacy Act 1988 (Cth) (Privacy Act) and the Australian Privacy Principles (APPs) in relation to the 2018 Cambridge Analytica incident.
Following court-ordered mediation, Meta provided an enforceable undertaking on a without prejudice basis and without admitting liability, agreeing to certain corrective actions outlined by the OAIC. As part of the undertaking, Meta agreed to a $50 million payment, the largest of its kind in Australia, to compensate Australian users whose data may have been improperly accessed. As a result, the OAIC has withdrawn the civil penalty proceedings initiated against Meta in the Federal Court.1
Background
The Cambridge Analytica controversy came to light in early 2018 after whistleblowers and journalists exposed the company’s unauthorised harvesting of personal data from millions of Meta users.
User data was harvested through the third-party app “This Is Your Digital Life”, developed by a Cambridge University professor under the guise of a simple personality quiz. The app collected data from up to 87 million users by exploiting Meta’s data-sharing policy at the time, which allowed developers to access not only data from app users but also from their “friend” networks without consent. This data was then shared with Cambridge Analytica, a British political consulting firm, which allegedly used it to create targeted political ads based on voters’ psychographic profiles, aiming to influence swing voters in the 2016 US presidential election and the Brexit referendum.
The OAIC launched an investigation into Meta’s data practices, alleging Meta violated section 13G of the Privacy Act through serious or repeated violations of:
- APP 6.1, which requires organisations to use and disclose personal information only in ways that individuals would reasonably expect, unless there is a valid exception or legal requirement, and
- APP 11.1, which requires organisations to implement measures to protect personal information and to regularly review whether they are authorised to retain it.
How will the compensation process work?
Meta is required to establish a payment scheme, which will be managed by a third-party administrator from 2025. The scheme will consist of two payment tiers:
- A fixed payment for eligible users who experienced general concern or embarrassment due to the incident, and
- A higher payment for eligible users who can demonstrate specific loss or damage caused by the incident.
Meta is responsible for identifying eligible users, notifying them, and publicising the scheme. Applications are expected to open in Q2 2025.
Key observations
Enforceable undertakings are effective in quickly resolving complex cases without the need for prolonged legal battles. However, they offer limited value in advancing the broader understanding of privacy laws, as they lack formal court rulings, comprehensive legal analysis, or the establishment of binding precedents that would help clarify how privacy laws should be applied in future cases.
Further, the enforceable undertaking does not provide clarity on how the OAIC determined the AUD 50 million settlement amount, what this figure represents or how it relates to the highest penalty thresholds that came into effect in 2022.
Overseas organisations are not exempt from Australian privacy law. Information Commissioner Elizabeth Tydd remarked: “The payment scheme is a significant amount that demonstrates that all entities operating in Australia must be transparent and accountable in the way they handle personal information, in accordance with their obligations under Australian privacy law, and give users reasonable choice and control about how their personal information is used.”
“This also applies to global corporations that operate here. Australians need assurance that whenever they provide their personal information to an organisation, they are protected by the Privacy Act wherever that information goes”.
[1] Federal Court of Australia Proceeding No NSD 246 of 2020.
On 20 February 2025, Oxfam Australia (Oxfam) entered into an enforceable undertaking with the Office of the Australian Information Commissioner (OAIC) to address the issues identified during the investigation of a data breach it suffered in 2021. No financial penalty has been imposed; rather, Oxfam has agreed to certain corrective measures to address gaps in its compliance program.
Background
In January 2021, the Australian branch of the not-for-profit Oxfam experienced a serious data breach that resulted in the exposure of 1.7 million Oxfam records. An unknown threat actor was able to access a test database that, in normal circumstances, would contain fake data used during development processes. Instead, the database was found to contain the names, addresses, donation histories, and even financial records of many legitimate Oxfam supporters, with some data dating back over seven years.
Following a lengthy investigation, the OAIC alleged Oxfam failed to comply with the Australian Privacy Principles (APPs) of the Privacy Act 1988 (Cth) (Privacy Act), particularly data retention and destruction obligations set out in APP 11.
The Undertaking Between Oxfam and the OAIC
An enforceable undertaking allows an organisation to voluntarily commit to corrective actions, demonstrating compliance and cooperation with regulatory authorities, which can help avoid more severe penalties or litigation.
The undertaking requires Oxfam to bolster its compliance with privacy law by implementing a range of robust security measures, including multi-factor authentication, improved shared credentials processes, and enhanced password management. Oxfam must destroy or de-identify outdated personal information and improve its privacy and information security training programs for staff.
The undertaking additionally requires Oxfam to provide third-party reports on its compliance and to continue engaging with the OAIC throughout the process.
Key Takeaways
The OAIC has since bolstered its guidance for not-for-profits on information security, and retention and destruction practices. This guidance can be accessed at https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/organisations/privacy-for-not-for-profits,-including-charities.
Although the breach was discovered in 2021, the not-for-profit continues to deal with its repercussions. This incident, along with the ensuing investigation, highlights the essential need for strong security measures in IT systems.
Entities can use the actions outlined in the undertaking to inform reviews of their own systems and compliance.
On 23 January 2025, the Australian Privacy Commissioner (Commissioner) determined that Services Australia (responsible for the provision of services such as Medicare, Centrelink and child support payments) had interfered with an individual’s privacy and ordered the agency to pay $10,000 in compensation.1
Background
In 2019, a customer known as ATQ accessed his digital health record and discovered that his details had become intertwined with those of another customer who shared his name and date of birth. He subsequently filed a complaint with the Office of the Australian Information Commissioner (OAIC). The complaint was conciliated, and a resolution was reached on the understanding that ATQ’s personal information would be protected from further incidents.
One month later, ATQ received Medicare correspondence which had again become ‘intertwined’ with another customer’s details. Two years later, ATQ learnt his vaccination history had been intertwined once more.
ATQ subsequently made a new complaint to the OAIC in August 2022, this time seeking remedies including compensation. The OAIC formally opened an investigation in response.
Outcome of Investigation
The Commissioner determined that Services Australia had interfered with ATQ’s privacy and breached:
- APP 6: Concerning the use and disclosure of personal information,
- APP 10: Concerning the quality of personal information, and
- APP 11: Concerning the security of personal information.
The Commissioner ordered Services Australia to pay ATQ $10,000 for non-economic loss caused by the interference with his privacy, along with a written apology. Services Australia was also required to implement process changes and provide a report to the OAIC on the improvements made. The agency’s existing guidelines and procedures (while lengthy) were found to be insufficient to ensure reasonable steps were taken to comply with the APPs.
Key Takeaways
The determination serves as a timely reminder of the risks involved in storing large amounts of personal (and particularly, sensitive) information. The Commissioner noted that organisations should be wary of ‘digital doppelgangers’, considering “there are known to be several hundred ‘twins’ in Australia who have the same name and birth date”.2
To learn more on privacy compliance and data protection, please get in touch.
[1] ‘ATQ’ and CEO of Services Australia (Privacy) [2025] AICmr 19 [1], [94] (‘ATQ and Services Australia’).
[2] Recording a name to establish an identity; Beware: the digital doppelganger.
On 19 February 2025, Mike Burgess, the Director-General of Security for the Australian Security Intelligence Organisation (ASIO), delivered the 6th Annual Threat Assessment, presenting a complex and concerning picture of Australia’s security landscape.
Over the next five years, ASIO anticipates “an unprecedented number of challenges and an unprecedented cumulative level of potential harm”.
Specifically addressing cyberattacks, ASIO notes that threats are increasingly targeted and sophisticated, with state-sponsored groups focusing on stealing sensitive data from government agencies, defence, and critical infrastructure.
At the end of 2024, Australia’s Home Affairs Cyber and Infrastructure Security Centre (CISC) announced the designation of 46 additional critical infrastructure assets as Systems of National Significance under Section 52B of the Security of Critical Infrastructure Act 2018. With this latest declaration, the total number of such systems now exceeds 200, spanning sectors such as communications, energy, transport, financial services, food and grocery, and data storage or processing, many of which align with the sectors targeted in the US.
Commenting on the declarations, the Minister for Home Affairs and Cyber Security, Tony Burke, stated:
“The Australian Government is relentlessly focused every day on helping our country prepare for and safeguard against a significant cyber attack or other attempt to undermine our critical systems, but it’s not something we can do alone.”
“The Government appreciates the owners and operators of Systems of National Significance for joining us in the fight against malicious actors and protecting our national security”.1
The Director-General also noted that effectively countering these threats requires a whole-of-nation approach, advocating for increased collaboration among intelligence agencies, law enforcement, government departments, and the private sector:
“Security is a shared responsibility, and our partnerships will be critical as we respond to these challenges. We will need to widen and deepen our partnerships, including our partnership with the community we protect”.
Read the Director-General’s full address here.
[1] Australian Government Media Release, Protecting Australia’s critical infrastructure https://minister.homeaffairs.gov.au/TonyBurke/Pages/protecting-australias-critical-infrastructure.aspx.
The Scams Prevention Framework Act 2025 (the Act) commenced on 21 February 2025. The Act applies to the banking, telecommunications, and digital platform sectors (social media, search engine advertising and direct messaging services).
While some steps are still required to fully implement the legislation, the Act introduces the Scams Prevention Framework (the SPF) under Part IVF of the Competition and Consumer Act 2010 (Cth) (as amended).
Sectors can be designated under the SPF, with entities in those sectors becoming ‘regulated entities’. Although the banking, telecommunications, and digital platform sectors are currently prioritised, the superannuation, insurance, and cryptocurrency sectors are slated to follow.
What’s included in the SPF?
The SPF introduces:
Six overarching principles (SPF Principles): Requiring regulated entities to establish, document and implement governance frameworks to combat scams and take ‘reasonable steps’ to prevent, detect, report, disrupt, and respond to them.
Sector-specific codes (SPF Codes): Requiring regulated entities to adhere to minimum standards specific to their respective sectors. When assessing whether a regulated entity has met the ‘reasonable steps’ obligation under the SPF Principles, the key consideration will be its compliance with the relevant code requirements. While the SPF Codes have not yet been finalised, the consultation process with relevant stakeholders is underway.
The Treasury’s consultation paper offers some insight into the potential content of these sector-specific codes1:
- Banking: Implementing processes to verify the identity of payees, confirm the legitimacy of high-risk transactions, and identify consumers at heightened risk of being targeted by scammers, particularly those in vulnerable groups.
- Telecommunications: Implementing measures to connect only to trusted sources for the sender information displayed in text messages, block sender information from unapproved sources, and use filters to block messages containing known phishing links.
- Digital platforms: Implementing processes to authenticate business users and advertisers, detect high-risk interactions, take appropriate actions to mitigate scam activity, and safeguard user accounts from being compromised, ensuring accounts are promptly restored to their legitimate owners.
A multi-regulator framework: The ACCC is the general regulator. Sector-specific regulators have also been appointed: the Australian Securities and Investments Commission (ASIC) for banking, the Australian Communications and Media Authority (ACMA) for telecommunications, and the ACCC for digital platforms. If new sectors are designated, the ACCC will act as the interim regulator until a specific regulator is appointed. Regulators can share information to support SPF administration or enforcement without notifying affected individuals, and regulated entities can share information with the ACCC, which may pass it on to other regulated entities.
Dispute resolution mechanisms: Requiring regulated entities to offer a mandatory right to redress for scam victims by implementing internal dispute resolution processes and participating in an external dispute resolution scheme provided by the Australian Financial Complaints Authority (AFCA).
Civil penalties: A two-tier civil penalty system for violations, with the maximum penalty for serious breaches reaching up to AUD 50 million, along with additional regulatory and enforcement measures such as injunctions, enforceable undertakings, and statutory actions for damages.
What is a scam?
The Act also introduces the first formal definition of a ‘scam’ in Australian legislation, describing it as:
“A direct or indirect attempt (whether or not successful) to engage an SPF consumer of a regulated service where it would be reasonable to conclude that the attempt:
- involves deception, and
- would, if successful, cause loss or harm, including the obtaining of SPF personal information of, or a benefit (such as financial benefit) from, the SPF consumer or the SPF consumer’s associates”.
Key takeaways
Organisations should:
- Review the SPF to identify the necessary updates to their existing policies, procedures, and complaint handling systems. This may include revising privacy policies, improving governance structures, and enhancing compliance controls to align with the new requirements.
- Implement effective systems and tools designed to detect, prevent, and report scams. Additionally, organisations should establish mechanisms to collect and analyse relevant scam data and intelligence to improve their ability to understand and address emerging scam trends.
- Actively monitor the ongoing development of the SPF Codes to stay informed about the evolving requirements.
[1] Australian Government, The Treasury, Scams – Mandatory Industry Codes Consultation Paper (November 2023).
Background
In our last Cyber, Privacy and Technology Report, we flagged that the Office of the Australian Information Commissioner (OAIC) released new reference guides to assist businesses that use Artificial Intelligence products and models to consider and incorporate privacy compliance.
In this article we take a deep dive into the OAIC’s guidance for developers of generative Artificial Intelligence (Gen AI Guide).
Scope and Applicability of the OAIC Guidance
The OAIC’s guidance is intended for developers: organisations that create, train, adapt, or integrate generative AI models. It also applies to organisations that supply personal information for AI model training. The Privacy Act applies to Australian Government agencies and to organisations with an annual turnover of more than $3 million.
Obligations under the Australian Privacy Principles (APPs)
The APPs establish the legal framework for handling personal information. The Gen AI Guide addresses compliance based on the following APPs:
APP 1 – Integrated Privacy by Design
Developers of Gen AI models must implement practices, procedures, and systems to ensure compliance with the Privacy Act. This includes having a clear and accessible privacy policy detailing how personal information is managed.
APP 3 – Collection Obligations
Under the Privacy Act, personal information should generally be collected directly from the individual unless unreasonable or impracticable. Since data scraping does not involve direct collection, developers need to assess whether their data collection meets this obligation.
When developers obtain a dataset from a third party that includes personal information, they are considered to be collecting that data themselves. This means they must ensure their collection and use of the dataset comply with privacy obligations. Many of the same considerations that apply to data scraping are also relevant to third-party datasets.
If a dataset compiled from scraped or crawled content is publicly available through a third party, developers may not be able to impose privacy-compliant terms on its use. In these instances, they must carefully assess the data sources and collection methods to identify potential privacy risks. Where necessary, developers should delete sensitive information and either remove or de-identify personal data before using the dataset.
APP 5 – Notification of Collection of Personal Information
Individuals must be informed about the collection of their personal information, including the purpose of collection and any third parties to whom the information may be disclosed. When scraping data to collect personal information, developers should disclose the categories of personal data used to train the model, the types of websites scraped, and, if possible, the specific domain names and URLs.
Due to the general lack of transparency in data scraping, particularly where personal information is involved, it is considered best practice to allow a reasonable amount of time between notifying individuals about data collection and training the generative AI model.
APP 6 – Use and Disclosure Obligations
Developers may consider using personal information they have previously collected to train a Gen AI model. However, they must assess whether such use is permitted under the Privacy Act. These considerations also apply when an organisation provides personal information to a developer for the purpose of building an AI model. In the context of Gen AI training, ensuring that individuals are adequately informed can be challenging.
Training AI models involves complex data processing, which may be difficult for individuals to fully understand. Developers and organisations should ensure that consent is communicated in an accessible and clear manner. Additionally, as consent must be voluntary, specific, and current, relying on broad, general consent (such as agreeing to a privacy policy) is not sufficient for AI training purposes.
APP 10 – Accuracy when training AI models
Developers of Gen AI models must take reasonable steps to ensure the personal information they collect is accurate, up-to-date, and complete. The specific steps required depend on various factors, such as the sensitivity of the personal information, the nature of the developer, and the potential adverse consequences if the quality of personal information is not maintained.
Practical Recommendations
The Gen AI Guide sets out key takeaways for developers of Gen AI models to comply with privacy requirements:
- Conduct Privacy Impact Assessments (PIAs): PIAs are an important process which allows developers to evaluate the potential privacy risks associated with AI projects and implement strategies to mitigate identified risks.
- Implement Data Minimisation Strategies: Developers should ensure that they collect only personal information necessary for the Gen AI model’s purpose, thereby reducing the risk of over-collection.
- Ensure Transparency: Developers should clearly communicate to individuals how their personal information will be used in Gen AI model development, thereby fostering trust and compliance.
- Obtain Informed Consent: Developers should seek explicit consent from individuals when collecting sensitive information to be used in training models, thereby ensuring those individuals understand how their data will be used.
- Maintain Data Quality: Developers should regularly update and verify personal information they have collected to ensure its accuracy and relevance.
- Secure Personal Information: Developers should implement robust security measures to protect personal information they have collected from unauthorised access, modification, or disclosure.
As Gen AI continues to evolve, developers of models must remain vigilant in complying with applicable privacy law and upholding privacy standards. The OAIC’s guidance serves as a valuable resource, outlining the obligations under the Privacy Act and offering developers of Gen AI models practical steps to ensure compliance.
By integrating these privacy considerations and appropriate safeguards into their development processes, organisations can build AI systems that are both innovative and respectful of individual privacy rights.
Background
It’s no secret that the rapid development of artificial intelligence (AI) in recent years has profoundly changed the way we work, study and conduct business. In particular, the use of AI in processes traditionally reliant on ‘human’ decision-makers (such as recruitment) continues to raise concerns about potential bias, accountability, and the overall impact on job quality and worker rights.1
Responding to these concerns, in April 2024 the federal Parliament’s House Standing Committee on Employment, Education and Training (Committee) adopted an inquiry into the digital transformation of workplaces. The inquiry examined how AI decision-making and machine learning techniques are used in the modern workplace, receiving 66 submissions from a mix of professional associations, corporations, workers’ unions, education providers and agencies. The Committee was particularly interested in how these automated processes can affect societal productivity, risk, individual rights, small business and the wellbeing of vulnerable people and minority groups.
In February 2025, the Committee released its findings and recommendations in its report entitled ‘The Future of Work’.
Key Themes
The Committee identified and reported on four key themes:
Regulation
There is a web of laws and regulations, split between the Commonwealth and the States, that together regulate the use of AI. In the Committee’s view, existing frameworks fall short of providing the necessary guardrails to ensure that AI technological developments do not adversely affect the workplace. The idea of technological advancement outpacing legal frameworks is not a new one.
However, given the impact employer decision-making can have on individuals, the workforce and society, complacency could be dangerous. The Committee noted that while several guidelines offering guidance on the safe use of AI in workplaces have been released, their effectiveness is compromised by the reliance on self-regulation. Employers are expected to implement these guidelines, but as there is no formal requirement to do so, the risks associated with AI decision-making are not adequately mitigated.
Opportunity
The productivity, efficiency and cost-saving capabilities of AI did not go unnoticed. The Committee encouraged research and development of AI, recognising that such technology can drive economic growth and generally enhance Australia’s competitiveness in the global job market.
However, the encouragement was not without qualification, with the Committee emphasising the importance of balancing technological advancement with the need to consider displacement and education of workers.
Data, Privacy and Surveillance
The Committee did not dispute employers’ general requirements to collect data about workers for the effective running of day-to-day operations. There were, however, concerns as to how the collected data may be used, especially in the development, training and use of AI. The Committee expressed clear dissatisfaction with regulatory frameworks in this respect, calling for laws to change in favour of workers attaining more control over how their workplace-generated data is used.
The health and psychological harm workers experience when subjected to strict monitoring was a cause for concern to the Committee, especially in light of recent efforts to assist workers in achieving work-life balance (such as the “right to disconnect”). Lastly, the Committee emphasised the importance of ensuring AI is not used to discriminate on the basis of protected attributes (age, sex, religion, political opinion, etc).
Equity
In the ongoing push towards increased automation, most recently through AI, the risk of leaving marginalised groups behind is high. Marginalised and minority groups often lack access to educational opportunities, technology and jobs that provide exposure to AI.
The Committee emphasised the importance of ensuring equal access to learning and training, as a failure to do so may lock workers from these groups out of employment opportunities, further perpetuating cycles of disadvantage. With low-skilled roles in the future increasingly likely to require the use of AI in some capacity, it’s important to ensure all workers can upskill. The Committee held the same concern for young and entry-level workers.
Takeaways for Employers
The Committee’s report made 21 recommendations, all designed to ensure the viability of future Australian workplaces. Although the recommendations targeted the Australian Government and its agencies, the report proposes various actions employers can take to inform their approach to AI in the workplace.
Firstly, consulting with workers during the design and implementation of AI systems can help reduce fear and anxiety often associated with new technologies, encouraging positive worker response and ultimately leading to better designed systems.
Secondly, implementing AI training programs to upskill workers can lead to improved quality of work and assist in their continued development.
Thirdly, caution should be taken where workers’ data is used by AI to make management decisions, as it is the duty of the employer to ensure employees are not discriminated against based on protected attributes.
Despite the recommendations made, we think it’s worth mentioning the famous 1979 quote from an IBM presentation: “A computer can never be held accountable. Therefore a computer must never make a management decision.” If recent trends are anything to go by, there is no doubt this principle has been pushed to its boundaries, if not already set aside!
[1] See Philip Meissner and Yusuke Narita, ‘Artificial intelligence will transform decision-making. Here’s how’, World Economic Forum (Article, 27 September 2023) https://www.weforum.org/stories/2023/09/how-artificial-intelligence-will-transform-decision-making/.
Background
On 19 July 2024, CrowdStrike rolled out a faulty software update which affected Microsoft Windows users worldwide and cost Australian businesses an estimated $1 billion. On 25 October 2024, Delta Air Lines filed a lawsuit against CrowdStrike seeking in excess of US$500m for losses allegedly caused by the CrowdStrike outage. However, Delta now finds itself on the receiving end of a countersuit filed by CrowdStrike, alleging that “any damages suffered by Delta following the July 19 Incident are the result primarily of Delta’s own negligence” and its lawsuit reflects “a desperate attempt to shift blame for its slow recovery away from its failure to modernize its antiquated IT infrastructure.”
The case could provide important guidance for other companies that provide and rely on similar services and highlights the importance of understanding how liability is determined and attributed in negligence cases, whether in pursuit or defence of such claims.
Causation
The critical issue in any negligence case is establishing responsibility for the harm or losses suffered.
In Australia, plaintiffs are generally required to establish both factual and legal causation1 to be successful in negligence cases such as technology disputes. Factual causation is usually established by the “but for” test, which asks whether the plaintiff would have suffered the alleged losses if the defendant had not committed the alleged act. If the plaintiff would not have suffered the alleged losses but for the defendant’s act, factual causation has been established.
Legal causation, on the other hand, is a matter of establishing whether it is appropriate to hold the defendant legally liable for the alleged loss. Courts in Australia have a relatively broad remit to determine this, but will most often have regard to matters such as:
- whether the alleged loss was a sufficiently foreseeable consequence of the defendant’s actions. Consequences that are too far remote from the defendant’s actions will not be sufficient to establish legal causation, and
- whether there were any intervening acts or circumstances that more directly contributed to the alleged loss. If so, there may be a sufficient break in the chain of causation, such that the alleged losses would fall outside the defendant’s scope of liability.2
Matters involving intervening acts are typically complex and require a close examination of the facts to determine their impact on liability.
Applying those principles to the Delta case, Delta would therefore be required to establish:
- factual causation: that it would not have suffered the losses it complains of if CrowdStrike had not released the faulty software, and
- legal causation: that its losses are sufficiently tied to CrowdStrike’s release of the faulty software, and not too remote in scope or attributable to other circumstances that might alter the causal nexus.
However, CrowdStrike’s countersuit indicates that (at least, in CrowdStrike’s view) Delta may struggle to do so, and could be at risk of being found to have contributed to its own losses.
Contributory Negligence and Proportionate Liability
It is important in any technology dispute to ask whether the plaintiff can be said to have contributed to its own losses through its own actions, just as CrowdStrike contends Delta has done. If so, the defendant may rely on a contributory negligence defence to reduce its liability by the extent to which the plaintiff’s negligence contributed to its losses. It is, in effect, a penalty against the plaintiff for failing to take reasonable precautions against the risk of harm,3 and has the power to defeat a claim in its entirety.4
Similarly, one should ask whether any other causes or parties contributed to the plaintiff’s loss. If so, it may be possible to apportion liability among multiple defendants to reflect each party’s (or ‘concurrent wrongdoer’s’) responsibility.5 In order to raise a proportionate liability defence, the relevant defendant must establish that the other concurrent wrongdoer’s act or omission “materially contributed” to the loss claimed.6 If this can be established, liability will then be apportioned based on the culpability and causal potency of each wrongdoer’s negligence.7
Implications
Whether you are bringing or facing a negligence claim in a technology dispute, it is important to consider:
- whether the chain of causation has been established,
- whether another party may be responsible, either in full or in part, for the alleged losses, and
- whether the party which suffered the losses contributed in some way to its own loss.
Our experienced Technology team can assist businesses and insurers with tailored advice and strategies to navigate the complexities of technology disputes. Get in touch with our authors to discuss how we can support your business.
[1] Enshrined in civil liability legislation nationwide. See, for example, s 5D of the Civil Liability Act 2002 (NSW).
[2] Medlin v State Government Insurance Commission [1995] HCA 5.
[3] Civil Liability Act 2002 (NSW) s 5R.
[4] Civil Liability Act 2002 (NSW) s 5S.
[5] Hunt and Hunt Lawyers v Mitchell Morgan Nominees Pty Ltd [2013] HCA 10.
[6] Hunt and Hunt Lawyers v Mitchell Morgan Nominees Pty Ltd [2013] HCA 10.
[7] Tanah Merah Vic Pty Ltd & Ors v Owners Corporation No 1 of PS613436T & Ors [2021] VSCA 72 (‘Lacrosse’).
Background
Following ASIC’s successful action against RI Advice in 2022 and the proceedings currently on foot against Medibank, the regulator has once again taken aim at a corporate for alleged inadequate cybersecurity measures that were exposed in a cyber attack.
On 12 March 2025, ASIC commenced proceedings in the Federal Court against FIIG Securities Limited (FIIG) for breaches of the Corporations Act 2001 (Cth) in relation to a cyber attack and data breach that took place in May and June 2023.
The central allegations of ASIC’s claim are that FIIG failed to implement and maintain adequate cybersecurity measures required to protect against the risks and consequences of a cyber intrusion. The cybersecurity measures that ASIC claims were non-existent or deficient include (collectively, the Missing Cybersecurity Measures):
- a cyber incident response plan,
- account management processes such that privileged administrator accounts are only used where necessary and are subject to higher password complexity/control requirements,
- vulnerability scanning tools and processes,
- modern-standard firewalls,
- endpoint detection and response (EDR) software,
- security incident events management software to collect, log and analyse security information and to monitor the EDR software,
- adequate patching and software update practices,
- multi-factor authentication for remote access users, which ASIC says should have been in place since 2022,
- mandatory security awareness training for all employees, and
- quarterly reviews of cybersecurity controls.
ASIC alleges that the failure to implement adequate cybersecurity measures (which includes the Missing Cybersecurity Measures listed above), amounts to breaches of the Corporations Act 2001 (Cth) as follows:
- s 912A(1)(a), which requires FIIG to do all things necessary to ensure that the financial services covered by its AFSL are provided efficiently, honestly and fairly,
- s 912A(1)(d), which requires FIIG to have available adequate resources (including financial, technological, and human resources) to provide the financial services covered by its AFSL, and
- s 912A(1)(h), which requires FIIG to have adequate risk management systems.
ASIC seeks declarations against FIIG in respect of the above contraventions as well as pecuniary penalties. ASIC also seeks an order requiring FIIG to complete a compliance programme involving review of its cybersecurity measures and commission an independent expert to report on those measures to ASIC.
These proceedings demonstrate ASIC’s ongoing commitment to enforcing AFSL holders’ cybersecurity obligations. Arguably, they represent the sternest test yet of the application of s 912A to cybersecurity measures and should therefore provide greater guidance on what measures AFSL holders need to consider to comply with their obligations to customers. ASIC Chair Joe Longo explained that this claim is “a wake-up call to all companies on the dangers of neglecting your cybersecurity systems”.
Class Actions Update
The class actions against Optus and Medibank continue to progress.
In Optus, orders were made in December for the parties to complete mediation by the end of May this year and the matter is next listed for a case management hearing in August.
The decision in the Medibank privilege stoush, which we covered in our June 2024 update, was handed down on 7 March 2025. The decision has not yet been published because the parties were given until 21 March to propose redactions. However, it is clear from orders made on 17 March that Medibank will be required to disclose some of the documents that were the subject of the challenge. We expect the judgment to be released before the end of this month.
Our Technology Law team recently published an article covering key legal issues for Managed Service Providers (MSPs), and some recommendations for ensuring their Master Services Agreement (MSA) is fit for purpose.
We cover the purpose of the MSA and its pivotal role in legal risk management, and provide advice and tips on best practice for MSA drafting and negotiation.
We also cover some fundamental areas including the importance of the liability regime, IP issues and data protection, privacy and security. This is essential reading for MSPs! Link to the article here.
New Zealand
In the recent decision of BMN v Stonewood Group Limited [2024] NZHRRT 64, the Human Rights Review Tribunal (HRRT) revisited the question of damages under the Privacy Act 2020. The HRRT awarded compensation of over NZD60,000 following an employer’s collection of personal information in contravention of three information privacy principles.
Background
The defendant, Stonewood Group Limited, had arranged for an employee (BMN) to attend an off-site meeting. During the meeting the defendant seized the employee’s work laptop, personal cell phone, and a personal USB stick. All three devices contained the employee’s personal information. BMN’s employment was terminated one week later. BMN made multiple requests for the return of the personal information, but the defendant repeatedly failed to comply.
The decision
The defendant was found to have breached IPPs 1, 2 and 4. In particular, the defendant had no lawful reason to collect the personal information, admitting that they had not “given any thought to the [Privacy] Act or any privacy obligations at the time the plan was formulated and then actioned”.1 Furthermore, the collection was not from the individual in question, and was unfair and unreasonably intrusive. The HRRT ultimately awarded BMN over NZD60,000 in compensation and ordered that the personal information be returned.
Takeaways
The decision in BMN is instructive for several reasons:
- Many HRRT decisions under the Privacy Act 2020 relate to access requests. In BMN the HRRT was asked to consider a claim concerning collection of personal information. The case is a good reminder of the obligations regarding collection, and the importance of ensuring personal information is collected for a valid purpose and in a manner that is fair.
- The HRRT was keen to point out that taking the items knowing they contained personal information, or being indifferent to the fact that they contained personal information, was collection for the purposes of the Act. The fact that the defendant had not considered the presence of personal information or their Privacy Act 2020 obligations was not a defence.
- As with other notable damages cases, the defendant’s post-incident conduct was material to the level of damages. Here the defendant exacerbated the harm by refusing to return the personal information at BMN’s request.
The case is a good reminder of the importance of ensuring privacy obligations remain front of mind when collecting information and ensuring that agencies respond appropriately when privacy issues (not just data breaches and access requests) are raised.
[1] BMN v Stonewood Group Limited at [53]-[54].
Following the Office of the Privacy Commissioner’s launch of the Children’s Privacy Project in September 2023, the Commissioner looks set to develop further guidance on children’s privacy in Aotearoa New Zealand throughout 2025 and beyond.
The OPC consulted with Government agencies, professionals who work with children, and advocates for children through 2023 and 2024, culminating in a report summarising key findings and themes in April 2024.1 The OPC has recently announced it is now developing guidance covering some of the points identified through the consultation.
The stated purpose of the guidance is to:
- increase understanding of children’s privacy rights across the “children’s sector”,
- provide best practice advice for those dealing with children, and
- empower children and young people to understand their privacy rights.
The OPC goes on to suggest that best practice guidance will cover topics such as keeping students and parents/caregivers informed, wellbeing and safety, education technology, school enrolment forms, and visual guides on online safety and privacy.
Given the length of the project and the time invested in the consultation it is unsurprising (and welcome) news that the OPC will be developing some targeted guidance as a result. The OPC has suggested guidance will be released this year.
[1] https://www.privacy.org.nz/resources-2/children-and-young-people-policy-project/
New Zealand has (finally) committed to becoming a signatory to the Budapest Convention on Cybercrime. The Convention was opened for signature on 23 November 2001 and currently has 65 signatories, with New Zealand a notable holdout. The bill incorporating the Convention passed its first reading in October 2024.1
The Convention is designed to encourage cross-border co-operation between domestic law enforcement agencies investigating acts of cybercrime. The Convention achieves this by encouraging alignment of cybercrime frameworks and information sharing between signatories. In particular, by:
- aligning countries’ substantive laws on cybercrime and computer-facilitated crime,
- creating a framework for law enforcement agencies to formally seek assistance from one another and aligning the legal powers that member states use to obtain and preserve evidence, and
- creating a forum for member states to meet and discuss cybercrime trends and developments.
Ultimately the Convention facilitates cooperation between signatories and should assist New Zealand’s law enforcement in engaging with international contemporaries. Given the international nature of cybercrime, and consistent trends of foreign domiciled threat actors targeting New Zealand agencies, the ratification of the convention is a step in the right direction.
[1] https://www.beehive.govt.nz/release/government-extends-fight-against-cybercrime
The Public Service Commission (PSC) has recently released its findings following an inquiry into the protection of personal information collected by several government agencies for the purposes of the 2023 Census and COVID-19 vaccinations.1
According to Public Service Commissioner Sir Brian Roche, the inquiry “makes for very sobering reading… It raises a number of issues that go to the core of the confidence and trust required to maintain the integrity and sanctity of information entrusted to government agencies.”
The inquiry, led by Pania Gray and Michael Heron KC, found that the Ministry of Health, Te Whatu Ora and Statistics New Zealand (Stats NZ) did not have sufficient safeguards in place when sharing personal information with third-party service providers.
Regarding personal information collected for COVID-19 vaccination purposes, the inquiry considered the Ministry of Health’s and Te Whatu Ora’s practices when providing service providers with access to individual-level vaccination information to facilitate the rollout of the COVID-19 vaccination programme. Despite imposing data protection obligations on the service providers through data sharing agreements, the inquiry found there were no audit or assurance arrangements to ensure those obligations were fulfilled. Furthermore, there were no controls in place once data had been downloaded by service providers’ staff.
Regarding Census data, the inquiry found that Stats NZ had not put proper safeguards in place when providing personal information to third parties. Through an information sharing agreement, Stats NZ shared personal information with third parties to assist in increasing engagement with the Census, particularly among Māori. Stats NZ, however, failed to implement safeguards to ensure the information would not be misused, including requirements for the staff of the relevant third parties to sign the Certificate of Confidentiality,2 conduct privacy impact assessments and develop a workforce training plan. Complaints were made about the processes followed by the third parties, and Stats NZ staff raised serious concerns, but these were not acknowledged or adequately dealt with.
In the words of Sir Brian Roche: “The system has failed and that isn’t acceptable – and it must be, and will be, remedied.” Following the inquiry, the Public Service Commissioner has asked the government agencies to suspend entering into new contracts, or renewing or extending existing contracts, with the third-party service providers until the Commissioner is satisfied that the contracts can adequately deal with information sharing and conflict of interest obligations.3 The Commission is working on new information sharing standards, which will be implemented by 1 July.
[2] A person collecting Census data is required to sign a Certificate of Confidentiality under s 42(3)(a) of the Data and Statistics Act 2022. Section 42(6) creates a lifelong statutory obligation of confidentiality on its signature and s 78 criminalises a breach of that obligation.
[3] “Findings of inquiry into protection of personal information released” (18 February 2025) Te Kawa Mataaho Public Service Commission <www.publicservice.govt.nz>.
The Office of the Privacy Commissioner has confirmed that it will proceed with issuing a Biometric Processing Privacy Code of Practice. An updated draft of the Biometrics Code has been released for consultation, alongside draft guidance material. We have previously written about the role of the Biometrics Code (see here), which will create rules applicable to agencies using biometric technologies such as facial recognition or fingerprint scanning.
The updated draft takes into account 250 submissions received during the previous round of consultation. Changes to the draft include a focus on simplification, as well as:
- A new requirement to conduct a proportionality test and put privacy safeguards in place,
- Stronger notification and transparency obligations,
- Limits on some uses of biometric information, e.g. emotion analysis and certain types of biometric categorisation,
- Restrictions on using biometrics, now targeted at the most intrusive or high-risk uses, and
- An increased commencement period of nine months for organisations already using biometrics.
Submissions on the draft Code are open until 14 March 2025, with the expectation being that the Code will come into force this year.
It will be important for agencies to have considered these requirements before they come into force later this year. If these changes are relevant to the operation of your business, feel free to reach out to a member of our technology team to discuss any queries you may have.