Issue 10 of our Cyber, Privacy & Technology Report is out now. Packed with the latest news, it’s the quarterly go-to for insurers, brokers, and their customers operating in cyber, privacy, and technology sectors across Australia, New Zealand, and Singapore.

We hope you find this edition both insightful and practical in navigating the ever-evolving cyber and technology landscape.

If you’d like to discuss any of the topics covered, please don’t hesitate to reach out to a member of our team.

24/7 Cyber Hotline

Wotton Kearney operates a cyber incident response hotline that is monitored 24/7 by our dedicated team of breach response lawyers. By using a lawyer as an incident manager, we can immediately protect key reports and other sensitive communications with your customer and other vendors under legal professional privilege.

To access our hotline, please click here.

Australia

On 25 November 2024 the Senate passed the Cyber Security Bill 2024 (Cth), along with amendments to the Intelligence Services Act 2001 (Cth) and the Security of Critical Infrastructure Act 2018 (Cth) (SOCI Act), and on 28 November 2024 the Senate passed the Privacy and Other Legislation Amendment Bill 2024 (Cth) (Privacy Bill). Once Royal Assent is received, these bills will become Acts of Parliament.

Cyber Security Bill

The Cyber Security Bill will grant additional protections to people and businesses and improve the Government’s visibility of the current cyber threat environment. This legislative package seeks to give effect to the legislative reforms under Shield 4 of the 2023-2030 Australian Cyber Security Strategy.

The Cyber Security Bill includes the following measures:

  • The development of security standards for smart or IoT devices, i.e. products that can directly or indirectly connect to the internet. A manufacturer of these products is expected to comply with these requirements, and a supplier is required to only supply products that are accompanied by a statement of compliance.
  • Creating a mandatory reporting obligation for all entities that meet a certain turnover threshold (amount still to be determined) and that:
    • are affected by a cyber security incident (whether such an incident has occurred, is occurring or is imminent and other requirements are met), where such incident is having, or could reasonably be expected to have, a direct or indirect impact on the entity, and
    • receive a ransom demand, or a third party directly related to the incident receives a ransom demand, and
    • make a ransom payment or give a benefit in connection with a cyber security incident.
  • Such reports are to be made to the Department of Home Affairs and the Australian Signals Directorate (ASD) (if no other Commonwealth body is designated) within 72 hours of the payment being made, or of becoming aware that a payment has been made. The report must include (i) the contact and business details of the entity who made the payment and the reporting entity, (ii) details of the cyber security incident and the impact it has on the entity, (iii) the ransomware demand made, (iv) the ransomware payment, and (v) communications between the entity and the threat actor relating to the incident, demand and payment. This reporting obligation will commence on the earlier of a date fixed by Proclamation, or 6 months after the Cyber Security Act receives Royal Assent.
  • A “limited use” obligation restricting the sharing of incident information by the National Cyber Security Coordinator (NCSC) with other government agencies and regulators. This obligation will be complemented by an amendment to the Intelligence Services Act 2001 (Cth). Essentially, information disclosed in a ransomware payment report can be used by a designated Commonwealth body for a permitted purpose, which enables the entity, Commonwealth body, State body, NCSC or Ministers to respond to, mitigate or resolve a cyber security incident. Any information made available cannot be used to investigate or enforce a contravention by the entity making the report, except a contravention of this Bill or a contravention by the reporting entity of a law that imposes a criminal offence.
  • Establishment of a Cyber Incident Review Board (CIRB) to conduct post-incident reviews into significant cyber security incidents. The CIRB must conduct reviews where incidents are referred to it by the Minister, the NCSC, an entity impacted by the incident or a member of the CIRB. A review can only take place where (i) the incident seriously prejudiced the social or economic stability of Australia or its people, the defence of Australia, or national security, (ii) the incident involves novel or complex methods or technologies and an understanding of it will improve Australia’s preparedness, or (iii) the incident is of serious concern to the Australian people.

Amendments to the SOCI Act

The SOCI Act has been amended to:

  • expand the definition of critical infrastructure assets to include secondary assets which hold ‘business critical data’ and relate to the functioning of the primary asset.
  • introduce a ‘last resort’ directions power for the Secretary of the Department of Home Affairs for managing multi-asset incidents and their consequences.
  • enable greater intra-government sharing of protected information and cross-industry collaboration.
  • create a directions power for the Secretary of the Department of Home Affairs or the relevant Commonwealth regulator, exercisable where it has been identified that a critical infrastructure risk management program is seriously deficient.
  • include security and notification obligations for critical telecommunications assets.

Privacy Bill

Most of the provisions in the Privacy Bill will commence the day after Royal Assent is received. However, some provisions will commence later: the amendments to APP 1 in relation to automated decisions will only commence 24 months after Royal Assent is given, and Schedule 2, dealing with the statutory tort for serious invasions of privacy, will commence on the earlier of a date fixed by Proclamation or 6 months after Royal Assent is given.

During the parliamentary process, a number of amendments to the draft bill were proposed and subsequently passed. These include, among others:

  • 24 months after the commencement of Schedule 3 (Doxxing offences), the Minister must cause an independent review of these amendments to be conducted. A report of the review must be provided within 6 months of the commencement of the review.
  • The consultation period for the Children’s Online Privacy Code will be extended from 30 days to 60 days.
  • Granting the OAIC the power to issue compliance notices for breaches of section 13K (i.e. the civil penalty provisions arising from a breach of the Australian Privacy Principles (APPs)). A failure to comply with a compliance notice may result in civil penalties (up to $66,000) or the exercise of infringement notice powers.
  • Updates to the statutory tort provisions to include additional exemptions for agencies, State and Territory authorities, their staff members and law enforcement bodies.

For more information about the changes introduced by the Privacy Bill, view our ‘What You Need to Know: The Privacy and Other Legislation Amendment Bill 2024’ summary, or access Issue 9 of our Cyber, Privacy and Technology Report.

The Australian Privacy Commissioner launched an investigation in 2022 into Bunnings Group Limited (Bunnings) and its personal information handling practices, with a focus on its use of facial recognition technology. The investigation has now found that Bunnings has breached Australians’ privacy rights.

Between November 2018 and November 2021, Bunnings collected personal and sensitive information of its customers through a facial recognition technology system. This included using CCTV cameras to capture the face of every individual who entered Bunnings stores. Bunnings claimed to be collecting the information to “help protect against serious issues, such as crime and violent behaviour” noting that “some 70 per cent of incidents are caused by the same group of people” thereby apparently justifying its use of the technology.

Whilst the Privacy Commissioner appreciated the potential for facial recognition technology to help protect against violence and other crime, the benefits of such use needed to be weighed against the impact on individuals and their privacy rights. The Privacy Commissioner considered the use of facial recognition technology in this instance was the most intrusive option available and was “disproportionately interfering with the privacy of everyone who entered its stores, not just high-risk individuals.”

Bunnings had also failed to take reasonable steps to notify individuals that their personal and sensitive information was being collected and did not outline it in its privacy policy. Biometric information is considered one of the most sensitive types of information under the Privacy Act 1988 (Cth) (Privacy Act), such that organisations must afford a high level of protection when collecting or holding this type of information. Notably, Bunnings was not fined for its privacy breach. However, the Privacy Commissioner has made various orders including that Bunnings must:

  • not repeat the conduct, or continue the acts or practices that resulted in the privacy interference,
  • publish a website statement about the privacy interference, which is easily accessible and prominently found on Bunnings’ website, and
  • delete all personal and sensitive information collected via the facial recognition tool that Bunnings still holds.

What are the regulations around the use of facial recognition technology in Australia?

The Australian Privacy Principles (APPs) are designed to protect Australians’ privacy from misuse and interference. Four APPs are particularly relevant to the use of facial recognition technology and should be considered; we outline them below.

  • APP 1: Governance and ongoing assurance – this involves organisations having clear governance arrangements in place, including privacy risk management practices and policies which are regularly reviewed.
  • APP 3: Necessity and proportionality – organisations should only use facial recognition technology where it is both necessary and proportionate, and where the purpose of using the technology cannot be achieved by other less intrusive means.
  • APP 5: Consent and transparency – individuals need to be provided notice and information to allow them to provide consent to the collection of their information.
  • APP 10: Accuracy, bias and discrimination – the use of facial recognition technology needs to be accurate and steps should be taken to remove any risk of bias against the individual.

What can you do?

If you are considering using facial recognition technology in your organisation, the Office of the Australian Information Commissioner (OAIC) recommends completing a Privacy Impact Assessment to ensure you have implemented the correct principles, policies and procedures.

This includes giving consideration to the above APPs and adopting a ‘privacy by design’ approach. A Privacy Impact Assessment involves identifying the impacts and risks that implementing facial recognition technology might have on the privacy of individuals. This is an important step in ensuring compliance with obligations under the Privacy Act.

The landscape of privacy law has seen significant developments in recent months, with the recognition of the tort of invasion of privacy in common law and the recent proposal of statutory reforms in Australia.

These developments signal a shift in how privacy is approached in both judicial and legislative contexts, with far-reaching implications for individuals and businesses alike.

Common law

In Lynn Waller v. Romy Barrett [2024] VCC 962 the plaintiff successfully argued that the defendant (her father) had breached her confidences by disclosing private details from counselling sessions to media outlets, resulting in the court awarding damages.

Judge Tran departed from the earlier decision in Victoria Park Racing & Recreation Grounds Co Ltd v. Taylor [1937], under which an action for invasion of privacy was subsumed within the doctrine of breach of confidence. Her Honour found that a tort of invasion of privacy is better treated as separate and distinct, so as to accommodate the principles of privacy and human dignity, which the tort of breach of confidence is unable to protect.

The Court did not set out the elements of the cause of action, but rather stated that relief would be provided ‘as a minimum, in the circumstances where it has been available in the past – that is, the making [public] of private matters in circumstances that a reasonable person would regard as highly offensive’.

This case represents a significant milestone in Australian common law, particularly as it arose just weeks before the Second Reading of a Commonwealth Bill proposing the introduction of a statutory tort for serious invasions of privacy.

Legislative reform

Amendments to the Privacy Act 1988 (Cth) recently passed through parliament. Among the changes is the introduction of a statutory tort for serious invasions of privacy. This new legal framework is crucial for filling existing gaps in privacy protection and addressing current and emerging privacy risks, such as doxing and unauthorised data sharing.

The statutory tort will empower individuals to seek redress in court for serious invasions of their privacy without being confined to the limitations of the Privacy Act. This development is vital as it offers a structured mechanism for individuals to protect their personal information in an increasingly digital world.

Under the new laws, to establish a cause of action in relation to a serious invasion of privacy, the following elements must be satisfied:

  • Invasion of Privacy: An individual must prove that the defendant invaded their privacy, and that it was by ‘intrusion upon seclusion’ and/or ‘misuse of information’.
  • Reasonable expectation of privacy: An individual must prove that a person in their position would have had a reasonable expectation of privacy.
  • Fault: An individual must prove that the defendant intentionally or recklessly invaded their privacy.
  • Seriousness: An individual must prove that the invasion of privacy was serious.

There is, however, a restricted period within which such proceedings must be commenced. If a plaintiff was under the age of 18 when the invasion occurred, proceedings must be commenced before they turn 21. In all other circumstances, proceedings must be commenced within one year of the plaintiff becoming aware of the invasion, or three years after the event.

As businesses can be found liable directly or vicariously, it is recommended that they review and update policies and procedures ahead of the laws commencing, including in relation to the access, use and disclosure of information by employees or agents.

A new ‘International Organization for Standardization’ (ISO) standard has been developed to navigate compliance with privacy obligations.

What is the standard?

ISO/IEC 27701 provides a framework for Privacy Information Management Systems (PIMS) that expands on ISO 27001 (information security) and ISO 27002 (information security controls).

Understanding Privacy Compliance

ISO 27701 aims to help organisations understand privacy standards by outlining guidelines for handling personally identifiable information (PII), and is designed to complement the obligations under the Privacy Act 1988 (Cth).

While the ISO standards do not replace legal advice, they can assist organisations with privacy management and address risks related to data handling. This includes (but is not limited to) awareness of incident reporting and the consequences of a privacy breach.

Improved Risk Management

By integrating privacy specific controls under the framework, the PIMS helps to identify, assess and manage risks specific to PII. This approach can aid the organisation in reducing the likelihood of data breaches and responding more efficiently to data related incidents. The standard also makes it easier for organisations to understand obligations when transferring data internationally.

Operational steps

Organisations seeking certification under ISO 27701 may document their policies, procedures, protocols and activities in line with the standard’s operational checklists, and maintain records, which are typically audited by internal and third-party auditors, to demonstrate compliance with the ISO 27701 standard. ISO 27701 relies on an organisation having achieved ISO 27001 for security management; ISO 27701 certification is therefore only available to organisations that have completed certification against the ISO 27001 security management standard.

Sources

https://www.iso.org/obp/ui/en/#iso:std:iso-iec:27701:dis:ed-2:v1:en

https://www.standards.org.au/blog/as-27701-the-pims-standard-you-cant-afford-to-ignore

https://www.isms.online/iso-27701/

https://www.bsigroup.com/en-AU/iso-27701-privacy-information-management/

https://www.bsigroup.com/globalassets/localfiles/en-au/iso-27701/privacy_regulation_whitepaper-bsi0366-1911-au.pdf

https://www.bsigroup.com/globalassets/localfiles/en-au/isoiec-27701/privacy-matters-whitepaper-bsi0330-1908-au-web.pdf

https://www.bsigroup.com/globalassets/localfiles/en-au/iso-27701/iso-iec-27701-implementation-guide-bsi0366-1911-au.pdf

New guidance has been published by the Office of the Australian Information Commissioner (OAIC) relating to organisations’ use of third-party tracking pixels on websites. While the use of tracking pixels is not prohibited by the Privacy Act 1988 (Cth), the OAIC has set out general considerations.

What is a tracking pixel and how is it different to cookies?

Tracking pixels and cookies accomplish similar tasks but work in different ways. A tracking pixel is a tiny, transparent pixel embedded into the HTML code of a website, email or ad that blends into the background of a page and is not easily detected. A cookie is a small piece of unique data, usually stored in a text file on the user’s device, that identifies unique users. The main difference is that cookies can be disabled, blocked or cleared from browsers, whilst tracking pixels cannot be disabled by users. Furthermore, tracking pixels are not visible to users (unlike cookies), so users have no knowledge that tracking pixels are being used.

Tracking pixels can collect all types of data, including personal information such as names, addresses, dates of birth, email addresses, phone numbers, transactional data, IP addresses, geolocation data, URL information, pages visited, content viewed, and session duration.
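To make the mechanics described above concrete, the following is a minimal, illustrative sketch in Python (using the Flask library) of a third-party endpoint serving a tracking pixel. The route name, query parameters and logged fields are hypothetical, and real vendors’ implementations are considerably more sophisticated.

    # Minimal sketch of a tracking pixel endpoint (illustrative only; route and
    # parameters are hypothetical, not any specific vendor's implementation).
    import base64
    from flask import Flask, request, Response

    app = Flask(__name__)

    # A 1x1 transparent GIF: the invisible "pixel" the browser silently fetches.
    TRANSPARENT_GIF = base64.b64decode(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
    )

    @app.route("/pixel.gif")
    def pixel():
        # Every request to the pixel reveals data to the third party, even though
        # nothing visible appears on the page and the user cannot clear it the way
        # a cookie can be cleared.
        print({
            "ip": request.remote_addr,
            "user_agent": request.headers.get("User-Agent"),
            "page": request.args.get("page"),   # e.g. URL of the page being viewed
            "uid": request.args.get("uid"),     # identifier set by the embedding site
        })
        return Response(TRANSPARENT_GIF, mimetype="image/gif")

    # A website would embed the pixel with an invisible image tag such as:
    # <img src="https://tracker.example/pixel.gif?page=/checkout&uid=123" width="1" height="1">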

Privacy Commissioner Carly Kind has warned that tracking pixels could be privacy invasive if misused.

What does the guidance say?

The OAIC’s guidance encourages organisations to consider the following when using tracking pixels:

  • Reviewing terms and conditions: Third-party pixel providers generally offer non-negotiable terms and conditions which impose an obligation on the customer to ensure that the use of the tracking pixels complies with applicable laws.
  • Due diligence: Appropriate due diligence over the third party’s practices should be conducted to ensure that personal information is adequately protected.
  • Completing a privacy impact assessment (PIA): Organisations should conduct PIAs before adopting and using tracking pixels. A PIA should consider what information will be collected, how the amount of personal information collected can be minimised, how the third-party provider will use personal information, whether data will be sent overseas, how data will be secured and how long it will be stored for.
  • Compliance with the APPs: Collection of personal information must be in accordance with APP 3, use and disclosure of personal information must be in accordance with APP 6, APP 7 must be followed in respect of direct marketing, and privacy policies and collection notices must be clear and transparent in accordance with APP 1 and APP 5.
  • Configuration: Third-party tracking pixels should be correctly configured so that the right parameters are set to minimise and limit the amount of personal information collected.
  • Conduct regular reviews: Configurations of tracking pixels should be checked and verified regularly to ensure they are deployed correctly, and their ongoing use is reasonable and necessary.

Privacy compliance should remain front-of-mind when considering any new tools or technologies. Organisations must take the time to understand how a product works, consider what potential privacy risks arise and implement measures to mitigate these risks as part of a privacy by design approach.

The Privacy Commissioner has recently ordered a real estate agency to apologise for disclosing a tenant’s personal details following a negative online review.

Background

The decision of AQE and Noonan Real Estate Agency Pty Ltd1 concerned a tenant who made a complaint about the agency online. The review included the tenant’s first name. The agency responded to the review, disagreeing with it and including further personal information about the tenant, including their full name.

The Commissioner found the agency breached Australian Privacy Principle 6.1 (APP 6.1) of the Privacy Act 1988 (Cth), which states that personal information collected for a specific purpose cannot be used or disclosed for a secondary purpose unless a legitimate exception applies, such as when consent has been obtained or when the secondary purpose is reasonably expected.

While the Commissioner acknowledged the agency’s right to respond to the tenant’s negative review, it was determined that a reasonable person in the position of the tenant would not have expected their personal information to be disclosed in the circumstances. Ultimately, the OAIC concluded that no valid exceptions under APP 6 applied, finding the disclosure an interference with the tenant’s privacy.

The OAIC ordered the agency to remove the tenant’s information from its response and to issue a written apology within 30 days acknowledging the interference with the tenant’s privacy, and emphasised the need to avoid similar conduct in the future.

Takeaways

The regulatory intervention in this case serves as a reminder of businesses’ obligations. Responding to public feedback or criticism provides businesses with an opportunity to demonstrate transparency and engage constructively. However, this must be balanced with a strong commitment to confidentiality. In today’s digital landscape, where negative feedback can quickly escalate, companies must exercise caution in their responses. Consumers rely on businesses to safeguard their personal details, and when that trust is compromised, the consequences can be significant.

To manage this balance effectively, it is essential for businesses to implement comprehensive privacy policies and ensure staff are trained on the importance of discretion in public communications.

Privacy reforms and ‘doxxing’

Recent privacy reforms include an amendment to the Criminal Code Act 1995 (Cth) by introducing a new criminal offence of ‘doxxing’ under section 474.17C. This new provision makes it illegal to use a carriage service to publish or distribute personal information – such as names, photographs, phone numbers, and addresses – in a manner deemed menacing or harassing. With significant penalties, including a maximum of six years’ imprisonment, this new offence underscores the urgent need for organisations to adopt robust privacy practices.

1. (Privacy) [2024] AICmr 237.

As a result of an investigation initiated by the Office of the Australian Information Commissioner (OAIC), Privacy Commissioner Carly Kind has determined that two real estate training providers, DG Institute2 and Property Lovers3 (referred to as ‘the Grubisa companies’ after their sole director), unlawfully scraped data to target vulnerable individuals, thereby interfering with their privacy.

What is data scraping and how did they do it?

Data scraping generally refers to the process of collecting large volumes of publicly available data from websites or online databases without the knowledge or consent of the individuals whose data is being gathered.

The Grubisa companies collected personal information from publicly available sources, such as court listings and death notices, to identify individuals in distressed situations (e.g., divorce, bankruptcy or deceased estates). They then cross-referenced this data with property details from third-party databases to create ‘lead lists’ of properties where owners might be motivated to sell below market value.

These lists, which included street addresses, Google map links, property types, descriptions of the distressed situation, and contact information for lenders or solicitors, were provided to budding property entrepreneurs and investors through an ‘education’ program. The companies charged a fee of AU$25,000 – $30,000 for the program, with the lead lists valued at AU$12,000.

In their defence, the companies stated that they removed property owners’ names from the lists for “privacy reasons.” However, they still provided program participants with supporting materials that included detailed instructions on how to retrieve this information.

The determination

Commissioner Kind found that the Grubisa companies’ conduct breached several of the Australian Privacy Principles (APPs), including:

  • APP 3.5: Which requires personal information to be collected fairly, meaning without deception, intimidation, or excessive intrusion. In this instance, the collection was deemed unfair as the individuals did not reasonably expect their personal information to be used for commercial purposes that aimed to exploit their vulnerability by acquiring their properties below market value. Commissioner Kind also highlighted that the commercialisation of the personal information collected by the companies was in clear violation of the terms and conditions of each third-party website.
  • APP 5.1: By failing to take reasonable steps to inform individuals about the collection of their personal information or to ensure they were aware of relevant privacy matters, such as the purpose for collection and the recipients of their data.
  • APP 10.2: By neglecting to take reasonable steps to ensure the personal information they collected was accurate, up-to-date, and relevant. Specifically, the companies made minimal efforts to verify the quality of the data they collected, as the information contained “inherent uncertainties” and potential leads were based on “educated guesses”.
  • APP 1.3: By failing to maintain a clear and up-to-date privacy policy that included essential details, such as the types of personal information collected, its use, access rights, and complaint procedures.

Commissioner Kind has directed both companies to immediately cease collecting personal information unfairly from third parties, delete their lead lists within 30 days, provide the OAIC with evidence of corrective actions, and update their privacy policies. Additionally, Property Lovers has been ordered to issue a written apology.

The OAIC’s stance on data scraping

Amid increasing regulatory attention on data scraping, the OAIC, alongside 11 other international privacy authorities, issued a Joint Statement on Data Scraping and the Protection of Privacy (Joint Statement), which outlines key expectations for organisations to safeguard personal information from data scraping, ensuring compliance with global data protection laws and fostering public trust.

The Joint Statement emphasises that publicly accessible personal information is still protected by data privacy laws in most jurisdictions. It also underscores the responsibility of organisations to safeguard personal data from unlawful scraping practices that violate these laws.

How can organisations safeguard against data scraping?

To help organisations address this issue, the Joint Statement outlines key steps businesses can take to mitigate the risks of data scraping, including:

  • Designate a team: Assign a specific team to identify, implement, and monitor anti-scraping controls.
  • Rate limiting: Set restrictions on how many visits a single account can make in a certain period. If abnormal activity is detected, such as excessive visits, block or limit access (a simple sketch of this control is included after this list).
  • Monitor behaviour: Watch for signs of scraping, such as rapid or repetitive searches from new accounts that might indicate suspicious activity.
  • Bot detection and blocking: Use tools like IP address monitoring and CAPTCHAs to identify and block automated bots that attempt to scrape data.
  • Data breach notification: If data scraping results in a breach, organisations should assess their notification obligations in accordance with applicable privacy laws and regulations and, when required, notify affected individuals and relevant privacy regulators.
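As a simple illustration of the rate limiting and behaviour monitoring controls above, the following Python sketch shows a sliding-window request limiter. The thresholds and window size are hypothetical and would need to be tuned to each service; production controls typically combine several signals (volume, IP reputation, CAPTCHA challenges) rather than a single counter.

    # Minimal sketch of a sliding-window rate limiter as an anti-scraping control
    # (illustrative only; limits are hypothetical).
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_REQUESTS_PER_WINDOW = 100   # hypothetical per-account limit

    _recent_requests = defaultdict(deque)   # account_id -> timestamps of recent requests

    def allow_request(account_id: str) -> bool:
        """Return False when an account exceeds the limit (block, throttle or challenge)."""
        now = time.time()
        window = _recent_requests[account_id]
        # Drop timestamps that have fallen outside the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_REQUESTS_PER_WINDOW:
            return False   # abnormal volume: candidate for blocking or a CAPTCHA challenge
        window.append(now)
        return True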

Given the constantly evolving nature of data scraping threats, organisations must adopt a proactive approach by continuously monitoring their systems, adapting to new vulnerabilities, and rigorously stress-testing their security controls to ensure robust protection of personal data.

2. Commissioner Initiated Investigation into Master Wealth Control Pty Ltd t/a DG Institute (Privacy) [2024] AICmr 243 (18 November 2024).

3. Commissioner Initiated Investigation into Property Lovers Pty Ltd (Privacy) [2024] AICmr 249 (22 November 2024).

Australia is facing significant challenges in its cyber threat landscape. In a recent report from the Australian Signals Directorate (ASD), FY 2022/23 saw a rise in cyber incidents, with the ASD responding to over 1,100 cases and receiving 36,700 calls (up 12% from the previous year) to its Cyber Security Hotline. State-sponsored actors continue to target governments, businesses, and critical infrastructure, emphasising the need for robust cyber defence measures.

In response, the ASD has strengthened partnerships across government, industry, and international allies to combat cyber threats. The ASD has also used its autonomous cyber sanctions framework for the first time to penalise Russian individuals involved in cybercrime in 2024. The persistent nature of threats, including ransomware, fraud, and the exploitation of emerging technologies like artificial intelligence, highlights the importance of adopting secure-by-design ICT systems and following best-practice guidance, such as the Essential Eight. Organisations, particularly those managing critical infrastructure, are urged to adopt a proactive “when, not if” approach, ensuring preparedness through regular testing and updating of cyber incident response plans.

The ASD is the Australian Government’s technical authority on cyber security. The ASD brings together capabilities to improve Australia’s national cyber resilience.

The rapid development of Artificial Intelligence (AI) is revolutionising industries and reshaping the way we interact with technology. However, alongside opportunities for transformation, AI poses new challenges and risks for technology providers and users.

Stephen Morrissey, Christy Mellifont and Kayleigh Maxwell have written an article looking at recent examples of how the widespread use of AI technology is impacting on the liability landscape in Australia, and what we can learn from developments overseas.

The post specifically breaks down the privacy & security risks, intellectual property challenges and liability complexities surrounding AI.

Click here to view the article.

The Australian Government is currently reviewing the application of the Australian Consumer Law (ACL) to AI systems to ensure that consumer protections are sufficient in the face of evolving technologies.

This includes evaluating the need for new standards, tools, and regulations, such as the potential introduction of a standalone ‘AI Act’ or other framework legislation.

As part of this process, the Government has initiated a consultation to gather stakeholder feedback on:

  • how well adapted the ACL is to support Australian consumers and businesses to manage potential consumer law risks of AI systems,
  • the application of well-established ACL principles to AI systems,
  • the remedies available to consumers of AI systems under the ACL, and
  • the mechanisms for allocating liability among providers of AI systems.

Possible application of ACL to AI

The ACL applies to all goods or services – excluding financial goods and services – supplied to Australian consumers. This includes goods and services incorporating or using AI.

Although still largely untested in the courts, AI systems may activate consumer protections under the ACL in various ways, including:

  • Misleading or deceptive conduct: An AI system that provides misleading or deceptive information may contravene ss 18–19 of the ACL. For example, if an AI system claims – intentionally or unintentionally – that a product performs better than it does based on false or outdated data, it can mislead consumers about its effectiveness, leading to financial harm.
  • False or misleading representations: An AI system or service that generates false claims – intentionally or unintentionally – about a product or service may contravene ss 4 and 29–38 of the ACL. For example, if an AI system asserts that a product is proven to deliver a specific result without supporting evidence, it may mislead consumers about its efficacy, potentially resulting in financial harm.
  • Unconscionable conduct: An AI system that is intentionally designed to unfairly disadvantage consumers – such as by restricting their choices or imposing unreasonable terms – may be considered unconscionable conduct under s20-22 of the ACL, which safeguards consumers against exploitative business practices, ensuring that they receive fair and just treatment in their dealings with businesses.
  • Consumer guarantees: An AI system that fails to deliver accurate information or leads to negative outcomes – such as overcharging consumers or providing inadequate service – may breach the guarantees outlined in s54-56 of the ACL, which requires that products and services must be fit for purpose, of acceptable quality, and accurately described to consumers.

On 21 October 2024, the OAIC released new reference guides to assist businesses that use Artificial Intelligence products and models to consider and incorporate privacy compliance.

Key takeaways include:

  • Privacy obligations will apply to any personal information input into an AI system, as well as the output data generated by AI (where it contains personal information).
  • Businesses should update their privacy policies and notices with clear and transparent information about their use of AI products.
  • As a matter of best practice, organisations should not enter personal information, and particularly sensitive information, into publicly available generative AI tools.
  • Where developers are seeking to use personal information that they already hold for the purpose of training an AI model, and this use is related to the primary purpose for which it was collected, they need to carefully consider their privacy obligations.
  • Where personal information is used for an AI-related purpose that is not within reasonable expectations, to reduce regulatory risks, developers should seek consent and/or offer individuals a meaningful and informed ability to opt-out of such a use.

These guides aim to assist businesses in complying with privacy obligations when using AI products, and to provide guidance for developers utilising personal information to train generative AI models. In announcing the guides, Privacy Commissioner Carly Kind highlighted the importance of robust privacy governance of AI technology.

Businesses must assess risks and prioritise privacy when harnessing AI technology, to build trust within the community. The OAIC expects organisations to approach AI cautiously, and it reserves the right to take action against those that do not adhere to privacy standards. As technology evolves, Commissioner Kind advocates for reforms to strengthen AI privacy protections.

The Office of the Australian Information Commissioner (OAIC) released its annual report for 2023-24, providing insight into the handling of information access and privacy rights in Australia.

Key 2023-2024 Statistics

In 2023-24, the OAIC handled a significant volume of privacy and information access inquiries, reflecting the growing importance of these issues within the Australian community.

  • Privacy Inquiries: The OAIC addressed 10,476 privacy-related inquiries during the year. Impressively, 97% of written privacy and information access inquiries from the public were resolved within 10 working days.
  • Privacy Complaints: The OAIC saw a 20% increase in the number of privacy complaints finalised, reaching a total of 3,104 compared to 2,576 in the previous period. Of these, 78% were resolved within 12 months, and the OAIC issued 12 determinations.
  • Notifiable Data Breaches (NDB) Scheme: The OAIC finalised 994 notifications under the Notifiable Data Breaches (NDB) scheme, and 85% of notifications were finalised within 60 days.
  • Freedom of Information (FOI) Complaints: The number of FOI complaints finalised increased by 20% compared to the previous year, with agencies accepting 96% of the recommendations made following OAIC investigations.
  • Information Commissioner (IC) Reviews: The OAIC finalised 1,748 Information Commissioner (IC) reviews, a 15% increase in finalisations compared to the previous year. Despite a 7% increase in applications received, the OAIC issued 207 IC review decisions, a notable jump from 68 in 2022-23.

Australian Information Commissioner Elizabeth Tydd said in a statement: “We are seeing a welcome focus on privacy and access to information in Australia, and the OAIC will continue our work to foster better awareness and better practices in these crucial areas, that are integral to accountability and integrity. That will require targeted and effective enforcement that can minimise harms in the community and assist in strengthening trust and transparency in the digital economy”.

In a cybersecurity landscape of increasing third party data breaches, whenever a supplier has access to a customer’s data, or links into its systems (for example, through APIs), it’s vitally important to ensure that the technology agreement includes appropriate obligations as to data and security. With the proliferation of ‘cloud’ and ‘as-a-Service’ technology models over recent years, the increasingly interconnected nature of organisations’ technology and the storage and processing of data in multiple locations, it is now more crucial than ever that security gets the right level of focus and attention when drafting and negotiating technology contracts.

Buyers of technology products and services are becoming increasingly wary of a third party data breach impacting their organisations, concerned that their own data could be exfiltrated and sold through no fault of their own. As a result, Boards and Legal Departments are seeking to address this supply chain risk by including robust and comprehensive provisions around data security in the technology contracts they enter into, including obligations which apply if the customer’s data is impacted by a cyber incident or data breach affecting the supplier.

As a starting point, in their RFPs and related documents, customers should seek to elicit as much information as possible as to each prospective supplier’s security posture. Detailed questions should be asked around the systems, processes and safeguards the supplier will use to ensure that, in acting as a ‘custodian’ of the customer’s valuable, and often regulated, data, that data will be held and processed with a high level of care.

Suppliers can use this trend to their advantage by stepping up in relation to security and distinguishing themselves from competitors who may have a lesser focus on security. For example, if they are willing to obtain, and maintain for the life of the contract, security accreditations such as ISO 27001 (the international standard for information security management), they should include this in their proposal or response and be willing to agree to it as a contractual provision.

Following on from this, suppliers need to be prepared to include or accept in the contract (whether their own standard form agreements or terms, or the technology agreement put forward by the customer), appropriate requirements and obligations they are able and willing to meet as to security, including around where and how the customer’s data is stored and processed. The types of security controls a customer may ask for and which the vendor might consider agreeing to typically include:

  • Administrative Safeguards: security management processes, information access management, security incident procedures and contingency plans.
  • Technical Safeguards: access controls, audit controls, secure data transmission (including encryption), firewall security, and data backup (including frequency and location of backups).
  • Physical Safeguards: facility access controls, media reuse/disposal, workstation and mobile device security and physical incident and disaster procedures.

The suitability of such security controls will need to be considered in view of the nature of the products or services being provided and the level of risk, but once the parties reach agreement on which measures will apply, these will need to be translated into clear and robust security obligations to be included in the contract to give the customer the comfort it requires.

In this way, a customer is properly able to mitigate the supply chain risk a supplier’s products, services, or involvement in a project may pose, and the supplier has clear and unequivocal obligations which it knows it needs to meet to address this critical customer issue.

Our experienced Technology team can assist customers and suppliers to navigate the complexities of IT security in technology contracts, including reviewing standard form agreements to ensure they are fit for purpose in relation to security and data obligations. Get in touch with our authors to discuss how we can support your business.

New Zealand

The Office of the Privacy Commissioner released its annual report for 2023/2024 on 26 November.4

The report makes for an interesting read – while some metrics remain consistent, there is much to suggest that privacy remains a developing and changing issue for Aotearoa.

Highlights from the regulator’s annual report include:

  • The OPC received:
    • 1003 complaints, up 15%
    • 864 privacy breach notifications, up 3%
    • 414 privacy breach notifications that were “serious” in nature, down 4%
  • Statistics show that Māori experiences related to privacy are changing and markedly different from those of non-Māori. The OPC’s data suggests Māori awareness of privacy rights has increased since 2022 and that Māori are significantly more likely than non-Māori to read privacy statements and avoid particular activities (such as use of social media or apps) due to privacy concerns.
  • The need for a modernisation of the Privacy Act 2020 is “increasingly urgent”. In particular, the OPC points to four specific amendments:
    • Increased and enhanced privacy rights, such as a “right to be forgotten”.
    • Stronger incentives for organisations to take privacy seriously through a more robust penalty regime.
    • Requirements for agencies to demonstrate their privacy compliance, such as through the privacy management programmes recommended by the OECD.
    • Stronger protections for individuals from automated decision making, such as artificial intelligence.

While the report shows some areas of stability, particularly around levels of breach notification, the figures provided suggest differing experiences of privacy in Aotearoa, particularly for Māori, and again shines a spotlight on the already outdated Privacy Act 2020.

4. https://www.privacy.org.nz/publications/corporate-reports/annual-report-of-the-privacy-commissioner-2024/.

The OPC’s recent statements on a privacy breach by Inland Revenue have shone a spotlight on the importance of privacy compliance outside the context of malicious incidents, and the dangers of sharing information with third parties.

In February of this year, Inland Revenue (IR) experienced two privacy breaches involving the unauthorised sharing of identifiable information with social media platforms. IR had run targeted tax marketing campaigns by sharing information about individuals who owe tax with those platforms, using a “hashing” process intended to de-identify the data. However, an unencrypted file of personal information was mistakenly shared with Meta. IR says it was “trying to fix a problem with a custom audience file.” Unencrypted personal and company information was also uploaded to LinkedIn, but IR’s internal review concluded this did not involve sensitive information and was unlikely to cause serious harm.
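For context, the “hashing” step referred to above is, in broad terms, a one-way transformation applied to identifiers before a customer list is uploaded to an ad platform. The Python sketch below is a minimal, hypothetical illustration of that kind of de-identification step; it is not IR’s actual process, and each platform prescribes its own normalisation and matching rules.

    # Minimal sketch of hashing identifiers before uploading a custom audience file
    # (illustrative only; not IR's actual process).
    import csv
    import hashlib

    def hash_identifier(email: str) -> str:
        # Identifiers are typically normalised (trimmed, lower-cased) and then hashed,
        # so the platform can match users without receiving raw contact details.
        normalised = email.strip().lower()
        return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

    def build_audience_file(in_path: str, out_path: str) -> None:
        # Read a single-column CSV of email addresses and write only the hashed values.
        with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
            writer = csv.writer(dst)
            for row in csv.reader(src):
                writer.writerow([hash_identifier(row[0])])

    # The breach described above arose because a file was uploaded without this
    # de-identification step, exposing raw personal information.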

IR only realised a breach had occurred after RNZ reported on its information-sharing practices and some 8,000 people complained, prompting an internal review in September. In October, IR notified the 268,000 impacted individuals with an apology and requested that the file be deleted from Meta.

The OPC released a statement on 5 November. It was pleased IR was taking the issue seriously and stated that it is unlikely the breaches are notifiable under the Privacy Act, but noted it was concerning that they went unidentified until the RNZ coverage. IR holds “highly sensitive tax information” and the OPC has been clear that such agencies are responsible for the safe-keeping and anonymisation of information. This “includes regular reviews to ensure that de-identification techniques remain robust against potential malicious actors – a set and forget approach is not acceptable.” It then requested that complaints first be addressed to IR and reiterated that a complaint to the OPC is appropriate where the serious harm threshold has been met.

This incident emphasises the importance of not just ensuring anonymisation is conducted thoroughly and consistently, but also that robust privacy and data protection practices are maintained generally. This is particularly the case following the adoption of the Poupou Matatapu, and the OPC’s indication that there is little excuse for failures to adopt good privacy practices given the guidance available.

The Digital Identity Services Trust Framework Act 2023 (the Act) came into force on 1 July 2024. The Act sets out the legal framework for digital identity services in New Zealand.

A ‘digital identity service’ is a service or product that enables a user to share personal or organisational information in digital form. This includes providing a framework for consumers to verify their identity online, encouraging digital commerce and activity. Examples include digital driver licences, bank IDs and trade certifications.

The Act allows for accreditation of a range of digital identity service providers to facilitate consumer choice in digital identity services. Providers can become accredited under the Trust Framework established by the Act.

On 8 November 2024, the Digital Identity Service Trust Framework Rules (the Rules) came into force. The Rules set out various requirements for accreditation of providers. These include:

  • Authorisation: Valid authorisation is required from a user prior to any transaction. The provider must (amongst other things) confirm what information will be collected and disclose if any information storage or processing will take place outside New Zealand.
  • Minimising privacy risk: Providers must complete a privacy impact assessment, appoint an individual to deal with privacy risk, provide personnel with regular training, and maintain an incident response plan, incident register and privacy statement.
  • Security governance: Key security controls must be in line with best practice. A security management plan, security risk assessment and business continuity plan are required. All significant cyber security incidents must be reported to the Trust Framework Authority and CERT NZ.
  • Information security: Checks are required to assess information security, including storage of event logs, and using approved cryptographic products to protect digital information and systems.
  • Information and data governance: An information and data management plan is required, which sets out detailed record keeping practices.

If you would like to discuss the Digital Identity Trust Framework generally, or how it may impact your business, please reach out to our cyber and technology team.

In 2023, the New Zealand Government announced its plans to integrate the NCSC (National Cyber Security Centre) and CERT NZ (Computer Emergency Response Team) to form a cohesive lead cyber security agency for New Zealand and a “single front door” for those seeking to access assistance from the Government during a cyber incident. The merger was proposed as one of many recommendations made to the Government by the Cyber Security Advisory Committee.

One year on, the two organisations have now been brought together under a combined agency sitting within the Government Communications Security Bureau (GCSB).5 Moving towards a “leading operational agency” model more closely reflects the approach to cyber security services and public engagement seen in other jurisdictions.6

Whether this changes engagement on cyber incidents in a practical way, or is an indicator that further recommendations from the Cyber Security Advisory Committee may be implemented, remains to be seen. In the meantime, the integration of both organisations makes the reporting of cyber incidents more straightforward, which is welcome given the increasing prevalence and severity of cyber incidents targeting entities in Aotearoa.

5. https://www.gcsb.govt.nz/news/new-zealand-takes-the-first-step-in-creating-a-lead-operational-cyber-security-agency

6. https://www.beehive.govt.nz/release/government-strengthens-cyber-security

Earlier this year, the Minister of Commerce and Consumer Affairs, Andrew Bayly, issued an open letter to New Zealand’s banking industry asking for action on three fronts to tackle online fraud in New Zealand:7 implementing a confirmation of payee system, updating the banking code of practice to provide further measures to help consumers with scams and fraud, and investigating a voluntary reimbursement scheme. A recent update from Minister Bayly at Payments NZ’s conference in November suggests that progress is being made, but the Government’s patience may be growing thin.

Minister Bayly provided the following update on the measures proposed in the open letter of February:

  • The confirmation of payee system, whereby banks and deposit takers will be required to verify account names for domestic payments (thereby reducing the risk of impersonation-type social engineering frauds), should be implemented by December; a simplified sketch of such a check is included after this list.
  • “Progress was being made” on a voluntary reimbursement scheme, and a report was expected “shortly” on updates to the Code of Banking Practice.
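By way of illustration, a confirmation of payee check compares the account name entered by the payer with the name held by the receiving bank before a payment is released. The Python sketch below is a simplified, hypothetical version of such a check; actual schemes define their own matching rules, thresholds and response categories.

    # Simplified sketch of a confirmation-of-payee name check (illustrative only;
    # thresholds and outcome labels are hypothetical).
    from difflib import SequenceMatcher

    def check_payee(entered_name: str, account_name: str) -> str:
        """Return an outcome the paying bank could display before funds are sent."""
        a = entered_name.strip().lower()
        b = account_name.strip().lower()
        score = SequenceMatcher(None, a, b).ratio()
        if score >= 0.95:
            return "match"        # proceed as normal
        if score >= 0.75:
            return "close match"  # warn the payer: "Did you mean ...?"
        return "no match"         # strong warning: possible mistake or impersonation scam

    print(check_payee("J Smith Plumbing Ltd", "J. Smith Plumbing Limited"))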

While the substantive update was positive, the associated messaging from Minister Bayly was clear. On the topic of modernising banking and payments, the finance and banking industry needed to “just get on with it”.

The approach from the Government is refreshing after years of slow progress on open banking and fraud prevention measures, particularly when systems such as payee confirmation have been routine in other jurisdictions for some time. Given the prevalence of social engineering and payment fraud in New Zealand, the measures being pushed by the Government should help catch at least some of the payments before they land in the perpetrator’s account (or the account of an unwitting third-party mule). Similarly, a voluntary reimbursement fund will hopefully provide some relief for victims, albeit the details are yet to be disclosed.

With the confirmation of payee system seemingly around the corner, we wait with interest for further details of the voluntary reimbursement scheme and updates to the Code of Banking Practice. Look out for further updates from the W+K cyber, privacy and data security team in future bulletins.

7. https://www.mbie.govt.nz/dmsdocument/28096-strengthening-bank-processes-and-consumer-protections-against-scams-open-letter-to-the-new-zealand-banking-industry-pdf

Singapore

In September 2024, LinkedIn attracted controversy when the platform updated its data collection policy allowing it to use content shared by users to train generative AI models. Users found themselves opted-in by default. After the update, LinkedIn’s interface informed users that opting out would mean that their data would not be used to train models going forward, but any training that had already taken place would not be affected.

LinkedIn explained that the data collected would be used to create an “AI-powered writing assistant” that would “enable message drafting using generative AI functionalities”, such as the generating of articles and profile summaries. In embarking on this move, LinkedIn joins a slate of other Big Tech players such as Meta and OpenAI, whose own webscraping programs attracted similar alarm and backlash.

In Singapore, mainstream news reported local LinkedIn users complaining that they had received no notification at all, or at the most, a generic email advising them that LinkedIn’s data policy had changed without highlighting the nature of the changes. Users felt that they had not been given sufficient information on the intended use of their data, and that they were uncertain about which third party service providers would receive their data as part of the AI model training.

The outcry was mirrored in other countries, with Hong Kong’s Privacy Commissioner for Personal Data issuing a statement expressing doubt that the default opt-in setting was an accurate reflection of users’ choices, and warning users to update their settings to revoke permission for the use of their data for AI model training if they so wished.8

LinkedIn has since announced that the collection of personal data for AI training in the UK, EU, EEA, Switzerland, Hong Kong and Mainland China has been paused. At present, LinkedIn is still collecting data for AI training from Singapore users – however, Singapore’s PDPC has since announced that it has contacted LinkedIn to seek proof of user consent.9

It will be interesting to see how similar cases will be dealt with by the PDPC over the next few years. While other countries have adopted a more cautious stance towards the use of personal data in developing generative AI technologies, the Singapore authorities’ overall approach has been relatively more business-forward. A few months prior to the LinkedIn controversy, the Personal Data Protection Commission released its Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems (“AI Guidelines”).10

The AI Guidelines encourage businesses seeking to develop AI models using personal data already in their possession to consider relying on the Business Improvement Exception and the Research Exception in the Personal Data Protection Act 2012 (“PDPA”), if they are not able to obtain users’ consent for such use.

  • The Business Improvement Exception enables an organization to use personal data (already collected in accordance with the PDPA) towards developing a product or to enhance an existing product. This could include sharing within a group of related companies, or between departments in a company.11
  • The AI Guidelines highlight that if the AI model being trained is being developed to improve operational efficiency or to offer more or better product features or functionalities to users, the exception may be relevant – however, the organization would need to first consider several factors to determine if reliance on the exception is appropriate.12 Such factors include whether it is technically possible and/or cost-effective to use other means to develop the AI model, and common industry standards that may be applicable.
  • The Research Exception allows an organization to use personal data for research and development (even if there is no immediate application to their products or market) provided that the research cannot reasonably be accomplished unless: (a) the personal data is provided in an individually identifiable form, (b) there is a clear public benefit to using the personal data for the research purpose, (c) the results of the research will not be used to make any decision that affects the individual, and (d) the organization will not publish the results of the research in any form which would identify the individual whose personal data is used.13
  • The AI Guidelines further state that organizations can rely on the Research Exception to disclose personal data for a research purpose, including to another company for joint research and development. However, before doing so the organization must first consider if it would be impracticable to seek the consent of the individual for such disclosure.14

As more and more companies race to develop new AI functionalities for their existing products (e.g. Copilot, Grok, Meta AI, Apple Intelligence), we can expect increased scrutiny on the parameters of these two exceptions.

That said, another prong of the PDPC’s approach also appears to be encouraging developers to look into alternative types of data for model training, such as the use of synthetic data. Synthetic data is loosely defined as artificial data generated using mathematical models to mimic the characteristics and structure of a genuine dataset.

In July 2024, the PDPC released a Proposed Guide on Synthetic Data Generation, in which the PDPC highlighted successful case examples of the use of synthetic data for AI model training, such as JP Morgan’s use of synthetic data to train its own AI models to detect fraudulent transactions.15 Synthetic data also has the advantage of not being personal data (if properly generated) – meaning that it can be shared with external parties and collaborators without fear of breaching privacy rights. In its guide, the PDPC highlighted that synthetic data can also be a better option than personal data as it can be manipulated to simulate rare scenarios, or to augment existing datasets to increase representation of minority groups and improve overall performance of the AI or machine learning models.16
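To make the concept concrete, the Python sketch below shows, in highly simplified form, how synthetic records can be sampled from statistics fitted to a real dataset. The dataset, column choices and distribution here are hypothetical, and production tools use far richer models that also capture correlations between fields.

    # Minimal sketch of synthetic data generation (illustrative only; the "real"
    # dataset and the chosen distribution are hypothetical).
    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Hypothetical "real" dataset: transaction amounts and an observed fraud rate.
    real_amounts = np.array([12.5, 80.0, 43.2, 500.0, 9.99, 150.0])
    real_fraud_rate = 0.02

    def generate_synthetic(n: int) -> list:
        # Sample amounts from a log-normal fitted to the real data's log mean/std,
        # and fraud labels from the observed rate: no real record is reproduced.
        log_amounts = np.log(real_amounts)
        amounts = rng.lognormal(mean=log_amounts.mean(), sigma=log_amounts.std(), size=n)
        fraud = rng.random(n) < real_fraud_rate
        return [{"amount": round(float(a), 2), "fraud": bool(f)} for a, f in zip(amounts, fraud)]

    print(generate_synthetic(5))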

Done right, the use of synthetic data and other Privacy Enhancing Technologies could go a long way to address the perceived risks to privacy rights caused by the widespread development of new Gen AI technologies.

8. Privacy Commissioner’s Office Reminds Users of LinkedIn to Beware of the Use of Their Personal Data for Training of Generative Artificial Intelligence (AI) Models

9. https://www.straitstimes.com/tech/users-cry-foul-over-linkedin-update-that-taps-personal-data-for-ai-training

10. https://www.straitstimes.com/tech/user-should-be-informed-when-personal-data-used-to-train-ai-systems-new-pdpc-guidelines

11. Para 1(2) of Part 5 of the First Schedule of the PDPA and para 1(2) of Division 2 of Part 2 of the Second Schedule to the PDPA.

12. [5.3] of the AI Guidelines

13. Division 3 of Part 2 of the Second Schedule to the PDPA

14. [6.3] of the AI Guidelines

15. Page 11 of the Guide on Synthetic Data Generation (PDPC, 24 September 2024)

16. Page 9-10 of the Guide on Synthetic Data Generation (PDPC, 24 September 2024)

Register for Wotton Kearney’s Cyber, Privacy and Technology updates below.

    Contacts