Cyber, Privacy + Technology Report – Issue 9, September 2024
We are delighted to share Issue 9 of our Cyber, Privacy + Technology Report – our quarterly wrap-up of relevant news for insurers, brokers and their customers doing business in Australia and New Zealand in the cyber, privacy and technology fields.
If you would like to discuss any of the topics covered in our September issue, please reach out to a member of the team.
Cyber
The Office of the Australian Information Commissioner (OAIC) released its latest Notifiable Data Breaches (NDB) report, covering the period January to June 2024, on 16 September 2024, including further guidance on navigating the NDB scheme.
Statistics
Of note, the OAIC received 527 data breach notifications, an increase of 9% compared to the previous six months, with May seeing the highest number of breaches (112). These included a breach affecting around 10 million Australians, the highest number of individuals affected by a single breach since the NDB scheme came into effect. However, 68% of data breach reports affected fewer than 100 people. The most affected industries remain health service providers (19%), the Australian Government (12%), finance (11%), education (8%) and retail (6%).
38% of the data breaches reported resulted from cybersecurity incidents, the most common types being phishing (compromised credentials), ransomware, and compromised or stolen credentials. 30% of breaches resulted from human error, such as personal information being sent to the wrong recipient, unauthorised disclosure or release, and failure to use BCC when sending emails.
Guidance
The key themes and issues highlighted by the OAIC include:
- Mitigating cyber threats: entities are expected to have appropriate and proactive measures in place to mitigate cyber threats and protect personal information.
- Extended supply chain risks: entities that outsource the handling of personal information should implement a robust supplier risk management framework to reduce the impact of a data breach.
- Addressing the human factor: entities need to mitigate the potential for individuals to intentionally or inadvertently contribute to data breaches.
- Misconfiguration of cloud-based data holdings: entities need to appreciate that there is a shared responsibility for security of data held in the cloud.
- Relevance of a threat actor’s motivation in assessing a data breach: entities should not rely on assumptions regarding a threat actor’s motivation and should rather notify the OAIC and affected individuals when a data breach occurs.
- Data breaches in the Australian Government: government agencies (especially those with service delivery functions) need to build community trust in their ability to protect the security of personal information.
Top tips
- Effective cybersecurity practices should be implemented, including using privacy by design across the lifecycle of information processing.
- Contracts with third-party suppliers who handle your personal information should oblige them to notify you before engaging subcontractors who will handle that information, and to notify you promptly of any data breach-related incident.
- Prioritise educating staff on secure handling practices and the latest threat actor trends and techniques.
- Regularly audit and review cloud configurations.
- Be cautious about relying on a threat actor’s assurances regarding the integrity of data when conducting reasonable assessments of a suspected eligible data breach.
- Appropriate safeguards should be in place to ensure that access controls are robust, maintained and well-managed.
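By way of illustration, the "failure to use BCC" error noted in the OAIC statistics can be designed out rather than left to individual care. The sketch below (in Python, with hypothetical names and addresses of our own) builds a bulk notice that keeps recipient addresses out of the message headers entirely, handing them to the mail layer separately:

```python
from email.message import EmailMessage

def build_bulk_notice(sender, recipients, subject, body):
    """Build a bulk email that never exposes one recipient to another.

    The recipient list is returned separately so it can be passed to the
    SMTP layer (e.g. smtplib's send_message(msg, to_addrs=recipients)),
    the BCC pattern: no recipient address ever appears in the headers.
    """
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = sender  # a generic visible address; never the recipient list
    msg["Subject"] = subject
    msg.set_content(body)
    # Recipients are deliberately NOT added as a header.
    return msg, list(recipients)
```

Because the recipient list never touches the headers, the "forgot to use BCC" failure mode is eliminated by construction rather than by staff vigilance.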
The Australian Prudential Regulation Authority (APRA) has shared its guidance to APRA entities regarding common cyber weaknesses in relation to security in configuration management, privileged access management and security testing.
As a supervision priority, APRA has stated that it will maintain a “heightened focus on cyber resilience” in respect of the banking, superannuation and insurance industries. In its 2024/2025 Corporate Plan, APRA reiterated that cybersecurity and cyber risk management remain key topics. APRA reports that it will invest in technology and data to inform risk-based decision-making, and will use additional funding to strengthen its data collection, cybersecurity and supervision systems. To be better equipped, APRA also announced that, from 2 September 2024, there will be two new frontline supervision divisions: (1) a General Insurance and Banking division, and (2) a Life Insurance, Private Health Insurance and Superannuation division, both complemented by the Cross-industry Risk division.
In a letter issued on 15 August 2024, APRA clarified its expectation that APRA entities review their control environment against common weaknesses and identify any gaps that could materially impact the entity’s risk profile or financial soundness. APRA also recommends that entities conduct regular self-assessments aligned with the sound practices in Prudential Practice Guide CPG 234 Information Security (CPG 234), and adopt mitigation strategies from established frameworks, such as the Essential Eight.
APRA has provided further guidance regarding mitigation strategies for entities, in relation to:
Security in configuration management
- Entities should ensure that the configuration of information assets minimises vulnerabilities, is applied consistently, and is reassessed when new vulnerabilities and threats are discovered.
- Entities should maintain controls to manage changes to information assets, including changes to configuration with the aim of maintaining information security.
- Entities should ensure that existing and emerging information security vulnerabilities and threats caused by insecure configuration of information assets are identified, assessed and remediated in a timely manner.
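A minimal sketch of the "applied consistently and reassessed" point: comparing a live configuration against an approved baseline and reporting any drift for timely remediation. The setting names below are illustrative only, not drawn from CPG 234; real baselines would come from the entity's approved build standard:

```python
def config_drift(baseline: dict, actual: dict) -> dict:
    """Return every setting that deviates from the approved baseline,
    so drift can be flagged, assessed and remediated in a timely manner."""
    return {
        key: {"expected": expected, "actual": actual.get(key)}
        for key, expected in baseline.items()
        if actual.get(key) != expected
    }

# Hypothetical hardened baseline for illustration.
BASELINE = {"tls_min_version": "1.2", "public_access": False, "mfa_required": True}
```

Running such a check on a schedule, and whenever the baseline itself is updated for a new threat, operationalises the guidance above.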
Privileged access management
- Entities should maintain complete and accurate records of all privileged accounts.
- Entities are to provide privileged access to information assets in circumstances where a valid business need exists, and only for as long as access is required.
- Entities are to ensure that the strength of identification and authentication is commensurate with the impact should an identity be falsified.
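These expectations lend themselves to a simple record structure. The sketch below (illustrative field and account names of our own) records each privileged grant together with its business need and an expiry, so lapsed access can be identified and revoked:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PrivilegedGrant:
    account: str
    system: str
    business_need: str      # access only where a valid business need exists
    expires: datetime       # and only for as long as access is required

def grants_to_revoke(grants, now=None):
    """Return grants whose business-need window has lapsed and should be revoked."""
    now = now or datetime.now(timezone.utc)
    return [g for g in grants if g.expires <= now]
```

Keeping the register complete and running the revocation check regularly addresses the first two expectations together.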
Security testing
- Entities are to ensure the testing program references the population of information security controls across the entity, enables the validation of design and operating effectiveness of the controls over time, and leverages a variety of testing approaches informed by contemporary industry practices, including IT general controls, vulnerability scanning, traditional penetration tests of software and infrastructure, and red-team and gold-team exercises, as referred to in CPG 234.
- Entities should report test results to the appropriate governing body or individual and track associated follow-up actions.
Insurers should remain vigilant and apply strategies to mitigate risks posed by the evolving and escalating cyber threat landscape. From 1 July 2025, APRA will lift minimum standards for operational resilience through the implementation of Prudential Standard CPS 230 Operational Risk Management, which aims to ensure that an APRA entity is resilient to operational risks and disruptions. We can also expect more industry letters from APRA on high-risk cyber topics, and regulated entities will be expected to strengthen their practices as appropriate.
The Australian Government is introducing a new Scam Code Act that will require social media companies, financial institutions and mobile networks to implement robust monitoring systems to detect and prevent scam-related content. If passed, companies will be required to report scams immediately; failure to do so will expose them to fines of up to $50 million. Additional proposals include requiring banks to warn customers if they attempt a transaction flagged as a potential scam (a feature that is currently voluntary), and requiring telecommunications companies to screen out numbers used by scammers.
This is a significant advancement in the fight against fraud and scam-related activity, and a major step in enhancing consumer protection. Assistant Treasurer Stephen Jones likened the proposed new laws to what would be expected if fraudulent advertisements were published in a newspaper:
If Facebook or Instagram or Google or any of the others… are taking advertising money from criminals who are publishing criminal content, whose intent is to rob Australians of their information and money, there’s something very wrong about that.
You couldn’t do that if you’re a broadcaster in Australia. You couldn’t do that if you’re a newspaper. You would be liable as a newspaper publisher if you were publishing ads, taking money to publish ads with criminal content. You’d be liable for that. So, we simply ask the question: why should a social media platform be any different? And the very first step in ensuring that they’re not different is ensuring that they verify the identity of the advertiser.
The government is expected to unveil a bill before Parliament in the coming weeks that will require mandatory reporting of cyber ransom payments to the Australian Cyber Security Centre.
It is expected that businesses with an annual turnover of at least $3 million, as well as government entities, will be required to comply. Fines of up to $15,000 are expected for those that fail to report a ransom payment. At this stage, no penalties appear to be on the cards for those that pay a ransom, although this has not been ruled out for the future.
The changes will allow the government to have greater intelligence and awareness of the cybercrime landscape. Provisions are expected to be made to limit the use of reporting data.
The Minister for Home Affairs and Cybersecurity Tony Burke was reported as saying that proposed new cybersecurity laws which will be introduced later this year are expected to contain a “limited use” provision which will encourage organisations to share information following a cyber incident with government cyber agencies without fear that the information will be used against them. However, this will differ from the ‘safe harbour’ provisions initially expected, which would have provided companies with immunity from regulatory action for cooperating with authorities.
Privacy Commissioner Carly Kind has commented that it would be problematic if the provisions in the new cybersecurity laws were akin to safe harbour laws. In the Commissioner’s view, regulation should work alongside cyber incident response and, as it stands, the OAIC only takes further regulatory action for a “handful” of notifiable data breaches reported to it.
A commissioner from the Australian Securities and Investments Commission (ASIC) also said that information sharing plays a minimal role in its investigations; far more important are the actions taken by management and directors in planning for incidents.
Once the new laws are released, the impact on organisations that share information about ransom payments and/or cyber incidents will be better understood, allowing entities to adapt their response to cyber incidents appropriately.
On 13 June 2024, ASIC and the OAIC signed a memorandum of understanding (MOU) to facilitate sharing of information between the two agencies. The MOU enables proactive information sharing by each agency on their own motion, or in response to a written request.
Joe Longo, Chair of ASIC, said in a statement that the MOU would allow “appropriate mechanisms in place to be able to act fast and effectively when needed… to exercise our powers and perform our functions.”
The MOU does not create any enforceable rights or impose additional legally binding obligations on either agency.
In addition, a commissioner at ASIC has noted that, in investigating cyber incidents, it will consider how the management and directors of an organisation prepared for and responded to cyberattacks, including whether the organisation has developed, updated and tested a response plan. For breaches of their duties, directors can face penalties of up to 15 years’ imprisonment or fines of up to $700 million. The commissioner also reiterated that cyber washing will not be tolerated: where organisations state that they are cyber secure, they need the evidence to back it up.
Privacy
A first tranche of reforms to the Privacy Act 1988 (Cth) was published on 12 September 2024. The Privacy and Other Legislation Amendment Bill 2024 seeks to implement 23 reforms that were “agreed to” in the government’s response to the Privacy Act Review Report. Once in force, these amendments will strengthen the privacy obligations of APP entities and grant broader powers to the OAIC. The most notable amendments include:
- Security, retention and destruction: updating APP11 to clarify that reasonable steps to protect information include taking technical and organisational measures.
- Overseas data flows: introducing a mechanism in APP8 providing for the lawful disclosure of personal information to an overseas recipient where such recipient is subject to the laws of a country or binding scheme which provides substantially similar protection to the Australian Privacy Principles (APPs).
- Automated decisions: amending APP1 to also require entities to include information in privacy policies about automated decisions that significantly affect the rights or interests of an individual.
- Broader enforcement options: including the ability of the OAIC to impose civil penalties depending on the level of seriousness of the privacy breach, empowering the OAIC to use general investigation and monitoring powers, expanding the powers of the Federal Court and Federal Circuit and Family Court in civil proceedings, and empowering the Information Commissioner to conduct public inquiries.
- Statutory tort for serious invasions of privacy: introducing a cause of action, defences, remedies and exemptions for serious invasions of privacy.
- Criminal offences for doxing: introducing a new offence for doxing (i.e. the intentional and malicious exposure of personal data using a carriage service in a manner that is menacing or harassing).
Most of the above amendments will apply only to information held after the commencement of the relevant Part, regardless of whether the information was acquired or created before or after that commencement.
We expect the second tranche of privacy reforms (likely to be published in 2025) to address the more fundamental reforms that were anticipated, such as a positive obligation that personal information handling be fair and reasonable, the end of the small business exemption, and an updated definition of personal information. What these reforms will actually look like remains to be seen, but we can expect them to align with international privacy laws, most notably the General Data Protection Regulation (GDPR). For Australian organisations, many of which have not had to apply a strict level of privacy compliance in their businesses, this will require a deep dive into their data lakes to understand what data they hold, for what purposes, and how it is being used. While privacy compliance can seem daunting and tedious, the exercise offers organisations many benefits.
The OAIC has raised concerns about how social media platform X (formerly Twitter) collects user data to train its AI chatbot “Grok”.
The OAIC released a statement after discovering that X users were automatically opted in to the collection of their posts and interactions to train the chatbot, without their consent. This raises questions about whether these practices breach the APPs under the Privacy Act 1988 (Cth). An OAIC spokesperson has flagged: “We are in the initial phases of looking at such practices across the industry.”
It will be important to follow how the OAIC approaches privacy regulation in the AI space, as this may shape the AI landscape for companies in Australia when planning and testing new technology with user data.
MediSecure, a provider of electronic prescriptions for dispensing of medications, confirmed in May 2024 that it was the victim of a large-scale ransomware data breach. The MediSecure data breach was one of the most significant in Australia’s history, with almost 13 million Australians impacted (around 6.5TB of data), the largest number of affected individuals ever notified to the OAIC under the Privacy Act’s NDB scheme. The affected data related to prescriptions distributed by MediSecure’s systems up until November 2023, with a range of details associated with prescriptions being impacted, including contact and health information such as email address, phone number, Medicare number, individual healthcare identifier and reason for prescription.
Just a few weeks later, and after the federal government refused its request for a financial bailout, MediSecure went into administration, with FTI Consulting appointed as administrators of MediSecure and liquidators of a subsidiary entity. Following the appointments, MediSecure issued a lengthy public statement covering a number of areas, including a description of the ransomware incident, details of the investigation that had commenced, and the volume and nature of the impacted information. A less than informative update on progress in responding to the breach was issued by MediSecure on 18 July 2024, a significant part of which was given over to defending the company’s decision to seek government funds to assist its breach response.
Two statements have been released by the OAIC in relation to the MediSecure breach, the first on 21 May 2024 and an update on 18 July 2024. The main statement issued in May referred to the fact that, somewhat unusually, but unsurprisingly given the scale of this breach, the National Cyber Security Coordinator (NCSC) had become involved in the breach response, working with state and federal agencies to come up with a whole-of-government response to the incident. The OAIC also noted that it had started its preliminary inquiries with MediSecure to ensure compliance with the requirements of the NDB scheme.
Privacy Commissioner Carly Kind also gave a timely reminder of the importance of implementing effective cybersecurity controls, and of compliance with the APPs, stating:
Protecting individuals’ personal information should be a top priority for all organisations, which should continually review and improve their practices and take control where they can. Only collect information that is necessary for you to carry out your business. Know what information you hold. And if that information is not necessary to your business, delete it.
Ms Kind also reiterated the need for privacy law reform in Australia, stating:
The coverage of Australia’s privacy legislation lags behind the advancing skills of malicious cyber actors. Reform of the Privacy Act is urgent to ensure all Australian organisations build the highest levels of security into their operations and the community’s personal information is protected to the maximum extent possible.
A number of interesting observations can be made from the MediSecure breach and the subsequent events:
- Organisations should not expect financial assistance from the government if they suffer a cyber incident and lack the resources to respond to it effectively. This was the first time a company had sought a federal bailout following a data breach, and the request was met with short shrift: the federal government’s role is to provide technical assistance following a cyber incident (including through the involvement of the NCSC), and it has never provided financial assistance to private companies. However, the NCSC has demonstrated its willingness to get involved and to ensure a whole-of-government approach to a large-scale data breach, as per its remit.
- MediSecure’s subsequent entry into voluntary administration reiterates what we already know: a sufficiently serious cyber incident can constitute an existential threat to an organisation.
- As the Privacy Commissioner pointedly remarked, it is critical that organisations understand what data they are holding, why it was collected and is being held, and how this data will be deleted when it is no longer required. APP entities can expect to be investigated on these matters if they do notify the OAIC of a data breach, and may face fines or other sanctions if they have failed to comply with the relevant APPs.
Technology
There has been some movement towards regulating the artificial intelligence (AI) space in Australia since ASIC Chair Joe Longo delivered the keynote address at the UTS Human Technology Institute Shaping our Future Symposium around eight months ago. In that speech, Mr Longo spoke on the current and future state of AI regulation and governance in Australia, referencing at the outset the federal government’s comment in its interim report on AI regulation that “Existing laws likely do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur”. His speech concluded with the advice that “Bridging the governance gap means strengthening our current regulatory framework where it’s good, and shoring it up where it needs further development.”
In the intervening period, has there been any sign that the government plans to regulate AI as the European Union has done with its landmark Artificial Intelligence Act (which came into force on 1 August 2024, though none of its requirements yet apply)? The answer is… we’re moving in the right direction! The government has been very active in recent weeks, publishing a series of proposals and policies likely to underpin the principles that will be reflected in mandatory AI legislation, when it finally arrives. These are:
- AI Policy: a policy (which took effect on 1 September 2024) for the responsible use of AI in government, mandatory for most non-corporate Commonwealth entities, and with corporate Commonwealth entities ‘encouraged’ to adopt the policy. The stated aim of the AI Policy is to “ensure that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations”. The AI Policy’s principles and requirements are set out under an “enable, engage and evolve” framework.
- Guardrails: a Voluntary AI Safety Standard published on 5 September 2024 (we take a deeper dive into this below). On the same day, the government (acting through the Department of Industry, Science and Resources) also published a proposals paper, Introducing mandatory guardrails for AI in high-risk settings. This proposals paper seeks the public’s views on ten proposed guardrails, how to define high-risk AI, and the regulatory options for mandating the guardrails. Consultation is open until 4 October 2024. This will inform how the Australian Government regulates the adoption of AI, which could include adapting existing regulatory frameworks or creating new ones (such as through a new Australian AI Act).
- AI Index: the Australian Responsible AI Index 2024 – a report produced in conjunction with Fifth Quadrant that offers “a comprehensive analysis of Responsible AI (RAI) adoption in Australian organisations”. The survey and responses track 38 RAI practices across five key dimensions: accountability & oversight, safety & resilience, fairness, transparency, and explainability & contestability. The Index’s key areas of investigation were AI strategy, RAI implementation, and the AI usage landscape.
- Transparency: a standard for AI transparency statements (taking effect on 28 February 2025) designed to support agencies subject to the AI Policy in implementing it. The standard obliges those agencies to make publicly available a statement outlining their approach to AI adoption, as directed by the Digital Transformation Agency, and sets out the information the transparency statement must contain. Once finalised, the transparency statement must be published on the agency’s public-facing website.
The Voluntary AI Safety Standard is designed to help organisations develop and deploy AI systems safely and reliably in Australia. This voluntary standard seeks to provide practical guidance to Australian organisations by providing ten voluntary guardrails (aligned to those set out in the proposals paper), which have been developed in line with current and evolving legal and regulatory obligations and public expectations.
The ten proposed guardrails, developed in consideration of Australia’s AI Ethics Principles, are:
- Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
- Establish and implement a risk management process to identify and mitigate risks.
- Protect AI systems, and implement data governance measures to manage data quality and provenance.
- Test AI models and systems to evaluate model performance and monitor the system once deployed.
- Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle.
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
- Establish processes for people impacted by AI systems to challenge use or outcomes.
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
- Keep and maintain records to allow third parties to assess compliance with guardrails.
- Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
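The record-keeping and transparency guardrails in particular point towards keeping a per-decision audit record. A minimal illustrative sketch follows, with field names of our own rather than anything prescribed by the standard:

```python
from dataclasses import dataclass, asdict

@dataclass
class AIDecisionRecord:
    """One auditable record per AI-enabled decision (illustrative only)."""
    system: str               # which AI system produced the output
    model_version: str        # supports data and model provenance
    decision: str             # what was decided or generated
    human_reviewer: str       # who exercised meaningful human oversight
    disclosed_to_user: bool   # was the end-user informed this was AI-enabled?
    timestamp: str            # ISO 8601, when the decision was made

def to_audit_row(record: AIDecisionRecord) -> dict:
    """Flatten a record for the register that third parties can assess."""
    return asdict(record)
```

A register built from rows like this gives an organisation something concrete to produce when asked to demonstrate compliance with the guardrails.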
The intention is that organisations will understand the specific factors and attributes of their AI systems, engage meaningfully with stakeholders, perform detailed risk assessments, undertake testing, and adopt appropriate controls and actions.
While the road towards mandatory AI regulation in Australia is long and winding, the above initiatives represent significant progress and an important step towards documenting and driving adoption of the key principles and protections which we hope will propel us towards better and safer AI outcomes in the (hopefully near!) future.
After a software update by cybersecurity firm CrowdStrike caused a global outage, legal experts in the US have speculated the company is protected from paying customers’ losses due to a contractual limitation of liability clause in its standard terms and conditions.
We examine how limitation of liability clauses operate in the Australian context and provide practical tips for reviewing or preparing IT service contracts.
Regulatory update – AML/CTF
On 11 September 2024, the Commonwealth Attorney-General tabled a bill into Parliament to amend the Anti-Money Laundering and Counter-Terrorism Financing Act 2006 (Act) and repeal the Financial Transaction Reports Act 1998 (FTR Act).
The Anti-Money Laundering and Counter-Terrorism Financing Amendment Bill 2024 (Bill) is the government’s signal to criminals and criminal syndicates that Australia is committed to reforming, expanding and improving its anti-money laundering and counter-terrorism financing (AML/CTF) regime in its crackdown on crime.