By: Chris Spain and Lily Goldsmith
At a glance
- There must always be human oversight when AI is used in healthcare.
- Practitioners need to have a sufficient understanding of how an AI tool functions, including its intended use, how it collects and stores information, and its risks and limitations.
- Practitioners must be transparent with patients about their use of AI.
- Practitioners need to obtain informed consent from their patients before inputting personal information into an AI tool.
- Practitioners must ensure their use of AI does not breach privacy legislation and, where applicable, TGA requirements.
The use of artificial intelligence (AI) in healthcare is a relatively new but rapidly evolving area. AI has the potential to provide a range of benefits in healthcare and is already being utilised to automate many tasks. Common examples include the review of medical records, generation of clinical notes, and interpretation of diagnostic imaging. In some cases, AI can perform tasks to the same standard as, or a higher standard than, individual practitioners. However, as AI systems become integrated into many areas of healthcare, practitioners face new challenges in upholding their professional and ethical obligations.
AHPRA has recently released new guidelines to help practitioners understand how their existing responsibilities under the National Boards’ codes of conduct apply to the use of AI. The guidelines are centred on four key principles: accountability, understanding, transparency and informed consent.
Accountability
Fundamentally, the use of AI does not change or diminish a practitioner’s responsibility to exercise human judgement. AI should be used to enhance, not replace, clinical judgement and decision making. Practitioners must always check the accuracy of AI outputs and regularly review AI tools to ensure their continued suitability for clinical use.
Understanding
Practitioners should familiarise themselves with the AI tools they use by reviewing the product information. It is not possible to ensure compliance with professional and ethical obligations without a sufficient understanding of how the AI tool was developed and tested, how it collects and stores data, its intended use, and the potential risks and limitations of its use. For example, public generative AI tools such as ChatGPT can carry significant risks when used improperly. Such applications often store data outside Australia, and developers may sell that data without disclosing this or obtaining consent.
Transparency
Practitioners must be transparent with patients about their use of AI. The level of transparency required will depend on the type of AI tool being used. For example, where AI is being used for a more complex clinical purpose, such as interpreting imaging and scans, practitioners are not expected to explain the technicalities of the software to the patient. Conversely, where an AI scribe is being used to record consultations and write clinical notes, practitioners must disclose this to patients and explain how it affects them, particularly regarding the collection and use of their personal data. This links back to the principle of understanding: practitioners cannot be properly transparent about an AI tool unless they understand how it functions.
Informed consent
Where an AI tool requires the input of a patient’s personal information, practitioners must first obtain informed consent from the patient. It is always good practice to document the patient’s response in the clinical record. Practitioners must also be aware that failing to obtain informed consent may have criminal implications in some circumstances, such as when recording a consultation.
Other professional obligations
Practitioners must ensure their use of AI does not breach privacy and health record legislation. AI use must uphold the right of patients to know what information is held about them and to have control over its use and disclosure. This applies to both identified and de-identified data.
Practitioners need to be aware that the data and algorithms used in AI can be inherently biased. This is an important consideration when treating Aboriginal and Torres Strait Islander peoples, as well as other culturally diverse patient groups. To avoid creating or exacerbating existing disparities in healthcare, practitioners should review AI outputs and consider whether any biases exist which might affect diagnosis, treatment recommendations, or health outcomes for certain patient groups.
Where AI is used for a therapeutic purpose such as diagnosis, prediction, treatment and investigation, it is considered a medical device and regulated by the Therapeutic Goods Administration (TGA). Practitioners must ensure the use of such AI complies with TGA requirements. This requirement does not apply to more general AI tools, such as ChatGPT.
To ensure that the integration of AI in healthcare benefits patients and that its risks are mitigated, healthcare professionals must make sure their use of AI upholds their professional and ethical obligations. This ongoing commitment to responsible AI use is crucial for maintaining patient trust, safety, and the integrity of healthcare practice in an increasingly technology-driven landscape.