Project title

Shared CAIRE (Care AI Role Evaluation): Testing different Human-Machine Interaction models for shared decision-making, and their ethical and legal implications.

Country

UK

Background

The use of Artificial Intelligence in healthcare is growing, and the NHS AI Lab has identified three main priorities:

  1. Evidencing AI’s potential
  2. Building confidence and demonstrating trustworthiness
  3. Clarifying who does what

So far, the focus has been largely on the first priority. However, without building confidence in systems and clarity around ethical and legal issues, it may be difficult for AI to gain traction.

The currently prevalent Human-Machine Interaction (HMI) model in clinical practice has the AI system make recommendations and the clinician decide whether to act on them, serving as a ‘sense check’. This model could have negative implications for clinicians and patients alike.

Clinicians, asked to reject or endorse the system’s output without fully understanding how it was reached, risk disenfranchisement from the decision-making process. There is also a real danger that clinicians become a ‘liability sink’: the most obvious individual to hold legally accountable for harms arising from AI recommendations.

Patients, meanwhile, risk not receiving the best possible treatment as individuals, because the clinician-in-the-loop is no longer doing what they are best at, including exercising sensitivity to patient preferences and context, but merely acting as a safeguard on a machine. There are alternative HMI models for how a patient, clinician and AI might interact in a shared decision-making process.

Summary

This novel project will test these models with clinicians and patients to see how behaviours change, and to explore the models’ ethical and legal implications. The aim is to realise AI’s potential to improve patient care and shared decision-making while protecting both clinician and patient wellbeing.

Outcome

The project will deliver a set of clearly defined HMI models for healthcare settings, with evidence of their performance when deployed and an evaluation of their ethical and legal implications. This will have an impact on system design, informing approaches that take account of human behaviours in complex healthcare settings and enable humans and machines genuinely to work together, drawing on the strengths of each.

Watch Dr Zoe’s YorkTalk to find out more