Ayorinde A, Mensah DO, Walsh J, Ghosh I, Ibrahim SA, Hogg J, Peek N, Griffiths F. Health Care Professionals' Experience of Using AI: Systematic Review With Narrative Synthesis. J Med Internet Res 2024;26:e55766. doi: 10.2196/55766
Healthcare professionals' experience of using AI
Why it matters
In recent years, there has been significant development of AI tools designed to help with clinical decisions. In the past, most of these tools relied on rules defined by medical experts, but many recent tools use machine learning to produce their outputs.
For healthcare professionals to adopt new technology, they need to be able to trust it and understand how it can help patients or improve care. However, when it comes to machine-learning-based clinical tools, these factors aren’t well understood. This study aimed to review published literature on healthcare professionals’ experiences of using machine learning tools to support their clinical decision-making.
What we found
Healthcare professionals’ opinions about AI tools, and how much value the tools added to clinical decision-making, varied widely. A substantial number of healthcare professionals also had concerns about the accuracy of AI tools and their recommendations.
While some people believed that AI offered added value and that the new tools could improve decision-making, others felt that they simply served to confirm their own clinical judgment, and some said they didn’t find AI tools useful at all.
We identified seven main themes:
- Understanding how AI is used.
- The amount of trust and confidence people have in AI tools.
- Judging the added value of AI.
- The availability of data and the limitations of AI.
- Balancing time and other priorities.
- Concerns about AI governance.
- Working together to support the use of AI tools.
Some healthcare professionals were concerned that they didn’t fully understand how AI works. This included not understanding the results AI produced, how the algorithms worked, or the reasoning behind those results. This confusion was mainly attributed to the lack of transparency in AI systems that aren’t based on clear, knowledge-based rules.
Some healthcare professionals also expressed a lack of trust in AI tools, and found it especially hard to be confident in an AI result when it differed from their own judgment. There was a reluctance to rely on AI tools alone: because they were responsible for their patients’ care, some said they didn’t trust machine-led tools enough to depend on them exclusively. Some studies found that a lack of clear, useful guidance or next steps from AI tools reduced their value in clinical decision-making. Some people also suggested that the data used by the tools could lead to greater inequity, as some disadvantaged populations aren’t routinely represented in health datasets.
Many of the challenges we found might be eased by involving stakeholders like healthcare professionals, patients, and regulators in the design and development of AI systems. By working with these groups early on, developers can better understand the practical needs and concerns of healthcare providers, helping to create AI tools that are more useful in real-world settings.
Regular testing and evaluation of AI tools will be crucial for gathering accurate data on how they perform, as will education and training that gives professionals the knowledge and skills needed to use AI confidently and effectively in their work. As AI tools continue to develop, future research can then focus on identifying any unintended consequences of using AI in everyday clinical practice.