
Bridging the Gap: challenges and strategies for implementing Artificial Intelligence-based tools in clinical practice

Citation:

Bridging the Gap: Challenges and Strategies for the Implementation of Artificial Intelligence-based Clinical Decision Support Systems in Clinical Practice. Yearbook of Medical Informatics, 33(01), 103-114 – August 2024. https://doi.org/10.1055/s-0044-1800729

Why it matters

In recent years, research into artificial intelligence (AI) algorithms designed to support clinical decision-making has surged.

Even though many of these algorithms are carefully designed and tested for accuracy, only a few are used in everyday clinical practice. This gap between developing the algorithms and using them in real healthcare settings raises important questions about what is causing the delay.

We reviewed recent studies of how artificial intelligence-based clinical decision support systems (AI-CDSS) are being used in real healthcare settings and assessed how far research into putting these tools into practice has come. We focused on tools developed using machine learning, meaning their outputs come from algorithms trained on data.

What we found

We reviewed 31 recent research papers that explored the ways AI-CDSS are being implemented in healthcare settings and grouped them into four categories:

  1. Implementation theories, frameworks and models: studies that used theories, frameworks or models to guide how these tools are implemented
  2. Stakeholder perspectives: studies that explored what different people (like doctors, nurses, or patients) think about these tools
  3. Implementation feasibility: studies that tested whether using the tools in real clinics is possible
  4. Technical infrastructure: studies that focused on the technology needed to support these tools

Most of the papers we reviewed (22 out of 31) focused on what healthcare workers think about AI tools, but most of these were limited to doctors' views before the tools were implemented. Doctors saw potential benefits in this type of AI tool, but said they needed strong evidence to support its use and assurance that the tools would fit with the clinical workflows and systems they already had in place. They also raised concerns about trust and transparency, healthcare staff's limited knowledge of AI-CDSS, poor integration with existing systems, and the risk of errors.

We found that there is a ‘precedence paradox’ in AI-CDSS: although there is a clear need for evidence that these tools can help patients, hospitals cannot build up this evidence because they lack the infrastructure and step-by-step processes for integrating new tools like AI-CDSS. Unfortunately, this infrastructure and these processes are rarely researched, so the precedence paradox persists.

In addition, we found that researchers rarely used established methods or frameworks from implementation science when studying how to bring AI tools into healthcare. Using them would make findings easier to compare across studies and healthcare settings, and would help when planning new studies.

Research into how to implement AI-CDSS in everyday healthcare is in its early stages. It could be strengthened by grounding studies in proven theories, models and frameworks from implementation science; including the views of diverse stakeholders (not just healthcare professionals); carrying out more real-world feasibility studies; and developing reusable technical infrastructure that makes it easier to roll out AI tools quickly.
