Woodcock T, Liberati EG, Dixon-Woods M. A mixed-methods study of challenges experienced by clinical teams in measuring improvement. BMJ Quality & Safety. 2021;30:106-115. http://dx.doi.org/10.1136/bmjqs-2018-009048
A mixed-methods study of challenges experienced by clinical teams in measuring improvement
Why it matters
Although it is widely accepted that measurement is a vital part of improving quality in healthcare, how well it’s done varies considerably.
Difficulties in measurement can mean that improvement projects lack reliable or valid data to assess progress, report accurately, or produce useful learning. Problems with data collection and interpretation include missing data and unclear sampling strategies.
One issue is that when measures are set externally, for example by regulators or funders, clinicians may feel they are not relevant to the problems that concern them. But little is known about what happens when clinical teams choose their own measures and design their own data collection systems.
Our study set out to understand the challenges clinical teams face when undertaking measurement based on measures they have chosen themselves.
Our approach
We drew on the independent evaluation of a patient safety improvement programme based on an approach called Safer Clinical Systems, which seeks to support organisations in developing the capacity to detect and address weaknesses in their systems and to measure and report their improvement.
Teams taking part in the programme across nine UK hospitals were expected to develop a detailed plan setting out measures for collecting useful data; establish data collection systems; and analyse and interpret their data.
Our study explored the experiences of the teams who took part in the programme by interviewing team members and observing their programme-related activities. We combined this with an expert review of the measurement plans and an analysis of the data collected.
What we found
Teams found that setting up systems for collecting data was time- and resource-intensive and something they struggled with; some compensated by doing extra, unpaid work.
Some teams identified too many measures, and not all plans demonstrated understanding of the principles needed to gather good-quality data. Many operational definitions of measures were not specific enough to capture improvement, while some were not logically linked to improvement actions. Teams were often confronted with missing data or with measurement systems that were not used as intended.
The quality of data analysis also varied, reflecting differences in teams' skills and capacity. Poor-quality data hindered meaningful analysis.
Our study suggests that measurement is a highly technical task requiring specific expertise and skills. Teams without such expertise faced many difficulties and tended to underestimate the challenges involved in measurement and the time needed.
Brief training is unlikely to be enough to upskill teams adequately. Improved access to validated measures would be helpful, but more structural initiatives and programmes that build capability for measurement are also needed.