Identifying decisions that go into creating composite indicators

Background

Can a few ‘stars’ really shed light on the quality of healthcare? When it comes to composite indicators – like well-known star rating systems for hospitals – the answer is often unclear.

Composite indicators are created by bundling individual measures – such as mortality rates and patient feedback on doctors’ communication skills – into a single score. They are meant to simplify a wide range of complex information into something that is easy to understand. Increasingly popular in healthcare, they are used with the aim of providing an overall picture of the quality and safety of care.

But composite indicators are frequently plagued by statistical problems, have little theoretical basis, or fail to take into account the priorities of the people who are providing or experiencing care. It is also sometimes unclear how they are developed, what exactly they measure, and who gets to decide what is worthy of measurement. These problems can undermine the credibility of composite indicators and limit what can be learned from them.

Reporting guidelines might be part of the solution: they could help, for example, by making composite indicators more transparent. But writing useful guidelines requires a better understanding of the processes through which these indicators are created, and those processes – and what influences them – are currently unclear.

Approach

For this study, we are asking experts in the field about the range of choices that go into designing, developing and reporting composite indicators. This will involve interviews with a wide range of experts, including people who design composite indicators and people who use them.

From these interviews, we are compiling a comprehensive list of the decisions that go into developing composite indicators, and then asking the experts to rank those decisions by importance. Our findings should help inform a future consensus-building process to develop reporting guidelines for composite indicators.

Work on this project will continue until 2020.

We also recognise that better methods for creating composite indicators may be needed; that will be the subject of future work.