<Leslie> Hi Bob, sorry I am a bit late, I have been grappling with a data analysis problem and did not notice the time.
<Bob> Hi Leslie. Sounds interesting. Would you like to talk about that?
<Leslie> Yes please! It has been driving me nuts!
<Bob> OK. Some context first please.
<Leslie> Right, yes. The context is an improvement-by-design assignment with a primary care team who are looking for ways to reduce unplanned admissions of elderly patients by 10%.
<Bob> OK. Why 10%?
<Leslie> Because they said that would be an operationally very significant reduction. Most of their unplanned admissions, and therefore costs for admissions, are in that age group. They feel that some admissions are avoidable with better primary care support and a 10% reduction would make their investment of time and effort worthwhile.
<Bob> OK. That makes complete sense. Setting a new design specification is OK. I assume they have some baseline flow data.
<Leslie> Yes. We have historical weekly unplanned admissions data for two years. It looks stable, though rather variable on a week-by-week basis.
<Bob> So has the design change been made?
<Leslie> Yes, over three months ago – so I expected to be able to see something by now but there are no red flags on the XmR chart of weekly admissions. No change. They are adamant that they are making a difference, particularly in reducing re-admissions. I do not want to disappoint them by saying that all their hard work has made no difference!
<Bob> OK Leslie. Let us approach this rationally. What are the possible causes that the weekly admissions chart is not signalling a change?
<Leslie> If there genuinely has been no change in total admissions, that could be because they have indeed reduced re-admissions but new admissions have gone up and are masking the effect.
<Bob> Yes. That is possible. Any other ideas?
<Leslie> That their intervention has made no difference to re-admissions and their data is erroneous … or worse still … fabricated!
<Bob> Yes. That is possible too. Any other ideas?
<Leslie> Um. No. I cannot think of any.
<Bob> What about the idea that the XmR chart is not showing a change that is actually there?
<Leslie> You mean a false negative? That the sensitivity of the XmR chart is limited? How can that be? I thought these charts would always signal a significant shift.
<Bob> It depends on the degree of shift and the amount of variation. The more variation there is, the harder it is to detect a small shift. In a conventional statistical test we would just use bigger samples, but that does not work with an XmR chart because the run tests all use fixed, pre-defined lengths.
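[Aside: a minimal sketch of the XmR limit calculation Bob is referring to, using the standard 2.66 moving-range constant. The data here are invented for illustration, not the team's actual admissions figures. Note how wide the limits are relative to a small shift in the mean:]

```python
# Sketch of XmR natural process limits: mean +/- 2.66 x average moving range.
# With large week-to-week variation, the limits are wide, so a small but
# real shift in the mean can sit comfortably inside them (a false negative).
def xmr_limits(values):
    """Return (mean, lower limit, upper limit) for an XmR chart."""
    mean = sum(values) / len(values)
    # Moving ranges: absolute differences between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Illustrative weekly admission counts with week-to-week variation.
weekly = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46]
mean, lo, hi = xmr_limits(weekly)
```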
<Leslie> So that means we can miss small but significant changes and come to the wrong conclusion that our change has had no effect! Isn’t that called a Type 2 error?
<Bob> Yes, it is. And we need to be aware of the limitations of the analysis tool we are using. So, now that you know that, how might you get around the problem?
<Leslie> One way would be to aggregate the data over a longer time period before plotting on the chart … we know that will reduce the sample variation.
<Bob> Yes. That would work … but what is the downside?
<Leslie> That we have to wait a lot longer to show a change, or not. We do not want that.
<Bob> I agree. So what we do is we use a chart that is much more sensitive to small shifts of the mean. And that is called a cusum chart. These were not invented until 30 years after Shewhart first described his time-series chart. To give you an example, do you recall that the work-in-progress chart is much more sensitive to changes in flow than either demand or activity charts?
<Leslie> Yes, and the WIP chart also reacts immediately if either demand or activity change. It is the one I always look at first.
<Bob> That is because a WIP chart is actually a cusum chart. It is the cumulative sum of the difference between demand and activity.
<Leslie> OK! That makes sense. So how do I create and use a cusum chart?
<Bob> I have just emailed you some instructions and a few examples. You can try with your unplanned admissions data. It should only take a few minutes. I will get a cup of tea and a chocolate Hobnob while I wait.
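[Aside: a minimal sketch of the kind of cusum construction Bob's emailed instructions might describe — the running sum of deviations from a reference value. The reference and data below are illustrative. While the process is stable the trace wanders around zero; a sustained shift in the mean appears as a persistent slope, which is much easier to spot than a small shift buried in point-to-point variation:]

```python
# Sketch of a cusum trace: the cumulative sum of (value - reference).
# A stable process hovers near zero; a sustained drop in the mean
# produces a steady downward slope from the week the shift happened.
def cusum(values, reference):
    total, trace = 0.0, []
    for v in values:
        total += v - reference
        trace.append(total)
    return trace

# Illustrative data: baseline mean of 40, then a drop to ~36 at week 6.
weekly = [41, 39, 42, 38, 40, 36, 35, 37, 36, 36]
trace = cusum(weekly, reference=40)
```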
[Five minutes later]
<Leslie> Wow! That is just brilliant! I can see clearly on the cusum chart when the shifts happened and when I split the XmR chart at those points the underlying changes become clear and measurable. The team did indeed achieve a 10% reduction in admissions just as they claimed they had. And I checked with a statistical test which confirmed that it is statistically significant.
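[Aside: a sketch of the split Leslie describes — once the cusum trace pinpoints the week of the shift, the chart is split there and the centre line recomputed on each side, making the before/after change measurable. The data and split point are illustrative:]

```python
# Split the series at the shift week found on the cusum trace, then
# compare the segment means to quantify the change.
def segment_means(values, split_at):
    before, after = values[:split_at], values[split_at:]
    return sum(before) / len(before), sum(after) / len(after)

weekly = [41, 39, 42, 38, 40, 36, 35, 37, 36, 36]  # illustrative shift at week 6
before_mean, after_mean = segment_means(weekly, split_at=5)
reduction = (before_mean - after_mean) / before_mean  # fractional drop
```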
<Bob> Good work. Cusum charts take a bit of getting used to, and we have to be careful about the metric we are plotting and a few other things, but it is a useful trick to have up our sleeves for situations like this.
<Leslie> Thanks Bob. I will bear that in mind. Now I just need to work out how to explain cusum charts to others! I do not want to be accused of using statistical smoke-and-mirrors! I think a golf metaphor may work with the GPs.