There seems to be a belief among some people that the “*optimum*” average bed occupancy for a hospital is around 85%.

More than that risks running out of beds: admissions get blocked, 4-hour breaches appear, and patients are put at risk. Less than that is inefficient use of expensive resources. They claim there is a ‘magic sweet spot’ that we should aim for.

**Unfortunately, this 85% optimum occupancy belief is a myth.**

So, first we need to dispel it, then we need to understand where it came from, and then we are ready to learn how to actually prevent queues, delays, disappointment, avoidable harm and financial non-viability.

Disproving this myth is surprisingly easy. A simple thought experiment is enough.

*Suppose we have a policy where we keep patients in hospital until someone needs their bed, then we discharge the patient with the longest length of stay and admit the new one into the still-warm bed – like a baton pass. There would be no patients turned away – 0% breaches. And all our beds would always be full – 100% occupancy. Perfection!*

And it does not matter if the number of admissions arriving per day is varying – as it will.

And it does not matter if the length of stay is varying from patient to patient – as it will.

**We have disproved the hypothesis that a maximum 85% average occupancy is required to achieve 0% breaches.**
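The thought experiment can even be run as a toy simulation. This is just a sketch of the baton-pass policy above; the bed count and demand numbers are made up, and the conclusion holds by construction whatever they are:

```python
import random

def baton_pass_simulation(n_beds=20, n_days=365, mean_arrivals=5, seed=42):
    """Simulate the 'baton pass' policy: every bed starts full, and each
    new admission displaces the current patient with the longest stay.
    By construction no one is ever turned away and no bed ever empties."""
    rng = random.Random(seed)
    beds = [0] * n_beds      # each bed holds its occupant's admission day
    breaches = 0             # nobody is ever turned away under this policy
    occupied_bed_days = 0
    for day in range(1, n_days + 1):
        # varying daily demand (illustrative, not a real demand profile)
        arrivals = rng.randint(0, 2 * mean_arrivals)
        for _ in range(arrivals):
            # discharge the longest-stay patient (earliest admission day)
            longest = beds.index(min(beds))
            beds[longest] = day          # warm hand-over: bed never empties
        occupied_bed_days += n_beds      # every bed occupied every day
    occupancy = occupied_bed_days / (n_beds * n_days)
    return occupancy, breaches

occupancy, breaches = baton_pass_simulation()
print(f"occupancy = {occupancy:.0%}, breaches = {breaches}")
# prints: occupancy = 100%, breaches = 0
```

However much the daily arrivals vary, the policy delivers 100% occupancy and 0% breaches – which is all the counter-example needs to show.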

The source of this specific myth appears to be a paper published in the British Medical Journal in 1999 called “*Dynamics of bed use in accommodating emergency admissions: stochastic simulation model*”.

So it appears that this myth was cooked up by academic health economists using a computer model.

And then amateur queue theory zealots jump on the band-wagon to defend this meaningless mantra and create a smoke-screen by bamboozling the mathematical muggles with tales of Poisson processes and Erlang equations.

And they are sort-of correct … the theoretical behaviour of the “ideal” stochastic demand process was described by Poisson, and the equations that describe the theoretical queue behaviour were derived by Agner Krarup Erlang – over 100 years ago, before we had computers.
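For the curious, Erlang's famous loss formula is short enough to compute in a few lines. This is a sketch of the classic Erlang B recurrence for an idealised M/M/c/c loss system; the 20-beds-and-17-Erlangs example is purely illustrative:

```python
def erlang_b(servers, offered_load):
    """Erlang B loss formula: the probability that an arrival finds all
    'servers' (beds) busy in an M/M/c/c system, where offered_load is
    arrival rate times mean length of stay (in Erlangs).
    Uses the standard recurrence B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0  # B(0): with zero servers, every arrival is blocked
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# e.g. 20 beds offered a load of 17 Erlangs (85% nominal utilisation)
print(f"blocking probability = {erlang_b(20, 17.0):.3f}")
```

Note what the formula does *not* contain: any people, any policies, any adaptation. It describes an idealised memoryless process, which is exactly the limitation the next paragraph points at.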

BUT …

The academics and amateurs conveniently omit one minor, but annoying, fact … that real world systems have people in them … and people are irrational … and people cook up policies that ride roughshod over the mathematics, the statistics and the simplistic, stochastic mathematical and computer models.

And when creative people start meddling then just about anything can happen!

So what went wrong here?

One problem is that the academic *heffalumps* unwittingly stumbled into a whole minefield of pragmatic process design traps.

Here are just some of them …

1. Occupancy is a ratio – it is a meaningless number without its context – the flow parameters.

2. Using linear, stochastic models is dangerous – they ignore the non-linear complex system behaviours (chaos to you and me).

3. Occupancy relates to space-capacity and says nothing about the flow-capacity or the space-capacity and flow-capacity scheduling.

4. Space-capacity utilisation (i.e. occupancy) and systemic operational efficiency are **not** equivalent.

5. Queue theory is a simplification of reality that is needed to make the mathematics manageable.

6. Our real systems are both complex and adaptive, so blind application of basic queue theory rhetoric is dangerous.

And if we recognise and avoid these traps and we re-examine the problem a little more pragmatically then we discover something very useful:

**That the maximum space capacity requirement (the number of beds needed to avoid breaches) is actually easily predictable.**

We do not need a black-magic-box full of scary queue theory equations, or a complicated stochastic simulation model, to do this … all we need is our tried-and-trusted tool … a spreadsheet.
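The spreadsheet logic is simple enough to sketch in a few lines of code: replay the demand, track the daily bed census, and read off its maximum. All the parameters below are made up for illustration – in practice you would feed in your own measured admission and length-of-stay data:

```python
import random

def predict_max_beds(n_days=365, mean_admissions=10, mean_los=3.0, seed=1):
    """Spreadsheet-style census model: replay a year of admissions with
    varying daily demand and varying length of stay, track the daily bed
    census, and read off its maximum.  That maximum is the space-capacity
    needed for zero breaches under this particular demand pattern."""
    rng = random.Random(seed)
    discharges = []   # planned discharge day of each current inpatient
    max_census = 0
    for day in range(n_days):
        # discharge everyone whose stay has ended
        discharges = [d for d in discharges if d > day]
        # varying daily demand and varying per-patient length of stay
        for _ in range(rng.randint(0, 2 * mean_admissions)):
            los = max(1, round(rng.expovariate(1.0 / mean_los)))
            discharges.append(day + los)
        max_census = max(max_census, len(discharges))
    return max_census

print("beds needed for zero breaches:", predict_max_beds())
```

Change the demand pattern and the answer changes with it – which is precisely why there can be no one-size-fits-all occupancy number.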

And we need something else … some flow science training and some simulation model design discipline.

When we do that we discover something else … that the expected average occupancy is not 85% … or 65%, or 99%, or 95%.

**There is no one-size-fits-all optimum occupancy number**.

And as we explore further we discover that:

**The expected average occupancy is context dependent.**

And when we remember that our real system is adaptive, and it is staffed with well-intended, well-educated, creative people (who may have become rather addicted to reactive fire-fighting), then we begin to see why the behaviour of real systems seems to defy the predictions of the *85% optimum occupancy* myth:

**Our hospitals seem to work better-than-predicted at much higher occupancy rates.**

And then we realise that we might actually be able to design proactive policies that are better able to manage unpredictable variation; better than the simplistic maximum 85% average occupancy mantra.

And finally another penny drops … average occupancy is an **output** of the system … not an input. It is an effect.

And so is average length of stay.

Which implies that setting these output effects as causal inputs to our bed model creates a meaningless, self-fulfilling, self-justifying delusion.
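One well-known result makes this cause-and-effect direction concrete: Little's Law, which says the average number in a system equals arrival rate times average time in the system. Here is a minimal sketch with illustrative numbers:

```python
def expected_average_occupancy(admissions_per_day, mean_los_days, beds):
    """Little's Law (L = lambda * W): the average number of occupied beds
    equals the arrival rate times the average length of stay.  Occupancy
    is that average divided by the number of beds -- an OUTPUT of the
    flow parameters, not a target that can be set independently."""
    average_occupied = admissions_per_day * mean_los_days
    return average_occupied / beds

# e.g. 10 admissions/day with a 3-day average stay into 40 beds
print(f"{expected_average_occupancy(10, 3.0, 40):.0%}")
# prints: 75%
```

Occupancy and length of stay sit on the *left-hand side* of this relationship; treating them as inputs to a bed model just feeds the answer back in as the question.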

Ooops!

Now our challenge is clear … we need to learn *proactive and adaptive flow policy design* … and using that understanding we have the potential to deliver zero delays **and** high productivity at the same time.

And doing that requires a bit more than a spreadsheet … but it is possible.