Archive for the ‘Design’ Category

It is November 2018, the clocks have changed back to GMT, the trick-or-treating is done, the fireworks light the night skies and spook the hounds, and the seasonal aisles in the dwindling number of high street stores are already stocked for Christmas.

I have been a bit quiet on the blog front this year but that is because there has been a lot happening behind the scenes and I have had to focus.

One output is the recent publication of an article in Future Healthcare Journal on the topic of health care systems engineering (HCSE).  Click here to read the article and the rest of this excellent edition of FHJ that is dedicated to “systems”.

So, as we are back to the winter phase of the annual NHS performance cycle it is a good time to glance at the A&E Performance Radar and see who is doing well, and not-so-well.

Based on past experience, I was expecting Luton to be Top-of-the-Pops and so I was surprised (and delighted) to see that Barnsley have taken the lead.  And the chart shows that Barnsley has turned around a reasonable but sagging performance this year.

So I would be asking “What has happened at Barnsley that we can all learn from? What did you change, and how did you know what to do and how to do it?”

To be sure, Luton is still in the top three and it is interesting to explore who else is up there and what their A&E performance charts look like.

The data is all available for anyone with a web-browser to view – here.

For completeness, this is the chart for Luton, and we can see that, although the last point is lower than Barnsley’s, the performance-over-time is more consistent and less variable. So who is better?

NB. This is a meaningless question and illustrates the unhelpful tactic of two-point comparisons with others, and with oneself. The better question is “Is my design fit-for-purpose?”

The question I have for Luton is different. “How do you achieve this low variation and how do you maintain it? What can we all learn from you?”

And I have some ideas how they do that because in a recent HSJ interview they said “It is all about the filters”.


What do they mean by filters?

A filter is an essential component of any flow design if we want to deliver high safety, high efficiency, high effectiveness, and high productivity.  In other words, a high quality, fit-4-purpose design.

And the most important flow filters are the “upstream” ones.

The design of our upstream flow filters is critical to how the rest of the system works.  Get it wrong and we can get a spiralling decline in system performance because we can unintentionally trigger a positive feedback loop.

Queues cause delays and chaos that consume our limited resources.  So, when we are chasing cost improvement programme (CIP) targets using the “salami slicer” approach, and combine that with poor filter design … we can unintentionally trigger the perfect storm and push ourselves over the catastrophe cliff into perpetual, dangerous and expensive chaos.

If we look at the other end of the NHS A&E league table we can see typical examples that illustrate this pattern.  I have used this one only because it happens to be bottom this month.  It is not unique.

All other NHS trusts fall somewhere between these two extremes: stable, calm and acceptable at one end; unstable, chaotic and unacceptable at the other.

Most display the stable-but-chaotic combination – the “Zone of Perpetual Performance Pain”.

So what is the fundamental difference between the outliers that we can all learn from? The positive deviants like Barnsley and Luton, and the negative deviants like Blackpool.  I ask this because comparing the extremes is more useful than laboriously exploring the messy, mass-mediocrity in the middle.

An effective upstream flow filter design is a necessary component, but it is not sufficient. Triage (= French for sorting) is OK but it is not enough.  The other necessary component is called “downstream pull” and omitting that element of the design appears to be the primary cause of the chronic chaos that drags trusts and their staff down.

It is not just an error of omission though; the current design is actually an error of commission. It is anti-pull; otherwise known as “push”.


This year I have been busy on two complicated HCSE projects … one in secondary care and the other in primary care.  In both cases the root cause of the chronic chaos is the same.  They are different systems but have the same diagnosis.  What we have revealed together is a “push-carveout” design which is the exact opposite of the “upstream-filter-plus-downstream-pull” design we need.

And if an engineer wanted to design a system to be chronically chaotic then it is very easy to do. Here is the recipe:

a) Set a high average utilisation target for all resources as a proxy for efficiency, to ensure everything is heavily loaded. Something between 80% and 100% usually does the trick.

b) Set a one-size-fits-all delivery performance target that is not currently being achieved and enforce it punitively.  Something like “>95% of patients seen and discharged or admitted in less than 4 hours, or else …”.

c) Divvy up the available resources (skills, time, space, cash, etc) into ring-fenced pots.

Chronic chaos is guaranteed.  The Laws of Physics decree it.
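To see why ingredient (a) alone is enough, here is a minimal sketch (in Python, my illustration rather than anything from the original recipe) of the textbook M/M/1 queue, where random arrivals meet random service times at a single, heavily loaded resource.  Ingredient (c) then compounds the effect, because each ring-fenced pot becomes its own smaller, more variable queue.

# Minimal sketch: mean queueing delay in a single-server (M/M/1) system,
# assuming random (Poisson) arrivals and exponentially distributed service times.

service_time = 1.0  # mean service time, in arbitrary units

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    # Textbook M/M/1 result: mean wait in queue = rho / (1 - rho) * service time
    mean_wait = rho / (1.0 - rho) * service_time
    print(f"utilisation {rho:.0%} -> mean wait = {mean_wait:.1f} x service time")

# utilisation 50% -> 1.0x; 80% -> 4.0x; 90% -> 9.0x; 95% -> 19.0x; 99% -> 99.0x

The delay does not rise gently with utilisation; it explodes.  That is what “the Laws of Physics decree it” means in practice.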


Unfortunately, the explanation of why this is the case is counter-intuitive, so it is actually better to experience it first, and then seek the explanation.  Reality first, reasoning second.

And, it is a bittersweet experience, so it needs to be done with care and compassion.

And that’s what I’ve been busy doing this year. Creating the experiences and then providing the explanations.  And if done gradually what then happens is remarkable and rewarding.

The FHJ article outlines one validated path to developing individual and organisational capability in health care systems engineering.

A few years ago I had a rant about the dangers of the widely promoted mantra that 85% is the optimum average measured bed-occupancy target to aim for.

But ranting is annoying, ineffective and often counter-productive.

So, let us revisit this with some calm objectivity and disprove this Myth a step at a time.

The diagram shows the system of interest (SoI) where the blue box represents the beds, the coloured arrows are the patient flows, the white diamond is a decision and the dotted arrow is information about how full the hospital is (i.e. full/not full).

A new emergency arrives (red arrow) and needs to be admitted. If the hospital is not full the patient is moved to an empty bed (orange arrow), the medical magic happens, and some time later the patient is discharged (green arrow).  If there is no bed for the emergency request then we get “spillover” which is the grey arrow, i.e. the patient is diverted elsewhere (n.b. these are critically ill patients … they cannot sit and wait).


This same diagram could represent patients trying to phone their GP practice for an appointment.  The blue box is the telephone exchange and if all the lines are busy then the call is dropped (grey arrow).  If there is a line free then the call is connected (orange arrow) and joins a queue (blue box) to be answered some time later (green arrow).

In 1917, a Danish mathematician/engineer called Agner Krarup Erlang was working for the Copenhagen Telephone Company and was grappling with this very problem: “How many telephone lines do we need to ensure that dropped calls are infrequent AND the switchboard operators are well utilised?”

This is the perennial quality-versus-cost conundrum. The Value-4-Money challenge. Too few lines and the quality of the service falls; too many lines and the cost of the service rises.

Q: Is there a V4M “sweet spot” and, if so, how do we find it? Trial and error?

The good news is that Erlang solved the problem … mathematically … and the not-so-good news is that his equations are very scary to a non-mathematician/engineer!  So this solution is not much help to anyone else.


Fortunately, we have a tool for turning scary-equations into easy-2-see-pictures: our trusty Excel spreadsheet. So, here is a picture called a heat-map, and it was generated from one of Erlang’s equations using Excel.

The Erlang equation is lurking in the background, safely out of sight.  It takes two inputs and gives one output.

The first input is the Capacity, which is shown across the top, and it represents the number of beds available each day (known as the space-capacity).

The second input is the Load (or offered load to use the precise term) which is down the left side, and is the number of bed-days required per day (e.g. if we have an average of 10 referrals per day each of whom would require an average 2-day stay then we have an average of 10 x 2 = 20 bed-days of offered load per day).

The output of the Erlang model is the probability that a new arrival finds all the beds are full and the request for a bed fails (i.e. like a dropped telephone call).  This average probability is displayed in the cell.  The colour varies between red (100% failure) and green (0% failure), with an infinite number of shades of red-yellow-green in between.
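For anyone who wants to peek behind the heat-map, here is a hedged sketch (in Python rather than Excel) of the Erlang B “loss” formula that generates those cells, using the standard numerically stable recurrence instead of factorials.  The function name is mine; the mathematics is Erlang’s.

# Erlang B: the average probability that a new arrival finds every bed full.
# load = offered load (bed-days per day); beds = available bed-capacity.

def erlang_b(load: float, beds: int) -> float:
    b = 1.0                                # with zero beds, every request fails
    for m in range(1, beds + 1):
        b = (load * b) / (m + load * b)    # standard Erlang B recurrence
    return b

print(f"{erlang_b(20, 20):.0%}")           # ~16%, matching the example below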

We can now use our visual heat-map in a number of ways.

a) We can use it to predict the average likelihood of rejection given any combination of bed-capacity and average offered load.

Suppose the average offered load is 20 bed-days per day and we have 20 beds, then the heat-map says that we will reject 16% of requests … on average (bottom left cell).  But how can that be? Why do we reject any? We have enough beds on average! It is because of variation. Requests do not arrive in a constant stream equal to the average; there is random variation around that average.  Critically ill patients do not arrive at hospital in a constant stream; so our system needs some resilience, and if it does not have it then failures are inevitable and mathematically predictable (the simulation sketch after these three examples demonstrates this).

b) We can use it to predict how many beds we need to keep the average rejection rate below an arbitrary but acceptable threshold (i.e. the quality specification).

Suppose the average offered load is 20 bed-days per day, and we want to have a bed available more than 95% of the time (less than 5% failures) then we will need at least 25 beds (bottom right cell).

c) We can use it to estimate the maximum average offered load for a given bed-capacity and required minimum service quality.

Suppose we have 22 beds and we want a quality of >=95% (failure <5%) then we would need to keep the average offered load below 17 bed-days per day (i.e. by modifying the demand and the length of stay because average load = average demand * average length of stay).
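And here is the simulation sketch promised in example (a): a minimal Monte Carlo model (my illustration, assuming random Poisson arrivals averaging 10 per day, random stays averaging 2 days, and 20 beds) showing that variation alone produces the predicted failures.

# Simulate 20 beds with an offered load of 20 bed-days per day (10 arrivals/day
# x 2-day average stay). Capacity equals average demand, yet ~16% get diverted.

import heapq
import random

random.seed(42)
ARRIVAL_RATE = 10.0      # average arrivals per day
MEAN_STAY = 2.0          # average length of stay in days
BEDS = 20
N = 1_000_000            # number of simulated arrivals

now = 0.0
busy = []                # min-heap of times at which occupied beds become free
rejected = 0

for _ in range(N):
    now += random.expovariate(ARRIVAL_RATE)        # next random arrival
    while busy and busy[0] <= now:                 # free beds whose stay ended
        heapq.heappop(busy)
    if len(busy) < BEDS:
        heapq.heappush(busy, now + random.expovariate(1.0 / MEAN_STAY))
    else:
        rejected += 1                              # every bed full: diverted

print(f"diverted: {rejected / N:.1%}")             # ~16%, as the heat-map predicts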


There is a further complication we need to be mindful of though … the measured utilisation of the beds is related to the successful admissions (orange arrow in the first diagram) not to the demand (red arrow).  We can illustrate this with a complementary heat map generated in Excel.

For scenario (a) above we have an offered load of 20 bed-days per day, and we have 20 beds, but we will reject 16% of requests, so the accepted bed load is only 16.8 bed-days per day (i.e. (100% − 16%) × 20), which is the reason that the average utilisation is only 16.8/20 = 84% (bottom left cell).

For scenario (b) we have an offered load of 20 bed-days per day, and 25 beds and will only reject 5% of requests but the average measured utilisation is not 95%, it is only 76% because we have more beds (the accepted bed load is 95% * 20 = 19 bed-days per day and 19/25 = 76%).

For scenario (c) the average measured utilisation would be about 74%.
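And, as a follow-on sketch, all three scenarios can be reproduced with the erlang_b() function from the earlier snippet:

# Measured utilisation for scenarios (a), (b) and (c):
# accepted load = (1 - failure rate) x offered load; utilisation = accepted / beds.

for label, load, beds in (("a", 20, 20), ("b", 20, 25), ("c", 17, 22)):
    fail = erlang_b(load, beds)
    accepted = (1.0 - fail) * load
    print(f"scenario ({label}): fail {fail:.0%}, utilisation {accepted / beds:.0%}")

# scenario (a): fail 16%, utilisation 84%
# scenario (b): fail 5%, utilisation 76%
# scenario (c): fail 5%, utilisation 74%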


So, now we see the problem more clearly … if we blindly aim for an average, measured, bed-utilisation of 85% with the untested belief that it is always the optimum … this heat-map says it is impossible to achieve and at the same time offer an acceptable quality (>95%).

We are trading safety for money and that is not an acceptable solution in a health care system.


So where did this “magic” value of 85% come from?

From the same heat-map perhaps?

If we search for the combination of >95% success (<5% fail) and 85% average bed-utilisation then we find it at the point where the offered load reaches 50 bed-days per day and we have a bed-capacity of 56 beds.

And if we search for the combination of >99% success (<1% fail) and 85% average utilisation then we find it with an average offered load of just over 100 bed-days per day and a bed-capacity around 130 beds.

H’mm.  “Houston, we have a problem”.


So, even in this simplified scenario the hypothesis that an 85% average bed-occupancy is a global optimum is disproved.

The reality is that the average bed-occupancy associated with delivering the required quality for a given offered load with a specific number of beds is almost never 85%.  It can range anywhere between 50% and 100%.  Erlang knew that in 1917.


So, if a one-size-fits-all optimum measured average bed-occupancy assumption is not valid then how might we work out how many beds we need and predict what the expected average occupancy will be?

We would design the fit-4-purpose solution for each specific context …
… and to do that we need to learn the skills of complex adaptive system design …
… and that is part of the health care systems engineering (HCSE) skill-set.


One of the really cool things about the 1.3 kg of ChimpWare between our ears is the way it learns.

We have evolved the ability to predict the likely near-future based on a small number of past experiences.

And we do that by creating stored mental models.

Not even the most powerful computers can do it as well as we do – and we do it without thinking. Literally. It is an unconscious process.

This ability to pro-gnose (‘know before’) gave our ancestors a major survival advantage when we were wandering about on the savanna over 10 million years ago, and we have used this amazing capability to build societies, mega-cities and spaceships.


But this ability is not perfect – it has a flaw – our ChimpOS does not store a picture of reality like a digital camera, it stores a patchy and distorted perception of reality – and then fills in the gaps with guesses (i.e. gaffes).  And we do not notice – consciously.

The cognitive trap is set and sits waiting to be sprung and to trip us up.


Here is an example:

“Improvement implies change”

Yes. That is a valid statement because we can show that whenever improvement has been the effect, then some time before that a change happened.  And we can show that when there are no changes, the system continues to behave as it always has.  Status quo.

The cognitive trap is that our ChimpOS is very good at remembering temporal associations – for example, an association between “improvement” and “change” – because memories are laid down in the moment. So if two concepts are presented at the same time, and we spice the pie with some emotion, then we are more likely to associate them.

The problem comes when we play back the memory … it can come back as …

“change implies improvement” which is not valid.  And we do not notice.

To prove it is not valid we just need to find one example where a change led to a deterioration; an unintended negative consequence, a surprising, confusing and disappointing failure to achieve our intended improvement.

An embarrassing gap between our intent and our impact.

And finding that evidence is not hard. Failures and disappointments in the world of improvement are all too common.


And then we fall into the same cognitive trap because we generalise from a single, bad experience and the lesson our ChimpOS stores for future reference is “change is bad”.

And forever afterwards we feel anxious whenever the idea of change is suggested.

And it is a very effective survival tactic – for a hominid living on the African savanna 10 million years ago, and at risk of falling prey to sharp-fanged, hungry predators.  It is a less useful tactic in the modern world where the risk of being eaten-for-lunch is minimal, and where the pace of change is accelerating.  We must learn to innovate and improve to survive in the social jungle … and we are not well equipped!


Here is another common cognitive trap:

Excellence implies no failures.

Yes. If we are delivering a consistently excellent service then the absence of failures will be a noticeable feature.

No failures implies excellence.

Sadly, this is not a valid inference.  If quality-of-service is measured on a continuum from Excrement-to-Excellent, then we can be delivering a consistently mediocre service, one that is barely adequate, and also have no failures.


The design flaw here is that our ChimpWare/ChimpOS memory system is lossy.

We do not remember all the information required to reconstruct an accurate memory of reality – there is too much information – so we distort, we delete and we generalise.  And we do that because when we evolved it was a good enough solution, and it enabled us to survive as a species, so the ChimpWare/ChimpOS genes were passed on.

We cannot reverse millions of years of evolution.  We cannot get a hardware or software upgrade.  We need to learn to manage with the limitations of what we have between our ears.

And to avoid the cognitive traps we need to practice the discipline of bringing our unconscious assumptions into conscious awareness … and we do that by asking carefully framed questions.

Here is another example to practice with:

A high-efficiency design implies high-utilisation of resources.

Yes, that is valid. Idle resources mean wasted resources, which means lower efficiency.

Q1: Is the converse also valid?
Q2: Is there any evidence that disproves the converse is valid?

If high-utilisation does not imply high-efficiency, what are the implications of falling into this cognitive trap?  What is the value of measuring utilisation? Does it have a value?

These are useful questions.

It is that time of year – again.

Winter.

The NHS is struggling, front-line staff are having to use heroic measures just to keep the ship afloat, and less urgent work has been suspended to free up space and time to help man the emergency pumps.

And the finger-of-blame is being waggled by the army of armchair experts whose diagnosis is unanimous: “lack of cash caused by an austerity triggered budget constraint”.


And the evidence seems plausible.

The A&E performance data says that each year since 2009, the proportion of patients waiting more than 4 hours in A&Es has been increasing.  And the increase is accelerating. This is a progressive quality failure.

And health care spending since the NHS was born in 1948 shows a very similar accelerating pattern.    

So which is the chicken and which is the egg?  Or are they both symptoms of something else? Something deeper?


Both of these charts are characteristic of a particular type of system behaviour called a positive feedback loop.  And the cost chart shows what happens when someone attempts to control the cash by capping the budget:  It appears to work for a while … but the “pressure” is building up inside the system … and eventually the cash-limiter fails. Usually catastrophically. Bang!


The quality chart shows an associated effect of the “pressure” building inside the acute hospitals, and it is a very well understood phenomenon called an Erlang-Kingman queue.  It is caused by the inevitable natural variation in demand meeting a cash-constrained, high-resistance, high-pressure, service provider.  The effect is to amplify the natural variation and to create something much more dangerous and expensive: chaos.
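For the mathematically curious, a hedged sketch of what the “Erlang-Kingman” label means in practice is Kingman’s approximation for the mean wait in a heavily loaded queue (quoted here in its textbook form, not as a model of any specific hospital):

Wq ≈ (ρ / (1 − ρ)) × ((Ca² + Cs²) / 2) × τ

where ρ is the average utilisation, Ca and Cs are the coefficients of variation of the arrival and service processes, and τ is the mean service time.  As the cash-constraint pushes ρ towards 1, the first factor explodes; and anything that amplifies variation inflates the second factor too.  Queues, delays and chaos multiply each other.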


The simple line-charts above show the long-term, aggregated  effects and they hide the extremely complicated internal structure and the highly complex internal behaviour of the actual system.

One technique that system engineers use to represent this complexity is a causal loop diagram or CLD.

The arrows are of two types; green indicates a positive effect, and red indicates a negative effect.

This simplified CLD is dominated by green arrows all converging on “Cost of Care”.  They are the positive drivers of the relentless upward cost pressure.

Health care is a victim of its own success.

So, if the cash is limited then the naturally varying demand will generate the queues, delays and chaos that have such a damaging effect on patients, providers and purses.

Safety and quality are adversely affected. Disappointment, frustration and anxiety are rife. Expectation is lowered.  Confidence and trust are eroded.  But costs continue to escalate because chaos is expensive to manage.

This system behaviour is what we are seeing in the press.

The cost-constraint has, paradoxically, had exactly the opposite effect, because it is treating the effect (the symptom) and ignoring the cause (the disease).


The CLD has one negative feedback loop that is linked to “Efficiency of Processes”.  It is the only one that counteracts all of the other positive drivers.  And it is the consequence of the “System Design”.

What this means is: To achieve all the other benefits without the pressures on people and purses, all the complicated interdependent processes required to deliver the evolving health care needs of the population must be proactively designed to be as efficient as technically possible.


And that is not easy or obvious.  Efficient design does not happen naturally.  It is hard work!  It requires knowledge of the Anatomy and Physiology of Systems and of the Pathology of Variation.  It requires understanding how to achieve effectiveness and efficiency at the same time as avoiding queues and chaos.  It requires that the whole system is continually and proactively re-designed to remain reliable and resilient.

And that implies it has to be done by the system itself; and that means the NHS needs embedded health care systems engineering know-how.

And when we go looking for that we discover a sequence of gaps.

An Awareness gap, a Belief gap and a Capability gap. ABC.

So the first gap to fill is the Awareness gap.

It had been some time since Bob and Leslie had chatted, so an email out of the blue was a welcome distraction from a complex data analysis task.

<Bob> Hi Leslie, great to hear from you. I was beginning to think you had lost interest in health care improvement-by-design.

<Leslie> Hi Bob, not at all.  Rather the opposite.  I’ve been very busy using everything that I’ve learned so far.  Its applications are endless, but I have hit a problem that I have been unable to solve, and it is driving me nuts!

<Bob> OK. That sounds encouraging and interesting.  Would you be able to outline this thorny problem and I will help if I can.

<Leslie> Thanks Bob.  It relates to a big issue that my organisation is stuck with – managing urgent admissions.  The problem is that very often there is no bed available, but there is no predictability to that.  It feels like a lottery; a quality and safety lottery.  The clinicians are clamouring for “more beds” but the commissioners are saying “there is no more money”.  So the focus has turned to reducing length of stay.

<Bob> OK.  A focus on length of stay sounds reasonable.  Reducing that can free up enough beds to provide the necessary space-capacity resilience to dramatically improve the service quality.  So long as you don’t then close all the “empty” beds to save money, or fall into the trap of believing that 85% average bed occupancy is the “optimum”.

<Leslie> Yes, I know.  We have explored all of these topics before.  That is not the problem.

<Bob> OK. What is the problem?

<Leslie> The problem is demonstrating objectively that the length-of-stay reduction experiments are having a beneficial impact.  The data seems to say they are, and the senior managers are trumpeting the success, but the people on the ground say they are not. We have hit a stalemate.


<Bob> Ah ha!  That old chestnut.  So, can I first ask what happens to the patients who cannot get a bed urgently?

<Leslie> Good question.  We have mapped and measured that.  What happens is the most urgent admission failures spill over to commercial service providers, who charge a fee-per-case and we have no choice but to pay it.  The Director of Finance is going mental!  The less urgent admission failures just wait in the queue-in-the-community until a bed becomes available.  They are the ones who are complaining the most, so the Director of Governance is also going mental.  The Director of Operations is caught in the cross-fire and the Chief Executive and Chair are doing their best to calm frayed tempers and to referee the increasingly toxic arguments.

<Bob> OK.  I can see why a “Reduce Length of Stay Initiative” would tick everyone’s Nice If box.  So, the data analysts are saying “the length of stay has come down since the Initiative was launched” but the teams on the ground are saying “it feels the same to us … the beds are still full and we still cannot admit patients“.

<Leslie> Yes, that is exactly it.  And everyone has come to the conclusion that demand must have increased so it is pointless to attempt to reduce length of stay because when we do that it just sucks in more work.  They are feeling increasingly helpless and hopeless.

<Bob> OK.  Well, the “chronic backlog of unmet need” issue is certainly possible, but your data will show if admissions have gone up.

<Leslie> I know, and as far as I can see they have not.

<Bob> OK.  So I’m guessing that the next explanation is that “the data is wonky“.

<Leslie> Yup.  Spot on.  So, to counter that the Information Department has embarked on a massive push on data collection and quality control and they are adamant that the data is complete and clean.

<Bob> OK.  So what is your diagnosis?

<Leslie> I don’t have one, that’s why I emailed you.  I’m stuck.


<Bob> OK.  We need a diagnosis, and that means we need to take a “history” and “examine” the process.  Can you give me an outline of the RLoS (Reduce Length of Stay) Initiative?

<Leslie> We knew that we would need a baseline to measure from so we got the historical admission and discharge data and plotted a Diagnostic Vitals Chart®.  I have learned something from my HCSE training!  Then we planned the implementation of a visual feedback tool that would show ward staff which patients were delayed so that they could focus on “unblocking” the bottlenecks.  We then planned to measure the impact of the intervention for three months, and then we planned to compare the average length of stay before and after the RLoS Intervention with a big enough data set to give us an accurate estimate of the averages.  The data showed a very obvious improvement, a highly statistically significant one.

<Bob> OK.  It sounds like you have avoided the usual trap of just relying on subjective feedback, and now have a different problem because your objective and subjective feedback are in disagreement.

<Leslie> Yes.  And I have to say, getting stuck like this has rather dented my confidence.

<Bob> Fear not, Leslie.  I said this is an “old chestnut” and I can say with 100% confidence that you already have what you need in your T4 kit bag.

<Leslie> Tee-Four?

<Bob> Sorry, a new abbreviation. It stands for “theory, techniques, tools and training“.

<Leslie> Phew!  That is very reassuring to hear, but it does not tell me what to do next.

<Bob> You are an engineer now Leslie, so you need to don the hard-hat of Improvement-by-Design.  Start with your Needs Analysis.


<Leslie> OK.  I need a trustworthy tool that will tell me if the planned intervention has had a significant impact on length of stay – for better, for worse, or not at all.  And I need it to tell me that quickly so I can decide what to do next.

<Bob> Good.  Now list all the things that you currently have that you feel you can trust.

<Leslie> I do actually trust that the Information team collect, store, verify and clean the raw data – they are really passionate about it.  And I do trust that the front line teams are giving accurate subjective feedback – I work with them and they are just as passionate.  And I do trust the systems engineering “T4” kit bag – it has proven itself again-and-again.

<Bob> Good, and I say that because you have everything you need to solve this, and it sounds like the data analysis part of the process is a good place to focus.

<Leslie> That was my conclusion too.  And I have looked at the process, and I can’t see a flaw. It is driving me nuts!

<Bob> OK.  Let us take a different tack.  Have you thought about designing the tool you need from scratch?

<Leslie> No. I’ve been using the ones I already have, and assume that I must be using them incorrectly, but I can’t see where I’m going wrong.

<Bob> Ah!  Then, I think it would be a good idea to run each of your tools through a verification test and check that they are fit-4-purpose in this specific context.

<Leslie> OK. That sounds like something I haven’t covered before.

<Bob> I know.  Designing verification test-rigs is part of the Level 2 training.  I think you have demonstrated that you are ready to take the next step up the HCSE learning curve.

<Leslie> Do you mean I can learn how to design and build my own tools?  Special tools for specific tasks?

<Bob> Yup.  All the techniques and tools that you are using now had to be specified, designed, built, verified, and validated. That is why you can trust them to be fit-4-purpose.

<Leslie> Wooohooo! I knew it was a good idea to give you a call.  Let’s get started.


[Postscript] And Leslie, together with the other stakeholders, went on to design the tool that they needed and to use the available data to dissolve the stalemate.  And once everyone was on the same page again they were able to work collaboratively to resolve the flow problems, and to improve the safety, flow, quality and affordability of their service.  Oh, and to know for sure that they had improved it.

The NHS appears to be descending into a frenzy of fear as the winter looms, and everyone says it will be worse than last year, and the one before that.

And with that we-are-going-to-fail mindset, it almost certainly will.

Athletes do not start a race believing that they are doomed to fail … they hold a belief that they can win the race and that they will learn and improve even if they do not. It is a win-win mindset.

But to succeed in sport requires more than just a positive attitude.

It also requires skills, training, practice and experience.

The same is true in healthcare improvement.


That is not the barrier though … the barrier is disbelief.

And that comes from not having experienced what it is like to take a system that is failing and transform it into one that is succeeding.

Logically, rationally, enjoyably and surprisingly quickly.

And, the widespread disbelief that it is possible is paradoxical because there are plenty of examples where others have done exactly that.

The disbelief seems to be “I do not believe that will work in my world and in my hands!”

And the only way to dismantle that barrier-of-disbelief is … by doing it.


How do we do that?

The emotionally safest way is in a context that is carefully designed to enable us to surface the unconscious assumptions that are the bricks in our individual Barriers of Disbelief.

And to discard the ones that do not pass a Reality Check, and keep the ones that are OK.

This Disbelief-Busting design has been proven to be effective, as evidenced by the growing number of individuals who are learning how to do it themselves, and how to inspire, teach and coach others to do so as well.


So, if you would like to flip disbelief-and-hopelessness into belief-and-hope … then the door is here.

The first step in a design conversation is to understand the needs of the customer.

It does not matter if you are designing a new kitchen, bathroom, garden, house, widget, process, or system.  It is called a “needs analysis”.

Notice that it is not called a “wants analysis”.  They are not the same thing because there is often a gap between what we want (and do not want) and what we need (and do not need).

The same is true when we are looking to use a design-based approach to improve something that we already have.


This is especially true when we are improving services, because the needs and wants of a service tend to drift and shift continuously, so we are in a continual state of improvement.

For design to work, the “customers” and the “suppliers” need to work collaboratively to ensure that they both get what they need.

Frustration and fragmentation are the symptoms of a combative approach where a “win” for one is a “lose” for the other (NB. In absolute terms both will end up worse off than they started so both lose in the long term.)


And there is a tried and tested process for collaborative improvement-by-design.

One version is called “experience based co-design” (EBCD) and it was cooked up in a health care context about 20 years ago and shown to work in a few small pilot studies.

The “experience” that triggered the projects was almost always a negative one, associated with feelings of frustration, anxiety and disappointment. So, the EBCD case studies were more focused on helping the protagonists to share their perspectives, in the belief that this would be enough to solve the problem.  And it is indeed a big step forwards.

It has a limitation though.  It assumes that the staff and patients know how to design processes so that they are fit-4-purpose, and the evidence to support that assumption is scanty.

In one pilot in mental health, the initial improvement (a fall in patient and carer complaints) was not sustained.  The reason given was that the staff who were involved in the pilot inevitably moved on, and as they did the old attitudes, beliefs and behaviours returned.


So, an improved version of EBCD is needed.  One that is based on hard evidence of what works and what does not.  One that is also focused on moving towards a future-purpose rather than just moving away from past-problems.

Let us call this improved version “Evidence-Based Co-Design“.

And we already know that by a different name:

Health Care Systems Engineering (HCSE).

OODA is something we all do thousands of times a day without noticing.

Observe – Orient – Decide – Act.

The term is attributed to Colonel John Boyd, a real world “Top Gun” who studied economics and engineering, then flew and designed fighter planes, then became a well-respected military strategist.

OODA is a continuous process of updating our mental model based on sensed evidence.

And it is a fast process because it happens largely out of awareness.

This was Boyd’s point: In military terms, protagonists who can make wiser and faster decisions are more likely to survive in combat.


And notice that it is not a simple linear sequence … it is a system … there are parallel paths and both feed-forward and feed-backward loops … there are multiple information flow paths.

And notice that the Implicit Guidance & Control links do not go through Decision – this means they operate out of awareness and are much faster.

And notice the Feed Forward links link the OODA steps – this is the conscious, sequential, future looking process that we know by another name:

Study-Adjust-Plan-Do.


We use the same process in medicine: first we study the patient and the problem they are presenting (history, examination, investigation), then we adjust our generic mental model of how the body works to the specific patient (diagnosis), then we plan and decide a course of action to achieve the intended outcome, and then we act, we do it (treatment).

But at any point we can jump back to an earlier step and we can jump forwards to a later one.  The observe, orient, decide, act modes are running in parallel.

And the more experience we have of similar problems the faster we can complete the OODA (or SAPD) work because we learn what is the most useful information to attend to, and we learn how to interpret it.

We learn the patterns and what to look for – and that speeds up the process – a lot!


This emergent learning is reinforced if the impact of our action matches our intent and prediction, and our conscious learning is then internalised as unconscious “rules of thumb” called heuristics.


We start by thinking our way consciously and slowly … and … we finish by feeling our way unconsciously and quickly.


Until … we  encounter a novel problem that does not fit any of our learned pattern matching neural templates. When that happens, our unconscious, parallel processing, pattern-matching system alerts us with a feeling of confusion and bewilderment – and we freeze (often with fright!)

Now we have a choice: We can retreat to using familiar, learned, reactive, knee-jerk patterns of behaviour (presumably in the hope that they will work) or we can switch into a conscious learning loop and start experimenting with novel ideas.

If we start at Hypothesis then we have the Plan-Do-Study-Act cycle: we generate novel hypotheses to explain the unexpected; we then plan experiments to test our hypotheses; we then study the outcome of the experiments; and then we act on our conclusions.

This mindful mode of thinking is well described in the book “Managing the Unexpected” by Weick and Sutcliffe and is the behaviour that underpins the success of HROs – High Reliability Organisations.

The image is of the latest (3rd edition) but the previous (2nd edition) is also worth reading.

So we have two interdependent problem solving modes – the parallel OODA system and the sequential SAPD process.

And we can switch between them depending on the context.


Which is an effective long-term survival strategy because the more we embrace the unexpected, the more opportunities we will have to switch into exploration mode and learn new patterns; and the more patterns we recognise the more efficient and effective our unconscious decision-making process will become.

This complex adaptive system behaviour has another name … Resilience.

“Those who cannot remember the past are condemned to repeat it”.

Aphorism by George Santayana, philosopher (1863-1952).

And the history of quality improvement (QI) is worth reflecting on, because there is massive pressure to grow QI capability in health care as a way of solving some chronic problems.

The chart below is a Google Ngram, it was generated using some phrases from the history of Quality Improvement:

TQM = the total quality management movement that grew from the work of Walter Shewhart in the 1920’s and 30’s and was “incubated” in Japan after being transplanted there by Shewhart’s student W. Edwards Deming in the 1950’s.
ISO 9001 = an international quality standard that developed from British Standards Institute (BSI) work in the 1970’s via the ISO 9000 series, first published in 1987 and substantially revised in 2000.
Six Sigma = a highly statistical quality improvement / variation reduction methodology that originated in the rapidly expanding semiconductor industry in the 1980’s.

The rise-and-fall pattern is characteristic of how innovations spread; there is a long lag phase, then a short accelerating growth phase, then a variable plateau phase and then a long, decelerating decline phase.

It is called a life-cycle. It is how complex adaptive systems behave. It is how innovations spread. It is expected.

So what happened?

Did the rise of TQM lead to the rise of ISO 9000 which triggered the development of the Six Sigma methodology?

It certainly looks that way.

So why is Six Sigma “dying”?  Or is it just being replaced by something else?


This is the corresponding Ngram for “Healthcare Quality Improvement” which seems to sit on the timeline in about the same place as ISO 9001 and that suggests that it was triggered by the TQM movement. 

The Institute of Healthcare Improvement (IHI) was officially founded in 1991 by Dr Don Berwick, some years after he attended one of the Deming 4-day workshops and had an “epiphany”.

Don describes his personal experience in a recent plenary lecture (from time 01:07).  The whole lecture is worth watching because it describes the core concepts and principles that underpin QI.


So, given the fact that safety and quality are still very big issues in health care – why does the Ngram above suggest that use of the term Quality Improvement has not been sustained?

Will that happen in healthcare too?

Could it be that there is more to improvement than just a focus on safety (reducing avoidable harm) and quality (improving patient experience)?

Could it be that flow and productivity are also important?

The growing angst that permeates the NHS appears to be more focused on budgets and waiting-time targets (4 hrs in A&E, 63 days for cancer, 18 weeks for scheduled care, etc.).

Mortality and Quality hardly get a mention any more, and the nationally failed waiting time targets are being quietly dropped.

Is it too politically embarrassing?

Has the NHS given up because it firmly believes that pumping in even more money is the only solution, and there isn’t any more in the tax pot?


This week another small band of brave innovators experienced, first-hand, the application of health care systems engineering (HCSE) to a very common safety, flow, quality and productivity problem …

… a chronically chaotic clinic characterized by queues and constant calls for more capacity and cash.

They discovered that the queues, delays and chaos (i.e. a low quality experience) were not caused by lack of resources; they were caused by flow design.  They were iatrogenic.  And when they applied the well-known concepts and principles of scheduling design, they saw the queues and chaos evaporate, and they measured a productivity increase of over 60%.

OMG!

Improvement science is more than just about safety and quality, it is about flow and productivity as well; because we all need all four to improve at the same time.

And yes, we need all the elements of Deming’s System of Profound Knowledge (SoPK), but we need more than that.  We need to harness the knowledge of the engineers who for centuries have designed and built buildings, bridges, canals, steam engines, factories, generators, telephones, automobiles, aeroplanes, computers, rockets, satellites, space-ships and so on.

We need to revisit the legacy of the engineers like Watt, Brunel, Taylor, Gantt, Erlang, Ford, Forrester and many, many others.

Because it does appear to be possible to improve-by-design as well as to improve-by-desire.

Here is the Ngram with “Systems Engineering” (SE) added and the time line extended back to 1955.  Note the rise of SE in the 1950’s and 1960’s and note that it has sustained.

That pattern of adoption only happens when something is proven to be fit-4-purpose, and is valued and is respected and is promoted and is taught.

What opportunity does systems engineering offer health care?

That question is being actively explored … here.

This week a ground-breaking case study was published.

It describes how a team in South Wales discovered how to make the flows visible in a critical part of their cancer pathway.

Radiology.

And they did that by unintentionally falling into a trap!  A trap that many who set out to improve health care services fall into.  But they did not give up.  They sought guidance and learned some profound lessons.

Part 1 of their story is shared here.


One lesson they learned is that, as they take on more complex improvement challenges, they need to be equipped with the right tools, and they need to be trained to use them, and they need to have practiced using them.

Another lesson they learned is that making the flows in a system visible is necessary before the current behaviour of the system can be understood.

And they learned that they needed a clear diagnosis of how the current system is not performing; before they can attempt to design an intervention to deliver the intended improvement.

They learned how the Study-Adjust-Plan-Do cycle works, and they learned the reason it starts with “Study”, and not with “Plan”.


They tried, failed, took one step back, asked, listened and learned.


Then, with their new knowledge, more advanced tools, and deeper understanding, they took two steps forward: diagnosed the problem, designed an intervention, and delivered a significant improvement.

And visualised just how significant.

Then they shared Part 2 of their story … here.


Beliefs drive behaviour. Behaviour drives change. Improvement requires change.

So, improvement requires challenging beliefs; confirming some and disproving others.

And beliefs can only be confirmed or disproved rationally – with evidence and explanation. Rhetoric is too slippery. We can convince ourselves of anything with that!

So it comes as an emotional shock when one of our beliefs is disproved by experiencing reality from a new perspective.

Our natural reaction is surprise, perhaps delight, and then defence. We say “Yes, but …”.

And that is healthy skepticism and it is a valuable and necessary part of the change and improvement process.

If there are not enough healthy skeptics on a design team it is unbalanced.

If there are too many healthy skeptics on a design team it is unbalanced.


This week I experienced this phenomenon first hand.

The context was a one day practical skills workshop and the topic was:

“How to improve the safety, timeliness, quality and affordability of unscheduled care”.

The workshop is designed to approach this challenge from a different perspective.

Instead of asking “What is the problem and how do we solve it?” we took the system engineering approach of asking “What is the purpose and how can we achieve it?”

We used a range of practical exercises to illustrate some core concepts and principles – reality was our teacher. Then we applied those newly acquired insights to the design challenge using a proven methodology that ensured we did not skip steps.


And the outcome was: the participants discovered that …

it is indeed possible to improve the safety, timeliness, quality and affordability of unscheduled health care …

using health care systems engineering concepts, principles, techniques and tools that, until the workshop, they had been unaware even existed.


Their reaction was “OMG” and was shortly followed by “Yes, but …” which is to be expected and is healthy.

The rest of the “Yes, but …” sentence was “… how will I convince my colleagues?”

One way is for them to seek out the same experience …

… because reality is a much better teacher than rhetoric.

HCSE Practical Skills One Day Workshops


The Elephant in the Room is an English-language metaphorical idiom for an obvious problem or risk no one wants to discuss.

An undiscussable topic.

And the undiscussability is also undiscussable.

So the problem or risk persists.

And people come to harm as a result.

Which is not the intended outcome.

So why do we behave this way?

Perhaps it is because the problem looks too big and too complicated to solve in one intuitive leap, and we give up and label it a “wicked problem”.


The well known quote “When eating an elephant take one bite at a time” is attributed to Creighton Abrams, a US Chief of Staff.


It says that even seemingly “impossible” problems can be solved so long as we proceed slowly and carefully, in small steps, learning as we go.

And the continued decline of the NHS UK Unscheduled Care performance seems to be an Elephant-in-the-Room problem, as shown by the monthly A&E 4-hour performance over the last 10 years and the fact that this chart is not published by the NHS.

Red = England, Brown=Wales, Grey=N.Ireland, Purple=Scotland.


This week I experienced a bite of this Elephant being taken and chewed on.

The context was a Flow Design – Practical Skills – One Day Workshop and the design challenge posed to the eager delegates was to improve the quality and efficiency of a one stop clinic.

A seemingly impossible task because the delegates reported that the queues, delays and chaos that they experienced in the simulated clinic felt very realistic.

Which means that this experience is accepted as inevitable: impossible to improve without more resources; and since financial cuts prevent that, we have to accept the waits.


At the end of the day their belief had been shattered.

The queues, delays and chaos had evaporated and the cost to run the new one stop clinic design was actually less than the old one.

And when we combined the quality metrics with the cost metrics and calculated the measured improvement in productivity; the answer was over 70%!

The delegates experienced it all first-hand. They did the diagnosis, design, and delivery using no more than squared-paper and squeaky-pen.

And at the end they were looking at a glaring mismatch between their rhetoric and the reality.

The “impossible to improve without more money” hypothesis lay in tatters – it had been rationally, empirically and scientifically disproved.

I’d call that quite a big bite out of the Elephant-in-the-Room.


So if you have a healthy appetite for Elephant-in-the-Room challenges, and are not afraid to try something different, then there is a whole menu of nutritious food-for-thought at a FISH&CHIPs® practical skills workshop.

“There are known knowns … there are known unknowns … but there are also unknown unknowns” is the now-infamous statement that Donald Rumsfeld made at a Pentagon Press Conference, and it triggered some good-natured jesting from the assembled journalists.

But there is a problem with it.

There is a fourth combination that he does not mention: the Unknown-Knowns.

Which is a shame because they are actually the most important because they cause the most problems.  Avoidable problems.


Suppose there is a piece of knowledge that someone knows but that someone else does not; then we have an unknown-known.

None of us know everything and we do not need to, because knowledge that is of no value to us is irrelevant for us.

But what happens when the unknown-known is of value to us; and, more than that, when it would be reasonable for someone else to expect us to know it, because it is our job to know?


A surgeon would not be expected to know a lot about astronomy, but they would be expected to know a lot about anatomy.


So, what happens if we become aware that we are missing an important piece of knowledge that is actually already known?  What is our normal human reaction to that discovery?

Typically, our first reaction is fear-driven and we express defensive behaviour.  This is because we fear the potential loss-of-face from being exposed as inept.

From this sudden shock we then enter a characteristic emotional pattern which is called the Nerve Curve.

After the shock of discovery we quickly flip into denial and, if that does not work then to anger (i.e. blame).  We ignore the message and if that does not work we shoot the messenger.


And when in this emotionally charged state, our rationality tends to take a back seat.  So, if we want to benefit from the discovery of an unknown-known, then we have to learn to bite-our-lip, wait, let the red mist dissipate, and then re-examine the available evidence with a cool, curious, open mind.  A state of mind that is receptive and open to learning.


Recently, I was reminded of this.


The context is health care improvement, and I was using a systems engineering framework to conduct some diagnostic data analysis.

My first task was to run a data-completeness-verification-test … and the data I had been sent did not pass the test.  Some of it was missing.  It was an error of omission (EOO), and those are the hardest ones to spot.  Hence the need for the verification test.

The cause of the EOO was an unknown-known in the department that holds the keys to the data warehouse.  And I have come across this EOO before, so I was not surprised.


I was not annoyed either.  I just fed back the results of the test, explained what the issue was, explained the cause, and they listened and learned.
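To make the test itself concrete, here is a hedged sketch of one kind of data-completeness-verification-test.  All the file and column names are hypothetical; the principle is to rebuild the midnight bed-state from the raw admission and discharge events and reconcile it against the occupancy the warehouse reports, because any mismatch exposes records that the extract silently omitted (for example, patients still in a bed at the end of the extract window).

# A minimal completeness check: rebuild midnight occupancy from the events and
# compare it with what the warehouse reports. (Hypothetical file/column names.)

import pandas as pd

events = pd.read_csv("admissions_extract.csv",
                     parse_dates=["admitted", "discharged"])

reported = pd.read_csv("reported_occupancy.csv",
                       parse_dates=["date"], index_col="date")["occupied_beds"]

# A patient occupies a bed at midnight on day d if admitted on or before d and
# not discharged until after d (NaT = still in a bed, and must not be dropped).
rebuilt = pd.Series(
    {d: int(((events["admitted"] <= d) &
             (events["discharged"].isna() | (events["discharged"] > d))).sum())
     for d in reported.index})

mismatch = rebuilt - reported
print(mismatch[mismatch != 0])   # any non-zero day means the extract is incomplete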


The implication of this specific EOO is quite profound though because it appears to be ubiquitous across the NHS.

To be specific it relates to the precise details of how raw data on demand, activity, length of stay and bed occupancy is extracted from the NHS data warehouses.

So it is rather relevant to just about everything the NHS does!

And the error-of-omission leads to confusion at best; and at worst … to the following sequence … incomplete data =>  invalid analysis => incorrect conclusion => poor decision => counter-productive action => unintended outcome.

Does that sound at all familiar?


So, if you would like to learn about this valuable unknown-known, then I recommend the narrative by Dr Kate Silvester, an internationally recognised expert in healthcare improvement.  In it, Kate re-tells the story of her emotional roller-coaster ride when she discovered she was making the same error.


Here is the link to the full abstract and where you can download and read the full text of Kate’s excellent essay, and help to make it a known-known.

That is what system-wide improvement requires – sharing the knowledge.

Have you heard the phrase “you either love it or you hate it”?  It is called the Marmite Effect.

Improvement science has a Marmite-like effect on some people; or, more specifically, the theory part does.

Both evidence and experience show that most people prefer to learn-by-doing first; and then consolidate their learning with the minimum, necessary amount of supporting theory.

But that is not how we usually share what we know with others.  We usually attempt to teach the theory first, perhaps in the belief that it will speed up the process of learning.

Sadly, it usually has the opposite effect. Too much theory too soon often creates a barrier to engagement. It actually slows learning down! Which was not the impact we were intending.


The implication of this is that teachers of the science of improvement need to provide a range of different ways to engage with the subject.  Complementary ways.  And leave the choice of which suits whom … to the learner.

And the way to tell if it is working is … the sound of laughter.

Why is that?


Laughing is a complex behaviour that leaves us feeling happier. Which is good.

Comedians make a living from being able to trigger this behaviour in their audiences, and we will gladly part with hard cash when we know something will make us feel better.

And laughing is one of the healthiest ways to feel better!

So why do we laugh when we are learning?

It is believed that one trigger for the laughter reaction is the sudden shift from one perspective to another.  More specifically, a mental shift that relieves a growing emotional tension.  The punch line of a really good joke for example.

And later-in-life learning is often more a process of unlearning.

When we challenge a learned assumption with evidence, and we disprove it … we are unlearning.  And doing that generates emotional tension. We are often very attached to our unconscious assumptions and will usually resist them being challenged.

The way to unlearn effectively is to use the evidence of our own eyes to raise doubts about our unconscious assumptions.  We need to actively generate a bit of confusion.

Then, we resolve the apparent paradox by creatively shifting perspective, often with a real example, a practical explanation or a hands-on demonstration.

And when we experience the “Ah ha! Now I see!” reaction, and we emerge from the fog of confusion, we will relieve the emotional tension and our involuntary reaction is to laugh.

But if our teacher unintentionally triggers a Marmite effect; a “Yeuk, I am NOT enjoying this!” feeling, then we need to respect that, and step back, and adopt a different tack.


Over the last few months I have been experimenting with different approaches to introducing the principles of improvement-by-design.

And the results are clear.

A minority prefer to start with the abstract theory, and then apply it in practice.

The majority have various degrees of Marmite reaction to the theory, and some are so put off that they actively disengage.  But when they have an opportunity to see the same principles demonstrated in a concrete, practical way; they learn and laugh.

Unlearning-by-doing seems to work better for the majority.

So, if you want to have fun and learn how to deliver significant and sustained improvements … then the evidence points to this as the starting point …

… the Flow Design Practical Skills One Day Workshop.

And if you also want to dip into a bit of the tried-and-tested theory that underpins improvement-by-design then you can do that as well, either before or later (when it becomes necessary), or both.


So, to have lots of fun and learn some valuable improvement-by-design practical skills at the same time …  click here.

This week about thirty managers and clinicians in South Wales conducted two experiments to test the design of the Flow Design Practical Skills One Day Workshop.

Their collective challenge was to diagnose and treat a “chronically sick” clinic and the majority had no prior exposure to health care systems engineering (HCSE) theory, techniques, tools or training.

Two of the group, Chris and Jat, had been delegates at a previous ODWS, and had then completed their Level-1 HCSE training and real-world projects.

They had seen it and done it, so this experiment was to test if they could now teach it.

Could they replicate the “OMG effect” that they had experienced and that fired up their passion for learning and using the science of improvement?


In medical training we have to learn about lots of things. That is one reason why it takes a long time to train a competent and confident clinician.

First, we learn the anatomy (structure) and the physiology (function) of the normal, healthy human.

Then we learn about how this amazingly complicated system can go wrong.  We learn about pathology.  And we do that so that we understand the relationship between the cause (disease) and the effect (symptoms and signs).

Then we learn about diagnostics – which is how to work backwards from the effects to the most likely cause(s).

And only then can we learn about therapeutics – the design and delivery of a treatment plan that we are confident will relieve the symptoms by curing the disease.

And we learn about prevention – how to avoid some illnesses (and delay others) by addressing the root causes earlier.  Much of the increase in life expectancy over the last 200 years has come from prevention, not from cure.


The NHS is an amazingly complicated system, and it too can go wrong.  It can exhibit a wide spectrum of symptoms and signs; medical errors, long delays, unhappy patients, burned-out staff, and overspent budgets.

But, there is no equivalent training in how to diagnose and treat a sick health care system.  And this is not acceptable, especially given that the knowledge of how to do this is already available.

It is called complex adaptive systems engineering (CASE).


Before the Renaissance, the understanding of how the body works was primitive and it was believed that illness was “God’s Will” so we had to just grin and bear it (and pray).

The Scientific Revolution brought us new insights, profound theories, innovative techniques and capability-extending tools.  And the impact has been dramatic.  Those who do have access to this knowledge live better and longer than ever.  Those who do not … do not.

Our current understanding of how health care systems work is, to be blunt, medieval.  The current approaches amount to little more than rune reading, incantations and the prescription of purgatives and leeches.  And the impact is about as effective.

So we need to study the anatomy, physiology, pathology, diagnostics and therapeutics of complex adaptive systems like healthcare.  And most of all we need to understand how to prevent catastrophes happening in the first place.  We need the NHS to be immortal.


And this week a prototype complex adaptive pathology training system was tested … and it employed cutting-edge 21st Century technology: Pasta Twizzles.

The specific topic under scrutiny was variation.  A brain-bending concept that is usually relegated to the mystical smoke-and-mirrors world called “Sadistics”.

But no longer!

The Mists-of-Jargon and Fog-of-Formulae were blown away as we switched on the Fan-of-Facilitation and the Light-of-Simulation and went exploring.

Empirically. Pragmatically.


And what we discovered was jaw-dropping.

A disease called the “Flaw of Averages” and its malignant manifestation “Carveoutosis“.


And with our new knowledge we opened the door to a previously hidden world of opportunity and improvement.

Then we activated the Laser-of-Insight and evaporated the queues and chaos that, before our new understanding, we had accepted as inevitable and beyond our understanding or control.

They were neither. And never had been. We were deluding ourselves.

Welcome to the Resilient Design – Practical Skills – One Day Workshop.

Validation Test: Passed.

A story was shared this week.

A story of hope for the hard-pressed NHS, its patients, its staff and its managers and its leaders.

A story that says “We can learn how to fix the NHS ourselves“.

And the story comes with evidence; hard, objective, scientific, statistically significant evidence.


The story starts almost exactly three years ago when a Clinical Commissioning Group (CCG) in England made a bold strategic decision to invest in improvement, or as they termed it “Achieving Clinical Excellence” (ACE).

They invited proposals from their local practices with the “carrot” of enough funding to allow GPs to carve-out protected time to do the work.  And a handful of proposals were selected and financially supported.

This is the story of one of those proposals which came from three practices in Sutton who chose to work together on a common problem – unplanned hospital admissions among their over-70s.

Their objective was clear and measurable: “To reduce the cost of unplanned admissions in the 70+ age group by working with the hospital to reduce length of stay.“

Did they achieve their objective?

Yes, they did.  But there is more to this story than that.  Much more.


One innovative step they took was to invest in learning how to diagnose why the current ‘system’ was costing what it was; then learning how to design an improvement; and then learning how to deliver that improvement.

They invested in developing their own improvement science skills first.

They did not assume they already knew how to do this and they engaged an experienced health care systems engineer (HCSE) to show them how to do it (i.e. not to do it for them).

Another innovative step was to create a blog to make it easier to share what they were learning with their colleagues; and to invite feedback and suggestions; and to provide a journal that captured the story as it unfolded.

And they measured stuff before they made any changes, and again afterwards, so that they could quantify the impact and assess the evidence scientifically.

And that was actually quite easy because the CCG was already measuring what they needed to know: admissions, length of stay, cost, and outcomes.

All they needed to learn was how to present and interpret that data in a meaningful way.  And as part of their IS training,  they learned how to use system behaviour charts, or SBCs.
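
For the curious, here is a minimal sketch of how the natural process limits of one common form of SBC, the XmR chart, can be computed.  The monthly counts below are made up for illustration; they are not the Sutton data.

```python
# A minimal sketch of an XmR-style system behaviour chart (SBC).
# The monthly counts are hypothetical, purely for illustration.
monthly_admissions = [112, 98, 105, 121, 99, 110, 103, 117, 95, 108]

mean = sum(monthly_admissions) / len(monthly_admissions)

# Moving ranges: absolute differences between successive points.
moving_ranges = [abs(b - a) for a, b in
                 zip(monthly_admissions, monthly_admissions[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# The conventional XmR constant 2.66 converts the average moving range
# into upper and lower natural process limits.
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr
print(f"mean={mean:.1f}, limits=({lower:.1f}, {upper:.1f})")

# Points outside the limits are 'signals' of a change in the system;
# points inside the limits are indistinguishable from noise.
signals = [x for x in monthly_admissions if not lower <= x <= upper]
print("signals:", signals)
```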


By Jan 2015 they had learned enough of the HCSE techniques and tools to establish the diagnosis and start making changes to the parts of the system that they could influence.


Two years later they subjected their before-and-after data to robust statistical analysis and they had a surprise. A big one!

Reducing hospital mortality was not a stated objective of their ACE project, and they only checked the mortality data to be sure that it had not changed.

But it had, and the “p=0.014” in their analysis means that the probability that this 20.0% reduction in hospital mortality was due to random chance is just 1.4%.  [This is well below the 5% threshold that we usually accept as “statistically significant” in a clinical trial.]

But …

This was not a randomised controlled trial.  This was an intervention in a complicated, ever-changing system; so they needed to check that the hospital mortality for comparable patients who were not their patients had not changed as well.

And the statistical analysis of the hospital mortality for the ‘other’ practices for the same patient group, and the same period of time confirmed that there had been no statistically significant change in their hospital mortality.
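
As an aside, for readers who want to peek at the sort of arithmetic that sits behind a statement like “p=0.014”, here is a sketch of a two-proportion z-test.  The counts are hypothetical and purely illustrative; the project’s actual data are in the full story linked at the end.

```python
# Illustrative sketch of a before-and-after check of a proportion,
# such as hospital mortality. The counts below are hypothetical.
from math import sqrt, erf

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-sided z-test comparing two proportions."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. 100 deaths in 2000 admissions before vs 80 in 2000 after
# (made-up numbers, just to show the shape of the calculation).
z, p = two_proportion_z(100, 2000, 80, 2000)
print(f"z={z:.2f}, p={p:.3f}")
```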

So, it appears that what the Sutton ACE Team did to reduce length of stay (and cost) had also, unintentionally, reduced hospital mortality. A lot!


And this unexpected outcome raises a whole raft of questions …


If you would like to read their full story then you can do so … here.

It is a story of hunger for improvement, of humility to learn, of hard work and of hope for the future.

Sometimes change is dramatic. A big improvement appears very quickly. And when that happens we are caught by surprise (and delight).

Our emotional reaction is much faster than our logical response. “Wow! That’s a miracle!


Our logical Tortoise eventually catches up with our emotional Hare and says “Hare, we both know that there is no such thing as miracles and magic. There must be a rational explanation. What is it?

And Hare replies “I have no idea, Tortoise.  If I did then it would not have been such a delightful surprise. You are such a kill-joy! Can’t you just relish the relief without analyzing the life out of it?

Tortoise feels hurt. “But I just want to understand so that I can explain to others. So that they can do it and get the same improvement.  Not everyone has a ‘nothing-ventured-nothing-gained’ attitude like you! Most of us are too fearful of failing to risk trusting the wild claims of improvement evangelists. We have had our fingers burned too often.


The apparent miracle is real and recent … here is a snippet of the feedback:

Notice carefully the last sentence. It took a year of discussion to get an “OK” and a month of planning to prepare the “GO”.

That is not a miracle and some magic … that took a lot of hard work!

The evangelist is the customer. The supplier is an engineer.


The context is the chronic niggle of patients trying to get an appointment with their GP, and the chronic niggle of GPs feeling overwhelmed with work.

Here is the back story …

In the opening weeks of the 21st Century, the National Primary Care Development Team (NPDT) was formed.  Primary care was a high priority and the government had allocated £168m of investment in the NHS Plan, £48m of which was earmarked to improve GP access.

The approach the NPDT chose was:

harvest best practice +
use a panel of experts +
disseminate best practice.

Dr (later Sir) John Oldham was the innovator and figure-head.  The best practice was copied from Dr Mark Murray from Kaiser Permanente in the USA – the Advanced Access model.  The dissemination method was copied from Dr Don Berwick’s Institute for Healthcare Improvement (IHI) in Boston – the Collaborative Model.

The principle of Advanced Access is “today’s-work-today” which means that all the requests for a GP appointment are handled the same day.  And the proponents of the model outlined the key elements to achieving this:

1. Measure daily demand.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Simple booking rule: “phone today for a decision today”.

But that is not what was rolled out. The design was modified somewhere between aspiration and implementation and in two important ways.

First, by adding a policy of “Phone at 08:00 for an appointment”, and second by adding a policy of “carving out” appointment slots into labelled pots such as ‘Dr X’ or ‘see in 2 weeks’ or ‘annual reviews’.

Subsequent studies suggest that the tweaking happened at the GP practice level and was driven by the fear that, by reducing the waiting time, they would attract more work.

In other words: an assumption that demand for health care is supply-led, and without some form of access barrier, the system would be overwhelmed and never be able to cope.


The result of this well-intended tampering with the Advanced Access design was to invalidate it. Oops!

To a systems engineer, this meddling was predictably counter-productive.

The “today’s work today” specification is called a demand-led design and, if implemented competently, will lead to shorter waits for everyone, no need for urgent/routine prioritization and slot carve-out, and a simpler, safer, calmer, more efficient, higher quality, more productive system.

In this context it does not mean “see every patient today” it means “assess and decide a plan for every patient today”.

In reality, the actual demand for GP appointments is not known at the start; which is why the first step is to implement continuous measurement of the daily number and category of requests for appointments.

The second step is to feed back this daily demand information in a visual format called a time-series chart.

The third step is to use this visual tool for planning future flow-capacity, and for monitoring for ‘signals’, such as spikes, shifts, cycles and slopes.
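
As an illustration, here is a minimal sketch of what these three steps could look like in code.  The request log and its field names are hypothetical.

```python
# A minimal sketch of steps one to three, assuming a simple log of
# appointment requests with one record per request (hypothetical fields).
from collections import Counter
from datetime import date

requests = [  # one entry per request: (date, category)
    (date(2018, 1, 1), "same-day"), (date(2018, 1, 1), "routine"),
    (date(2018, 1, 2), "same-day"),  # ... and so on, one per request
]

# Step 1: measure daily demand by number (and, in practice, category).
daily_counts = Counter(day for day, _ in requests)

# Step 2: feed it back as a time series (here just printed in date
# order; in practice this would be plotted as a chart on the wall).
for day in sorted(daily_counts):
    print(day, "#" * daily_counts[day])

# Step 3: use the time series for flow-capacity planning, e.g. compare
# each day's demand with the recent average to spot spikes and shifts.
mean_demand = sum(daily_counts.values()) / len(daily_counts)
print(f"average daily demand = {mean_demand:.1f}")
```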

This measurement-and-feedback loop was not part of the modified design, so the reasonable fear expressed by GPs was (and still is) that by attempting to do today’s-work-today they would unleash a deluge of unmet need … and be swamped.

So a flood-defence barrier was bolted on: the policy of “phone at 08:00 for an appointment today“, and then the policy of channelling the overspill into pots of “embargoed slots“.

The combined effect of this error of omission (omitting the measured demand visual feedback loop) and these errors of commission (the 08:00 policy and appointment slot carve-out policy) effectively prevented the benefits of the Advanced Access design being achieved.  It was a predictable failure.

But no one seemed to realize that at the time.  Perhaps because of the political haste that was driving the process, and perhaps because there were no systems engineers on the panel-of-experts to point out the risks of diluting the design.

It is also interesting to note that the strategic aim of the NPDT was to develop a self-sustaining culture of quality improvement (QI) in primary care. That does not seem to have happened either.


The roll-out of Advanced Access was not the success that had been hoped for. That is the conclusion of the 300+ page research report published in 2007.


The “Miracle on Tavanagh Avenue” that was experienced this week by both patients and staff was the expected effect of this tampering finally being corrected; and the true potential of the original demand-led design being released – for all to experience.

Remember the essential ingredients?

1. Measure daily demand and feed it back as a visual time-series chart.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Use a simple booking rule: “phone anytime for a decision today”.

But there is also an extra design ingredient that has been added in this case, one that was not part of the original Advanced Access specification, one that frees up GP time to provide the required “resilience” to sustain a same-day service.

And that “secret” ingredient is why the new design worked so quickly and feels like a miracle – safe, calm, enjoyable and productive.

This is health care systems engineering (HCSE) in action.


So congratulations to Harry Longman, the whole team at GP Access, and to Dr Philip Lusty and the team at Riverside Practice, Tavanagh Avenue, Portadown, NI.

You have demonstrated what was always possible.

The fear of failure prevented it before, just as it prevented you doing this until you were so desperate you had no other choices.

To read the fuller story click here.

PS. Keep a close eye on the demand time-series chart and if it starts to rise then investigate the root cause … immediately.


Phil and Pete are having a coffee and a chat.  They both work in the NHS and have been friends for years.

They have different jobs. Phil is a commissioner and an accountant by training, Pete is a consultant and a doctor by training.

They are discussing a challenge that affects them both on a daily basis: unscheduled care.

Both Phil and Pete want to see significant and sustained improvements and how to achieve them is often the focus of their coffee chats.


<Phil> We are agreed that we both want improvement, both from my perspective as a commissioner and from your perspective as a clinician. And we agree that we want to see improvements in patient safety, waiting, outcomes, experience for both patients and staff, and use of our limited NHS resources.

<Pete> Yes. Our common purpose, the “what” and “why”, has never been an issue.  Where we seem to get stuck is the “how”.  We have both tried many things but, despite our good intentions, it feels like things are getting worse!

<Phil> I agree. It may be that what we have implemented has had a positive impact and we would have been even worse off if we had done nothing. But I do not know. We clearly have much to learn and, while I believe we are making progress, we do not appear to be learning fast enough.  And I think this knowledge gap exposes another “how” issue: After we have intervened, how do we know that we have (a) improved, (b) not changed or (c) worsened?

<Pete> That is a very good question.  And all that I have to offer as an answer is to share what we do in medicine when we ask a similar question: “How do I know that treatment A is better than treatment B?”  It is the essence of medical research; the quest to find better treatments that deliver better outcomes and at lower cost.  The similarities are strong.

<Phil> OK. How do you do that? How do you know that “Treatment A is better than Treatment B” in a way that anyone will trust the answer?

<Pete> We use a science that is actually very recent on the scientific timeline; it was only firmly established in the first half of the 20th century. One reason for that is that it is a rather counter-intuitive science, so it requires tools that have been designed and demonstrated to work, but whose inner workings most of us do not really understand. They are a bit like magic black boxes.

<Phil> H’mm. Please forgive me for sounding skeptical but that sounds like a big opportunity for making mistakes! If there are lots of these “magic black box” tools then how do you decide which one to use and how do you know you have used it correctly?

<Pete> Those are good questions! Very often we don’t know and in our collective confusion we generate a lot of unproductive discussion.  This is why we are often forced to accept the advice of experts but, I confess, very often we don’t understand what they are saying either! They seem like the medieval Magi.

<Phil> H’mm. So these experts are like ‘magicians’ – they claim to understand the inner workings of the black magic boxes but are unable, or unwilling, to explain in a language that a ‘muggle’ would understand?

<Pete> Very well put. That is just how it feels.

<Phil> So can you explain what you do understand about this magical process? That would be a start.


<Pete> OK, I will do my best.  The first thing we learn in medical research is that we need to be clear about what it is we are looking to improve, and we need to be able to measure it objectively and accurately.

<Phil> That  makes sense. Let us say we want to improve the patient’s subjective quality of the A&E experience and objectively we want to reduce the time they spend in A&E. We measure how long they wait. 

<Pete> The next thing is that we need to decide how much improvement we need. What would be worthwhile? So in the example you have offered we know that reducing the average time patients spend in A&E by just 30 minutes would have a significant effect on the quality of the patient and staff experience, and as a by-product it would also dramatically improve the 4-hour target performance.

<Phil> OK.  From the commissioning perspective there are lots of things we can do, such as commissioning alternative paths for specific groups of patients; in effect diverting some of the unscheduled demand away from A&E to a more appropriate service provider.  But these are the sorts of thing we have been experimenting with for years, and it brings us back to the question: How do we know that any change we implement has had the impact we intended? The system seems, well, complicated.

<Pete> In medical research we are very aware that the system we are changing is very complicated and that we do not have the power of omniscience.  We cannot know everything.  Realistically, all we can do is to focus on objective outcomes, collect small samples of the data ocean, and use those in an attempt to draw conclusions we can trust. We have to design our experiment with care!

<Phil> That makes sense. Surely we just need to measure the stuff that will tell us if our impact matches our intent. That sounds easy enough. What’s the problem?

<Pete> The problem we encounter is that when we measure “stuff” we observe patient-to-patient variation, and that is before we have made any changes.  Any impact that we may have is obscured by this “noise”.

<Phil> Ah, I see.  So if our intervention generates a small impact then it will be more difficult to see amidst this background noise. Like trying to see fine detail in a fuzzy picture.

<Pete> Yes, exactly like that.  And it raises the issue of “errors”.  In medical research we talk about two different types of error; we make the first type of error when our actual impact is zero but we conclude from our data that we have made a difference; and we make the second type of error when we have made an impact but we conclude from our data that we have not.

<Phil> OK. So does that imply that the more “noise” we observe in our measure-for-improvement before we make the change, the more likely we are to make one or other error?

<Pete> Precisely! So before we do the experiment we need to design it so that we reduce the probability of making both of these errors to an acceptably low level.  So that we can be assured that any conclusion we draw can be trusted.

<Phil> OK. So how exactly do you do that?

<Pete> We know that whenever there is “noise” and whenever we use samples then there will always be some risk of making one or other of the two types of error.  So we need to set a threshold for both. We have to state clearly how much confidence we need in our conclusion. For example, we often use the convention that we are willing to accept a 1 in 20 chance of making the Type I error.

<Phil> Let me check if I have heard you correctly. Suppose that, in reality, our change has no impact and we have set the risk threshold for a Type 1 error at 1 in 20, and suppose we repeat the same experiment 100 times – are you saying that we should expect about five of our experiments to show data that says our change has had the intended impact when in reality it has not?

<Pete> Yes. That is exactly it.

<Phil> OK.  But in practice we cannot repeat the experiment 100 times, so we just have to accept the 1 in 20 chance that we will make a Type 1 error, and we won’t know we have made it if we do. That feels a bit chancy. So why don’t we just set the threshold to 1 in 100 or 1 in 1000?

<Pete> We could, but doing that has a consequence.  If we reduce the risk of making a Type I error by setting our threshold lower, then we will increase the risk of making a Type II error.

<Phil> Ah! I see. The old swings-and-roundabouts problem. By the way, do these two errors have different names that would make it  easier to remember and to explain?

<Pete> Yes. The Type I error is called a False Positive. It is like concluding that a patient has a specific diagnosis when in reality they do not.

<Phil> And the Type II error is called a False Negative?

<Pete> Yes.  And we want to avoid both of them, and to do that we have to specify a separate risk threshold for each error.  The convention is to call the threshold for the false positive the alpha level, and the threshold for the false negative the beta level.

<Phil> OK. So now we have three things we need to be clear on before we can do our experiment: the size of the change that we need, the risk of the false positive that we are willing to accept, and the risk of a false negative that we are willing to accept.  Is that all we need?

<Pete> In medical research we learn that we need six pieces of the experimental design jigsaw before we can proceed. We only have three pieces so far.

<Phil> What are the other three pieces then?

<Pete> We need to know the average value of the metric we are intending to improve, because that is our baseline from which improvement is measured.  Improvements are often framed as a percentage improvement over the baseline.  And we need to know the spread of the data around that average, the “noise” that we referred to earlier.

<Phil> Ah, yes!  I forgot about the noise.  But that is only five pieces of the jigsaw. What is the last piece?

<Pete> The size of the sample.

<Phil> Eh?  Can’t we just go with whatever data we can realistically get?

<Pete> Sadly, no.  The size of the sample is how we control the risk of a false negative error.  The more data we have the lower the risk. This is referred to as the power of the experimental design.

<Phil> OK. That feels familiar. I know that the more experience I have of something the better my judgement gets. Is this the same thing?

<Pete> Yes. Exactly the same thing.

<Phil> OK. So let me see if I have got this. To know if the impact of the intervention matches our intention we need to design our experiment carefully. We need all six pieces of the experimental design jigsaw and they must all fall inside our circle of control. We can measure the baseline average and spread; we can specify the impact we will accept as useful; we can specify the risks we are prepared to accept of making the false positive and false negative errors; and we can collect the required amount of data after we have made the intervention so that we can trust our conclusion.

<Pete> Perfect! That is how we are taught to design research studies so that we can trust our results, and so that others can trust them too.

<Phil> So how do we decide how big the post-implementation data sample needs to be? I can see we need to collect enough data to avoid a false negative but we have to be pragmatic too. There would appear to be little value in collecting more data than we need. It would cost more and could delay knowing the answer to our question.

<Pete> That is precisely the trap that many inexperienced medical researchers fall into. They set their sample size according to what is achievable and affordable, and then they hope for the best!

<Phil> Well, we do the same. We analyse the data we have and we hope for the best.  In the magical metaphor we are asking our data analysts to pull a white rabbit out of the hat.  It sounds rather irrational and unpredictable when described like that! Have medical researchers learned a way to avoid this trap?

<Pete> Yes, it is a tool called a power calculator.

<Phil> Ooooo … a power tool … I like the sound of that … that would be a cool tool to have in our commissioning bag of tricks. It would be like a magic wand. Do you have such a thing?

<Pete> Yes.

<Phil> And do you understand how the power tool magic works well enough to explain to a “muggle”?

<Pete> Not really. To do that means learning some rather unfamiliar language and some rather counter-intuitive concepts.

<Phil> Is that the magical stuff I hear lurks between the covers of a medical statistics textbook?

<Pete> Yes. Scary looking mathematical symbols and unfathomable spells!

<Phil> Oh dear!  Is there another way to gain a working understanding of this magic? Something a bit more pragmatic? A path that a ‘statistical muggle’ might be able to follow?

<Pete> Yes. It is called a simulator.

<Phil> You mean like a flight simulator that pilots use to learn how to control a jumbo jet before ever taking a real one out for a trip?

<Pete> Exactly like that.

<Phil> Do you have one?

<Pete> Yes. It was how I learned about this “stuff” … pragmatically.

<Phil> Can you show me?

<Pete> Of course.  But to do that we will need a bit more time, another coffee, and maybe a couple of those tasty looking Danish pastries.

<Phil> A wise investment I’d say.  I’ll get the coffee and pastries, if you fire up the engines of the simulator.
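
And while Pete fires up the simulator, here is a minimal sketch of the sort of power calculator he described, using the normal approximation for a two-sample comparison of means.  All six pieces of the experimental design jigsaw appear as inputs or outputs; the example numbers are hypothetical.

```python
# A minimal sketch of a power calculator for comparing two means.
# Jigsaw pieces: the worthwhile change, the alpha (false positive)
# and beta (false negative) risk thresholds, the baseline average
# (which frames the worthwhile change), the spread ("noise"), and
# the output: the sample size. Example numbers are hypothetical.
from statistics import NormalDist

def sample_size_per_group(baseline_sd, worthwhile_change,
                          alpha=0.05, beta=0.20):
    """Approximate n per group for a two-sample comparison of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # false-positive threshold
    z_beta = NormalDist().inv_cdf(1 - beta)        # false-negative threshold
    n = 2 * ((z_alpha + z_beta) * baseline_sd / worthwhile_change) ** 2
    return int(n) + 1

# e.g. baseline average 240 min in A&E with SD 60 min, and we want to
# detect a 30 min improvement with alpha = 0.05 and 80% power.
print(sample_size_per_group(baseline_sd=60, worthwhile_change=30))  # ~63
```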


About 25 years ago a paper was published in the Harvard Business Review with the interesting title of “Teaching Smart People How To Learn”.

The uncomfortable message was that many people who are top of the intellectual rankings are actually very poor learners.

This sounds like a paradox.  How can people be high-achievers and yet be unable to learn?


Health care systems are stuffed full of super-smart, high-achieving professionals. The cream of the educational crop. The top 2%. They are called “doctors”.

And we have a problem with improvement in health care … a big problem … the safety, delivery, quality and affordability of the NHS is getting worse. Not better.

Improvement implies change and change implies learning, so if smart people struggle to learn then could that explain why health care systems find self-improvement so difficult?

This paragraph from the 1991 HBR paper feels uncomfortably familiar:

[Image: quoted paragraph from the 1991 HBR paper]

The author, Chris Argyris, refers to something called “single-loop learning” and if we translate this management-speak into the language of medicine it would come out as “treating the symptom and ignoring the disease“.  That is poor medicine.

Chris also suggests an antidote to this problem and gave it the label “double-loop learning” which if translated into medical speak becomes “diagnosis“.  And that is something that doctors can relate to because without a diagnosis, a justifiable treatment is difficult to formulate.


We need to diagnose the root cause(s) of the NHS disease.


The 1991 HBR paper refers back to an earlier 1977 HBR paper called Double Loop Learning in Organisations where we find the theory that underpins it.

The proposed hypothesis is that we all have cognitive models that we use to decide our actions (and in-actions), what I have referred to before as ChimpWare.  In it there is a reference to a table published in a 1974 book, and the message is that Single-Loop learning is a manifestation of a Model 1 theory-in-action.

[Image: table of Model 1 and Model 2 theories-in-use from the 1974 book]


And if we consider the task that doctors are expected to do then we can empathize with their dominant Model 1 approach.  Health care is a dangerous business.  Doctors can cause a lot of unintentional harm – both physical and psychological.  Doctors are dealing with a very, very complex system – a human body – that they only partially understand.  No two patients are exactly the same and illness is a dynamic process.  Everyone’s expectations are high. We have come a long way since the days of blood-letting and leeches!  Failure is not tolerated.

Doctors are intelligent and competitive … they had to be to win the education race.

Doctors must make tough decisions and have to have tough conversations … many, many times … and yet not be consumed in the process.  They often have to suppress emotions to be effective.

Doctors feel the need to protect patients from harm – both physical and emotional.

And collectively they do a very good job.  Doctors are respected and trusted professionals.


But …  to quote Chris Argyris …

“Model I blinds people to their weaknesses. For instance, the six corporate presidents were unable to realize how incapable they were of questioning their assumptions and breaking through to fresh understanding. They were under the illusion that they could learn, when in reality they just kept running around the same track.”

This blindness is self-reinforcing because …

“All parties withheld information that was potentially threatening to themselves or to others, and the act of cover-up itself was closed to discussion.”


How many times have we seen this in the NHS?

The Mid-Staffordshire Hospital debacle that led to the Francis Report is all the evidence we need.


So what is the way out of this double-bind?

Chris gives us some hints with his Model II theory-in-use.

  1. Valid information – Study.
  2. Free and informed choice – Plan.
  3. Constant monitoring of the implementation – Do.

The skill required is to question assumptions and break through to fresh understanding, and we can do that with a design-led approach because that is what designers do.

They bring their unconscious assumptions up to awareness and ask “Is that valid?” and “What if” questions.

It is called Improvement-by-Design.

And the good news is that this Model II approach works in health care, and we know that because the evidence is accumulating.

 

Many of the challenges that we face in delivering effective and affordable health care do not have well understood and generally accepted solutions.

If they did there would be no discussion or debate about what to do and the results would speak for themselves.

This lack of understanding is leading us to try to solve a complicated system design challenge in our heads.  Intuitively.

And trying to do it this way is fraught with frustration and risk because our intuition tricks us. It was this sort of challenge that led Professor Rubik to invent his famous 3D Magic Cube puzzle.

It is difficult enough to learn how to solve the Magic Cube puzzle by trial and error; it is even more difficult to attempt to do it inside our heads! Intuitively.


And we know the Rubik Cube puzzle is solvable, so all we need are some techniques, tools and training to improve our Rubik Cube solving capability.  We can all learn how to do it.


Let us return to the challenge of safe and affordable health care, and to the specific problems of unscheduled care: A&E targets, delayed transfers of care (DTOC), finance, fragmentation and chronic frustration.

This is a systems engineering challenge so we need some systems engineering techniques, tools and training before attempting it.  Not after failing repeatedly.

[Figure: the systems engineering Vee Diagram]

One technique that a systems engineer will use is called a Vee Diagram such as the one shown above.  It shows the sequence of steps in the generic problem solving process and it has the same sequence that we use in medicine for solving problems that patients present to us …

Diagnose, Design and Deliver

which is also known as …

Study, Plan, Do.


Notice that there are three words in the diagram that start with the letter V … value, verify and validate.  These are probably the three most important words in the vocabulary of a systems engineer.


One tool that a systems engineer always uses is a model of the system under consideration.

Models come in many forms from conceptual to physical and are used in two main ways:

  1. To assist the understanding of the past (diagnosis)
  2. To predict the behaviour in the future (prognosis)

And the process of creating a system model, the sequence of steps, is shown in the Vee Diagram.  The systems engineer’s objective is a validated model that can be trusted to make good-enough predictions; ones that support making wiser decisions of which design options to implement, and which not to.


So if a systems engineer presented us with a conceptual model that is intended to assist our understanding, then we will require some evidence that all stages of the Vee Diagram process have been completed.  Evidence that provides assurance that the model predictions can be trusted.  And the scope over which they can be trusted.


Last month a report was published by the Nuffield Trust that is entitled “Understanding patient flow in hospitals”  and it asserts that traffic flow on a motorway is a valid conceptual model of patient flow through a hospital.  Here is a direct quote from the second paragraph in the Executive Summary:

[Image: quoted paragraph from the Executive Summary of the Nuffield Trust report]
Unfortunately, no evidence is provided in the report to support the validity of the statement and that omission should ring an alarm bell.

The observation that “the hospitals with the least free space struggle the most” is not a validation of the conceptual model.  Validation requires a concrete experiment.


To illustrate why observation is not validation let us consider a scenario where I have a headache and I take a paracetamol and my headache goes away.  I now have some evidence that shows a temporal association between what I did (take paracetamol) and what I got (a reduction in head pain).

But this is not a valid experiment because I have not considered the other seven possible combinations of headache before (Y/N), paracetamol (Y/N) and headache after (Y/N).
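
To make those combinations explicit, here is a sketch that enumerates all eight cases; note that only one of the eight was actually observed.

```python
# Enumerate the eight possible combinations behind the headache example.
# Without evidence covering all of them we cannot test causation.
from itertools import product

for before, paracetamol, after in product([True, False], repeat=3):
    observed = before and paracetamol and not after
    marker = "<- the only case observed" if observed else ""
    print(f"headache_before={before!s:5} took_paracetamol={paracetamol!s:5} "
          f"headache_after={after!s:5} {marker}")
```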

An association cannot be used to prove causation; not even a temporal association.

When I do not understand the cause, and I am without evidence from a well-designed experiment, then I might be tempted to intuitively jump to the (invalid) conclusion that “headaches are caused by lack of paracetamol!” and if untested this invalid judgement may persist and even become a belief.


Understanding causality requires an approach called counterfactual analysis; otherwise known as “What if?”  And we can start that process with a thought experiment using our rhetorical model.  But we must remember to always validate the outcome with a real experiment. That is how good science works.

A famous thought experiment was conducted by Albert Einstein when he asked the question “If I were sitting on a light beam and moving at the speed of light what would I see?” This question led him to the Theory of Relativity which completely changed the way we now think about space and time.  Einstein’s model has been repeatedly validated by careful experiment, and has allowed engineers to design and deliver valuable tools such as the Global Positioning System which uses relativity theory to achieve high positional precision and accuracy.


So let us conduct a thought experiment to explore the ‘faster movement requires more space‘ statement in the case of patient flow in a hospital.

First, we need to define what we mean by the words we are using.

The phrase ‘faster movement’ is ambiguous.  Does it mean higher flow (more patients per day being admitted and discharged) or does it mean shorter length of stay (the interval between the admission and discharge events for individual patients)?

The phrase ‘more space’ is also ambiguous. In a hospital that implies physical space i.e. floor-space that may be occupied by corridors, chairs, cubicles, trolleys, and beds.  So are we actually referring to flow-space or storage-space?

What we have in this over-simplified statement is the conflation of two concepts: flow-capacity and space-capacity. They are different things. They have different units. And the result of conflating them is meaningless and confusing.


However, our stated goal is to improve understanding so let us consider one combination, and let us be careful to be more precise with our terminology: “higher flow always requires more beds“. Does it? Can we disprove this assertion with an example where higher flow required fewer beds (i.e. space-capacity)?

The relationship between flow and space-capacity is well understood.

The starting point is Little’s Law which was proven mathematically in 1961 by J.D.C. Little and it states:

Average work in progress = Average lead time  X  Average flow.

In the hospital context, work in progress is the number of occupied beds, lead time is the length of stay and flow is admissions or discharges per time interval (which must be the same on average over a long period of time).

(NB. Engineers are rather pedantic about units so let us check that this makes sense: the unit of WIP is ‘patients’, the unit of lead time is ‘days’, and the unit of flow is ‘patients per day’ so ‘patients’ = ‘days’ * ‘patients / day’. Correct. Verified. Tick.)

So, is there a situation where flow can increase and WIP can decrease? Yes. When lead time decreases. Little’s Law says that is possible. We have disproved the assertion.
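
A worked example (with hypothetical numbers) makes this concrete:

```python
# Little's Law: average WIP = average lead time x average flow.
# Units: beds (patients) = length of stay (days) x flow (patients/day).
# The numbers below are hypothetical, chosen to show that flow can
# rise while the number of occupied beds falls, if length of stay falls.

def beds_needed(length_of_stay_days, flow_per_day):
    return length_of_stay_days * flow_per_day

print(beds_needed(length_of_stay_days=8.0, flow_per_day=40))  # 320 beds
print(beds_needed(length_of_stay_days=6.0, flow_per_day=48))  # 288 beds:
# 20% more flow, yet 10% fewer occupied beds.
```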


Let us take the other interpretation of higher flow as shorter length of stay: i.e. shorter length of stay always requires more beds.  Is this correct? No. If flow remains the same then Little’s Law states that we will require fewer beds. This assertion is disproved as well.

And we need to remember that Little’s Law is proven to be valid for averages. Does that shed any light on the source of our confusion? Could the assertion about flow and beds actually be about the variation in flow over time, and not about the average flow?


And this is also well understood. The original work on it was done almost exactly 100 years ago by Agner Krarup Erlang and the problem he looked at was the quality of customer service of the early telephone exchanges. Specifically, how likely was the caller to get the “all lines are busy, please try later” response.

What Erlang showed was that there is a mathematical relationship between the number of calls being made (the demand), the probability of a call being connected first time (the service quality) and the number of telephone circuits and switchboard operators available (the service cost).


So it appears that we already have a validated mathematical model that links flow, quality and cost that we might use if we substitute ‘patients’ for ‘calls’, ‘beds’ for ‘telephone circuits’, and ‘being connected’ for ‘being admitted’.
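
For the curious, here is a minimal sketch of Erlang’s loss formula (Erlang B) with those substitutions made.  The numbers are hypothetical, and Erlang B makes the simplifying assumption that a blocked arrival is lost rather than queued.

```python
# A minimal sketch of Erlang's loss formula (Erlang B), substituting
# 'beds' for telephone circuits and 'admission' for connection.
# Offered load A = arrival rate x average length of stay, in
# consistent units. The numbers below are hypothetical.

def erlang_b(offered_load, servers):
    """Probability that an arrival finds all servers (beds) busy."""
    b = 1.0
    for n in range(1, servers + 1):
        # Standard recurrence: B(n) = A*B(n-1) / (n + A*B(n-1))
        b = offered_load * b / (n + offered_load * b)
    return b

# e.g. 40 admissions/day x 6 days average stay = 240 bed-days of load.
load = 40 * 6.0
for beds in (240, 260, 280):
    print(beds, "beds -> P(all busy) =", f"{erlang_b(load, beds):.3f}")
```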

And this topic of patient flow, A&E performance and Erlang queues has been explored already … here.

So a telephone exchange is a more valid model of a hospital than a motorway.

We are now making progress in deepening our understanding.


The use of an invalid, untested, conceptual model is sloppy systems engineering.

So if the engineering is sloppy we would be unwise to fully trust the conclusions.

And I share this feedback in the spirit of black box thinking because I believe that there are some valuable lessons to be learned here – by us all.


To vote for this topic please click here.
To subscribe to the blog newsletter please click here.
To email the author please click here.

[Beep] Bob’s computer alerted him to Leslie signing on to the Webex session.

<Bob> Good afternoon Leslie, how are you? It seems a long time since we last chatted.

<Leslie> Hi Bob. I am well and it has been a long time. If you remember, I had to loop out of the Health Care Systems Engineering training because I changed job, and it has taken me a while to bring a lot of fresh skeptics around to the idea of improvement-by-design.

<Bob> Good to hear, and I assume you did that by demonstrating what was possible by doing it, delivering results, and describing the approach.

<Leslie> Yup. And as you know, even with objective evidence of improvement it can take a while because that exposes another gap, the one between intent and impact.  Many people get rather defensive at that point, so I have had to take it slowly. Some people get really fired up though.

 <Bob> Yes. Respect, challenge, patience and persistence are all needed. So, where shall we pick up?

<Leslie> The old chestnut of winter pressures and A&E targets.  Except that it is an all-year problem now and according to what I read in the news, everyone is predicting a ‘melt-down’.

<Bob> Did you see last week’s IS blog on that very topic?

<Leslie> Yes, I did!  And that is what prompted me to contact you and to re-start my CHIPs coaching.  It was a real eye opener.  I liked the black swan code-named “RC9” story, it makes it sound like a James Bond film!

<Bob> I wonder how many people dug deeper into how “RC9” achieved that rock-steady A&E performance despite a rising tide of arrivals and admissions?

<Leslie> I did, and I saw several examples of anti-carve-out design.  I have read through my notes and we have talked about carve-out many times.

<Bob> Excellent. Being able to see the signs of competent design is just as important as the symptoms of inept design. So, what shall we talk about?

<Leslie> Well, by co-incidence I was sent a copy of a report entitled “Understanding patient flow in hospitals” published by one of the leading Think Tanks and I confess it made no sense to me.  Can we talk about that?

<Bob> OK. Can you describe the essence of the report for me?

<Leslie> Well, in a nutshell it said that flow needs space so if we want hospitals to flow better we need more space, in other words more beds.

<Bob> And what evidence was presented to support that hypothesis?

<Leslie> The authors equated the flow of patients through a hospital to the flow of traffic on a motorway. They presented a table of numbers that made no sense to me, I think partly because there are no units stated for some of the numbers … I’ll email you a picture.

[Image: table of traffic flow dynamics reproduced from the report]

<Bob> I agree this is not a very informative table.  I am not sure what the definition of “capacity” is here, and it may be that the authors are equating “hospital bed” with “area of tarmac”.  Anyway, the assertion that hospital flow is equivalent to motorway flow is inaccurate.  There are some similarities and traffic engineering is an interesting subject, but they are not equivalent.  A hospital is more like a busy city with junctions, cross-roads, traffic lights, roundabouts, zebra crossings, pelican crossings and all manner of unpredictable factors such as cyclists and pedestrians. Motorways are intentionally designed without these “impediments”, for obvious reasons! A complex adaptive flow system like a hospital cannot be equated to a motorway. It is a dangerous over-simplification.

<Leslie> So, if the hospital-motorway analogy is invalid then the conclusions are also invalid?

<Bob> Sometimes, by accident, we get a valid conclusion from an invalid method. What were the conclusions?

<Leslie> That the solution to improving A&E performance is more space (i.e. hospital beds) but there is no more money to build them or people to staff them.  So the recommendations are to reduce volume, redesign rehabilitation and discharge processes, and improve IT systems.

<Bob> So just re-iterating the habitual exhortations and nothing about using well-understood systems engineering methods to accurately diagnose the actual root cause of the ‘symptoms’, which is likely to be the endemic carveoutosis multiforme, and then treat accordingly?

<Leslie> No. I could not find the term “carve out” anywhere in the document.

<Bob> Oh dear.  Based on that observation, I do not believe this latest Think Tank report is going to be any more effective than the previous ones.  Perhaps asking “RC9” to write an account of what they did and how they learned to do it would be more informative?  They did not reduce volume, and I doubt they opened more beds, and their annual report suggests they identified some space and flow carveoutosis and treated it. That is what a competent systems engineer would do.

<Leslie> Thanks Bob. Very helpful as always. What is my next step?

<Bob> Some ISP-2 brain-teasers, a juicy ISP-2 project, and some one day training workshops for your all-fired-up CHIPs.

<Leslie> Bring it on!


For more posts like this please vote here.
For more information please subscribe here.

An effective way to improve is to learn from others who have demonstrated the capability to achieve what we seek.  To learn from success.

Another effective way to improve is to learn from those who are not succeeding … to learn from failures … and that means … to learn from our own failings.

But from an early age we are socially programmed with a fear of failure.

The training starts at school where failure is not tolerated, nor is challenging the given dogma.  Paradoxically, the effect of our fear of failure is that our ability to inquire, experiment, learn, adapt, and to be resilient to change is severely impaired!

So further failure in the future becomes more likely, not less likely. Oops!


Fortunately, we can develop a healthier attitude to failure and we can learn how to harness the gap between intent and impact as a source of energy, creativity, innovation, experimentation, learning, improvement and growing success.

And health care provides us with ample opportunities to explore this unfamiliar terrain. The creative domain of the designer and engineer.


The scatter plot below is a snapshot of the A&E 4 hr target yield for all NHS Trusts in England for the month of July 2016.  The required “constitutional” performance requirement is better than 95%.  The delivered whole system average is 85%.  The majority of Trusts are failing, and the Trust-to-Trust variation is rather wide. Oops!

This stark picture of the gap between intent (95%) and impact (85%) prompts some uncomfortable questions:

Q1: How can one Trust achieve 98% and yet another can do no better than 64%?

Q2: What can all Trusts learn from these high and low flying outliers?

[NB. I have not asked the question “Who should we blame for the failures?” because the name-shame-blame-game is also a predictable consequence of our fear-of-failure mindset.]


Let us dig a bit deeper into the information mine, and as we do that we need to be aware of a trap:

A snapshot-in-time tells us very little about how the system and the set of interconnected parts is behaving-over-time.

We need to examine the time-series charts of the outliers, just as we would ask for the temperature, blood pressure and heart rate charts of our patients.

Here are the last six years of by-month A&E 4 hr charts for a sample of the high-fliers. They are all slightly different, and we get the impression that the lower two are struggling to stay aloft more than the upper two … especially in winter.


And here are the last six years of by-month A&E 4 hr charts for a sample of the low-fliers.  The Mark I Eyeball Test results are clear … these swans are falling out of the sky!


So we need to generate some testable hypotheses to explain these visible differences, and then we need to examine the available evidence to test them.

One hypothesis is “rising demand”.  It says that “the reason our A&E is failing is because demand on A&E is rising“.

Another hypothesis is “slow flow”.  It says that “the reason our A&E is failing is because of the slow flow through the hospital because of delayed transfers of care (DTOCs)“.

So, if these hypotheses account for the behaviour we are observing then we would predict that the “high fliers” are (a) diverting A&E arrivals elsewhere, and (b) reducing admissions to free up beds to hold the DTOCs.

Let us look at the freely available data for the highest flyer … the green dot on the scattergram … code-named “RC9”.

The top chart is the A&E arrivals per month.

The middle chart is the A&E 4 hr target yield per month.

The bottom chart is the emergency admissions per month.

Both arrivals and admissions are increasing, while the A&E 4 hr target yield is rock steady!

And arranging the charts this way allows us to see the temporal patterns more easily (and the images are deliberately arranged to show the overall pattern-over-time).

Patterns like the change-for-the-better that appears in the middle of the winter of 2013 (i.e. when many other trusts were complaining that their sagging A&E performance was caused by “winter pressures”).

The objective evidence seems to disprove the “rising demand”, “slow flow” and “winter pressure” hypotheses!

So what can we learn from our failure to adequately explain the reality we are seeing?


The trust code-named “RC9” is Luton and Dunstable, and it is an average district general hospital, on the surface.  So to reveal some clues about what actually happened there, we need to read their Annual Report for 2013-14.  It is a public document and it can be downloaded here.

This is just a snippet …

… and there are lots more knowledge nuggets like this in there …

… it is a treasure trove of well-known examples of good system flow design.

The results speak for themselves!


Q: How many black swans does it take to disprove the hypothesis that “all swans are white”?

A: Just one.

“RC9” is a black swan. An outlier. A positive deviant. “RC9” has disproved the “impossibility” hypothesis.

And there is another flock of black swans living in the North East … in the Newcastle area … so the “Big cities are different” hypothesis does not hold water either.


The challenge here is a human one.  A human factor.  Our learned fear of failure.

Learning-how-to-fail is the way to avoid failing-how-to-learn.

And to read more about that radical idea I strongly recommend reading the recently published book called Black Box Thinking by Matthew Syed.

It starts with a powerful story about the impact of human factors in health care … and here is a short video of Martin Bromiley describing what happened.

The “black box” that both Martin and Matthew refer to is the one that is used in air accident investigations to learn from what happened, and to use that learning to design safer aviation systems.

Martin Bromiley has founded a charity to support the promotion of human factors in clinical training, the Clinical Human Factors Group.

So if we can muster the courage and humility to learn how to do this in health care for patient safety, then we can also learn how to do it for flow, quality and productivity.

Our black swan called “RC9” has demonstrated that this goal is attainable.

And the body of knowledge needed to do this already exists … it is called Health and Social Care Systems Engineering (HSCSE).


For more posts like this please vote here.
For more information please subscribe here.
To email the author please click here.


Postscript: And I am pleased to share that Luton & Dunstable features in the House of Commons Health Committee report entitled Winter Pressures in A&E Departments that was published on 3rd Nov 2016.

Here is part of what L&D shared to explain their deviant performance:

[Image: excerpt from the L&D evidence to the Health Committee]

These points describe rather well the essential elements of a pull design, which is the antidote to the rather more prevalent pressure cooker design.

It has been a busy week.

And a common theme has cropped up which I have attempted to capture in the diagram below.

It relates to how the NHS measures itself and how it “drives” improvement.

The measures are called “failure metrics” – mortality, infections, pressure sores, waiting time breaches, falls, complaints, budget overspends.  The list is long.

The data for a specific trust are compared with an arbitrary minimum acceptable standard to decide where the organisation is on the Red-Amber-Green scale.

If we are in the red zone on the RAG chart … we get a kick.  If not we don’t.

The fear of being bullied and beaten raises the emotional temperature and the internal pressure … which drives movement to get away from the pain.  A nematode worm will behave this way. They are not stupid either.

As we approach the target line our RAG indicator turns “amber” … this is the “not statistically significant zone” … and now the stick is being waggled, ready in case the light goes red again.

So we muster our reserves of emotional energy and we PUSH until our RAG chart light goes green … but then we have to hold it there … which is exhausting.  One pain is replaced by another.

The next step is for the population of NHS nematodes to be compared with each other … they must be “bench-marked”, and some are doing better than others … as we might expect. We have done our “sadistics” training courses.

The bottom 5% or 10% line is used to set the “arbitrary minimum standard target” … and the top 10% are feted at national award ceremonies … and feast on the envy of the other 90 or 95% of “losers”.

The Cream of the Crop now have a big tick in their mission statement objectives box “To be in the Top 10% of Trusts in the UK“.  Hip hip huzzah.

And what has this system design actually achieved? The Cream of the Crap.

Oops!


It is said that every system is perfectly designed to deliver what it delivers.

And a system that has been designed to only use failure and fear to push improvement can only ever achieve chronic mediocrity – either chaotic mediocrity or complacent mediocrity.

So, if we actually do want to tap into the vast zone of unfulfilled potential, and if we do actually want to escape the perpetual pain of the Cream of the Crap Trap forever … we need a better system design.

So we need some systems engineers to help us do that.

And this week I met some … at the Royal Academy of Engineering in London … and it felt like finding a candle of hope in the darkness of despair.

I said it had been a busy week!


For more posts like this please vote here.
For more information please subscribe here.

This 100 second video of the late Russell Ackoff is solid gold!

In it he describes the DIKUW hierarchy – data, information, knowledge, understanding and wisdom – and how it is critical to put effectiveness before efficiency.

A wise objective is a purpose … the intended outcome … and a well designed system will be both effective and efficient.  That is the engineer’s definition of productivity.  Doing the right thing first, and doing it right second.

So how do we transform data into wisdom? What needs to be added or taken away? What is the process?

Data is what we get from our senses.

To convert data into information we add context.

To convert information into knowledge we use memory.

To convert knowledge into understanding we need to learn-by-doing.

And the test of understanding is to be able to teach someone else what we know, and to be able to support them in developing their own understanding through practice.

To convert understanding into wisdom requires years of experience of seeing, doing and teaching.

There are no short cuts.
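
To make the first two transformations concrete, here is a minimal sketch (my illustration, with invented names and numbers, not Ackoff's): a bare number is data, adding context makes it information, and comparing it with the remembered run of past values makes it knowledge.

```python
from statistics import mean, stdev

datum = 0.87  # data: a bare measurement from our senses

# information = data + context
information = {"metric": "A&E 4-hour performance",
               "trust": "Anytown NHS Trust",   # hypothetical
               "period": "2018-10",
               "value": datum}

# knowledge = information + memory (the remembered run of past values)
memory = [0.94, 0.93, 0.95, 0.92, 0.94, 0.93]
signal = abs(datum - mean(memory)) > 3 * stdev(memory)

print(f"{information['metric']} = {datum:.0%}: "
      f"{'a signal worth investigating' if signal else 'just noise'}")

# Understanding and wisdom cannot be coded:
# they only come from doing and from teaching.
```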

So the sooner we start learning-by-doing the quicker we will develop the wisdom of purpose, and the understanding of process.



On 5th July 2018, the NHS will be 70 years old, and like many of those it was created to serve, it has become elderly and frail.

We live much longer, on average, than we used to and the growing population of frail elderly are presenting an unprecedented health and social care challenge that the NHS was never designed to manage.

The creases and cracks are showing, and each year feels more pressured than the last.


This week a story that illustrates this challenge was shared with me, along with permission to broadcast it …

“My mother-in-law is 91. In general she is amazingly self-sufficient, able to arrange most of her life with reasonable care at home via a council-tendered care provider.

She has had Parkinson’s for years, needing regular medication to enable her to walk and eat (it affects her jaw and swallowing). So the care provision is time-critical: to get up, have lunch, have tea and get to bed.

She’s also going deaf, profoundly in one ear, pretty bad in the other. She wears a single ‘in-ear’ aid, which has a micro-switch on/off toggle, far too small for her to see or operate. Most of the carers can’t put it in, and fail to switch it off.

Her care package is well drafted, but rarely adhered to. It should be 45 minutes in the morning, then 30, 15 and 30 through the day, each time administering the medications from the dosette box. Despite the carers’ register-in/out process, many visits are far shorter than designed (and paid for by the council), with some lasting 8 minutes instead of 30!

Most carers don’t ensure she takes her meds, which sometimes leads to dropped pills on the floor, with no hope of her picking them up!

While the care is supposedly ‘time critical’, the provider doesn’t manage it via allocated time slots; they simply provide lists that imply the order of work but don’t make it explicit. My mother-in-law (Mum) cannot be certain when a visit will occur, which makes going out very difficult.

The carers won’t cook food, but will microwave it; so if a cooked meal is to happen, Mum will start it, with a view to the carers serving it. If they arrive early, the food is under-cooked (“Just put vinegar on it, it will taste better”), and if they arrive late, either she’ll try to get it out herself, or it will be dried out / cremated.

Her medication pattern should be every 4 to 5 hours in the day, but with an 11:40 lunch visit, a 17:45 tea visit and a 19:30 bed-prep visit she finishes up with too long between meds (over six hours), followed by far too close together (under two hours). Her GP has stated that this is making her health and Parkinson’s worse.

Mum also rarely drinks enough through the day; in the hot weather she tends to dehydrate, which we try to persuade her must be avoided. Part of the problem is Parkinson’s related, part the hassle of getting to the toilet more often. Parkinson’s affects swallowing, so she tends to sip rather than gulp. By sipping often, she deludes herself that she is drinking enough.

She is also stubborn about not adjusting her methods to suit these issues. She drinks tea and water from her lovely bone china cups. Because her grip is not good and her hand shakes, we can’t fill those cups very high, so her ‘cup of tea’ is only a fraction of what it could be.

As she can walk around most days, there’s no way of telling whether she drinks enough, and she frequently has several different carers in a day.

When Mum gets dehydrated, it affects her memory and her reasoning, similar to the onset of dementia. It also seems to increase her probability of falling, perhaps due to forgetting to be defensive.

When she falls, she cannot get up, so she usually presses her alarm dongle, resulting in me going round to get her up, check for concussion and check for other injuries, before settling her down again. These falls can be ten weeks apart, or several in one week.

When she starts to hallucinate, we do our very best to increase drinking, seeking to re-hydrate.

On Sunday, something exceptional happened, Mum fell out of bed and didn’t press her alarm. The carer found her and immediately called the paramedics and her GP, who later called us in. For the first time ever she was not sufficiently mentally alert to press her alarm switch.

After initial assessment, she was taken to A&E; luckily, being early on Sunday morning, it was initially quite quiet.

Hospital

The hospital is on the boundary between two counties, within a large town: a mixture of new-build elements between aging structures. There has been considerable investment in A&E, X-ray etc., due partly to that growth industry and partly to the closure of cottage hospitals and the reduction of out-of-hours GP services.

It took some persuasion to have Mum put on a drip, as she hadn’t had breakfast or any fluids, and dehydration was a probable primary cause of her visit. They took bloods, an X-ray of her chest (to check for fall-related damage) and a CT scan of her head, to see if there were issues.

I called the carers to tell them to suspend visits, but the phone simply rang without being answered (not for the first time).

After about six hours, during which time she was awake, but not very lucid, she was transferred to the day ward, where after assessment she was given some meds, a sandwich and another drip.

Later that evening we were informed she was to be kept on a drip for 24 hours.

The next day (Bank Holiday Monday) she was transferred to another ward. When we arrived she was not on a drip, so that decision had been reversed.

I spoke at length with her assigned staff nurse, and was told the following: Mum could come out soon if she had a 24/7 care package, and, as well as the known issues, Mum now has COPD. When I asked her what COPD was, she clearly didn’t know, but flustered and offered ‘it is a form of heart failure that affects breathing’. (I looked it up on my phone a few minutes later.)

So, to get Mum out, I had to arrange a 24/7 care package, and nowhere was open until the next day.

Trying to escalate care wasn’t going to be easy, even in the short term. My emails to ‘usually very good’ social care people achieved nothing to start with on Tuesday, and their phone was on the ‘out of hours’ setting for evenings and weekends, despite it being daytime in a normal working week.

Eventually I was told that nothing could be achieved until the hospital processed the correct exit papers to Social Care.

When we went in to the hospital (on Tuesday) a more senior nurse was on duty. She explained that Mum was now medically fit to leave hospital if care could be re-established. I told her that I was trying to set up 24/7 care as advised. She looked through the notes and said 24/7 care was not needed; the normal 4 x a day was enough. (She was clearly angry.)

I then explained that the newly diagnosed COPD might be part of the problem. She said that she had worked with COPD patients for 16 years, and Mum definitely doesn’t have COPD. While she was amending the notes, I noticed that Mum’s allergy to aspirin wasn’t there, despite us advising of it on admission. The nurse also explained that, as the hospital is in one county but almost half their patients are from another, they are always stymied on ‘joined-up working’.

While we were talking with Mum, her meds came round and she was only given paracetamol for her pain, but NOT her meds for Parkinson’s. I asked that nurse why that was the case, and she said it was not on her meds sheet. So I went back to the more senior nurse; she checked the meds as ordered, and Parkinson’s medication was required 4 x a day, but it had NOT been transferred onto the administration sheet. The doctor next to us said she would do it straight away, and I was told, “Thank God you are here to get this right!”

Mum was given her food. It consisted of some soup, which she couldn’t spoon due to lack of meds, and a dry, tough lump of gammon with some mashed sweet potato, which she couldn’t chew.

When I asked why meds were given at five, after the delivery of food, they said ‘That’s our system!’. When I suggested that administering the Parkinson’s meds an hour before food would increase her ability to eat it, they said “That’s a really good idea, we should do that!”

On Wednesday I spoke with Social Care to try to re-start care to enable Mum to get out. At that time the social worker could neither get through to the hospital nor the carers. We spoke again after I had arrived at the hospital, but before I could do anything.

On arrival at the hospital I was amazed to see the whiteboard declaring that Mum would be discharged at noon on Monday (in five days’ time!). I spoke with the assigned staff nurse, who said, “That’s the earliest that her carers can re-start, and anyway it’s nearly the weekend”.

I said that “Mum was medically OK for discharge on Tuesday, after only two days in the hospital, and you are complacent to block the bed for another six days. Have you spoken with the discharge team?”

She replied, “No, they’ll have gone home by now, and I’ve not seen them all day.” I told her that they work shifts and would still be there, and made it quite clear that if she didn’t contact SHEDs I’d go walkabout to find them. A few minutes later she told me a SHED member would be with me in 20 minutes.

While the hospital had resolved her medical issues, she was stuck on a ward with no help to walk, the only TV on a complex pay-for system she had no hope of understanding, and no day room; so no entertainment, no exercise, just boredom: encouraged to lie in bed and wear a pad because she won’t be taken to the loo in time.

When the SHED worker arrived I explained the staff nurse’s attitude; she said she would try to improve those thinking processes. She took lots of details, then said that, so long as Mum could walk with assistance, she could be released after noon, to have NHS carer support four times a day from the afternoon. Mum walked around the ward for the first time since being admitted, and, while shaky, was fine.

Hopefully all will be better now?”


This story is not exceptional … I have heard it many times from many people in many different parts of the UK.  It is the norm rather than the exception.

It is the story of a fragmented and fractured system of health and social care.

It is the story of frustration for everyone – patients, family, carers, NHS staff, commissioners, and tax-payers.  A fractured care system is unsafe, chaotic, frustrating and expensive.

There are no winners here.  It is not a trade-off, a compromise, or the best possible.

It is just poor system design.


What we want has a name … it is called a Frail Safe design … and this is not a new idea.  It is achievable. It has been achieved.

http://www.frailsafe.org.uk

So why is this still happening?

The reason is simple – the NHS does not know any other way.  It does not know how to design itself to be safe, calm, efficient, high quality and affordable.

It does not know how to do this because it has never learned that this is possible.

But it is possible to do, and it is possible to learn, and that learning does not take very long or cost very much.

And the return vastly outweighs the investment.


The title of this blog is Righteous Indignation

… if your frail elderly parents, relatives or friends were forced to endure a system that is far from frail safe; and you learned that this situation was avoidable and that a safer design would be less expensive; and all you hear is “can’t do” and “too busy” and “not enough money” and “not my job” …  wouldn’t you feel a sense of righteous indignation?

I do.

