Repeatedly, published evaluations show that community and primary care interventions with a stated intention of reducing total (or forecast total) emergency admissions to hospital do not achieve the expected result.**

It’s a well-documented conundrum: such services often produce evidence that they avoided admission for ‘Mr Bloggs’ and others (‘if service x wasn’t in place, Mr Bloggs would have been admitted for sure’), and yet there is no observed population-level effect; i.e. the total level of admissions to hospital stays the same or grows. And yet a linear relationship between these types of interventions and total levels of admissions features routinely as an expectation in every round of detailed planning demanded of the NHS.

Of course, there are many other dimensions on which to judge the success or otherwise of these initiatives, and published evaluations show all sorts of positive effects, particularly on patient and staff satisfaction. But I am addressing the objective of reducing overall levels of emergency admissions because that seems to be the expectation currently attached to any such intervention by much of the management of the NHS, and by the policy makers and regulators that they respond to.

What I wanted to do was to try to think through why this conundrum occurs, and to do so by digging away at the core logic. This seemed worth doing and sharing because, in amongst an attempt at explanation, there may be pointers to what we might do differently, or some prompts for useful debate. And I didn’t want to just say to myself ‘well, of course it’s a complex adaptive system’ and leave it there, as that didn’t seem useful enough.

So what follows is work in progress. It’s intended to start a conversation, one that I hope might point at what systems need to think about when they make these sorts of interventions. It certainly isn’t perfect, and maybe others better qualified have already trodden this ground more fully (systems theory). But if they have, it hasn’t managed to break through into everyday thinking: only this week I met the latest clinician who, on setting out plans for a sophisticated web of actions intended to join services up locally, was met with the question ‘what % reduction in emergency admissions will we see in 6 months?’

‘It’s working really… it will just take time to rebalance things’

One common explanation I have heard offered by system leaders for why community alternatives with a stated primary goal of reducing the overall level of emergency admissions to hospital don’t seem to have worked is that they have ‘uncovered unmet need’. What’s more, the argument often goes, having got over that hump in demand, they will then go on to generate overall reductions somewhere down the track.

But let’s examine the logic of the second stage of this argument.

The notion that there is a quantum of specific unmet need out there which an initiative can ‘find and remove’ conceives of demand in a strangely static fashion. It seems to suggest that this Arthurian horde of people with the unmet need have had it for ages already; they then get found and admitted (consistent with usual criteria); but once that horde has been dealt with, ‘job done’, and the initiative can go back to delivering the original plan’s trajectory for reduction!

But are populations like that? Is there any sensible reason for seeing populations and demand as static over time? To accept this explanation, surely you would have to believe that the ‘unmet need’ was exclusive to the discovered group, and that the underlying fixed quantum of need returns to ‘normal’ as soon as they are dealt with. Hmmmmm!!

So, let’s try again! This time we will go straight to the heart of the matter: the overall logic of demand reduction.

Urgent care demand reduction and logic

The idea that we might reduce overall emergency activity through alternative provision/early intervention (typically in the community) seems to rely on two assumptions:

  1. The intervention is effective (i.e. it’s significantly more effective than usual care/doing nothing). If the intervention offered (to x people) is effective, then some proportion p of those people (p·x) would be admissions avoided.
  2. There is a fixed quantum of demand (n) that warrants admission, so that after the intervention in 1, this quantum, this level of admissions, becomes n − p·x.
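
To see how much work assumption 2 is doing, here is a toy simulation (a sketch only, not a validated model: the capacity, presentation and severity numbers are invented, as is the ‘threshold flexes to fill capacity’ rule). It contrasts the planner’s n − p·x arithmetic with a world where admission thresholds flex with available capacity, so beds freed by the new service are backfilled by the next-most-severe patients:

```python
import random

random.seed(1)

DAYS = 365
BEDS_PER_DAY = 30           # hypothetical admission capacity (toy number)
PRESENTATIONS_PER_DAY = 60  # hypothetical daily A&E presentations (toy number)
DIVERTED_PER_DAY = 5        # p.x: admissions the new service 'avoids' each day

def daily_severities():
    """Toy severity scores in [0, 1]; higher = more likely to warrant admission."""
    return [random.random() for _ in range(PRESENTATIONS_PER_DAY)]

naive_total = 0   # planner's model: fixed threshold, diverted cases simply vanish
flexed_total = 0  # alternative: the admission threshold flexes to fill capacity

for _ in range(DAYS):
    severities = daily_severities()
    # Naive model: a fixed, objective criterion (severity > 0.6) defines the
    # quantum n; the service removes p.x of it and nothing else changes.
    over_threshold = sum(1 for s in severities if s > 0.6)
    naive_total += max(0, min(over_threshold, BEDS_PER_DAY) - DIVERTED_PER_DAY)
    # Flexing model: clinicians fill whatever capacity exists, so beds freed
    # by diversion are backfilled; diversion changes *who* is admitted,
    # not *how many*.
    flexed_total += min(BEDS_PER_DAY, len(severities))

print(f"Naive 'n - p.x' model:   {naive_total} admissions/year")
print(f"Threshold-flexing model: {flexed_total} admissions/year")
```

Under the naive model the annual total falls by exactly DAYS × DIVERTED_PER_DAY; under the flexing model it doesn’t move at all. Neither toy is ‘right’, but the gap between them is precisely the gap between planning expectation and published evaluation.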

Yet these two assumptions are problematic as their logic seems in turn to rest on the following:

  • Demand/need is a fixed quantum, and can be assessed and ranked objectively
  • Presentation of demand is systematic and ordered
  • Admission criteria are objective, standardised, inflexible and consistently understood and applied by every actor in the system
  • Capacity and availability in all parts of the system have no bearing on the presentation of demand or on decisions to admit
  • There is no suboptimal utilisation at present
  • Chance plays no part 

Indeed, when one thinks about it, the idea that introducing a new service line into a complex system would reduce overall demand seems odd (think motorways: adding capacity tends to generate new traffic rather than ease congestion)… and listing the assumptions out above does rather make me wonder why we EVER THOUGHT that stopping Betty being admitted over here would impact on whether Doris gets admitted over there. Or at least why we would think it if we can’t make the assumptions listed above (more) true.

The real-world dynamics of urgent care defy simplistic cause-and-effect ‘zero sum’ planning. And yet, for many years, the NHS has demanded just such planning, and an industry has been created to produce demand and capacity models that rest on these assumptions and promise ever more granular outputs (I saw one offering weekly numbers 5 years hence the other day!).

Is it too strong to at least ask whether this represents a collective failure of critical thinking?

What does this all mean?

This could be a rather depressing picture! Cherished ‘transformational plans’ seem stubbornly unable to deliver what was claimed. Plans that rely on reducing overall admissions by implementing community alternatives targeted at admissions avoidance must now be seen as highly risky, given the evidence to date. This might be problematic for many or most STPs.

And yet I think there are chinks of light if we are prepared to think differently: perhaps about what might make the assumptions set out above start to appear more reasonable than they do now, or about what it would mean to give primacy to different objectives.

I’ve listed a few suggestions below, but I would be interested to hear those of others:

  • Would it be possible to agree some objective criteria for emergency admission to hospital? Building whole-system clinical consensus and putting effort into sharing that with the public might pay dividends. Or, what would we learn by putting effort into careful analysis of admission criteria and simply sharing and discussing it with clinical colleagues (primary, community, mental health, acute… together) in the spirit of collective improvement? (The Strategy Unit have developed methods that can explore admission thresholds analytically… this could be developed further)
  • Can we do more to understand the interplay between admission thresholds and capacity/availability? The Strategy Unit have recently undertaken analysis for one hospital that challenges its wish to believe that its admission thresholds are standardised over a 7-day week, 24 hours a day. As a result, they are starting to think about why that might be and what they might do about it. (A sketch of one way to begin probing this appears after this list.)
  • Would spending more time understanding localised ‘health seeking behaviours’ be a more productive approach than trying to measure ‘avoided admissions’?
  • Should we (we really should!) put a stop to producing (and demanding) simplistic plans that build in false causal links between actions in what is a complex system? Rather than seek ‘do x’ = ‘achieve y’ plans, we should be far more nuanced and think in terms of direction of intent and ranges of potential. Of course, that may not ‘satisfy the Treasury’ etc… but perhaps the job of leadership, especially ‘system leadership’, is to manage that translation and dialogue away from the operational front line, rather than to drop it onto the front line? (Real ‘systems leadership’ is about removing constraints, contrary to how the term seems to be used in some circles of the NHS, where it is treated as a nest of accountability layers bearing down on the front line.)
  • Should we reflect more deeply on how we set objectives for initiatives and how we measure success? Badly chosen objectives risk skewing effective implementation, creating inevitable perceived failure and demotivation (and sometimes ‘switching things off’ on poor grounds). Do we need to be more open to proxy objectives? For example:
    • We will aim to improve staff morale and team dynamics because it’s a good in its own right but also because evidence suggests a positive correlation over time between staff morale and quality of care.
    • We believe that improving quality of care in this way will in turn reduce the likelihood of an emergency admission for many people.
    • We will study over time whether we are succeeding in reducing admissions for particular cohorts in the population, without making that our primary objective and measure of success, especially in the short term. The real success is admitting those who need it, when they need it.
  • Should we adopt a different philosophy for implementing change initiatives? Should we ditch the tyranny of the traditional business case? Might the better way be to:
    • Accept that the evidence is equivocal and there will rarely (never?) be an obvious single ‘right thing’ to do;
    • Agree that the best way forward is to reach consensus on trying, with conviction, something reasonable that draws on evidence, local experience and local motivation; and
    • Impose a requirement that we measure carefully whether the desired effects are being achieved, and that we conduct such evaluations so they can inform course correction and, if necessary, cancellation if untoward consequences arise. Formative evaluation is a skilled function that should be an indivisible aspect of any intervention (i.e. it is about effective ‘doing’, not a ‘nice to have’ bolt on).
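
For illustration, here is a minimal sketch of how the admission-threshold question above might first be probed. This is not the Strategy Unit’s actual method; the file name and columns (an `attendances.csv` with an arrival timestamp and an `admitted` flag) are hypothetical, and a serious analysis would need to adjust for case-mix:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical dataset: one row per A&E attendance, with the arrival
# timestamp and whether the attendance converted to an emergency admission.
df = pd.read_csv("attendances.csv", parse_dates=["arrival_time"])
df["hour"] = df["arrival_time"].dt.hour

# Conversion rate by hour of arrival: if admission thresholds really were
# standardised 24/7 (and case-mix were constant), these should be broadly flat.
print(df.groupby("hour")["admitted"].mean().round(3))

# Crude chi-square test of independence between arrival hour and the
# admit/discharge decision; a fuller analysis would adjust for case-mix,
# e.g. with a logistic regression including acuity and age.
table = pd.crosstab(df["hour"], df["admitted"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p_value:.2g}")
```

A flat profile would not prove standardised thresholds (case-mix varies by hour too), but a strongly time-patterned conversion rate is at least a prompt for the kind of clinical conversation suggested above.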

Starting a conversation

This is meant to be the start of a conversation. It’s perhaps easier to challenge the established ways of doing things than to suggest workable alternatives that can survive the unavoidable compromises that come with operating as a universal publicly funded and politically accountable health service. We at the Strategy Unit are thinking about what a ‘different way’ might look like. We would like to hear the views of others.

Footnotes:

* In my title, I am deliberately misusing ‘the butterfly effect’ as if it implies that we can choose the tiny event that will spiral to deliver the large result we seek. That is NOT what the science is about. Those interested in complex interactive systems and what the ‘butterfly effect’ is really telling us might start with this. If nothing else, read the last section, which will make your hair stand on end! But if you read the rest of it, you might also have cause to pause for thought re why we in the NHS are so excited by ‘risk stratification’ and other predictive promises!

** A summary of some of the most recent published evidence can be found here:

  1. Improvement Analytics Unit (2018) The impact of integrated care teams on hospital use in North East Hampshire and Farnham. Health Foundation. Available at: https://www.health.org.uk/publications/impact-integrated-care-teams-hospital-use-north-east-hampshire-and-farnham

Evaluation of integrated care teams (ICTs) implemented as part of the Happy, Healthy, at Home vanguard shows that, during the first 23 months of the programme’s implementation, patients referred to the ICTs attended A&E more frequently, and were admitted as an emergency more often, than the control group.

  2. Bower P et al (2018) Improving care for older people with long-term conditions and social care needs in Salford: the CLASSIC mixed-methods study, including RCT. Health Services and Delivery Research. 6:31. Available at: https://www.journalslibrary.nihr.ac.uk/hsdr/hsdr06310/#/abstract

Evaluation of the Salford Integrated Care Programme found that, compared with the general trend, the programme led to increases in the number of A&E attendances, particularly for those referred from health and social care providers. The intervention also led to increases in the number of emergency admissions, mostly driven by admissions through A&E.

  3. Snooks H et al (2018) Predictive risk stratification model: a randomised stepped-wedge trial in primary care (PRISMATIC). Health Services and Delivery Research. 6:1. Available at: https://www.ncbi.nlm.nih.gov/books/NBK475998/pdf/Bookshelf_NBK475998.pdf

Evaluation of a Predictive Risk Stratification Model (PRISM) in primary care in a large urban area in Wales found that the introduction of the intervention was followed by increased emergency episodes, hospitalisation and costs across, and within, risk levels, without clear evidence of benefit to patients.