A good idea can be ruined by over-selling. The NHS has a tendency to adopt ideas and then rapidly insist that they become certainties.

What begins as a proposition rapidly becomes an assertion, a statement of fact, a policy, a target, a line in a mandated planning template…an obligatory mention in every sentence for the aspiring manager.

Things like ‘risk stratification’, ‘integration’ and various forms of ‘early intervention’ are obvious examples where what should have been ‘might work’ became ‘will work’, with detrimental effect. I fear that our whole national approach to ‘big data’ and ‘tech’, and indeed to ‘integrated care systems’, could get caught in this space as well.

Learning stops as soon as something is pronounced as ‘will work’. That is partly because the usual impositions that go with ‘will work’ – targets, say, or ‘development’ programmes with compulsory attendance – create an environment that distorts the evidence and leaves no incentive for honest reflection and learning.

The overriding imperative becomes to assert that it works, at least until everyone can safely shift (via the usual ‘good practice case studies’) to whatever is the next big thing.

The NHS says that it aspires to be a ‘learning system’. Being open to learning means being willing to acknowledge and embrace what we don’t know. So I believe that, by default, we should adopt the language of ‘might work…’

The evidence base will only ever support propositions, never certainty. And, as soon as we adopt the language of ‘might…’, obvious and powerfully useful questions start to flow: Why do we think it might work? What is our theory of change? What mechanisms do we think are likely to be key to that? Which elements of our theory are we more, or less, confident about, and why? Can we quantify the extent of that uncertainty? Which questions contribute most to that uncertainty, and what does that tell us? Do our colleagues and partners see any flaws in our theory, and do we understand why? What would we expect to see if it starts to ‘work’, and how might we best track that (along with the most likely kinds of unintended consequence)?

What factors can we see up front that might derail our theory in practice, and what does that mean for our implementation strategy? (A ‘pre-mortem’ exercise can help here.) How long are we willing to let this run before we decide to stop, change tack or continue, and what will be the trigger for that?

Thinking this way encourages us to think ‘experiment’ rather than ‘proof’. And, as my dear colleague Professor Mohammed Mohammed said to me recently, ‘if something doesn’t stack up in theory, it’s unlikely to do so in practice’.

‘Might…’ opens up a universe of possibilities and learning. It isn’t something that should stop progress, initiative or ambition. Quite the opposite. It is just a different, and far more effective, way of making change happen. It has the golden benefit of being profoundly honest and authentic. It also reduces the gap between proclamation and reality: a gap occupied by cynicism.

Decision processes will also then embrace uncertainty and seek to understand it, bound it and address it. Planning processes will need to do the same – and we really should have no more ‘single point’ delivery plans. Rather, we should have plans focussed on well-understood ranges, and on how those ranges have been addressed through specific flexibilities and resilience.

The leadership we need will have the confidence and the insight to know when to say ‘might…’ rather than ‘will’. We need leadership that embraces uncertainty, rather than wishing it away. Those leaders will be trusted more and will likely achieve more as a result.

This is the one instance where ‘might is right’.