A key reason behind the creation of integrated care systems (ICSs) was to give local systems more accountability and encourage more effective collaboration across the different providers of health, care and community services. Clearly this represents a dramatic shift in ways of working – and thinking – for many people and organisations up and down the country. That said, it’s also a real opportunity to try out new approaches and learn from both the successes and the failures. But how can we make sure that we do learn, and that the learning is shared as far and wide as possible? A team from the Strategy Unit has an idea. Here Project Director Karen Bradley tells us more. 

In October 2022, NHS England approached us at the Strategy Unit to help create a peer review methodology for long COVID services here in the Midlands. The idea was that a peer review ‘toolkit’ could help drive learning and continual improvement in services, both within and between systems. The Strategy Unit is in a unique position to take on a project like this: while we are very much part of the NHS, we don’t face the same pressures of delivering services and are able to step back and take a holistic view. 

Peer review processes do already exist within the NHS and are well used in some areas. However, they mostly focus on more established areas of medicine that are already heavily standards-driven and often have professional bodies in place. Clearly, long COVID services are a little different. For one, they were stood up rapidly and have existed for a maximum of three years. They are also hugely varied, reflecting the needs and demographics of the patients they have treated, the availability of resources and funding, any existing local provision of services, and the backgrounds of those involved in setting them up.  

Why peer review? 

For long COVID services there is limited guidance, and there are few clinical standards, on which to base the service offering and assess its effectiveness. This can make it challenging to identify opportunities for improvement. 

There are numerous approaches for driving and maintaining continual learning and improvement loops. One of the best things about peer review is that there is something in it for the individuals involved: they get to shout about and share their successes, which can be extremely motivating, and they get to learn from others at the same time. Nobody is perfect and there is always something to learn, even if it is how to make good even better.  

In this project, we’ve also focused on making the process light-touch, both financially and in terms of the time commitment required from participants. The NHS faces many challenges and we want this to be as useful as possible with the minimum burden on resources. We also want to avoid it becoming a tick-box exercise; we are more focused on helping to develop continual, exploratory learning loops. 

In a nutshell, we ask participants to provide a written description of their service. Then person A visits person B to find out more, B visits C, C visits D and so on, with more and more learning shared as the daisy chain progresses. Each visit results in a report and then, at the end of the year, a learning event brings all of the learning together to ensure everyone in the chain has an opportunity to see the full picture. The intention is that each person involved only has to take part in two visits – one as the host and one as the visitor – to limit the amount of time they have to give up. 
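
For readers who like to see the mechanics written out, here is a minimal sketch of how that kind of daisy-chain rota could be put together. It is purely illustrative: the service names are hypothetical, and it assumes the chain wraps back round so that the last service visits the first, which is one way of making sure everyone hosts once and visits once.

```python
# Illustrative sketch only: the service names and the ordering of the chain
# are hypothetical, and the wrap-around (the last service visiting the first)
# is an assumption about how everyone ends up hosting once and visiting once.

def daisy_chain_schedule(services):
    """Pair each service with the next one in the chain, wrapping around at
    the end, so every service makes exactly one visit and hosts exactly one."""
    if len(services) < 2:
        raise ValueError("A daisy chain needs at least two services")
    return [
        {"visitor": services[i], "host": services[(i + 1) % len(services)]}
        for i in range(len(services))
    ]


if __name__ == "__main__":
    # Hypothetical participating services, for illustration only.
    pilot_services = ["Service A", "Service B", "Service C", "Service D"]
    for visit in daisy_chain_schedule(pilot_services):
        print(f"{visit['visitor']} visits {visit['host']} and writes a short report")
```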

It is a relatively informal process but does require some admin and oversight to make it happen. For clinicians, however, it presents an opportunity to break out of their natural networks and build relationships with other people who are also trying to find new approaches to solving old problems, where data and evidence don’t really tell the whole story. 

How could the toolkit be applied elsewhere? 

While our work here is based on, and will be piloted within, long COVID services, we realised that the end product could help integrated care boards (ICBs) to develop peer review processes for all kinds of services that have few, or no, set clinical standards to measure against.  

This is particularly relevant when you think about the expectation placed on ICBs to drive new ways of working to improve services for local people, often with very little evidence base to guide their ideas. A peer review methodology for less established clinical areas could help systems share learning and innovation, improve collaboration and speed up the adoption of new ways of doing things. The project involves a wide multidisciplinary team, and the end solution is being co-designed with a range of clinicians, patient representatives, peer review experts and many of our knowledgeable team at the Strategy Unit. Their enthusiasm and willingness to engage with the project have been fantastic and we’re incredibly grateful for their vital contribution. 

What’s next? 

I expect the end tool to be something quite humble that can be easily adapted to fit different situations. We’ll continue to refine and adapt the methodology with a view to piloting it with three ICBs around June this year. After that we’ll hand it over to NHS Midlands to take forward, but we are confident that it could have many applications beyond long COVID services. 

Interestingly, the Hewitt review, expected to be published towards the end of March, has been set up to consider how integrated care systems can be supported to succeed through oversight and governance. One of the questions asked in the review’s call for evidence was: ‘what mechanisms outside of national targets could be used to support performance improvement?’. The examples cited included peer support, peer review and shared learning mechanisms. So, it seems peer review could have a more significant role to play in the future of integrated care. 

Thank you to everyone who has been involved in the project so far.