In March 2015, as part of the Five Year Forward View for the NHS, the DoH announced the creation of 29 “Vanguard” sites for a new care models programme, and in July and September a further 21 were added to the list.
There are three model formats: MCPs (multispecialty community providers); PACS (primary and acute care systems); and Enhanced Health in Care Homes.
The common thread for all the models was that each offered a more joined-up care platform which would improve efficiency, cut costs, improve staff motivation, and enhance the patient experience for some six million patients living within the specified Vanguard regions.
The Evaluation:
I was fortunate enough to be part of a great day of debate and discussion held by @NuffieldTrust #NTevaluation, which explored these issues and allowed me to think further about how we are evaluating the Connecting Care for Children integrated child health system in North West London.
The debates about RCTs, matched-control methods and other methodologies were thoughtfully framed by Prof Nick Mays’ introduction; he cautioned against an over-emphasis on study design, pointed out that setting up pilots always takes longer than expected, and noted that any one evaluation is very unlikely to produce definitive answers.
The discussion moved on to the challenge of translating some of the more complex academic evaluation methods into practical ways of getting evaluation done. How, for example, could an NHS analyst working in a Commissioning Support Unit, or a clinician-researcher working with a clinical department, access and use appropriate methods? The importance of using control groups dominated the conversation, but it was acknowledged that matching in an accurate and meaningful way can be a significant challenge.
Qualitative research has an important role to play, particularly in working with patients and other stakeholders to develop interventions, in exploring how an intervention actually works, and in determining the feasibility and acceptability of delivering it. It seems key that qualitative and quantitative results are thoughtfully presented together, but many journals with traditional editorial approaches and tight word counts make this difficult. Action research seems to have dropped off the agenda, and there was discussion about whether this is something that needs to be looked at again.
How to evaluate something that is constantly changing is a key challenge for integrated or complex care. We were introduced to Nick Tilley’s realist evaluation philosophy: “what works, for whom and under what circumstances?” This does not require extensive theoretical expertise but seeks to uncover some of the visible and hidden forces that shape outcomes, and it feels like a helpful and pragmatic approach to pursue.
It was also great to hear about work on the use of ‘webs of care’ and the development of ‘I statements’ by National Voices and other partners.
One aim here is to develop a user-held tool to evaluate the co-ordination of care within integrated care projects, and the hope is that this tool will be available for wider use in the months ahead.
Prof Martin Marshall @MarshallProf introduced the idea of a ‘researcher-in-residence’ model to democratise and mobilise knowledge by bringing learning and evidence back into the front line of healthcare. Central to this role is an ongoing process of negotiation between all of the different stakeholders within the system. Advice from @laura_eyre about her experiences in this role included the importance of getting connected, becoming embedded in the space between strategy and delivery, and learning to be comfortable working at the boundaries between practice and research. This led to some great discussions around the importance of openness and objectivity, of being visible (physically and electronically) to the wide range of stakeholders, and of using the role to better understand the problems of effective intervention.
We also had the opportunity to hear from Charles Tallack, NHS England’s Head of Operational Research and Evaluation, who gave an update on the evaluation thinking around the Five Year Forward View New Care Models (Vanguard) programme. An important programme aim is to spread learning and best practice, so it is key that the ‘active ingredients’ are identified and described so that the high-impact components can be replicated. It sounds as if there is currently a strong emphasis on each of the Vanguard sites completing a logic model, and we had a great discussion about how this and other evaluation-methodology learning could be shared more widely in the months ahead.
It was very positive to hear of the collaborative and adaptive approach being taken in evaluating the Vanguard sites; it is clear that the thinking is continuing to evolve. One example is the discussion of a proposed national approach to patient-centred measures within the Vanguard evaluation, with the aim that this is co-produced with local evaluators and researchers. Might this sort of national-level input act as a catalyst for strengthening evaluation capabilities across the healthcare system?
Hearing from some of the local case studies, and talking to other delegates, it is clear that a multitude of fast-moving pilots and programmes focused on complex and integrated care are being run across the UK. The challenge for all of these programmes is to establish meaningful evaluation that can keep pace with the intervention and provide helpful answers and learning that can iterate and shape the intervention, as well as be shared more widely. Balancing the immediate need for positive messages and results against the case for realistic timescales (i.e. two, three and five years) within which to measure impact and return on investment seems to be a complex conundrum.
It is great that the Nuffield Trust and other organisations with evaluation expertise are grappling with these issues. Building collaborations, developing local capacity for evaluation, giving permission to do things differently and sharing evaluation expertise across the whole system are key if we are to make sense of all this.