Evaluation framework

LEAP aims to bring about positive change for children and families by enhancing existing, and implementing new, evidence-based, science-based and innovative interventions. By changing the ways in which services and professionals work together and with families and communities, LEAP aims to deliver more effective, family-friendly support for Lambeth.

To achieve this, LEAP is adopting a ‘test and learn’ approach to delivering services, with evaluation at the heart of this approach. The aim is to ensure that LEAP adopts the best methods and interventions for bringing about positive change in families’ lives.

LEAP is focused on achieving a number of key overarching outcomes for pre-school children in the LEAP area by the end of its 10-year period. To monitor progress, LEAP has defined a set of long-term outcomes, which serve as the programme’s key performance indicators (KPIs), as well as a range of intermediate outcomes against which the programme’s main outcomes will be measured.

The Big Lottery Fund commissioned a research consortium led by Warwick University to evaluate the A Better Start (ABS) programme as a whole. In contrast, the LEAP programme evaluation is focused on understanding how, why, for whom, and under what circumstances LEAP has brought about the identified and desired outcomes for children, parents and families within the LEAP area. LEAP will use this learning and evidence to strengthen and develop local services and to inform wider policy and service areas.

 

Evaluation

When we talk about evaluation, we are talking about the systematic process for determining the worth or merit of an intervention, programme or approach. The LEAP evaluation will address each of these elements:

  • Measure how powerfully it works (outcome)
  • Describe how it works (process)
  • Understand why it works (theory)
  • Determine the return on investment (economics)

LEAP recognises the value of evaluation in improving the quality of our work. Evaluation also provides evidence that informs decision-making, demonstrates the difference the programme and its individual interventions make, furthers the development of the evidence base for individual interventions and approaches, and generates and spreads learning and good practice.

 

Co-production

LEAP takes a co-production approach to designing and developing its interventions. Services undergo a process of constant refinement and development in response to emerging learning and evidence. 

[Diagram: LEAP planning and evaluation cycle]

The diagram shows LEAP’s planning and evaluation cycle, in which feedback allows learning and development to be fed into the programme on an ongoing basis.

 

How an intervention is chosen

For LEAP to generate evidence that supports effective decision-making, the LEAP partnership must make informed choices about how best to allocate its evaluation resources. The LEAP interventions have been chosen based on a number of factors, including:

  • The evidence base that exists for the intervention
  • The intended outcomes the intervention brings about
  • The intervention’s suitability to LEAP requirements and to the LEAP community

In deciding which interventions to prioritise for more in-depth evaluation work, the criteria considered include:

  • The cost of the intervention
  • The existing evidence base for the intervention (including whether it is defined as ‘evidence-based’, ‘science-based’ or ‘innovation’)
  • The reach of the intervention
  • The potential for scale-up and spread of the intervention

 

Evaluation structure

In order to ensure a successful LEAP evaluation system, the process needs to be:

  • Robust, valid and able to hold up to scrutiny.
  • Informed by relevant partners and local priorities.
  • Relevant to the broader early years system, both in LEAP’s work and in the evidence it generates.

An evaluation governance structure is therefore required. This consists of a yearly review of evaluation work that involves partners with relevant expertise and knowledge. In addition, a monitoring and evaluation working group (an operationally driven group) meets quarterly to examine LEAP monitoring data and evidence, and makes recommendations to the Implementation Team Leadership (ITL).

LEAP values independent evaluation of its work. Evaluation must be, and must be seen to be, objective and credible. However, LEAP will only commission evaluation when we are clear about its purpose. We want to ensure that the work undertaken is of the highest quality and represents value for money. To achieve this, we will develop a local preferred suppliers list of potential evaluators and undertake a tendering process for much of the evaluation work.

An evaluation dissemination strategy will support the dissemination of findings from LEAP evaluation work across the LEAP partnership, Lambeth, the ABS community, and nationally, as appropriate.

As evidence emerges from evaluation work, the ITL will take the lead on examining the findings and will make recommendations to the board on programme-level changes, such as significant changes to programme KPIs or the decommissioning of interventions.

 

Overview of the LEAP 10-year evaluation framework

[Diagram: overview of the LEAP 10-year evaluation framework]

Join the LEAP Evaluation and Research Preferred Suppliers List

The call for suitably qualified and experienced individuals and organisations to join the LEAP Evaluation and Research Preferred Suppliers List has gone live.

Find out more.