Evaluation in the public sector can be a highly complex and challenging process, but done well it provides valuable independent evidence of whether a service or policy intervention delivers value for money (VFM).

Doing it right is the key. RSM uses best-practice evaluation techniques, based on frameworks such as the Magenta Book, OASIS and the GCS Evaluation framework. These include tools such as the Theory of Change, which maps how your programme expects to deliver the required changes and, in doing so, identifies the areas to measure.

A range of methods can be used to test an intervention, the most robust being experimental or quasi-experimental designs. Experimental designs, also known as Randomised Controlled Trials (RCTs), randomly assign participants to either a treatment group that receives the intervention or a control group that does not, and then compare the results. Where an RCT is not feasible or appropriate, an alternative is a quasi-experimental approach. These come in different forms but are often performed retrospectively, matching those in the treatment group with a control group drawn from the untreated population that shares the same observed baseline characteristics. Differences in measured outcomes between the two groups can then be attributed, with reasonable confidence, to the intervention rather than to other factors.
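
For illustration, here is a minimal sketch of the matching idea in Python: each treated unit is paired with the untreated unit closest to it on standardised baseline characteristics, and the effect is estimated as the average outcome gap across pairs. The synthetic data, variable names and simple nearest-neighbour rule are assumptions for the example, not a description of any specific engagement.

```python
# A minimal sketch of nearest-neighbour matching on observed baseline
# characteristics, as used in a quasi-experimental design.
# All data here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic baseline covariates (e.g. age, prior outcome level).
n_treated, n_control = 50, 200
x_treated = rng.normal(loc=[40, 10], scale=[8, 3], size=(n_treated, 2))
x_control = rng.normal(loc=[45, 9], scale=[10, 3], size=(n_control, 2))

# Outcomes: controls follow the baseline model; treated units receive
# a true intervention effect of +2.0 on top of it.
y_control = (5 + 0.1 * x_control[:, 0] + 0.5 * x_control[:, 1]
             + rng.normal(0, 1, n_control))
y_treated = (5 + 0.1 * x_treated[:, 0] + 0.5 * x_treated[:, 1] + 2.0
             + rng.normal(0, 1, n_treated))

# Standardise covariates so each carries equal weight in the distance.
mu, sd = x_control.mean(axis=0), x_control.std(axis=0)
zt, zc = (x_treated - mu) / sd, (x_control - mu) / sd

# For each treated unit, find the control unit with the most similar
# baseline characteristics (smallest Euclidean distance).
dists = np.linalg.norm(zt[:, None, :] - zc[None, :, :], axis=2)
match = dists.argmin(axis=1)

# Average treatment effect on the treated: mean outcome gap across pairs.
att = (y_treated - y_control[match]).mean()
print(f"Estimated effect: {att:.2f} (true effect: 2.00)")
```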

How we help

We have experienced statisticians and economists who can design either experimental or quasi-experimental methods to meet the needs of an evaluation. Options include, for example:

  • identifying counterfactual groups to estimate the impact of an intervention (comparing outcomes between treatment and non-treatment groups);
  • comparing outcomes before and after the intervention within the same group (Interrupted Time Series Analysis uses time-series data to test whether the trend in outcomes changes after an intervention is introduced; see the first sketch below this list);
  • setting up a synthetic control group, which uses historical data to construct an estimate of what would have happened without the intervention; and
  • comparing the difference in the change in outcomes over time between a treated group and a comparison group (a difference-in-differences design; see the second sketch below this list).
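
To make the interrupted time series option concrete, the following minimal sketch fits a segmented regression to synthetic monthly data; the intervention date, effect sizes and noise level are illustrative assumptions only.

```python
# A minimal interrupted time series sketch: segmented regression on
# synthetic monthly data. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(48)                      # 48 months of observations
post = (t >= 24).astype(float)         # intervention at month 24

# Outcome: baseline trend of 0.2/month, then a level jump of 3.0 and a
# trend change of 0.3/month after the intervention, plus noise.
y = 50 + 0.2 * t + 3.0 * post + 0.3 * post * (t - 24) + rng.normal(0, 1, 48)

# Design matrix: intercept, underlying trend, post-intervention level
# change, and post-intervention trend change.
X = np.column_stack([np.ones_like(t), t, post, post * (t - 24)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated level change: {coef[2]:.2f} (true: 3.00)")
print(f"Estimated trend change: {coef[3]:.2f} (true: 0.30)")
```

And to illustrate the final option, a minimal difference-in-differences sketch, again on synthetic data with illustrative numbers: the estimate is the change in outcomes for the treated group minus the change for the comparison group, which nets out both the fixed gap between groups and the common time trend.

```python
# A minimal difference-in-differences sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(7)
n = 500

group = rng.integers(0, 2, n)          # 1 = treated group, 0 = comparison
period = rng.integers(0, 2, n)         # 1 = after intervention, 0 = before

# Outcome: fixed group gap of 1.0, common time trend of 0.5, and a true
# effect of 2.0 for treated units observed after the intervention.
y = (10 + 1.0 * group + 0.5 * period + 2.0 * group * period
     + rng.normal(0, 1, n))

# Mean outcome in each of the four group/period cells.
means = {(g, p): y[(group == g) & (period == p)].mean()
         for g in (0, 1) for p in (0, 1)}

# DiD: (treated after - treated before) - (comparison after - comparison before)
did = (means[(1, 1)] - means[(1, 0)]) - (means[(0, 1)] - means[(0, 0)])
print(f"Difference-in-differences estimate: {did:.2f} (true effect: 2.00)")
```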

The exact methods used will depend on the data available, the time period for the evaluation and the budget. The more comprehensive the evaluation, the more reliable the results.

Jenny Irwin
Partner, Strategy, Economics and Policy Consulting