Realist Evaluation for Impact Evaluation in South Africa
Monitoring and evaluation (M&E) of government performance and programmes has become less of a haphazard, knee-jerk reaction, a nice-to-have or an afterthought, and more of a deliberate, systematic and planned process. This is primarily because taxpayers are increasingly vocal, demanding that government account for its use of public funds. The public is increasingly dissatisfied with large budgets being spent on public programmes that fail to demonstrate conclusive results and clear impact.
It used to be that government would periodically report that millions, even billions, of South African Rands had been spent on certain ‘big ticket’ programmes, resulting in certain outputs. That was generally where the story ended. Now the public increasingly demands value for money and asks ‘so what?’. Accountability is demanded for the vast resources spent, and for demonstrating the ultimate outcomes and impacts in the lives of the intended beneficiaries, some of whom are among the poorest and most marginalised.
We hear a lot about the ‘results agenda’ and the ‘impact agenda’, first driven by the Millennium Development Goals and now by the Sustainable Development Goals. These agendas used to be driven in the public sector by international development cooperation and donor funds. Now, public sectors worldwide are taking the lead in monitoring and evaluation as a means of being an accountable state. This has led to the development of ‘country-driven’ rather than ‘donor-driven’ monitoring and evaluation systems.
In this vein, the South African public sector has over the past few years been successfully leading and embedding a country-led M&E system in government. A key aspect of the system is the evaluation of government programmes and policies, as informed by the National Evaluation Policy Framework, which identifies impact evaluation as one of the main evaluation foci. The framework prescribes a National Evaluation System (NES) that implements and provides oversight over public sector evaluations.
Evaluating impact
Evaluating the impact of programmes is important. The ‘impact agenda’ calls for impact evaluations that are relevant to policy-making and that communicate evidence providing clear policy direction. It encourages progress from the monitoring of programme outputs towards the evaluation of outcomes and impact. This has resulted in a focus on programme impact evaluation, driven by calls for evidence of ‘what works for whom, why, how, when and under what circumstances’.
Internationally, the appropriate methodological approach for impact evaluation remains a hotly debated issue. This is because some methodological approaches have been found wanting – by some – when used to assess attribution and causality, and ultimately to establish ‘what works’. As a result, policy-makers are left in the dark about the key drivers of programme success, or lack thereof. This has implications for applying programmes in other settings. To judge a programme’s impact, strong evidence of what works in terms of programme efficacy has become a necessity.
Impact evaluation in South Africa
Within this context, I recently conducted a study (publication forthcoming) exploring the methodological approaches applied in impact evaluations in South Africa. These approaches, together with the views of policy decision-makers, commissioners and implementers of evaluations, were investigated in order to establish how useful the evaluation results are in offering new insights. In addition, the study sought to establish the potential value of Realist Evaluation as a suitable approach in the methodological toolbox of impact evaluations in the South African public sector.
Emerging research findings indicate that impact evaluations are not yet widely represented within the National Evaluation System. The most prevalent types of evaluation currently carried out in the public sector are implementation and diagnostic evaluations. Impact evaluations of social programmes commissioned by the South African public sector have usually adopted experimental designs which, by virtue of their design, pay little attention to coherent programme theories of change.
A key finding from this study was that the expertise for the design of impact evaluations specifically for complex interventions is mostly outside the public sector. Impact evaluations that have been completed have largely been led by multinational expert teams who had the skills and know-how to design highly complex evaluations.
In addition, commissioners and implementers of evaluations in the public sector have indicated that evaluation methodologies and the way evaluations are designed pose critical limitations: they are not always appropriate to inform the needs of policy-makers. One such limitation is the absence of a theory of change that establishes how the programme works, in what context and under what conditions. Another is the limited utilisation of evaluation evidence in policy-making, as the policy cycle often progresses without the available evidence being diffused into it. More critically, public sector budgetary constraints determine whether, and what type of, evaluations are actually done.
The potential value of Realist Evaluation
Realist Evaluation (also called “Realistic Evaluation”) is located in the ‘methods branch’ of evaluation schools of thought. It is a theory-driven method that makes a programme’s theory of change explicit. It does this through an explanatory focus which seeks to understand and interrogate ‘what works, for whom, in what context and in what respects’. It specifies how the programme’s context – understood as a complex social system – combines with the programme’s mechanisms of change to produce the observed outcomes, often summarised as context–mechanism–outcome (CMO) configurations.
The contextual conditions under which programmes are implemented are critical. Social programmes are influenced by their surrounding social environments: the same programme may thrive in one setting and fail in another because of differing circumstances and contextual factors. A programme’s mechanism of change is largely influenced by the reasoning of the intervention’s intended stakeholders – the intended beneficiaries, programme staff, policy-makers and other actors involved in programme implementation. For example, a skills-training programme may succeed where participants believe a credible route to employment exists, and fail where they do not, even though the programme activities are identical.
When a programme is planned, its theory of change should explicitly indicate the pathways to change. If these propositions hold, the observed outcomes should be more or less as envisaged and in step with the programme’s overarching aims. If, however, programme stakeholders do not respond in accordance with the programme theory, the integrity of the supposed implementation chain is weakened.
The study also found that, within the wider international evidence-based policy-making arena, Realist Evaluation has emerged as a key contributor to the systematic review of policy evidence. While poor application and misinterpretation of the method persist in some instances, Realist Evaluation is increasingly applied to public sector interventions across diverse policy environments. It has been found most suitable for complex interventions where gaining insight into programme mechanisms and programme efficacy is a key objective.
These were among the conclusions of ‘Broadening the Range of Designs and Methods for Impact Evaluations’, a 2012 report commissioned by the UK Department for International Development (DFID).
Realist Evaluation, like other theory-driven approaches, enhances methodological rigour. It requires an advanced understanding of programme theory, strong research skills, and considerable time and resources.
A Realist Evaluation design cannot be a regular part of all evaluations. Some evaluations do not require this level of depth and rigour to answer their evaluation questions and may call for less probing strategies. A Realist Evaluation design can be gainfully adopted especially in the following cases:
First, where the evaluation questions seek knowledge and insight into how a programme works.
Second, where a programme is being implemented in a new context with no previous evidence of how it might work.
Third, where an existing programme is being adapted to a different context.
Fourth, where outcome patterns contradict those of prior implementations.
In these cases the approach can confirm, and provide empirical evidence of, how the programme works, why, under what circumstances, and who can benefit most from it.
Conclusions
The small number of impact evaluations conducted annually is a shortcoming of the National Evaluation System in South Africa, since impact evaluation is one of the evaluation types prescribed in the National Evaluation Policy Framework. This might point to capacity and capability challenges in conducting impact evaluations, which are arguably the most theoretically demanding and resource-intensive type of evaluation.
There are also indications of a current strong focus on programme design and implementation. This suggests that the NES is attentive to improving the performance information emanating from government policies and programmes. Quality baseline data from programme monitoring systems should enhance programme impact evaluations, and should in turn provide policy-makers with the evidence required to accurately assess programme failures and successes.
This is specifically relevant given the South African Government’s Outcomes Approach, which seeks to ensure that the measurement of outcomes and impacts, rather than activities and outputs, remains the key focus of government. In this environment, demand for impact evaluations can be expected to grow. It is therefore important to prioritise country-led and country-developed capacity and capability to conduct these evaluation designs.
In this context, where the capabilities of home-grown evaluators have to be strengthened and nurtured, high-level international expertise in impact evaluation should be sought and utilised where appropriate. However, this should be balanced with evaluation skills transfer and development that fosters the country’s own evaluation know-how.
Impact evaluation methodologies that enlighten policy-makers as to how and why programmes work should be preferred. Theory-based methods such as Realist Evaluation serve to open the ‘black box’ and provide the ‘enlightenment’ aspect that is sorely missing from many programme evaluations.
Lastly, the observation from Pawson and Tilley (1997:147) is illustrative in this regard:
Evaluation reports simply indicating whether or not there has been a change associated with the introduction of a programme should not be commissioned or accepted by policy-makers. They are of no value, since nothing can be learned from them about what and what not to do in the future. Evaluation reports must identify not only the changes associated with the introduction of a programme but also what brought them about.