Greg Armstrong --
Change is complex -- but this is not news. Is it reasonable to expect international development programmes and projects working in complex situations to report on results? A brief comment on recent discussions.
[Edited to update links, June 2018]
Are Aid Agency Requirements for Reporting on Complex Results Unreasonable?

A recent post titled "Pushing Back Against Linearity" on the Aid on the Edge of Chaos blog described a discussion among 70 development professionals at the Institute of Development Studies "...to reflect on and develop strategies for ’pushing back’ against the increasingly dominant bureaucratisation of the development agenda."
This followed a May 2010 conference in Utrecht exploring complexity and evaluation, and particularly whether complex situations, and the results of projects working in them, are susceptible to quantitative impact evaluation. That conference has been described in a series of blog postings at the Evaluation Revisited website and in two blog postings by Sarah Cummings at The Giraffe.
The more recent meeting, described on the Aid on the Edge of Chaos blog and in a very brief report by Rosalind Eyben of the IDS Participation, Power and Social Change Team (also posted at the Aid on the Edge of Chaos site), appears to have focused on "pushing back" against donors who insist on results-based reporting for complex social transformation projects. [Update 2018: These posts no longer seem to be available.] Given its brevity, the report of necessity did not explore in detail all of the arguments against managing for results in complex situations, but a more detailed exposition of some of these points can be found in Rosalind Eyben's earlier IDS publication, Power, Mutual Accountability and Responsibility in the Practice of International Aid: A Relational Approach.
Reporting on Complex Change

I think it would be a mistake to assume that the recent interest in impact evaluation means that most donors are ignorant of the complexity of development. Certainly, impact evaluations have real problems in incorporating, and "controlling for", the unknown realities and variables of complex situations. But I know very few development professionals inside donor agencies who actually express the view that change is in fact a linear process. Most agree that the best projects or programmes can do is make a contribution to change, although the bureaucratic RBM terminology, the often unclear results vocabulary, and the results chains and results frameworks these agencies use often obscure this.
Change is, indeed, complex, but this is not news. The difficulties of implementing complex innovations have been studied for years: first in agricultural innovation, and then more broadly in assessments of the implementation of public policy in the Johnson Administration's Great Society programmes. People like Pressman and Wildavsky, and more recently Michael Fullan, have been working within this complexity for years to find ways to achieve results in complex contexts, and to report on them.
It is reasonable to expect that anyone using funds provided by the public think and plan clearly, and explain in a similarly clear manner what results they hope, and plan, for. When assessing results, certainly, we often find that complex situations produce unexpected results, and that the results are often incomplete. But at the very least we have an obligation, whatever our views of change and of complexity, to explain what we hope to achieve, later to explain what results did occur, and to say whether there is a reasonable argument that our work contributed to them. Whether we use randomized control groups in impact evaluation, network analysis, contribution analysis, the Most Significant Change process, participatory impact assessment, or any of a number of other approaches, some assessment, and some reasonable attempt at coherent reporting, has to be made.
The report on the Big Push Back meeting cites unreasonable indicators ("number of farmers contacted, number of hectares irrigated") as arguments against the type of reporting aid agencies require, but these examples are unconvincing because, of course, they are not indicators of results at all, but indicators of completed activities. The interim results would be changes in production, and the long-term results would be the changes in nutrition or health to which these activities contributed, or possibly unanticipated and negative results such as economic or social dislocation, which can only be reported by the villagers or farmers themselves, probably using coherent participatory or qualitative research.
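To make the distinction concrete, the sketch below (my own illustration, not part of the original discussion or of any agency's framework) treats a hypothetical irrigation project as a simple results chain, separating activity counts from the interim and long-term result indicators that would actually tell us whether anything changed. The project name, indicator wording and structure are all illustrative assumptions.

```python
# A minimal sketch, assuming a hypothetical irrigation project:
# activity indicators count what a project did, while result indicators
# describe the changes that followed. All names here are illustrative.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Indicator:
    description: str
    level: str  # "activity", "interim result", or "long-term result"


@dataclass
class ResultsChain:
    project: str
    indicators: List[Indicator] = field(default_factory=list)

    def results_only(self) -> List[Indicator]:
        """Drop activity counts, which measure effort rather than change."""
        return [i for i in self.indicators if i.level != "activity"]


irrigation = ResultsChain(
    project="Hypothetical irrigation project",
    indicators=[
        Indicator("Number of farmers contacted", "activity"),
        Indicator("Number of hectares irrigated", "activity"),
        Indicator("Change in crop production reported by farmers", "interim result"),
        Indicator("Change in household nutrition or health status", "long-term result"),
    ],
)

for indicator in irrigation.results_only():
    print(f"{indicator.level}: {indicator.description}")
```

The point of the sketch is simply that a report built only on the "activity" rows would say nothing about results, however precisely those activities were counted.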
Qualitative data are, indeed, often the best sources of information on change, and I say this as someone who has used qualitative methods as my primary research approach for over 30 years. But they should not be used casually, on the assumption that qualitative methods provide a quick and easy escape from reporting with quantitative data. When qualitative data are used responsibly, they are used in a careful and comprehensive manner, and to be credible they should be presented as more than simply anecdotal evidence from isolated individuals. Sociologists have been putting qualitative data together in a convincing manner for decades, and so too have many development project managers.
The bottom line:
This is certainly not a one-sided debate. To a limited extent, the meeting notes reflect this, and Catherine Krell's July 2010 article on the sense of disquiet she felt after the initial conference in Utrecht, about how to balance an appreciation of complexity with the need to report on results, discusses several of the issues that must be confronted if the complexity of development is to be reflected in results reporting. It is worth noting that, at this date, hers is the most recent post on the Evaluation Revisited website.
The report on the "Push Back" conference notes that one participant in the meeting "commented that too many of us are ‘averse to accounting for what we do. If we were more rigorous with our own accountability, we would not be such sitting ducks when the scary technocrats come marching in’."
Whatever process is used to frame reporting questions, collect data and report on results, identifying results is in everybody's interest. It will be interesting to follow this debate.
Further reading on complexity, results and evaluation:
There is a lot of material available on these topics, in addition to those referenced above, but the following provide an overview of some of the issues:

Alternative Approaches to the Counterfactual (2009) - A very brief summary, arising from a group discussion, of 4 "conventional" approaches to impact evaluation using control groups, and 27 possible alternatives where the reality of programme management makes this impractical.
Designing Initiative Evaluation: A Systems-oriented Framework for Evaluating Social Change Efforts (2007) - A Kellogg Foundation summary of four approaches to evaluating complex initiatives.
A Developmental Evaluation Primer (2008) by Jamie A.A. Gamble, for the McConnell Foundation, explains Michael Quinn Patton's approach to evaluation of complex organizational innovations.
Using Mixed Methods in Monitoring and Evaluation (2010) by Michael Bamberger, Vijayendra Rao and Michael Woolcock -- A World Bank Working Paper that explores how combining qualitative and quantitative methods can mitigate the limitations of purely quantitative impact evaluations.