
Friday, December 31, 2010

26 lessons about RBM from the 1990's remain valid today

Greg Armstrong --

Lessons learned about RBM in the last century remain valid in 2011.

Implementing Results-Based Management: Lessons from the Literature – Office of the Auditor-General of Canada

Level of difficulty: Moderate
Primarily useful for: Senior Managers of partner Ministries and aid agencies
Length: Roughly 18 pages (about 9,000 words)
Most useful section: Comments on the need for a performance management culture
Limitations: Few details on implementation mechanisms

The Office of the Auditor-General of Canada deals with the practical implications of results-based management, and with the failure of agencies to use RBM appropriately, as it conducts performance audits of a large number of Canadian government agencies. The Auditor-General's website, particularly the Audit Methodology section, holds several documents under “Discussion papers” and “Studies and Tools” that are a reminder that many of the lessons learned fifteen years ago about RBM remain relevant today.


Who this is for:


The paper Implementing Results-Based Management: Lessons from the Literature  provides a concise and relatively jargon-free summary of lessons from practical experience about how to implement results-based management. Its purpose, as the introduction notes, was to “assess what has worked and what has not worked with respect to efforts at implementing results-based management”.   It is shorter and more easily read than some of the useful but much longer publications on RBM produced since, and could be useful to agency leaders wanting a reminder of where the major pitfalls lie as they attempt to implement results-based management and results-based monitoring and evaluation systems.

The lessons reported on here about implementation of results based management remain as valid today as they were in 1996 when a first draft was produced, and in 2000, when this document was released.




Many of the lessons described briefly here are derived from studies of the field activities of agencies from North America, Europe, and the Pacific, going back at least twenty years. The 2000 paper is based on reviews of 37 studies on lessons learned about RBM, themselves published between 1996 and 1999, and builds on the earlier study, referenced briefly here, which reviewed 24 more studies produced between 1990 and 1995.

More recent reviews of how RBM or Management for Development Results is -- or should be -- implemented in agencies such as the United Nations, including Jody Kusek and Ray Rist’s 2004 Ten Steps to a Results-Based Monitoring and Evaluation System and Alexander McKenzie’s 2008 study on problems in implementing RBM at the UN country level, build on and elaborate many of the points made in these earlier studies, moving from generalities to more specific suggestions on how to make operational changes.

The 2000 paper from Canada's Office of the Auditor-General lists 26 lessons on how to make RBM work, and many of them repeat and elaborate on the lessons learned earlier. The lessons on effective results-based management as they are presented here are organized around three themes:

  • Promoting favourable conditions for implementation of results-based management
  • Developing a results-based performance measurement system
  • Using performance information

A brief paraphrased summary of these lessons will make it obvious where there are similarities to the more detailed work on RBM and results-based monitoring and evaluation done in subsequent years. My comments are in italics:



Promoting Favourable Implementation Conditions for RBM



1. Customization of the RBM system: Simply replicating a standardised RBM system won’t work. Each organization needs a system customized to its own situation.

  • The literature on implementation of innovations, going back to the 1960’s confirms the need for adaptation to local situations as a key element of sustained implementation.

2. Time required to implement RBM: Rushing implementation of results-based management doesn’t work. The approach needs to be accepted within the organization, indicators take time to develop, data collection on the indicators takes more time, and results often take more time to appear than aid agencies allocate in a project cycle.

  • Many of the current criticisms of results-based management in aid agencies focus on the difference between the time it takes to achieve results, and aid agencies’ shorter reporting timelines.


3. Integrating RBM with existing planning: Performance measures and indicators should be integrated with strategic planning and tied to organizational goals and management needs, and performance measurement and monitoring need high-level endorsement from policy makers.

  • Recent analyses of problems in the UN reporting systems repeat what was said in articles published as long ago as 1993.  These lessons have evidently not been internalised in some agencies.

4. Indicator data collection: We should build management systems that support indicator data collection and results reporting and, where possible, build on existing data collection procedures.

5. Costs of implementing RBM: Building a useful results-based management system is not free. The costs need to be recognised and concrete budget support provided from the beginning of the process.

  • This is something most aid agencies have still not dealt with. They may put in place substantial internal structures to support results reporting, but shy away from providing implementing agencies with the necessary resources of time and money for things such as baseline data collection.

6. Location for RBM implementation:  There are mixed messages on where to locate responsibility for coordinating implementation of RBM.  Some studies suggested that putting control of the performance measurement process in the financial management or budget office “may lead to measures that will serve the budgeting process well but will not necessarily be useful for internal management”.  Others said that responsibility for implementation of the RBM system should be located at the programme level to bring buy-in from line managers, and yet another study made the point that the performance management system needs support from a central technical agency and leadership from senior managers.

  • The consensus today is that -- obviously in a perfect world -- we need all three:  committed high-level leadership, technical support and buy-in from line managers.

7. Pilot testing a new RBM system: Testing a new performance management system in a pilot project can be useful before large-scale implementation – if the pilot reflects the real-world system and participants.

8. Results culture: Successful implementation requires not simply new administrative systems and procedures but the development of a management culture, values and behaviour that really reflect a commitment to planning for and reporting on results.


9. Accountability for results: Accountability for results needs to be redefined, holding implementers responsible not just for delivering outputs, but at least for contributing to results, and for reporting on what progress has been made on results, not just on delivery of outputs.

  • The need to focus on more than just deliverable outputs to make results-based management a reality was mentioned in some articles in the early 1990’s, and reiterated in OECD documents ten years later, yet it remains an unresolved issue for some aid agencies, which still require reports only on deliverables rather than on results.


10. Who will lead implementation of RBM: Strong leadership is needed from senior managers to sustain implementation of a new performance management system.

  • This remains a central concern in the implementation of results-based management and performance assessment.  Recent reviews of aid agency performance, such as the evaluation of RBM at UNDP, show that strong and consistent leadership, committed to and involved in the implementation of a new RBM system, remains a continuing issue.

11. Stakeholder participation: Stakeholder participation in the implementation of RBM  -- both from within and from outside of the organization – will strengthen sustainability, by building commitment, and pointing out possible problems before they occur.

  • There is now a general acceptance – in theory – of the need for stakeholder participation in the development of a results-based performance management system but, in practice, many agencies are unwilling to put the resources – again, time and money – into genuine involvement of stakeholders in analysis of problems, collection of baseline data on the problems, specification of realistic results, and ongoing data collection, analysis and reporting.


12. Technical support for RBM: Training support is needed if results-based systems are to be effectively implemented, because many people don’t have experience in results-based management. Training can also help change the organizational culture, but training also takes time. Introducing new RBM concepts can be done through short-term training and material development, but operational support for defining objectives, constructing performance indicators, using results data for reporting, and evaluation, takes time, and sustained support.

  • A fundamental lesson from studies on the implementation of complex policies and innovations, dating back to the 1970’s, is that we must provide technical support if we want a new system, policy or innovation to be sustained -- we can’t just toss it out and expect everyone else to adopt and use it.
  • Some aid agencies have moved to create internal technical support units to help their own staff cope with the adoption and implementation of results-based management, but few are willing to provide the same technical support to their stakeholders and implementation partners.


13. Evaluation expertise: Find the expertise to provide this support for management of the RBM process on a continuous basis during implementation. Often it can be found within the organization, particularly among evaluators.

14. Explain the purpose of performance management: Explain the purpose of implementing a performance management system clearly. Explain why it is needed, and the role of staff and external stakeholders.


Developing Performance Measurement Systems



15. Keep the RBM system simple: Overly complex systems are one of the biggest risks to successful implementation of results-based management. Keep the number of indicators to a few workable ones but test them, to make sure they really provide relevant data.

  • Most RBM systems are too complex for implementing organizations to easily adopt, internalize and implement. Yet they need not be. Results themselves may be part of a complex system, but simpler language can be used to explain the context, problems and results, and jargon can be discarded where it does not translate -- literally into other languages, but also into the real-world needs of implementers and, ultimately, the people who are supposed to benefit from aid.


16. Standard RBM terms: Use a standard set of terms to make comparison of performance with other agencies easier.


  • The OECD DAC did come up with a set of harmonized RBM definitions in 2002, but donors continue to use the terms in different ways, and, as I have noted in earlier posts, have widely varying standards (if any) on how results reporting should take place.  So simply using standardised terms is not itself sufficient to make performance comparisons easy.


17. Logic Models: Use of a Logic Chart helps participants and stakeholders understand the logic of results, and identify risks.

  • Logic Models (as some agencies refer to them) were being used, although somewhat informally, 20 years ago in the analysis of problems and results for aid programmes. Some agencies such as CIDA have now brought the visual Logic Model to the centre of project and programme design, with some positive results. The use of the logic model does indeed make the discussion of results much more compelling for many stakeholders than the use of the Logical Framework did.

18. Accountability for results: Make sure performance measures and reporting criteria are aligned with decision-making authority and accountability within the organization. Indicator data should not be so broad that they are useless to managers. If managers are accountable for results, then they need the power and flexibility to influence results. Managers and staff must understand what they are responsible for, and how they can influence results. If the performance management system is not seen as fair, this will undermine implementation and sustainability of results based management.


19. Credible indicator data:   Data collected on indicators must be credible -- reliable and valid.   Independent monitoring of data quality is needed for this.

  • This remains a major problem for many development projects, where donors often do not carefully examine  or verify the reported indicator data.

20. Set targets:  Use benchmarks and targets based on best practice to assess performance.

  • Agencies such as DFID and CIDA are now making more use of targets in their performance assessment frameworks.

21. Baseline data:   Baseline data are needed to make the results reporting credible, and useful.

  • Agencies such as DFID are now concentrating on this. But many other aid agencies continue to let baseline data collection slide until late in the project or programme cycle, when it is often difficult or impossible to collect.  Some even focus on the reconstruction of baseline data during evaluations -- a sometimes weak and ultimately last-ditch attempt to salvage credibility from inconsistent and unstructured results reporting.
  • Ultimately, of course, it is the aid agencies themselves which should collect the baseline data as they identify development problems.  What data do international aid agencies have to support the assumptions that, first, there is a problem, and second, that the problem is likely to be something that could usefully be addressed with external assistance? All of this logically should go into project design. But once again, most aid agencies will not put the resources of time and money into project or programme design to do what will work.
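Lessons 19 to 21 have a simple practical implication: every indicator should carry its baseline and its target alongside whatever value is later reported against it. The sketch below is purely illustrative -- the record structure, field names and numbers are my own, not drawn from the Auditor-General's paper -- but it shows roughly the minimum that credible, target-based results reporting requires:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndicatorRecord:
    """One performance indicator, carrying the baseline and target
    that make later reporting against it credible (lessons 19-21)."""
    name: str
    baseline: float                 # value measured before the intervention starts
    target: float                   # benchmark the project commits to reaching
    actual: Optional[float] = None  # latest value collected during monitoring

    def progress(self) -> Optional[float]:
        """Share of the baseline-to-target distance covered so far."""
        if self.actual is None or self.target == self.baseline:
            return None
        return (self.actual - self.baseline) / (self.target - self.baseline)

# Hypothetical example: household access to clean water in one district.
water_access = IndicatorRecord(
    name="Households with access to clean water (%)",
    baseline=42.0,
    target=70.0,
    actual=55.0,
)
print(f"{water_access.progress():.0%} of the way from baseline to target")  # 46%
```

Without the baseline and target, the reported figure on its own says nothing about whether anything has changed -- which is precisely the reporting weakness the paper describes.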

Using Performance Information



22. Making use of results data: To be credible to staff and stakeholders, performance information needs to be used – and be seen to be used. Performance information should be useful to managers and demonstrate its value.

  • The issue of whether decisions are based on evidence or on political or personal preferences remains important today, not just for public agencies but, as has recently been argued, for private aid as well.


23. Evaluations in the RBM context: Evaluations are needed to support the implementation of results based management. “Performance information alone does not provide the complete performance picture”. Evaluations provide explanations of why results are achieved, or why problems occur. Impact evaluations can help attribute results to programmes. Where performance measurement is seen to be too costly or difficult, more frequent evaluations will be needed, but where evaluations are too expensive, a good performance measurement system can provide management with data to support decision making.

  • Much of this is more or less accepted wisdom now.  The debate over the utility of impact evaluations, related primarily to their sometimes considerable complexity and cost, nevertheless continues.

24. Incentives for implementing RBM: Some reward for staff -- financial or non-financial -- helps sustain change. This is part of the perception of fairness, because “accountability is a two way street”. The most successful results-based management systems are not punitive, but use information to help improve programmes and projects.

25. Results reporting schedule: Reports should actually use results data, and regular reporting can help staff focus on results. But “an overemphasis on frequent and detailed reporting without sufficient evidence of its value for public managers, the government, parliament and the public will not meet the information needs of decision-makers.”


26. Evaluating RBM itself: The performance management system itself needs to be evaluated at regular intervals, and adjustments made.


Limitations:

This study is a synthesis of secondary data (as are many of the studies that followed it), a compilation of common threads; it is not a critical analysis of the data, and it is not itself based on primary data.

It is apparently only available as a web page, not as a downloadable document. If you print it or convert it to an electronic document, it runs about 18 pages.

The bottom line:

The basic lessons about implementation of RBM were learned, apparently, two decades ago, and continue to be reflected throughout the universe of international aid agency documents, such as the Paris Declaration on Aid Effectiveness, but concrete action to address these lessons has been slow to follow.

This article still provides a useful summary of the major issues that need to be addressed if coherent and practical performance management systems are to be implemented in international aid organizations, and with their counterparts and implementing organizations.


Further reading on Lessons learned about RBM



OECD’s 2000 study: Results-based Management in the Development Cooperation Agencies: A review of experience (158 p), summarizes much of the experience of aid agencies to that point, and for some agencies not much has changed since then.

The World Bank's useful 2004, 248-page Ten Steps to a Results-Based Monitoring and Evaluation system written by Jody Kusek and Ray Rist, is a much more detailed and hands-on discussion of what is needed to establish a functioning performance management system. [Once available for free download, it can now be read, but not apparently downloaded, from the OECD website. It is available for purchase at several sites.]

John Mayne’s 22-page 2005 article Challenges and Lessons in Results-Based Management summarises some of the issues arising between 2000 and 2005.  He contributed to the earlier Auditor-General's report, and many others. [Update, June 2012: This link works sometimes, but not always.]

The Monitoring for Development Results website has three reports on lessons learned at the country level during the implementation of results-based management, the most recent published in 2008.

The 2009 IBM Center for the Business of Government’s 32-page Moving Toward Outcome-Oriented Performance Measurement Systems written by Kathe Callahan and Kathryn Kloby provides a summary of lessons learned on establishing results-oriented performance management systems at the community level in the U.S., but many of the lessons would be applicable on a larger scale and in other countries.

Simon Maxwell’s October 21, 2010 blog, Doing aid centre-right: marrying a results-based agenda with the realities of aid  provides a number of links on the lessons learned, both positive and negative, about results-based management in an aid context.




Thursday, September 30, 2010

Reporting on complex results: A short comment

Greg Armstrong --

Change is complex -- but this is not news.  Is it reasonable to expect international development programmes and projects working in complex situations to report on results?  A brief comment on recent discussions.

Are Aid Agency Requirements for Reporting on Complex Results Unreasonable?


A recent post titled "Pushing Back Against Linearity" on the Aid on the Edge of Chaos Blog described a discussion among 70 development professionals at the Institute of Development Studies "...to reflect on and develop strategies for ’pushing back’ against the increasingly dominant bureaucratisation of the development agenda."

This followed a May 2010 conference in Utrecht, exploring the issues of complexity and evaluation, particularly the issue of whether complex situations, and the results associated with projects in such situations, are susceptible to quantitative impact evaluations. That conference has been described in a series of blog postings at the Evaluation Revisited website and in two blog postings by Sarah Cummings at The Giraffe.

The more recent meeting described by the Aid on the Edge of Chaos blog, and in a very brief report by Rosalind Eyben of the IDS Participation, Power and Social Change Team, which can be found at the Aid on the Edge of Chaos site, appears to have focused on "pushing back" against donors insisting on results-based reporting in complex social transformation projects.  This report, given its brevity, of necessity did not explore in detail all of the arguments against managing for results in complex situations, but a more detailed exposition of some of these points can be found in Rosalind Eyben's earlier IDS publication Power, Mutual Accountability and Responsibility in the Practice of International Aid: A Relational Approach.



Reporting on Complex Change



I think it would be a mistake to assume that the recent interest in impact evaluation means that most donors are ignorant of the complexity of development.  Certainly, impact evaluations have real problems in incorporating, and "controlling for" the unknown realities and variables of complex situations.  But I know very few development professionals inside donor agencies who actually express the view that change is in fact a linear process.  Most agree that the best projects or programmes can do is make a contribution to change, although the bureaucratic RBM terms, often unclear results terminology and the results chains and results frameworks used by these agencies often obscure this.

Change is, indeed, complex, but this is not news.  The difficulties of implementing complex innovations have been studied for years -- first in agricultural innovation, then more broadly in assessments of the implementation of public policy in the Johnson Administration's Great Society programmes.  People like Pressman and Wildavsky, and more recently Michael Fullan, have been working within this complexity for years to find ways to achieve results in complex contexts, and to report on them.

It is reasonable to expect that anyone using funds provided by the public should think and plan clearly, and explain in a similarly clear manner what results we hope, and plan, for.  When assessing results, certainly, we often find that complex situations produce unexpected results, which are also often incomplete.  But at the very least we have an obligation, whatever our views of change and of complexity, to explain what we hope to achieve, later to explain what results did occur, and whether there is a reasonable argument to be made that our work contributed to them.  Whether we use randomized control groups in impact evaluation, Network Analysis, contribution analysis, the Most Significant Change process, participatory impact assessment, or any of a number of other approaches, some assessment, and some reasonable attempt at reporting coherently, has to be made.

The report on the Big Push Back meeting cites unreasonable indicators ("number of farmers contacted, number of hectares irrigated") as arguments against the type of reporting aid agencies require, but these examples are unconvincing because, of course, they are not indicators of results at all, but indicators of completed activities.  [The interim results would be changes in production, and the long-term results the changes in nutrition or health to which these activities contributed -- or possibly unanticipated and negative results, such as economic or social dislocation, that can only be reported by villagers or farmers themselves, probably using coherent participatory or qualitative research.]

Qualitative data are, indeed, often the best sources of information on change -- and I say this as someone who has used them as my primary research approach for over 30 years -- but they should not be used casually, on the assumption that qualitative methods provide a quick and easy escape from reporting with quantitative data.  When qualitative data are used responsibly, it is in a careful and comprehensive manner, and to be credible they should be presented as more than simply anecdotal evidence from isolated individuals.  Sociologists have been putting qualitative data together in a convincing manner for decades, and so too have many development project managers.


The bottom line:

This is certainly not a one-sided debate.  To a limited extent, the meeting notes reflect this, and Catherine Krell's July 2010 article on the sense of disquiet she felt after the initial conference in Utrecht, about how to balance an appreciation of complexity with the need to report on results, discusses several of the issues that must be confronted if the complexity of development is to be reflected in results reporting.  It is worth noting that, at this date, hers is the most recent post on the Evaluation Revisited website.

The report on the "Push Back" conference notes that one participant in the meeting "commented that too many of us are ‘averse to accounting for what we do. If we were more rigorous with our own accountability, we would not be such sitting ducks when the scary technocrats come marching in’."

Whatever process is used to frame reporting questions, collect data and report on results, the process of identifying results is in everybody's interest.  It will be interesting to follow this debate.

I will review the literature on impact evaluation in a future post.



Further reading on complexity, results and evaluation:

There is a lot of material available on these topics, in addition to those referenced above, but the following provide an overview of some of the issues:

Alternative Approaches to the Conventional Counterfactual (2009) - A very brief summary, arising from a group discussion, of 4 "conventional" approaches to Impact evaluation using control groups, and 27 possible alternatives where the reality of programme management makes this impractical.

Designing Initiative Evaluation: A Systems-oriented Framework for Evaluating Social Change Efforts (2007) - A Kellogg Foundation summary of four approaches to evaluating complex initiatives.

A Developmental Evaluation Primer (2008) by Jamie A.A. Gamble, for the McConnell Foundation, explains Michael Quinn Patton's approach to evaluation of complex organizational innovations.

Using Mixed Methods in Monitoring and Evaluation (2010) by Michael Bamberger, Vijayendra Rao and Michael Woolcock -- A World Bank Working Paper that explores how combining  qualitative and quantitative methods in impact evaluations can mitigate the limitations of quantitative impact evaluations.


Friday, August 27, 2010

Bilateral Results Frameworks --2: USAID, DFID, CIDA and EuropeAid

Greg Armstrong --

[Edited to update links August 2016]
Here's the question: Will bilateral aid agencies hold the multilaterals to account for results or not? UN agencies’ results reporting is inconsistent, and the results frameworks of SIDA, AUSAID and DANIDA remain ambiguous.  This post reviews results frameworks from USAID, DFID, CIDA and EuropeAid.

Level of Difficulty: Moderate to complex
Primarily useful for:  Project managers, national partners
Length:  23 documents, 1,444 p.
Most useful: CIDA RBM Guide, DFID LFA Guide and EuropeAid Capacity Development toolkit
Limitations: Mounds of bureaucratic language in many of the bilateral documents make it difficult to identify and take effective guidance from potentially useful material.

Who these materials are for

Project managers, evaluators and national partners trying to understand how USAID, DFID, CIDA and EuropeAid define results and frame their approaches to RBM.

Background: Ambiguous results chains at the UN, and some bilateral agencies.


In my previous three posts, I examined how vague UN RBM frameworks can provide the rationale for some agencies to avoid reporting on results and to focus instead simply on describing their completed activities, and how similar ambiguities in the results frameworks and definitions of AusAid, DANIDA and SIDA would make it difficult for them to hold the UN agencies to higher standards of results reporting.  The fourth and final post in this series briefly surveys how the results frameworks of four more bilateral agencies compare to those of the OECD/DAC and UNDAF. This review covers only, as I noted in past posts, those agencies where information, in English, could be obtained in reasonable time from their own or associated -- and publicly accessible -- websites.   For those who want more detail, links to the relevant donor agency RBM sites can be found at the bottom of this article.

I am proceeding from the premise, again, that “cause and effect” is not a reasonable description of what is intended by a results chain, but rather that it is a notional concept of how activities can contribute to intended results.

The USAID Results Framework



Length: 4 documents, 248 p.
Most useful:  29 pages of links and references in the Guide on Planning.

Despite its commitment to improved knowledge management, noted in the OECD DAC peer review of the United States [99 p.], last conducted in 2006, USAID remains, at least at this writing in 2010, one of the bilateral agencies for which it is most difficult to find clear RBM guidelines.  USAID has also recently come under criticism for its severe editing of reports from collaborating partners.

And it is not just USAID itself which is difficult to understand, but to some extent also organizations such as the Millennium Challenge Corporation, which, again at this writing in 2010, has a largely unintelligible list of indicators for what it calls results -- listing the number of farmers trained, or millions of dollars in irrigation contracts funded -- in other words, completed activities -- rather than listing the changes (results) these activities lead to.  If you can make sense, for example, of the Tanzanian results table, you are smarter than I am, or at least you have a lot of free time, or both. [Editing note, January 2012:  That results table is, as of January 2012, no longer available, and a greater emphasis now appears to be put on longer-term results, and impact evaluations, at the MCC website.  But that is a review for another day.]

Among the hundreds of documents on the USAID website, there are only a few which have any kind of clear results definitions and those definitions are buried deep in bureaucratic jargon.  

The tasks for USAID are, of course, complicated by the integration of its work planning with that of the State Department, and the added complexities of integrating humanitarian assistance with military operations – with all of the ambiguity that creates about the nature of results.

The USAID Guide on Planning [77 p.], updated in 2010, presents its Results Framework as moving from activities to Outputs, to Intermediate Results, and finally to Assistance Objectives -- what other agencies might call Impacts or Ultimate Outcomes. So, the USAID results chain would appear to look like this:

Activities → Outputs → Intermediate Results → Assistance Objectives

The 2006 DAC peer review noted that the USAID reporting system

“…focuses mainly on “physical deliverables” (e.g. numbers of schools, numbers of clinics, etc.). With the new orientation in US foreign policy, there is an opportunity to measure development assistance performance more in outcomes than in physical deliverables.” 

By 2010, some progress may have been made on this.  The Automated Directives System 200, USAID Introduction to Programming Policy [71 p.], revised in 2010, defines an Output as:

“A tangible, immediate, and intended product or consequence of an activity within USAID control. Examples of outputs include people fed, personnel trained, better technologies developed, and new construction.”  [p. 67]

This definition seems to clearly define Outputs as completed activities.

While the USAID “Guide on Planning” definition of Outputs differs from this slightly and leaves some room for possible results as “people able to exercise a specific skill, buildings built, or better technologies developed and implemented…”, it also notes that 

“…it is important to understand the difference between Outputs and results….In differentiating outputs from results, it can be useful to think of results as developmentally significant changes that affect a broad segment of society, while outputs are lower-level steps that are essential in achieving these changes.” [p. 27]

A similar distinction between results and Outputs occurs in the USAID Introduction to Programming Policy where results are defined as:  

“A significant, intended, and measurable change in the condition of a customer, or a change in the host country, institutions, or other entities that will affect the customer directly or indirectly. Results are typically broader than USAID-funded outputs….”[p. 70]

This suggests that simply reporting on completed activities or products would not be acceptable within the evolving USAID context. Given that the United States provides such a large percentage of the assistance available to agencies such as UNDP, it remains to be seen whether this focus on results has in any way been communicated to the UN agencies.

The DFID Results Framework


Length: 5 documents, 378 p.
Most useful: Guidance on Using the Revised LFA

The DFID results chain appears, from the documents I have seen, to look this way:

Activities → Outputs → Purpose → Goal

In 2005, DFID had in its “Guidance on Evaluation and Review” for staff [87 p.] a definition of Outputs similar to those of the OECD/DAC and UN:
 “The products, capital goods and services which result from a development intervention; may also include changes resulting from the intervention which are relevant to the achievement of outcomes.” 

While not easily available on DFID’s own website (this link comes from the Monitoring and Evaluation News archives), the 2009 DFID How-to Note “Guidance on Using the Revised LFA” [37 p.] changed the definition of Outputs to focus on deliverables:
“Outputs are the specific, direct deliverables of the project. These will provide the conditions necessary to achieve the Purpose”. 

Examples provided in this DFID LFA guide lead to the conclusion that there is still room for looking at Outputs in the DFID context, as both completed activities and as results.  

For example – deliverables or completed activities:

“Output 1: All health professionals in selected Central and District Hospitals trained on revised curriculum for patient-centred clinical care”

Possible deliverable, but also possibly a near-term result:

“Output 2: In 4 target districts Ministry of Health professionals delivering all aspects of Primary Health Care (PHC) services in partnership with NGOs and Village Health Committees”

Definitely a longer-term result, probably at the Outcome or Purpose level:

“Output 3: Selected Central and District Hospitals achieving year on year improvements in national assessments of patient-centred clinical care” [p. 12]

This is an unnecessarily confusing mixing of real results and completed activities, in the use of one term -- “Outputs”.

According to the DFID revised LFA guide, all DFID projects are now supposed to collect baseline data on indicators for Outputs, Purpose and Goal before approval [p. 33] and to assign to each Output a weight that will “provide a clearer link to how output performance relates to project purpose performance”.  This suggests that while not completely responsible for achievement of results at the purpose level, project managers are expected to report on progress at that level and assess the continuing likelihood that Outputs are making a contribution to broader achievement of results. 

Of course, the fact that this document is not easily available from DFID leaves open the possibility that the ideas in it may not be in universal favour within DFID.

But there are other documents which suggest that reporting on results, and not just on activities, is important to DFID. The 2010 synthesis review of 970 DFID Project Completion Reports [124 p.] -- which may have laid the groundwork for recent criticisms of DFID performance by the new British government, by criticising the lack of assessment of how projects contributed to goals -- did note that “The main concern in the [Project Completion Report] process is to assess performance (achievement of the stated purpose)”. [p. 36] This is clearly a focus on results, and not just on delivery of products or completion of activities.

And as an indicator that the focus on results is being taken seriously at the political level, a recently leaked DFID memorandum on which programmes or projects should be cut suggests that the government should be focusing on projects that can be defended “as outcome focused as possible, and will deliver value for money”, and that DFID “will only judge ourselves against commitments and outcomes that we assess pass the fitness test.”

As Philip Dearden noted in a discussion about implementation of the new DFID logical framework on the Monitoring and Evaluation News website: “Its very important to remember that many DFID programmes are now spending huge amounts of money and we need to know what changes the money is actually going to bring about.”

The April 2010 OECD DAC peer review of British aid [130 p.] concluded that

“DFID has a strong results-based management framework, and this – combined with a purpose and performance-driven organisational culture and cohesion at the senior level – is important in ensuring effective delivery of the aid programme.”  

And, even before the new British Government’s multilateral aid review, there were signs that DFID was concerned about results reporting in multilateral agencies. The 2009 Guide on Using the Revised LFA put it quite clearly:

“…DFID will have to work with the fact that multiple partners mean differences in terminology and approaches. 
DFID has played a leading role in ensuring harmonisation of approaches, and is committed to continuing in this vein. However, it is important that in pursuing a harmonisation agenda, we do not relax our requirements for robust monitoring and evaluation tools. 
Differences in language and approach should not be an excuse for gaps in information. In fact, the revised logframe format has already been used by DFID teams when negotiating with partners. DFID needs the information in the logframe in order to report to UK taxpayers that funds are being used in the best possible way and delivering measurable results.” [p. 16]
In part because of this, and with the UK review of aid about to begin, there can only be more pressure forthcoming on project managers under DFID funding to go beyond reporting on activities and products, and to monitor progress on results.

It will be interesting to see if this applies to multilateral agencies using DFID funds.   [Update note, August 13, 2011:  The DFID Multilateral Aid Review was completed in March 2011.]

The Global Affairs Canada (CIDA) Results Framework

[ Edited to update links April 2017]
Length:   103 p.

Getting a clear idea of the CIDA Results-Based Management framework is relatively easy in comparison to some other agencies, as the CIDA RBM system, revised in 2008, is being implemented, and most of the definitions [were] included in 3 documents available online at the CIDA website [Edit: now the Global Affairs Canada development website].  The most important of these was the CIDA RBM Guide [This is now the primary RBM Guide for Global Affairs Canada: "Results-based management for International assistance programming: A how-to guide" (PDF)].

In 2008, CIDA made changes to its results-based management policy, dropping the Log Frame (although not, as has been suggested, the Logical Framework Approach) and replacing it with a Logic Model.  The former CIDA results chain, in general use up until the end of 2009, looked like this:

Activities → Outputs → Outcomes → Impact

The new CIDA results chain (which started implementation in 2010) is as follows:

Activities → Outputs → Immediate Outcomes → Intermediate Outcomes → Ultimate Outcomes

There have also, as a quick glance at these two will show, been changes to the RBM terminology CIDA/Global Affairs Canada uses. 
In the past, CIDA regarded Outputs as near-term results -- for example, changes in understanding arising out of training activities.  But, as this document noted, “given the almost universally accepted definition of “outputs” by donors, OECD DAC, and [The Canadian Treasury Board Secretariat] as products or services, it is necessary to readjust CIDA’s former term [for near-term results] to “immediate outcome.”

The change is useful because it makes a very clear distinction between completed activities and results.  But it is based on the misconception  that other donors in fact do have a clear definition of Outputs as completed activities, and do not permit results to be included in the term “Outputs”.  This is something that in practice is clearly not the case for many UN agencies, for SIDA, AusAid, DANIDA, USAID and even for DFID.

The [current] CIDA/Global Affairs Results-Based Management framework, like most other current aid agency documents, now defines Outputs as “Direct products or services stemming from the activities of an organization, program, policy or initiative”.

It is worth noting, however, that “changes” are not included in the definition of "Outputs", and one explanation of why Outputs have been redefined as products and completed activities is that the new definition, as a previous CIDA document explained,

“Clearly splits development results from products and services (outputs). This distinction should strengthen performance reporting by partners, given that it is now clear they will have to report on both outputs and outcomes”

The examples provided in [both past and current] CIDA/Global Affairs Canada Results-Based Management publications also make it clear that Outputs are completed activities or products, but not results. For example:

Activity:  Build wells
Output: Wells built
Outcome: Increased access to water

Activity:  Develop and deliver training on well maintenance
Output: Training on well maintenance developed and delivered
Outcome: Increased ability to maintain wells.
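
Read as a structure, each of these examples is one link in a chain from an activity, through its output (the product or completed activity), to the outcome (the change) it is expected to contribute to. The short sketch below encodes the two well examples above; the code itself is purely illustrative and is not taken from any CIDA or Global Affairs Canada guidance:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResultsChainEntry:
    """One link in a results chain: an activity, the output it delivers
    (a product or completed activity), and the outcomes (changes) it is
    expected to contribute to."""
    activity: str
    output: str
    outcomes: List[str] = field(default_factory=list)

# The well examples from the CIDA/Global Affairs Canada publications, encoded directly.
chain = [
    ResultsChainEntry(
        activity="Build wells",
        output="Wells built",
        outcomes=["Increased access to water"],
    ),
    ResultsChainEntry(
        activity="Develop and deliver training on well maintenance",
        output="Training on well maintenance developed and delivered",
        outcomes=["Increased ability to maintain wells"],
    ),
]

# Outputs restate what was delivered; outcomes describe what changed.
for entry in chain:
    print(f"{entry.activity} -> {entry.output} -> {', '.join(entry.outcomes)}")
```

The point the CIDA definitions make is visible in the structure itself: the output field simply restates the activity in the past tense, while only the outcomes field records a change.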

Results in the CIDA/Global Affairs Canada context are now described this way:

Results are the same as outcomes. An outcome is a describable or measurable change that is derived from an initiative's outputs or lower-level outcomes. [ p. 8]

  • Immediate Outcomes are near-term results phrased as changes – increases in understanding,  skills or access.
  • Intermediate Outcomes are mid-term changes, “expected to logically occur” within the life of a project, if Immediate Outcomes are achieved, including things such as increased use of clean water, or improved trust in government.  
  • Ultimate Outcomes are hoped-for long-term changes, the justification for the project, but unlikely to be achieved during the life of a project.  These refer to things such as improved health status, or reduced vulnerability of children in conflict areas.

These results categories are illustrated in much more detail in the Global Affairs RBM Guide.

The 2007 OECD DAC peer review of Canada’s aid programme  [107 p.] noted that the Results-Based Management and internal audit processes in CIDA were:
“...cumbersome, with limited differentiation in the indicators required and the processes involved for large and small programmes. While this helps to compare results among different activities, efficiency is compromised. The system might also be used to justify risk aversion rather than risk management, especially in those areas where it is more challenging to articulate measurable results (e.g. in governance)” [p. 49]

The OECD/DAC review also noted the relatively strict CIDA application of RBM to small agencies:
“ For example, an application for a small workshop organised by an NGO in Canada has to set out development results as if it were equivalent to a major bilateral programme in a partner country, with requirements to provide an impact evaluation. While providing discipline for NGO proposals may appear reasonable in theory, the practice can appear unnecessarily burdensome to the applicant.” [footnote 37]

The new CIDA/Global Affairs RBM system is unlikely to relieve small agencies of the requirement to justify activities in terms of results, but with its simpler approach to visualising results in a Logic Model, it may be an attempt to address the issues raised by the peer review, about the unwieldy nature of the process, and to make the process of identifying results more intuitive.

While the current CIDA/Global Affairs results framework is clear and logical, there are, of course, questions about whether it will actually be implemented, not just in plans, but in reports from project managers that are clearer and more useful than those produced before the new framework was put in place.  CIDA projects have often had real difficulties in producing baseline data, and it will be interesting to see if the new framework stimulates more attention to this important element of Results-Based Management from Global Affairs Canada's own managers and from project directors.

It will also be interesting to see if CIDA applies its standards on accountability and results reporting to multilateral agencies.

[Edit:  I will review the most recent 2016 Global Affairs Canada Results-Based Management Guide in a forthcoming post]


The EuropeAid Results Frameworks

Length:  10 documents, 710 p.
Most useful: Results Oriented Monitoring Handbook

[2016 Edit:  Many of the EuropeAid documents have moved, but if readers use the "advanced search" tool in the "Library" tab, and narrow the search further by opening the "categories" tab under that, it is possible to still find many, although not all, of the documents referred to below]



Despite the obvious inference that the EU aid programme could be treated as a multilateral exercise, for the purpose of this discussion, I am treating EuropeAid as a bilateral agency. But, of course, EuropeAid itself has to account for the multiple results frameworks of its component members, and as the 2007 OECD DAC peer review of the European Union's aid programme  [114 p.] noted,
"Because the Community functions both as a donor agency and as a multilateral recipient of Member State funds it is understandable that it does not allocate a large proportion of its funds to other multilateral institutions...." [p. 42]
But because it works with a multitude of member countries, it has to work with differing perspectives on results, and this is reflected in the many different results chains in EuropeAid and related documents, five of which are discussed here.

The EuropeAid site has a few pages which summarize a lot of data.  This included [Edit: in the past]  the EuropeAid glossary of terms related to results-based management  in which the EuropeAid results chain takes this form:

Activities → Outputs → Results → Impact

The EuropeAid glossary of RBM terms makes it clear that it neither defines, nor uses, the term “Outcome”. It simply refers to results. 

EuropeAid Outputs are defined as   “Goods and / or services produced / delivered by the intervention (e.g. rehabilitated road).”

EuropeAid Results are defined as “Initial change attributable to the intervention (e.g. reduced transport time and cost)” leading to

EuropeAid Impacts, defined as “Further long term change attributable to the intervention (e.g. development of trade)”. 

The EuropeAid guide on Evaluation Methods [97 p.] provides examples for Outputs such as “teachers trained” or “advice provided to groups of farmers”, and for results, such as “girls benefitting from increased access to education” or “new breeding practices that prevent desertification”.

Both of these examples, among others, indicate a clear difference in this EuropeAid framework between Outputs as completed activities, and results.  

But other documents, such as the EuropeAid Results Oriented Monitoring Handbook [Edit: in the 2009 version, at least], clouded the picture on how “results” are defined.  The 2009 version of the handbook noted, for example, that

“Monitors have to fully understand the concepts and terminology used in ROM and to apply them in the correct and coherent manner. This is specially true for ‘efficiency’, ‘effectiveness’, ‘outcomes’ and ‘outputs’ as these terms might be used differently in other management and M&E systems.” [p. 48]

The irony is that this document did use some words differently than those defined in the EuropeAid Glossary.  While the Glossary refers only to Outputs, Results and Impact, distinguishing between Outputs and Results, the Handbook on Results Oriented Monitoring said of Outputs that they are: 

“the goods and services produced; e.g. children vaccinated. In the EC’s Logframe structure these are referred to as ‘results’;” [p. 29].
[Edit:  the 2015 version of the handbook removes some of this ambiguity.]

The EuropeAid Glossary specifically avoided including Outputs in results, but noted that other EC documents, such as the Handbook, may use the term “... in the wider sense.”

The EuropeAid Results Oriented Monitoring Handbook is, nevertheless, an interesting document, [Edit: and happily, one which is updated regularly] describing a very systematic and detailed framework for  monitoring which goes far beyond completion of activities, and includes assessments of Outcomes, even Impacts, and the acquisition of information from stakeholders on relevance, effectiveness and sustainability of results.  And within this document there is a distinction between 


  • EuropeAid Outputs as completed activities (e.g. “training sessions”),
  • EuropeAid Outcomes as intermediate results (“improved capacity of those who attended the training”) and 
  • EuropeAid Purpose, or longer-term results, (“improvements in area of intervention due to the improved capacity of the target group”) [p. 65], which are the “specific, central highest ranking objective of the project” [p. 71], the highest level on which a project reports, but not necessarily the highest level on which it is monitored.
  • EuropeAid Impact as the overarching result to which a project may contribute, and justification for a project.
[See the 2015 handbook, p. 34]
In answer to the question “How well is the Project achieving its planned results?” the Results Oriented Monitoring Handbook said, in the 2009 version: “It is crucial to understand that effectiveness in this part is concerned with outcomes, not with outputs (tangible goods and services).” [p. 71]

There is a clear and logical results chain here, then, and this is reflected in the differentiation between Outputs and changes -- as results -- in the detailed Background Conclusion Sheet [p. 65-74].

Activities → Outputs → Outcomes → Purpose → Impact

The EuropeAid website’s section on results monitoring also has 6 very short synthesis reports on results, the most recent of which is for projects in 2007, but none of these documents makes it clear whether the monitoring is focusing on results as changes, or whether it is referring to completed activities or products.  There are references to “impacts” but only as potential results, so it is not clear from any of these short reports what “results” are actually being achieved.  Presumably the more detailed reports would clarify this.

But EuropeAid has also [Edit: in recent years] put together, in its Tools and Methods series, two interesting and potentially very useful documents suggesting that, for capacity development at least, EuropeAid is indeed focused on results.  These include the March 2009 EuropeAid Guidelines on Making Technical Cooperation More Effective [138 p.] and the EuropeAid Toolkit for Capacity Development [82 p.].  Both of these discuss a results chain for technical cooperation focused on capacity development, in which the attention to results, and not just to completed activities or delivery of products, is clear.  The Toolkit for Capacity Development presents the results chain for capacity development interventions this way [p. 68-70]:
[Edit:  Many of these documents are in 2016 difficult to find, but there are an increasing number of other guides and briefing notes in the EuropeAid international cooperation and development library]
Activities → sector capacity (capacity development outputs) → sector outputs (capacity development Outcomes) → Sector Outcomes → Sector Impacts

In this context, as the Guidelines on Making Technical Cooperation More Effective say, “…logical frameworks for “[Capacity Development Technical Cooperation]” need to focus
on outputs and outcomes beyond the immediate deliverables by TC;” [p. 29]

It is clear from the discussions in these two guides that the sector capacity or capacity development outputs are real changes in the ability of host government agencies, for example, to deliver services more effectively.  These then, whatever they are called, are results.  These Guides themselves refer to the 2007 Guide on Support to Sector Programmes  [119 p.].  That Guide uses the simpler results chain of

Activities → Outputs → Outcomes → Impacts

Outputs in this document are clearly completed activities. [p. 89]

There are other, more complex results chains referenced in five working papers on results indicators for transport, education, water and sanitation, health and agriculture, all produced in 2009, suggesting a 6-stage results chain:

Activities → Outputs → Outcomes → Specific Impacts → Intermediate Impacts → Global Impacts

This is focused at the agency level, rather than the project level.  These guides too, however, make it clear it is change that must be followed, not just completion of activities.
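
Setting these EuropeAid chains side by side makes the variation in terminology easier to see. The sketch below simply transcribes the chains discussed in this post into one comparison; the code is illustrative only and does not come from any EuropeAid document:

```python
# Results chains transcribed from the EuropeAid documents discussed above.
# Purely illustrative: the code just lines the chains up for comparison.
europeaid_chains = {
    "Glossary of RBM terms":
        ["Activities", "Outputs", "Results", "Impact"],
    "Results Oriented Monitoring Handbook (2009)":
        ["Activities", "Outputs", "Outcomes", "Purpose", "Impact"],
    "Toolkit for Capacity Development":
        ["Activities", "Sector capacity (CD outputs)",
         "Sector outputs (CD outcomes)", "Sector Outcomes", "Sector Impacts"],
    "Guide on Support to Sector Programmes (2007)":
        ["Activities", "Outputs", "Outcomes", "Impacts"],
    "Working papers on results indicators (2009)":
        ["Activities", "Outputs", "Outcomes", "Specific Impacts",
         "Intermediate Impacts", "Global Impacts"],
}

for source, chain in europeaid_chains.items():
    print(f"{source}: {' -> '.join(chain)}")
```

Whatever the labels, the test suggested throughout this post is the same: only the levels that describe changes, rather than goods and services delivered, count as results.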

The European Community's aid volume is huge. As the 2007 OECD DAC peer review noted, the "volume of Community ODA alone is larger than that of the World Bank’s International Development Association and several times that of the United Nations Development Programme" [p. 12], and much of it is administered or influenced by the work of EuropeAid.  EuropeAid should, therefore, have a very big influence on how multilateral aid agencies treat results reporting.

If the definitions in the EuropeAid Glossary of RBM terms, and the types of capacity results described in the 2009 EuropeAid Tools and Methods guides, are used, there is a clear distinction between results and completed activities. It is also clear that, despite labelling Outputs as results, the approach described in the Results Oriented Monitoring Handbook focuses on, and in theory at least takes seriously, the monitoring of results, not just activities.

The bottom line: Holding UN agencies to account


Despite anecdotal evidence that they are themselves trying to report on results, SIDA, AusAID and DANIDA, given the ambiguity of their own definitions of results, would have some difficulty in holding UN agencies to higher RBM reporting standards.

On the other hand, despite some inter-agency differences in terminology, there is now, in the RBM frameworks of DFID and CIDA, in EuropeAid's specific focus on capacity development, and perhaps, if we can trust the definitions on its website, for USAID too, little of the confusion between completed activities and results that persists in the OECD DAC and UN definitions.

There is also in these bilateral frameworks, and despite some criticisms of how they work, little space or excuse for project managers to avoid taking responsibility for reporting not just on completed activities, but on results -- explicitly as those things that have changed after the guides are produced, the training completed, or the schools or health clinics constructed.

Given the inconsistent results reporting by UN agencies described in some of my earlier posts, even those bilateral aid organizations such as DFID, CIDA, EuropeAid or USAID which do, or purport to, hold their own projects to relatively high reporting standards, could be open to criticism for providing funds to UN agencies without requiring from them similar standards on reporting.  

In this context, recent moves by DFID to assess multilateral aid could focus attention more clearly on the weak UN results culture, and how bilateral agencies deal with this.

Further reading on Results-Based Management at DFID, USAID, CIDA and EuropeAid:


Edited to update links in March 2011, January 2012, and August 2016






Friday, July 30, 2010

Bilateral Results Frameworks --1: SIDA, AusAid, DANIDA

Greg Armstrong --
[Edited to update links, December 2013 and July 2016]

As bilateral donors consider offloading responsibility for delivering aid programmes to multilateral agencies, it pays to examine what standards these UN agencies use for results reporting. Two earlier posts looked at the relatively weak results reporting in many UN agencies, and questioned why  the bilateral agencies don’t hold the UN agencies to higher standards. This third post looks at how three major bilateral donors define their results frameworks. 

Level of Difficulty:  Moderate to complex
Primarily useful for:  Project managers, national partners
Length:  23 documents, 1,337 p.
Most useful: SIDA, AusAid and DANIDA guides to the LFA
Limitations: Mounds of bureaucratic language in many of the bilateral documents make it difficult to identify and take effective guidance from potentially useful material.


Who these materials are for

Project managers, evaluators and national partners trying to understand how SIDA, AusAid and Danida define results and frame their approaches to RBM.

Background – Confusion in UN results definitions

In my previous two posts, I examined how UN Results-Based Management frameworks can provide the rationale for some agencies to avoid reporting on results, and to focus instead simply on describing their completed activities. To a large extent this confusion arises out of the ambiguous terminology used in Results-Based Management – words that mean different things to different agencies even as they agree to use them in common. This is complicated further in the UN case when agencies apply the same terms to near-term results at the project level as are applied at the aggregated country level. As I noted in the second post, vague definitions for results, even when they are ‘harmonised’, too frequently seem to lead unmotivated or inattentive agency heads and project managers to the path of least resistance: project reporting on completed activities and not results.

This third and a subsequent fourth post in this series briefly survey how the results frameworks of 7 bilateral aid agencies compare to those of the OECD/DAC and UNDAF. This review is only of those agencies where information, in English, could be obtained in reasonable time from their own or associated --  and publicly accessible -- websites. Those already familiar with the discussion of the problems in the UN framework may want to skip directly to the review of the bilateral agencies.

I am assuming, for the moment, a general agreement that “cause and effect” is not a reasonable description of what is intended by a results chain, but rather that it is a notional concept of how activities can contribute to intended results.   In this context, this post examines how results chains are presented in documents from the OECD/DAC, SIDA, AusAid, and  Danida.   

USAID, DFID, CIDA and EuropeAid results frameworks will be analysed in the final post in this series.

OECD DAC RBM


The OECD DAC results chain is a common link between the bilateral and UN agencies, and it looks like this:

Activities→Outputs→Outcomes→Impact

"Outcomes", as used here, compare roughly with what the bilateral agencies work towards as development changes or results.  However, it is at the Output level that ambiguity and problems occur.  The approved OECD/DAC RBM terminology for Outputs, in French is “Extrants/Produit” and in Spanish it is “Producto”. In both cases, the intention is clear and stated this way, there is nothing obviously wrong with the results chain. 

But the definition of (the English) Outputs provided in the OECD/DAC glossary has two elements:

First: “The products, capital goods and services which result from a development intervention;” 

and then: “changes resulting from the intervention which are relevant to the achievement of outcomes.”  

It is this second clause -- on  “changes” -- that blurs the distinction between products or completed activities on the one hand, and results on the other.

And it is this essential lack of clarity in the original OECD/DAC definitions that is now reflected in the UN agency definitions for Outputs. Because there are two possible readings here, Outputs as completed activities or Outputs as real results (changes), and no clear distinction is made between the two, the ambiguity has given unmotivated UN project directors an excuse to report simply on completed activities.

Bilateral Aid Agency Results Frameworks


Many of the bilateral donors reflect this same ambiguity in how they define Outputs, some specifying them as just completed activities or products, while others include the additional possibility of near-term change. 

Nevertheless, whether activities and products are labelled as completed activities or as Outputs, for some bilateral aid agencies they are not sufficient to constitute results reporting in results-based management terms.

Thus, for example, an activity that trains 500 health workers may have Outputs (completed activities or products) of “training materials” and 500 “trained workers”.  But completion of the training would not  be considered a “result” by the tougher bilateral RBM standards. 

Instead, most agencies would look for reports of viable results at the primary or near-term level only  where it was clear what difference the training made.  In this context, the result of the activity would be visible if it could be demonstrated that a substantial number of the 500 health trainees
  • learned something they did not know before,
  • changed their attitudes, or
  • were now working more effectively.
In the longer term, results would presumably also be sought in more systemic types of change: changes in the effectiveness and relevance of the professional development system or in health service delivery policy and practice, and eventually changes in the health status of the population.


The SIDA Results Framework


Length: 7 publications, 507 p.
Most useful sections: SIDA Guidebook on the LFA

The 2004 SIDA Summary of the Theory Behind the LFA [35 p.], which I reviewed several months ago, presents the SIDA results chain this way:

Activities→Results/Outputs→Project Objective/Purpose→Development objectives

This LFA guide describes Outputs as “actual, tangible results that are a direct consequence of the project’s activities”.  This, on the surface, seems similar to the vague UN definitions, in which Outputs are sometimes products and at other times low-level results.

While the SIDA guide goes further, saying “The outputs/results are a description of the value of the services/products produced by the project within the framework of what the project stakeholders can guarantee”, the examples the Guide provides are ambiguous. Some cite completed activities as Outputs, e.g.:

Activity: “Repair old water points”; Output: “50% of existing water points…repaired”

On the other hand, other examples cite actual results as Outputs: 

Activity: “Train in hygiene”; Output: “Hygienic habits of the target group improved”

This may be why a 2008 SIDA evaluation of Policy Guidance and Results-Based Management for the SIDA education sector  [183 p.] noted in its comments on two programmes that:

“Even when civil society or university capacity building outcomes/ outputs are defined in log frames, performance indicators are frequently unclear and the results chain and causal relationship from activity, output and outcome are unclear.” [p.41]

A 2008 evaluation of SIDA Human Rights and Democratic Governance project results [44 p.] noted the ambiguity in how results were reported among projects – some reporting completed activities as results, under the “Output” label, and others reporting on Objectives.   

“Training is by far the most frequently mentioned output; it is an output of nearly half of all the projects covered by the sample. Apart from this, the list of outputs includes, but is not limited to: policies, guidelines, studies, publications, information, seminars, study tours, infrastructure, theatre productions, and funding. Some may find that outputs, such as those found in the evaluations and studies on SIDA’s support to HR&DG, are not results but an expression of the different activities initiated with support from Sida.
That is true, but they are, nevertheless, the most tangible and direct manifestations of the interventions. And, as mentioned above, listing these as “results” is in line with SIDA’s and DAC’s usage of the concept.” [p. 11]

This report also noted that
“Rather surprisingly, the study team found that only four of the reports made a clear distinction between outputs and outcomes. In other words, the general picture of the 31 reports in the sample is that they apply this terminology rather imprecisely and give limited  room for drawing any conclusions whether projects and programmes differentiate between different result levels.”

This is really not dissimilar to the general thrust of a review of “The Management of Results Information at SIDA” [18 p.] produced in 2001, which argued that since information on

“programme expenditure, activities and outputs are covered in reports on programme progress, SIDA should make sure that counterparts’ annual reports focus on information about programme outcomes and impact.” [p. 15]

The suggested format for this report allocated roughly 80% of reporting space to the results (Outcomes and Impacts) and only about 20% to Outputs.

But although there is a move to strengthen RBM in SIDA, the 2009 OECD/DAC peer review of SIDA [123 p.] noted that “at the time of the peer review visits, many staff remained unclear what results-based management really entails in practice.” [p. 16]

The peer review also noted that
“SIDA’s very detailed manual on managing individual projects and programmes focuses strongly on planning and approval processes but provides no guidance on and makes no mention of results-based management.” [p. 59].
 A new guide was planned for 2009, but I could find no sign of it on the SIDA site.

SIDA is not alone in its mixed use of Outputs, of course, and there are suggestions that it may be moving toward greater clarity. A 2007 paper by the SIDA Results Project Group, “Strengthening SIDA Management for Development Results” [20 p.], suggests that the agency wants reports on results, not just activities.  And a 2009 study comparing accountability in four aid agencies [84 p.] found that, after criticism in 2009 by the Swedish Minister of International Cooperation that SIDA was not reporting adequately on results, “the first report on Outcomes” was produced that year.

“SIDA is now further expected to include these new results data in its annual reports….Meanwhile the “cascading effect” of this results orientation seems to have altered the functioning of SIDA’s major partners responsible for implementing the projects: a number of interviews with these partners notably refer to the growing budget they allocate to measuring projects results.” [p. 56] 
One particularly telling suggestion in reference to joint projects and partner capacity for RBM was made in the 2007 Strengthening SIDA Management for Development Results: “In countries where the capacity to generate and report results information is weak, donors should support capacity building” [p. 9]. This is a good suggestion, because projects that build results-based management capacity are rare.


The AusAID Results Framework


Length: 11 documents, 412 p.
Most useful: AusGuide 3.3: The Logical Framework Approach [Editorial note, June 2012: This document has been removed from the AusAid website since this review was written in 2010, but fortunately it can still be found at the Design, Monitoring and Evaluation for Peacebuilding website.]

The AusAID site is not easy to work with, but with some searching it is possible to find three documents -- part of the AusGuide -- relevant to how the agency defines results.  The most useful of these is the 2005 AusGuide 3.3:  The Logical Framework Approach [37 p.] which illustrates the AusAid results chain as:

Activities→Outputs→Component Objectives/Intermediate Results→Purpose/Outcome→Goal/Impact.

The definition of Outputs here follows the standard OECD/DAC definition as: “… the tangible products (goods and services) produced by undertaking a series of tasks as part of the planned work of the activity” and the examples provided make it clear that these are simply completed activities or products:
“…irrigation systems or water supplies constructed, areas planted/developed, children immunised, buildings or other infrastructure built, policy guidelines produced, and staff effectively trained.”
Only the last, “staff effectively trained”, comes close to a result, by implying that it is not enough to train; there must also be some measurement of whether the training was done “effectively”. This opens the door to considering change: how well the staff learned something. None of the other examples provided in the guide make it clear that Outputs are more than delivered actions: a training strategy developed; new courses designed; a series of special courses delivered; books produced or distributed.

A further complication is AusAID’s differentiation between “project Outputs” and “contractible Outputs” described in the 2002 version of AusGuidelines on the LFA [45 p.] and referenced briefly in the 2005 version.  This leaves the impression that project managers may only be responsible for delivery of completed activities.

But another 2005 document [Editorial note: no longer available by 2012], an updated version of AusAID's September 2000 “Promoting Practical Sustainability”, noted: “Monitoring and reporting frameworks based on tools such as the logical framework approach should look beyond the contracted activity and output levels and incorporate regular assessment of the movement towards achieving sustainable outcomes”, and followed this with some specific recommendations, including that
“The basis of payment should attach payments to outputs or milestones that are largely within the control of the contractor while encouraging the contractor to focus on their contribution to ‘outcomes’."
In theory, at least, this suggests that reports (and the projects themselves) should address much more than the simple completion of activities.

And a 2006 M&E Framework Good Practice Guide  [8 p.] distinguishes between Outputs as “products, capital goods and services delivered by a development intervention to direct beneficiaries” on the one hand, and Outcomes as “short-term and medium-term results of an intervention’s outputs.”

More recent publications from AusAid differ in how they address what results are to be reported.  The Office of Development Effectiveness, established to monitor results of projects, does not make reference in its most recently published (2008) Annual Review of Development Effectiveness [74 p.] to the results chain found in the AusGuide.  It refers instead to the importance of achieving “objectives”, and rating their quality, through the AusAid Quality Reporting System.  This apparently involves ratings by “activity” managers of “activity objectives” and “higher level strategy objectives” at entry, and later by independent evaluators, but makes no reference to any standard AusAid results chain or logic model.

Although the website entry for the Quality Reporting System said [when this review was written in 2010] that it “helps to ensure reliable, valid and robust information is available”, in the only two guides to this system that I could find, one an overview of the Quality Reporting System [6 p.] and the other the Interim Guidelines for Preparing Completion Reports [10 p.], there are no references at all to indicators, and only two references to “objectives” and “key results”, which may or may not be the same thing.  Similarly, in the ODE’s review of development effectiveness, there were only three references to indicators in the context of the Australian aid programme, one of which noted that, in support of rather vague “objectives” for Vietnam, “No specific indicators were identified to define the focus or scope of ambition of Australian support meant to meet these objectives.”

The December 2009 AusAid Civil Society Water, Sanitation and Hygiene Fund Guidelines [25 p.], however, clearly identified Outcomes as results, substantive changes such as “increased access to improved sanitation services” or “improved hygiene behaviour”.  The same guidelines defined Outputs as essentially completed activities, such as “provision of technical support” or “facilitation of dialogue”, and provided direction to NGOs applying for funds to describe their approach to “monitoring and evaluation of Outcomes”. [Editorial Note, June 2012: A subsequent February 2011 evaluation confirmed the focus on Outcomes as substantive changes.]

All of this is frustrating for the outsider, because it is clear from the work it was doing on contribution analysis between 2005 and 2009 that AusAid was genuinely trying to find practical means to document progress towards real results.  As one 2007 report on the use of contribution analysis in the Fiji Education Sector program [35 p.] noted, where previously the Program’s indicators were primarily at the Output level, after using this approach “the new indicators subsequently provide information on progress towards and contribution to outcomes,…”. [p. 33]  This report similarly provides examples of results chains somewhat different from those described in the 2005 AusGuide.


Activities→Outputs→Immediate Outcomes→Intermediate Outcomes→End Outcomes.

This was evidently not a one-shot effort, because terms of reference for an AusAid evaluation in May 2010 made specific reference, in describing research methods, to “…a basic contribution analysis and counterfactual assessment ahead of the in-country visit.”

The 2008 Peer Review of Australian Development Assistance  [117 p.]  states that “Performance reporting processes have now been brought together into one coherent system” but it is difficult for the average person, among the vast array of available documents on AusAid and associated websites, to find the coherence.

There probably are guides on the AusAid intranet providing clearer guidance on whether managers should now be reporting on progress towards real results at the Outcome level,  but based on what is easily and publicly available, what results AusAid requires of projects, or what data it requires in support of these – whether it expects reports on completed activities and products, or on genuine results -- remains unclear.


The DANIDA Results Framework


Length: 5 documents, 418 p.
Most useful: Guide to the Logical Framework Approach

It is very difficult to get a comprehensive and updated picture of how DANIDA currently understands the results chain.  The Danida website has what looks like a practical and potentially useful guide to the Logical Framework Approach  [148 p.] but it was produced in 1996, and it remains to be seen if the results chain defined there as

Activities→Outputs→Immediate Objective→Development Objective

is still valid.

The Danida definition of Outputs in this document, “Outputs are the tangible, specific and direct products of activities which largely are within the control of project management”, is nevertheless very similar to what the OECD/DAC came up with six years later.

The examples provided, such as “Awareness campaign about hygiene conducted”, make it clear that outputs are seen as completed activities and products, not changes.  Changes (results),  in the Danida results framework, are at the Objective level statements like “Population has adequate hygiene practices”.

Somewhat ambiguously, the suggestion in the document is that project managers are responsible only for Outputs, not for Objectives, while at the same time project evaluations are to assess whether the logic of the project was valid, and whether the Outputs did contribute to the Objectives.

In this sense, the “results chain” expectation is that a project or programme will eventually report on how its Outputs contributed to Objectives.  The November 2009 DANIDA Analysis of Programme and Project Completion Reports 2007-2008 [76 p.] lists in its annexes the format for these reports, which includes an assessment of
“…the extent to which the programme/component has achieved the general objectives as defined in the partner programme document, and discuss the contribution by Danida to achieving the objectives” .
It is the Project and Programme Managers who must do this reporting, so we would expect that this might generate some interest in collecting data not just on Outputs or completed activities but on the Objectives (results).

But in the Ministry of Foreign Affairs Guidelines for Programme Management [87 p.] updated in 2009, while there are some passing references to Immediate Objectives in the Template for Semi-Annual and Annual Reports [p. 66],  the emphasis is clearly on details about Outputs – or completed activities [p. 67-70].

The Ministry of Foreign Affairs Grant Administration Guideline Project status report formats for NGOs in 2010 asked for
“…an account of the project objectives, indicators related to objectives, preliminary outcomes and assessment of the potential of the project for realising the project outcomes(s) established.”
Indicators in these reports were required for Outputs, but they were also, apparently, and somewhat ambiguously, to refer to Objectives and “Outcomes”. [Update, July 2016: An April 2011 version of this document referred to the requirements as "project objectives, indicators related to objectives, preliminary results and assessment of the potential of the project for realising its objectives." A February 2016 DANIDA Guidelines for Programmes or Projects up to 37 Million DKK refers to the project level results chain as moving from Engagement Outputs to Engagement Outcomes and then to Impacts. - p. 20]

On the whole, then, it is not really clear from these sources to what extent DANIDA does push for reports on results, rather than completed activities.

It could be that as Denmark decentralizes responsibility for aid management and aid reporting, not just to country offices but to Danish missions to multilateral agencies, the definitions and interpretations of what results must be reported on have become more diffuse.

The 2007 OECD DAC peer review of Denmark’s aid programme [103 p.] suggested that “Denmark could consider further rationalising this reporting system, as it involves many different tools and may be time consuming given embassies’ staffing constraints.” [p. 14]



The bottom line: Holding UN agencies to account

These three agencies, SIDA, AusAid and DANIDA, have a clear interest in increasing the proportion of their aid going through multilateral agencies, and are making real efforts to improve delivery of programmes.  Anecdotal evidence is that they take results reporting seriously in practice. But given the ambiguity of their own definitions of results, which often conflict across different documents from the same agency, SIDA, AusAID and DANIDA would have some difficulty in holding UN agencies to higher results reporting standards.


Next post:   DFID, CIDA, EuropeAid and USAID


For further reading, and more original documents on SIDA, AusAID, DANIDA and the OECD DAC see:

[Some links updated, April, 2012, and June 2012 - The aid agencies remove documents regularly from their own websites, but readers may sometimes be able to find them at other sites, with some diligent online search. One site to try, if some of the above links do not work, is the Project Management for Development Organizations Open Library Site, which archives a lot of useful material.]
 