

Friday, December 31, 2010

26 lessons about RBM from the 1990s remain valid today

[Updated November 2018]
Greg Armstrong --

Lessons learned about RBM in the last century remain valid in 2018.

Implementing Results-Based Management: Lessons from the Literature – Office of the Auditor-General of Canada

Level of difficulty: Moderate
Primarily useful for: Senior Managers of partner Ministries and aid agencies
Length: Roughly 18 pages (about 9,000 words)
Most useful section: Comments on the need for a performance management culture
Limitations: Few details on implementation mechanisms

The Office of the Auditor-General of Canada deals with the practical implications of results-based management, or with the failure of agencies to use RBM appropriately, as it conducts performance audits of a large number of Canadian government agencies. The Auditor-General's website, particularly the Audit Resources section, holds several documents under “Discussion papers” and “Studies and Tools” that are a reminder that many of the lessons learned fifteen years ago about RBM remain relevant today.

Who this is for:

The paper Implementing Results-Based Management: Lessons from the Literature  provides a concise and relatively jargon-free summary of lessons from practical experience about how to implement results-based management. Its purpose, as the introduction notes, was to “assess what has worked and what has not worked with respect to efforts at implementing results-based management”.   It is shorter and more easily read than some of the useful but much longer publications on RBM produced since, and could be useful to agency leaders wanting a reminder of where the major pitfalls lie as they attempt to implement results-based management and results-based monitoring and evaluation systems.

The lessons reported on here about the implementation of results-based management remain as valid today as they were in 1996, when a first draft was produced, and in 2000, when this document was released.


Many of the lessons described briefly here are derived from studies of the field activities of agencies in North America, Europe, and the Pacific, going back at least twenty years. The 2000 paper is based on reviews of 37 studies on lessons learned about RBM, themselves published between 1996 and 1999, and builds on an earlier study, referenced briefly here, which reviewed 24 more studies produced between 1990 and 1995.

More recent reviews of how RBM or Managing for Development Results is -- or should be -- implemented in agencies such as the United Nations, including Jody Kusek and Ray Rist’s 2004 Ten Steps to a Results-Based Monitoring and Evaluation System and Alexander MacKenzie’s 2008 study on problems in implementing RBM at the UN country level, build on and elaborate many of the points made in these earlier studies, moving from generalities to more specific suggestions on how to make operational changes.

The 2000 paper from Canada's Office of the Auditor-General lists 26 lessons on how to make RBM work, many of which repeat and elaborate on the lessons learned earlier. The lessons on effective results-based management, as presented here, are organized around three themes:

  • Promoting favourable conditions for implementation of results-based management
  • Developing a results-based performance measurement system
  • Using performance information

A brief paraphrased summary of these lessons will make it obvious where there are similarities to the more detailed work on RBM and results-based monitoring and evaluation done in subsequent years. My comments are in italics:

Promoting Favourable Implementation Conditions for RBM

1. Customization of the RBM system: Simply replicating a standardised RBM system won’t work. Each organization needs a system customized to its own situation.

  • The literature on the implementation of innovations, going back to the 1960s, confirms the need for adaptation to local situations as a key element of sustained implementation.

2. Time required to implement RBM: Rushing implementation of results-based management doesn’t work. The approach needs to be accepted within the organization, indicators take time to develop, data collection on the indicators takes more time, and results often take more time to appear than aid agencies allocate in a project cycle.

  • Many of the current criticisms of results-based management in aid agencies focus on the difference between the time it takes to achieve results, and aid agencies’ shorter reporting timelines.

3. Integrating RBM with existing planning: Performance measures and indicators should be integrated with strategic planning and tied to organizational goals and management needs, and performance measurement and monitoring need high-level endorsement from policy makers.

  • Recent analyses of problems in the UN reporting systems repeat what was said in articles published as long ago as 1993. These lessons have evidently not been internalised in some agencies.

4. Indicator data collection: We should build management systems that support indicator data collection and results reporting and, where possible, build on existing data collection procedures.

5. Costs of implementing RBM: Building a useful results-based management system is not free. The costs need to be recognised and concrete budget support provided from the beginning of the process.

  • This is something most aid agencies have still not dealt with. They may put in place substantial internal structures to support results reporting, but shy away from providing implementing agencies with the necessary resources of time and money for things such as baseline data collection.

6. Location for RBM implementation: There are mixed messages on where to locate responsibility for coordinating implementation of RBM. Some studies suggested that putting control of the performance measurement process in the financial management or budget office “may lead to measures that will serve the budgeting process well but will not necessarily be useful for internal management”. Others said that responsibility for implementation of the RBM system should be located at the programme level to bring buy-in from line managers, and yet another study made the point that the performance management system needs support from a central technical agency and leadership from senior managers.

  • The consensus today is that -- obviously in a perfect world -- we need all three:  committed high level leadership, technical support and buy-in from line managers.

7. Pilot testing a new RBM system: Testing a new performance management system in a pilot project can be useful before large-scale implementation – if the pilot reflects the real-world system and participants.

8. Results culture: Successful implementation requires not simply new administrative systems and procedures but the development of a management culture, values and behaviour that really reflect a commitment to planning for and reporting on results.

  • Fifteen years after this point was made in some analyses of the implementation of results-based management, the lack of a results culture in many UN agencies was highlighted in the 2008 review of UN agency RBM at the country level, and the 2009 UNDP handbook on planning, monitoring and evaluating for development results reiterates the old lesson that building this culture is still important for the implementation of results-based management.

9. Accountability for results: Accountability for results needs to be redefined, holding implementers responsible not just for delivering outputs, but at least for contributing to results, and for reporting on what progress has been made on results, not just on delivery of outputs.

  • The need to focus on more than just deliverable outputs to make results-based management a reality was mentioned in some articles in the early 1990s, and reiterated in OECD documents ten years later, yet it remains an unresolved issue for some aid agencies, which still require reports only on deliverables, rather than on results.

10. Who will lead implementation of RBM: Strong leadership is needed from senior managers to sustain implementation of a new performance management system.

  • This remains a central concern in the implementation of results-based management and performance assessment. Recent reviews of aid agency performance, such as the evaluation of RBM at UNDP, show that strong and consistent leadership, committed to and involved in the implementation of a new RBM system, remains a continuing issue.

11. Stakeholder participation: Stakeholder participation in the implementation of RBM  -- both from within and from outside of the organization – will strengthen sustainability, by building commitment, and pointing out possible problems before they occur.

  • There is now a general acceptance – in theory – of the need for stakeholder participation in the development of a results-based performance management system but, in practice, many agencies are unwilling to put the resources – again, time and money – into genuine involvement of stakeholders in analysis of problems, collection of baseline data on the problems, specification of realistic results, and ongoing data collection, analysis and reporting.

12. Technical support for RBM: Training support is needed if results-based systems are to be effectively implemented, because many people don’t have experience in results-based management. Training can also help change the organizational culture, but training also takes time. Introducing new RBM concepts can be done through short-term training and material development, but operational support for defining objectives, constructing performance indicators, using results data for reporting, and evaluation, takes time, and sustained support.

  • A fundamental lesson from studies dating back to the 1970s on the implementation of complex policies and innovations is that we must provide technical support if we want a new system, policy or innovation to be sustained: we can’t just toss it out and expect everyone else to adopt it and use it.
  • Some aid agencies have moved to create internal technical support units to help their own staff cope with the adoption and implementation of results-based management, but few are willing to provide the same technical support to their stakeholders and implementation partners.

13. Evaluation expertise: Find the expertise to provide this support for management of the RBM process on a continuous basis during implementation. Often it can be found within the organization, particularly among evaluators.

14. Explain the purpose of performance management: Explain the purpose of implementing a performance management system clearly. Explain why it is needed, and the role of staff and external stakeholders.

Auditor-General of Canada web page on lessons learned about implementing RBM

Developing Performance Measurement Systems

15. Keep the RBM system simple: Overly complex systems are one of the biggest risks to successful implementation of results-based management. Keep the number of indicators to a few workable ones but test them, to make sure they really provide relevant data.

  • Most RBM systems are too complex for implementing organizations to easily adopt, internalize and implement. Yet they need not be. Results themselves may be part of a complex system, but simpler language can be used to explain the context, problems and results, and jargon can be discarded where it does not translate -- literally into another language, but also to the real-world needs of implementers and, ultimately, of the people who are supposed to benefit from aid.

16. Standard RBM terms: Use a standard set of terms to make comparison of performance with other agencies easier.

  • The OECD DAC did come up with a set of harmonized RBM definitions in 2002, but donors continue to use the terms in different ways, and, as I have noted in earlier posts, have widely varying standards (if any) on how results reporting should take place.  So simply using standardised terms is not itself sufficient to make performance comparisons easy.

17. Logic Models: Use of a Logic Chart helps participants and stakeholders understand the logic of results, and identify risks.

  • Logic Models (as some agencies refer to them) were being used, although somewhat informally, 20 years ago in the analysis of problems and results for aid programmes. Some agencies, such as CIDA [now Global Affairs Canada], have now brought the visual Logic Model to the centre of project and programme design, with some positive results. The use of the logic model does indeed make the discussion of results much more compelling for many stakeholders than did the use of the Logical Framework.

18. Accountability for results: Make sure performance measures and reporting criteria are aligned with decision-making authority and accountability within the organization. Indicator data should not be so broad that they are useless to managers. If managers are accountable for results, then they need the power and flexibility to influence results. Managers and staff must understand what they are responsible for, and how they can influence results. If the performance management system is not seen as fair, this will undermine implementation and sustainability of results based management.

19. Credible indicator data: Data collected on indicators must be credible -- reliable and valid. Independent monitoring of data quality is needed for this.

  • This remains a major problem for many development projects, where donors often do not carefully examine  or verify the reported indicator data.

20. Set targets:  Use benchmarks and targets based on best practice to assess performance.

  • Agencies such as DFID and CIDA are now making more use of targets in their performance assessment frameworks.

21. Baseline data:   Baseline data are needed to make the results reporting credible, and useful.

  • Agencies such as DFID are now concentrating on this. But many other aid agencies continue to let baseline data collection slide until late in the project or programme cycle, when it is often difficult or impossible to collect. Some even rely on the reconstruction of baseline data during evaluations – a weak and ultimately last-ditch attempt to salvage credibility for inconsistent and unstructured results reporting.
  • Ultimately, of course, it is the aid agencies themselves which should collect the baseline data as they identify development problems.  What data do international aid agencies have to support the assumptions that first, there is a problem, and second that a problem is likely to be something that could usefully be addressed with external assistance? All of this logically should go into project design. But once again, most aid agencies will not put the resources of time and money into project or programme design, to do what will work.

Using Performance Information

22. Making use of results data: To be credible to staff and stakeholders, performance information needs to be used – and be seen to be used. Performance information should be useful to managers and demonstrate its value.

  • The issue of whether decisions are based on evidence or on political or personal preferences remains important today, not just for public agencies but, as it has been recently argued, for private aid.

23. Evaluations in the RBM context: Evaluations are needed to support the implementation of results based management. “Performance information alone does not provide the complete performance picture”. Evaluations provide explanations of why results are achieved, or why problems occur. Impact evaluations can help attribute results to programmes. Where performance measurement is seen to be too costly or difficult, more frequent evaluations will be needed, but where evaluations are too expensive, a good performance measurement system can provide management with data to support decision making.

  • Much of this is more or less accepted wisdom now.  The debate over the utility of impact evaluations, primarily related to what are sometimes their complexity and cost, continues, however.

24. Incentives for implementing RBM: Some reward for staff – financial or non-financial – helps sustain change. This is part of the perception of fairness, because “accountability is a two-way street”. The most successful results-based management systems are not punitive, but use information to help improve programmes and projects.

25. Results reporting schedule: Reports should actually use results data, and regular reporting can help staff focus on results. But “an overemphasis on frequent and detailed reporting without sufficient evidence of its value for public managers, the government, parliament and the public will not meet the information needs of decision-makers.”

26. Evaluating RBM itself: The performance management system itself needs to be evaluated at regular intervals, and adjustments made.


This study is a synthesis of secondary data (as have been many of the studies that followed it): a compilation of common threads, not a critical analysis of the data, and not itself based on primary data.

It is apparently available only as a web page, not as a downloadable document. If you print it or convert it to an electronic document, it runs about 18 pages.

The bottom line:

The basic lessons about the implementation of RBM were learned, apparently, two decades ago, and continue to be reflected throughout international aid agency documents such as the Paris Declaration on Aid Effectiveness, but concrete action to address these lessons has been slow to follow.

This article still provides a useful summary of the major issues that need to be addressed if coherent and practical performance management systems are to be implemented in international aid organizations, and with their counterparts and implementing organizations.

Further reading on Lessons learned about RBM

OECD’s 2000 study: Results-based Management in the Development Cooperation Agencies: A review of experience (158 p), summarizes much of the experience of aid agencies to that point, and for some agencies not much has changed since then.

The World Bank's useful 2004, 248-page Ten Steps to a Results-Based Monitoring and Evaluation system written by Jody Kusek and Ray Rist, is a much more detailed and hands-on discussion of what is needed to establish a functioning performance management system, but it is clear that some of their lessons, similar to those in the Auditor-General's report, have still not been learned by many agencies.

John Mayne’s 22-page 2005 article Challenges and Lessons in Results-Based Management summarises some of the issues arising between 2000 and 2005. He contributed to the earlier Auditor-General's report, and many others. [Update, June 2012: This link works sometimes, but not always.]

The Managing for Development Results website, has three reports on lessons learned at the country level, during the implementation of results-based management, the most recent published in 2008.

The 2009 IBM Center for the Business of Government’s 32-page Moving Toward Outcome-Oriented Performance Measurement Systems written by Kathe Callahan and Kathryn Kloby provides a summary of lessons learned on establishing results-oriented performance management systems at the community level in the U.S., but many of the lessons would be applicable on a larger scale and in other countries.

Simon Maxwell’s October 21, 2010 blog, Doing aid centre-right: marrying a results-based agenda with the realities of aid  provides a number of links on the lessons learned, both positive and negative, about results-based management in an aid context.


Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website

Thursday, September 30, 2010

Reporting on complex results: A short comment

Greg Armstrong --

Change is complex -- but this is not news.  Is it reasonable to expect international development programmes and projects working in complex situations to report on results?  A brief comment on recent discussions.

[Edited to update links, June 2018]

Are Aid Agency Requirements for Reporting on Complex Results Unreasonable?

A recent post titled "Pushing Back Against Linearity" on the Aid on the Edge of Chaos blog described a discussion among 70 development professionals at the Institute of Development Studies, convened to "reflect on and develop strategies for ’pushing back’ against the increasingly dominant bureaucratisation of the development agenda."

This followed a May 2010 conference in Utrecht, exploring the issues of complexity and evaluation, particularly the issue of whether complex situations, and the results associated with projects in such situations, are susceptible to quantitative impact evaluations. That conference has been described in a series of blog postings at the Evaluation Revisited website and in two blog postings by Sarah Cummings at The Giraffe.

The more recent meeting described by the Aid on the Edge of Chaos blog, and in a very brief report by Rosalind Eyben of the IDS Participation, Power and Social Change Team, which can be found at the Aid on the Edge of Chaos site, appears to have focused on "pushing back" against donors insisting on results-based reporting in complex social transformation projects. [Update 2018: These posts no longer seem to be available] This report, given its brevity, of necessity did not explore in detail all of the arguments against managing for results in complex situations, but a more detailed exposition of some of these points can be found in Rosalind Eyben's earlier IDS publication Power, Mutual Accountability and Responsibility in the Practice of International Aid: A Relational Approach.

Reporting on Complex Change

I think it would be a mistake to assume that the recent interest in impact evaluation means that most donors are ignorant of the complexity of development.  Certainly, impact evaluations have real problems in incorporating, and "controlling for" the unknown realities and variables of complex situations.  But I know very few development professionals inside donor agencies who actually express the view that change is in fact a linear process.  Most agree that the best projects or programmes can do is make a contribution to change, although the bureaucratic RBM terms, often unclear results terminology and the results chains and results frameworks used by these agencies often obscure this.

Change is, indeed, complex, but this is not news. The difficulties of implementing complex innovations have been studied for years: first in agricultural innovation, then more broadly in assessments of the implementation of public policy in the Johnson Administration's Great Society programmes. People like Pressman and Wildavsky, and more recently Michael Fullan, have been working within this complexity for years, to find ways to achieve results in complex contexts, and to report on them.

It is reasonable to expect that anyone using funds provided by the public should think and plan clearly, and explain in a similarly clear manner what results we hope, and plan, for. When assessing results, certainly, we often find that complex situations produce unexpected results, which are also often incomplete. But at the very least we have an obligation, whatever our views of change and of complexity, to explain what we hope to achieve, later to explain what results did occur, and whether there is a reasonable argument to be made that our work contributed to them. Whether we use randomized control groups in impact evaluation, Network Analysis, contribution analysis, the Most Significant Change process, participatory impact assessment, or any of a number of other approaches, some assessment, and some reasonable attempt at reporting coherently, has to be made.

The report on the Big Push Back meeting cites unreasonable indicators ("number of farmers contacted, number of hectares irrigated") as arguments against the type of reporting aid agencies require, but these examples are unconvincing because, of course, they are not indicators of results at all, but indicators of completed activities. The interim results would be changes in production, and the long-term results the changes in nutrition or health to which these activities contributed, or possibly unanticipated and negative results, such as economic or social dislocation, that can only be reported by villagers or farmers themselves, probably using coherent participatory or qualitative research.

Qualitative data are, indeed, often the best sources of information on change -- and I say this as someone who has used qualitative methods as my primary research approach for over 30 years -- but they should not be used casually, on the assumption that qualitative methods provide a quick and easy escape from reporting with quantitative data. When qualitative data are used responsibly, they are used in a careful and comprehensive manner, and to be credible they should be presented as more than simply anecdotal evidence from isolated individuals. Sociologists have been putting qualitative data together in a convincing manner for decades, and so too have many development project managers.

The bottom line:

This is certainly not a one-sided debate.  To a limited extent, the meeting notes reflect this, and Catherine Krell's July 2010 article on the sense of disquiet she felt, after the initial conference in Utrecht about how to balance an appreciation of complexity with the need to report on results, discusses several of the issues that must be confronted if the complexity of development is to be reflected in results reporting.  It is worth noting that at this date, hers is the most recent post on the Evaluation Revisited website.

The report on the "Push Back" conference notes that one participant in the meeting "commented that too many of us are ‘averse to accounting for what we do. If we were more rigorous with our own accountability, we would not be such sitting ducks when the scary technocrats come marching in’."

Whatever process is used to frame reporting questions, collect data and report on results, the process of identifying results is in everybody's interest.  It will be interesting to follow this debate.

Further reading on complexity, results and evaluation:

There is a lot of material available on these topics, in addition to those referenced above, but the following provide an overview of some of the issues:

Alternative Approaches to the Counterfactual (2009) - A very brief summary, arising from a group discussion, of 4 "conventional" approaches to Impact evaluation using control groups, and 27 possible alternatives where the reality of programme management makes this impractical.

Designing Initiative Evaluation: A Systems-oriented Framework for Evaluating Social Change Efforts (2007) - A Kellogg Foundation summary of four approaches to evaluating complex initiatives.

A Developmental Evaluation Primer (2008) by Jamie A.A. Gamble, for the McConnell Foundation, explains Michael Quinn Patton's approach to evaluation of complex organizational innovations.

Using Mixed Methods in Monitoring and Evaluation (2010) by Michael Bamberger, Vijayendra Rao and Michael Woolcock -- A World Bank Working Paper that explores how combining qualitative and quantitative methods in impact evaluations can mitigate the limitations of quantitative impact evaluations.

Friday, August 27, 2010

Bilateral Results Frameworks --2: USAID, DFID, CIDA and EuropeAid

Greg Armstrong --

[Most recent update:  July 2019]
Here's the question: Will bilateral aid agencies hold the multilaterals to account for results or not? UN agencies’ results reporting is inconsistent, and the results frameworks of SIDA, AusAid and DANIDA remain ambiguous. This post reviews the results frameworks of USAID, DFID, CIDA and EuropeAid.

Level of Difficulty: Moderate to complex
Primarily useful for:  Project managers, national partners
Length:  23 documents, 1,444 p.
Most useful: CIDA RBM Guide, DFID LFA Guide and EuropeAid Capacity Development toolkit
Limitations: Mounds of bureaucratic language in many of the bilateral documents make it difficult to identify and take effective guidance from potentially useful material.

Who these materials are for

Project managers, evaluators and national partners trying to understand how USAID, DFID, CIDA and EuropeAid define results and frame their approaches to RBM.

Background: Ambiguous results chains at the UN, and some bilateral agencies.

In my previous three posts, I examined how vague UN RBM frameworks can provide the rationale for some agencies to avoid reporting on results and to focus instead simply on describing their completed activities, and how similar ambiguities in the results frameworks and definitions from AusAid, DANIDA and SIDA would make it difficult for them to hold the UN agencies to higher standards of results reporting. This fourth and final post in the series briefly surveys how the results frameworks of four more bilateral agencies compare to those of the OECD/DAC and UNDAF. This review covers only, as I noted in past posts, those agencies for which information, in English, could be obtained in reasonable time from their own or associated -- and publicly accessible -- websites. For those who want more detail, links to the relevant donor agency RBM sites can be found at the bottom of this article.

I am proceeding from the premise, again, that “cause and effect” is not a reasonable description of what is intended by a results chain, but rather that it is a notional concept of how activities can contribute to intended results.

The USAID Results Framework

Friday, July 30, 2010

Bilateral Results Frameworks --1: SIDA, AusAid, DANIDA

Greg Armstrong --

As bilateral donors consider offloading responsibility for delivering aid programmes to multilateral agencies, it pays to examine what standards these UN agencies use for results reporting. Two earlier posts looked at the relatively weak results reporting in many UN agencies, and questioned why  the bilateral agencies don’t hold the UN agencies to higher standards. This third post looks at how three major bilateral donors define their results frameworks. 

Level of Difficulty:  Moderate to complex
Primarily useful for:  Project managers, national partners
Length:  23 documents, 1,337 p.
Most useful: SIDA, AusAid and DANIDA guides to the LFA
Limitations: Mounds of bureaucratic language in many of the bilateral documents make it difficult to identify and take effective guidance from potentially useful material.

[Some links updated, April, 2012, June 2012 and June 2018 - The aid agencies remove documents regularly from their own websites, but readers may sometimes be able to find them at other sites, with some diligent online search. One site to try, if some of the above links do not work, is the Project Management for Development Organizations Open Library Site, which archives a lot of useful material.]

Who these materials are for

Project managers, evaluators and national partners trying to understand how SIDA, AusAid and Danida define results and frame their approaches to RBM.

Background – Confusion in UN results definitions

In my previous two posts, I examined how UN Results-Based Management frameworks can provide the rationale for some agencies to avoid reporting on results, and to focus instead simply on describing their completed activities. To a large extent this confusion arises out of the ambiguous terminology used in Results-Based Management – words that mean different things to different agencies even as they agree to use them in common. This is complicated further in the UN case when agencies apply the same terms to near-term results at the project level as are applied at the aggregated country level. As I noted in the second post, vague definitions of results, even when they are ‘harmonised’, too frequently seem to lead unmotivated or inattentive agency heads and project managers to the path of least resistance: project reporting on completed activities and not results.

This third post, and the fourth that follows, briefly surveys how the results frameworks of seven bilateral aid agencies compare to those of the OECD/DAC and UNDAF. The review covers only those agencies whose documents could be obtained, in English and in reasonable time, from their own or associated publicly accessible websites. Those already familiar with the discussion of the problems in the UN framework may want to skip directly to the review of the bilateral agencies.

I am assuming, for the moment, a general agreement that “cause and effect” is not a reasonable description of what is intended by a results chain, but rather that it is a notional concept of how activities can contribute to intended results.   In this context, this post examines how results chains are presented in documents from the 
  • OECD/DAC, 
  • SIDA, 
  • AusAID, and  
  • Danida.   

USAID, DFID, CIDA and EuropeAid results frameworks will be analysed in the final post in this series.

Monday, June 28, 2010

Results-Based Management at the United Nations -- 2: Ambiguous results definitions

--Greg Armstrong --

Problems with United Nations agency reporting on results can, in part, be attributed to ambiguous definitions of Outputs.

Level of Difficulty:  Moderate-complex
Primarily useful for:  Anyone trying to understand the UN’s inconsistent RBM system
Coverage:  13 papers, totalling  721 p.
Most useful components: Technical Briefs on Outcomes, Outputs, Indicators and Assumptions
Limitations: Large number of potential documents laden with bureaucratic language

Who this is for

This post, and the previous review of UN agency problems in reporting on results, is intended for bilateral aid agency representatives, national government representatives, project managers, monitors or evaluators, trying to understand the inconsistent reporting of project results by UN agencies.  Because the UN documents are often lengthy, and laden with bureaucratic language, it is unlikely to be of interest to people who don’t need to work with UN agency counterparts.

Background: Problems in UN RBM

This is the second of four posts assessing UN agency results chains, results definitions and problems in reporting on results, and (in the third and fourth posts) the results frameworks bilateral aid agencies use. In the previous post, I suggested that inconsistent UN agency results reporting could, in part, be attributable to a weak results culture, and sometimes weak leadership at the country level within some UN agencies.  This post reviews how ambiguous results definitions also undermine UN agencies’ credibility in results reporting.

The third post in this series will review how three bilateral aid agencies -- SIDA, AusAID and Danida -- define results and results chains, and the fourth and final post will review how USAID, DFID, CIDA and EuropeAid define them.

How results are defined in the UN at the country level

Language matters.  I have argued elsewhere that the terminology used in Results-Based Management is dysfunctional, largely because the jargon (Outputs, Outcomes, Impact, Objectives, Purpose) can mean many different things, and because, in the context of development programming, terms used for results are intended to mean something different than they do in day-to-day usage.  What works in almost all project contexts, however, is to focus on change as the characteristic of a result.  This is a word and a concept that works in most languages, and appeals to people's desire for common-sense terminology.

For most bilateral donors, whatever the specific terms they use, completed activities -- often referred to as Outputs -- are not sufficient for reporting purposes.  While bilateral project managers are obviously required to report on completion of activities, the real emphasis in project reports is expected to be on if and how these activities are contributing to significant changes in the short to mid-term -- changes to knowledge, attitudes, policy or professional practice.

In other words, results.

Some agencies refer to these results as Outputs, Outcomes and Impacts. Others refer to them as Objectives, or Purpose, or as Immediate, Intermediate and Ultimate Outcomes. But whatever the terms, the focus is clear: “Tell us about how the project or programme is contributing to change, not just about how you spent the money”.

The problem for bilateral and national agency partners trying to hold UN partners to reasonable standards of results-based management lies, I think, in the vast number of documents dealing with results in the UN context; the ambiguity of the UN definitions of results; and the confusion about how results chains for projects relate to results chains at the country programme level for different agencies.

At the UN Development Assistance Framework level, results from different agencies are essentially being aggregated, and as the January 2010 UNDAF document "Standard Operational Format and Guidance for Reporting Progress on the UNDAF" [22 p.] made clear, the UNDAF report should be “focused on reporting results at a strategic level….”.  [Update:  That UNDAF guide no longer appears easily accessible on the UNDAF website, but was replaced apparently, in 2011 by a longer UNDG Results-Based Management Handbook.]

Unfortunately,  terms that define results for aggregations of projects at the strategic level do not necessarily work at the individual project level.

So, how did the UN, in the UNDAF and UNDG guides and technical briefs, deal with results?

UN Results Chains

Results chains describe the sequence of, and the nature of the links between, activities, completed activities, and near-term, mid-term and long-term results.  Most practitioners agree that a direct cause-and-effect link between activities and results is an unreasonable claim, given the wide range of intervening variables that occur in real life, but that results chains describe the general sequence of how activities can contribute to change – or results.

The several UN documents reviewed here and in the previous post variously refer to results chains (sometimes in the same document) as


Activities→Outputs→Agency Outcomes→UNDAF Outcomes→National Priority

Activities→Outputs→Country Programme Outcomes→UNDAF Outcomes→National Priority

Activities→Outputs→Agency Outcomes→Country Programme Outcomes→UNDAF Outcomes→National Priorities

“Activity Results”→Outputs→Outcome→UNDAF Outcome→National Priority

UN Agency Outputs→UNDAF Output→National Outcome→National Goal

Looking at these, it is no wonder that there are differences among implementing agencies in how results are explained for projects and programmes in the UN system.

UN Outcomes

Most UN agencies base their own definitions of results on the 2003 harmonized UNDG Results-Based Management Terminology [3 p.], which grew out of the OECD/DAC Glossary of Key Terms in Evaluation and Results-Based Management [37 p.]. "Outcomes," the harmonized terminology states, "represent changes in development conditions which occur between the completion of outputs and the achievement of impact."

Of the hundreds of documents available at the UNDG website, the most frequently referenced for elaboration on RBM terms are four technical briefs produced in 2007: briefs on Outcomes, on Outputs, on Indicators, and on Assumptions and Risk. [Update: The Word file for these briefs was removed from the UNDG website, but is still available, as of June 2018, in a cached version on Google.]

The 2007 Technical Brief on Outcomes [7 p.], the apparent foundation for many of the other UNDG documents on Results-Based Management, however, explained Outcomes at the country level this way: UN country teams have two separate, but linked, types of Outcomes:

  • UNDAF Outcomes

  • Country Programme Outcomes -- which incorporate individual UN agency Outcomes. These are not as clearly defined in these documents as are UNDAF Outcomes, but they appear to be seen as the changes to things such as policy or legislation, needed to facilitate long-term institutional or behaviour change.  Bilateral donors might see these as mid-level results or Intermediate Outcomes,  achievable to some degree over the period of a project.

So, for example, two Country Programme Outcomes of adoption or passage of Human Rights legislation and then adequate budgeting for its implementation might - it was hoped - lead to longer term UNDAF Outcomes of improved human rights in the country.

As the useful checklist in this Technical Brief on Outcomes noted on page 4, a Country Programme Outcome  “…is NOT a discrete product or service, but a higher level statement of institutional or behavioural change.”  The same checklist adds that a Country Programme Outcome should describe “a change which one or more UN agencies is capable of achieving over a five year period.”

[Editorial note, January 2012]:  This document was replaced in February 2011 with the updated Technical Brief on Outcomes, Outputs, Indicators and Risks and Assumptions [21 p.], which maintains some, but not all, of the principles of the 2007 brief]  

All of this is fairly easy to understand, and as long as the assumptions underlying all of these intended results are monitored, these definitions should open the door for UN agencies to collaborate with other donors and with national governments on solid Results-Based Management and the reporting of results.  Many bilateral aid projects also have a five-year term, so it would be reasonable to see Outcomes occurring in that period.

However, this approach is not always applied, agency-by-agency, to UN results reporting at the project level.  The  2009 UNDP Handbook on RBM,  [221 p.] (updated in 2011) while it has many very useful components, says, of the scope of project evaluations, that the focus should be on:
"...Generally speaking, inputs, activities and  outputs (if and how project outputs were delivered within a sector or geographic area and if direct results occurred and can be attributed to the project)* "[p. 135] .
The footnote in that quote acknowledges, however, that some large projects may have Outcomes that could be evaluated. And while later the Handbook says of project reporting that it should include “An analysis of project performance over the reporting period, including outputs produced and, where possible, information on the status of the outcome “ [p. 115] it is clear that at the project level, the priority is on reporting of Outputs.

The UNDP Handbook has a very good discussion of problem identification and stakeholder involvement in the development of a results framework which, it says, “can be particularly helpful at the project level” [p. 53].  But while the UNDP Handbook reiterates the importance of attention to results at the country level, this is less obvious at the UNDP project level:

“Since national outcomes (which require the collective efforts of two or more stakeholders) are most important, planning, monitoring and evaluation processes should focus more on the partnerships, joint programmes, joint monitoring and evaluation and collaborative efforts needed to achieve these higher level results, than on UNDP or agency outputs. This is the approach that is promoted throughout this Handbook.” [p. 14]

The problem with this is that, if attention to results from the component parts of a development programme (i.e. the projects or activities) is missing, and if project results are not properly reported, then the foundation for country-level reporting will be, at best, hypothetical.

On the other hand, a revised draft ILO RBM Guide  [34 p.] noted that:
“Some mistakenly think that outputs are ends in themselves, rather than the various means to ends.  RBM reminds us to shift our focus away from inputs, activities and outputs—all of which are important in the execution and implementation of work—and place it instead on clearly defined outcomes....[p. 17]
[June 2018 update:  This draft is no longer available, but a new ILO RBM and M&E Manual was produced in 2016.]

Confusion Over UN Agency Outputs

Sunday, May 30, 2010

Results-Based Management at the United Nations – 1: Inconsistent RBM

--Greg Armstrong --

There is wide variation in the competence of UN agencies’ use of Results-Based Management. As bilateral aid agencies consider how they can offload responsibility for managing aid programmes to multilateral agencies, it is worth examining how the UN agencies stack up in terms of accountability for results. This is the first of four posts comparing how bilateral and UN aid agencies define results. It reviews publicly available guidelines and policy documents to try to understand the underlying causes for poor results reporting at the UN agencies.

[Edited to update links June 2018]

Level of Difficulty of the reviewed documents:  Moderate to complex
Primarily useful for: Those trying to understand the UN’s inconsistent RBM system
Coverage:  8 documents reviewed,  546 p.
Most useful: The Draft ILO RBM Guide
Limitations: Dense language in most of the documents, and unresolved ambiguities about what results mean in the UN context.

Who this post is for

This post is intended for bilateral aid agency representatives, host country government agencies, project managers, evaluators and monitors who want to know why there is such a wide variation in the standards applied by different UN agencies to managing for and reporting on results. While most readers will not end up any more satisfied with the UN approaches to RBM after reading these posts, they may understand why there is such variation in performance reporting.

One of the problems in reviewing donor agency policies on results reporting, and sometimes the guides on results-based management, is the huge amount of essentially tedious material that the reader must wade through before getting to the heart of what each agency requires, and in particular what it means by "results".  The sites and documents referred to in the first two posts of this series of four are intended to be of use to people who need, for one reason or another, to make sense of, take guidance from or work within UN agency results frameworks.  These posts are likely to be of limited interest to readers who don't need to worry about UN agency RBM, other than perhaps to gain some insight into why it can be difficult sometimes to pin down "what difference" development assistance is making.

Inconsistent UN application of Results-Based Management

Wednesday, April 28, 2010

Using Logic Models in Results-Based Management

--Greg Armstrong --

[Links updated 2018]

This website houses a large number of articles, some of them quite complex, on how to construct logic models, using proprietary software.

The Outcomes Theory Knowledge Base
Level of Difficulty:  Complex
Primarily useful for:  RBM specialists, academics
Length: 50-60 web pages
Most useful sections: Articles on evaluation
Designing Logic Models - Review by Greg Armstrong

The “Outcomes Theory Knowledge Base” is the title given to a compilation of more than 50 articles on what the author refers to as Outcomes Theory --  what many of the rest of us refer to as RBM, or management for development results. Most of the articles focus on how to use visual Logic Models for project management.

Who this is for

Readers will need to sift through 50+ articles, all written by Paul Duignan, to find what they need. But, although there is a lot of repetition in many of the articles, some of them could be useful to three groups:
  • Those interested in learning how a visual approach to results, through the development of logic models or outcome models can clarify results discussions.
  • Those who want an overview of some broad issues in evaluation.
  • Those people interested in an academic analysis of how results are viewed in a broad conceptual format.

For most field project managers and host-country counterparts, the utility of many of the articles will be limited by the relatively dense language used to explain some common-sense ideas (for example, see the article "Problems faced when monitoring and evaluating programs which are themselves assessment systems"). Some simpler summaries on logic model development are, however, also available at a related commercial website.

The Utility of a Visual Logic Model

While there is considerable overlap in the ideas discussed in the more than four dozen articles originally included in what Google refers to as a “Knol” or a “unit of knowledge”, the reader who takes the time to work through these will find some useful material.

By my estimate at least 30 of the articles focus on the advantages to project managers, evaluators and monitors of using a visual approach to managing for results - Outcome Models, Logic Models or other visual representations of the relationship between activities and results. The core of these articles (although each puts these in a slightly different context) is based on some common-sense ideas that many RBM trainers, planners or evaluators may recognise from their own experience. Among these is that in planning, monitoring and evaluating for results, we should:
  1. Focus on results, not activities - and label results as “Outcomes”.
  2. Use a visual logic model to clarify results. This makes it easier to see the relationships between activities and different levels of results than is possible using a Logical Framework.
  3. Distinguish, in the logic model, between a) results and indicators for which an agency is directly responsible in the near term, and b) higher level results, for which attribution to the intervention (for success or failure) is not clear.
  4. Hold managers responsible for two primary tasks: a) Achieving results for which there are clear indicators and a reasonably clear and accepted causal relationship between activities and results; and b) Managing for development results at a higher level, in part by collecting and reporting on indicator data on results for which there is less certainty of attribution.
  5. Frame contracting, monitoring and evaluation within the context of the results, activities and indicators identified in the visual logic model.

Most of these articles also suggest that the proprietary software (DoView), sold through a related website, can help us do all of these things more efficiently and creatively than we can by relying just on tables in word processing software. Taken together, these four dozen articles also appear to form a help file for those using that software.

Evaluation issue summaries

At least ten of these articles focus specifically on evaluation. While the author obviously thinks that the visual logic model would assist in focusing evaluation questions, the articles go beyond this, and some provide what could be, for those looking for quick summaries, useful overviews of major evaluation issues.

Among those articles that could be useful to readers, whether they use the author’s software or not, are:

Greg Armstrong’s analysis

Key Resources on Evaluation and RBM?

While many of the articles on the logic model and on evaluation are useful, the article on "Key Outcomes, Results Management and evaluation resources" provides fewer useful links than the average reader might expect from someone of the author's experience.

The descriptive summary says it contains "A summary list of key outcome theory related resources for working with outcomes, results management, evaluation, performance management, outcomes-focused contracting and evidence-based practice".

"Aha!" I thought, "just what people who want to learn about RBM should have - ideas from the UN, DfID, CIDA, SIDA, universities, government agencies, think-tanks, trainers and NGOs." This could have been very useful to professionals seeking user-friendly tools on evaluation and RBM.

A quick review, however, shows it contains, at least at this writing in 2010, just 14 links - all of them to one of 8 of the author’s own websites, including his blog and twitter feed, and all with links to the sale of the logic modelling software. The author obviously has a history of work in evaluation, and presumably knows of other useful sites.  

Links to other relevant sites, such as, for example the Monitoring and Evaluation News, or the Centers for Disease Control's Evaluation Working Group resources would have been helpful to people looking for useful tools. 

The list of references on Outcomes Theory similarly contains 33 articles, all written by this author. Many of them are probably useful, but a broader net might have brought in ideas from the work other people have done on similar or related topics.

The Value of Logic Models

While there is some overlap in the content of these different articles, the basic point being made here is valid: using a Logic Model diagram as the focus for discussion can, as I have found recently in workshops in Vietnam, Cambodia, Indonesia and Thailand, clarify differences of perception over results and assumptions about cause and effect, and can energize discussions on project design and evaluation.

In one of the articles on this website, dealing with the value that evaluation using a visual logic model can add to governance and policy making, the author writes:
“Outcomes models need to be able to be used in all parts of the decision-making process. In order for them to be able to be used in this way, their visualizations needs to be portable across different media so that they can be used whenever and wherever they need to be used. For example, they should be able to be developed and used in real-time during meetings with high-level stakeholders, printed out in a report, and reproduced on an intranet or the internet. Meeting this criteria requires using appropriate software and laying out an outcomes model in a way that ensures that it is portable”.

Software Limitations for Logic Model Development

I facilitate workshops on developing results frameworks, logic models and indicator assessment several times a year, in almost all cases in countries where English is at best a second language, where internet access is often unstable, and in some cases where electrical power is unreliable.

I have not used the DoView software, to which many of these articles link, in such workshops, but I can see from its description that it could be helpful, particularly during facilitation of logic model development workshops. Given that this software was developed specifically with results chains in mind, it could possibly have an advantage over other visual mapping software of a similar nature, such as Xmind, Vue or SmartDraw, among many others. Like those other programmes, however, the software promoted here has limitations that would diminish its utility for facilitators working in the situations I work in.

At the end of a Logic Model development workshop, one important deliverable is a draft Logic Model and possibly an indicator assessment framework, which the users can take back to their many different offices, in different countries, different provinces or cities, a document they can distribute widely to their own colleagues and their own networks, for further critique and possible alteration.

The price for the DoView software is not high - roughly $35 per copy, cheaper than others that can run to several hundred dollars - but obviously not as cheap as Vue, Xmind or others that are free. Even the free programmes have a problem with accessibility and portability, however. Having used any of these programmes to engage people in a dynamic discussion of results, what do you do next, when they want to continue the discussion with their own partners? Do you ask them all to download and install the programmes?

It is not clear to me that the Logic Model diagrams from any of the visual mapping programmes I have seen can actually be edited with standard, commonly available word-processing software such as Microsoft Word, OpenOffice Writer, or Google Docs. While Logic Models produced with DoView, Xmind, Vue, SmartDraw and many other similar programmes can be viewed using those programmes, or alternatively in PDF or on the web, and can be pasted into word-processing documents, in most cases they cannot be edited by people who do not have the software in which the diagrams were originally produced -- making downstream participation very difficult.

For those programmes which are web-based, some editing can be done on the internet, but accessibility does not rest in "the cloud" for people in places where internet access is not always reliable.  The bottom line is that the utility of all of these mapping and diagramming programmes is limited where it is impractical to install specialised programmes on dozens of different computers.

If portability really is the criterion for assessing all of these programmes, then the priority should be not just the ability to view the results in a PDF file or on the internet, but the ability of partners to critique and edit the models.

No Painless Performance Indicators

Another issue is that none of these programmes, with these limitations, will be easy to link to the other half of the results discussion - in many ways the most time-consuming portion of results-based planning - the assessment of the utility of indicators.

As anyone who has worked through the indicator development process knows, it can take days for project partners, working in groups, to sort through potential indicators, testing them for validity, for the existence of baseline data, for the availability and accessibility of reporting data, for the existence of appropriate research skills, and the time required for data collection and analysis.

While several of the articles in the Outcomes Theory Knowledgebase refer to the tongue-twisting “Non-output attributable intermediate outcome paradox”  and make a reasonable point about attribution, none of them makes the job of assessing indicators any easier, any faster or any more accessible for partners.

The Outcomes Theory Knowledgebase web site has many “how to” videos, hosted on YouTube, aimed primarily at helping people use the proprietary software. One of these is titled “Painless Performance Indicators: Using a Visual Approach”. This got my hopes up!

But, foiled again: What the video demonstrates is that if you have already done all of the hard work on indicators, having completed this assessment, you can insert a reference to the existence of the indicator, in the Logic Model, using the software. I am sure this is useful (although it can also be done with word processing programmes and the use of hyperlinks) but the point is that inserting indicators in a visual model is not the painful part of indicator development.

For the time being, until something new develops, I will be sticking with the basic word processing programmes which allow a facilitator to work with participants to develop a logic model (albeit without some of the ease of the mapping software) and then link and integrate it with an indicator assessment worksheet, as indicators are being proposed, tested, rejected, modified and accepted. But, I continue to live in hope, and may revisit the issue of software again later.

The bottom line: "The Outcomes Theory Knowledge Base" includes articles with some useful arguments in favour of using a visual logic model approach, and some quick summaries of evaluation issues, but there is no magic bullet here.

Other resources on Logic Models:


Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and on the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management handbooks and guides, go to the RBM Training website.

Monday, March 29, 2010

Applying RBM to Policy

--Greg Armstrong -- 

[Edited to update links and content August 2016]

Can policy making and policy advice be assessed by the same performance assessment standards and RBM methods that are applied to programmes? Mark Schacter’s views have evolved over the past five years.

Level of Difficulty:  Moderate to complex
Primarily useful for:  Policy makers, performance-management divisions
Length: 2 papers, 42 pages
Most useful section: What constitutes “good policy advice” p. 7-8 in “The Worth of a Garden”
Limitations:  Focused primarily on senior officials, unlikely to be useful to field workers, unless they are in a policy development project.
Mark Schacter's web page

Who this is for

Senior policy makers and performance management officials in the public service will recognise the issues Mark Schacter raises in all of his writing on RBM. Though targeted primarily at Canadian public officials, the issues he raises are relevant to public servants everywhere. The policy discussions, however, are less likely to meet the needs of development field workers or project managers, unless they are working specifically on policy development projects.

A decade of RBM analysis

Mark Schacter has been working on results-based management, training people on performance measurement and writing about it, since at least 1999. While this is certainly not the only thing he does, his writing on RBM has been prolific, and influential.  The CIDA/Global Affairs Canada revised RBM terminology, formally adopted in 2008, for example, bears a striking resemblance to that used by Mark Schacter in his 2002 paper "Not a Toolkit: Practitioner's Guide to Measuring the Performance of Public Programs".

First at the Institute on Governance, and more recently as a freelance consultant, he has published [at least 40] articles since 1998, focused on performance measurement and RBM. Some of these can be found on the Institute on Governance web site, but more are available directly on one of his own web sites: Mark Schacter Consulting.

[Editorial note March 2016: This review does not cover a number of new articles which can be found at the Mark Schacter publications page.]

Should Policy be judged by the same RBM standards as Projects or Programmes?

While the paper “Not a Tool Kit” provides a useful summary of the steps and issues in RBM for the public service, particularly the discussion of tradeoffs on indicators, the focus of this review is on how he treats policy making in the RBM context.

Two of his articles demonstrate how Mark Schacter's views shifted between 2002 and 2006 -- although only marginally, I think -- on the important issue of whether standard performance measurement processes can, or should, be applied to policy development and to public servants' role in providing advice.

Is Policy Unique?

A strong advocate of intelligent application of RBM to the management of public programmes, in his 2002 article What Will Be, Will Be: The Challenge of Applying Results-based Thinking to Policy, he reviewed the standard arguments against applying results-based management to policy: that policies are intangible things, subject to a variety of influences such as politicians' short-term political needs, and that there is often a huge lag in time between policy development and any chance of seeing concrete results. The conclusion, for those who take this view -- and I have heard it recently -- is that policy is therefore "unique", and that tracing the effect of advice on the success of policy is therefore in some way "unfair".

Schacter took the view, in this 2002 article, that intangibility and complexity are not unique to policy but occur often in programme implementation. He appeared to take the view that while these things present challenges to people attempting to assess performance, they also provide an opportunity to use performance measurement, and the critical examination of a logic model, to clarify assumptions and test the understanding of the intended results, implied or clearly stated by policy.

Evaluation and Performance Measurement

The link between the views Mark Schacter held in 2002, and those he expressed in 2006, is the role of evaluation in assessing the effectiveness of policy. Performance measurement, he wrote in 2002, looks at where we are today, and tries to assess how likely we are to achieve long-term results, by looking for evidence that we are making progress against shorter-term results. 

Evaluation, on the other hand, assesses not just whether results have been achieved, but whether they were the most appropriate results, why results were, or were not achieved, and whether alternative means of achieving them would have been more appropriate. [p. 17]

[2016 Edit:  In 2011 Schacter added a new paper - Tell Me What I Need to Know: A practical guide to program evaluation for public servants, which makes some useful distinctions between the policy requirements of monitoring and evaluation.]

The case for performance measurement of policy

Performance measurement, he wrote in 2002, has its limitations, particularly given the usual lag between policy development and achievement of long-term results. But, he continued:
“Sometimes a less-than-perfect instrument is, under the circumstances, the best one for the job at hand. Performance measurement is indeed a “second-best” instrument – but a very useful instrument nonetheless….Citizens have no less a right to be informed about the performance of policies than of programs. In order to explain and justify the allocation of resources to …any policy (or program) you need to have a way of connecting what you are doing now with where you want to be in the long term. This connection needs to be clear and must make sense not only in the minds of the people responsible for the policy, but also in the minds of external stakeholders (citizens, civic groups, private sector operators, politicians, etc.).
Performance measurement helps you make that connection. It helps you tell a believable and compelling story about why a policy was conceived in the first place, and whether or not it appears to be on the right track.” [p. 24-25]

The case for evaluation of policy

By 2006, in a paper for Canada’s Treasury Board, “The Worth of a Garden: Performance Measurement and Policy Advice in the Public Service”, Mark Schacter had apparently come to the conclusion that measuring policy performance in the short term might in fact be too challenging, and that emphasis could probably be more productively put on longer-term evaluation.

He outlined two options for using performance measurement on a regular basis to assess progress towards policy results. The first is to assess what he called the “process and outputs standards for policy advice”; the second is essentially what he advocated in his 2002 article -- to assess progress toward achievement of immediate and intermediate outcomes, that is, whether policy advice was accepted and implemented. The conclusion he came to in 2006, however, differed from his earlier view:
“Low-specificity organizations and tasks pose especially difficult problems for performance measurement – problems so significant that it may be impractical (if not impossible) to apply standard performance measurement in a way that yields useful results. This does not mean that one should not attempt to assess the quality of a policy shop’s performance. But it does suggest that evaluation may be worth considering as a better tool than performance measurement for this particular task. Evaluation, though closely related to performance measurement, differs from it in ways that may provide a better fit with the subtleties and ambiguities of the policy-advice process.” [p. 11]

Greg Armstrong’s comments:

Are performance assessment and evaluation mutually exclusive?

At no point did Mark Schacter advocate abandoning the assessment of policy units’ performance. He has always maintained that at some point policy functions have to be assessed.

What is unclear to me, however, is why the assessment options -- regular performance assessment and eventual evaluation -- appeared to be regarded as mutually exclusive. [2016 edit: His 2011 paper - referenced above, sees them as complementary elements on an "evaluation continuum".]

It seems to me that combining a) an assessment of the quality of policy advice, and the processes which lead to it, with b) an assessment of interim results, and c) a longer-term evaluation, is a reasonable (if obviously not perfect) way of helping policy advisors, policy makers, legislators, and those who fund them, to understand the progress they are making toward long term results.

By 2008, Schacter was writing about other performance assessment issues, and in How Good is Your Government: Assessing the Quality of Public Management [2008], policy was mentioned only once, in passing. One criterion he proposed in that article for assessing the efficient management of resources, however, was that “Results-based performance information is used routinely as a basis for continuous improvement of program/policy performance.” [p. 5]

This suggests that he had not given up completely on the contribution to the policy function of regular performance assessment.

How RBM applies to Policy Projects in International Aid

It is important to note that none of Mark Schacter’s writing, at least between 2002 and 2006, was focused on whether performance assessment could be applied to efforts to improve the capacity to provide policy advice.

If, as I contend, there is a role for performance assessment in assessing progress on policy in general, there is surely a much clearer role for it, and for RBM in general, in planning, implementing and assessing results for international aid projects which focus on the development of capacity for policy research, policy formulation, and legislative capacity.

Mark Schacter maintained in 2006 that the provision of policy advice is essentially an output -- a completed activity. A case could perhaps be made that this is true for some policy functions in the context for which he was writing -- although even that is not completely clear to me -- but it is not true for policy capacity development. Improved quality of the policy-making process, and improved quality of the advice provided, are both clearly interim results in capacity development terms, and therefore worth assessing on a regular basis.

In the 2006 paper, Schacter outlined the commonly accepted criteria for assessing the quality of the policy advice process, adapted in part from studies in Australia and New Zealand:

  • Timeliness of the advice for decision-makers
  • Relevance of the analysis to the current realities faced by decision-makers
  • Stakeholder consultation underlying the proposed policy
  • Clarity of purpose (essentially, does the policy itself rest on a solid logic model)
  • Quality of evidence, and the link between evidence, policy and purpose
  • Balanced range of alternatives and viewpoints reflected in the analysis
  • Presentation of a range of viable options
  • Clarity in presentation
  • Pragmatic assessment of the potential problems of implementing the policy.

All of these could, with some work, form the basis for useful performance indicators for policy capacity projects or programmes, and in many cases they have been used for this purpose. Certainly, as Mark Schacter observed, indicators relevant to these issues would provide qualitative data -- subjective in nature, and time-consuming to collect.

But, in my experience, qualitative data are not necessarily any more time-consuming to collect than quantitative data, and they are certainly no less valid when the intention is to assess the quality of the policy formulation process.

The bottom line:

Policy development is, indeed, sometimes an uncertain process, but there are ways of improving it, of building capacity and of assessing this capacity. Mark Schacter’s articles on the role of performance assessment in policy clearly outline the challenges, but also deliver some reasonable suggestions on how to deal with them.

Further reading:

[2016 - Other very useful more recent papers can be found on Mark Schacter's website, including several on evaluation, monitoring, the use of performance dashboards and risk assessment.]



Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website
