-- Greg Armstrong --
Problems with United Nations agency reporting on results can, in part, be attributed to ambiguous definitions of Outputs.
Level of Difficulty: Moderate-complex
Primarily useful for: Anyone trying to understand the UN’s inconsistent RBM system
Coverage: 13 papers, totalling 721 p.
Most useful components: Technical Briefs on Outcomes, Outputs, Indicators and Assumptions
Limitations: Large number of potentially relevant documents, laden with bureaucratic language
Who this is for
This post, like the previous review of UN agency problems in reporting on results, is intended for bilateral aid agency representatives, national government representatives, and project managers, monitors or evaluators trying to understand the inconsistent reporting of project results by UN agencies. Because the UN documents are often lengthy and laden with bureaucratic language, it is unlikely to be of interest to people who do not need to work with UN agency counterparts.
Background: Problems in UN RBM
This is the second of four posts assessing UN agency results chains, results definitions and problems in reporting on results, and (in the third and fourth posts) the results frameworks bilateral aid agencies use. In the previous post, I suggested that inconsistent UN agency results reporting could, in part, be attributable to a weak results culture, and sometimes weak leadership at the country level within some UN agencies. This post reviews how ambiguous results definitions also undermine UN agencies’ credibility in results reporting.
A third post in this series will review how three bilateral aid agencies -- SIDA, AusAID and DANIDA -- define results, and the fourth and final post will review how USAID, DFID, CIDA and EuropeAid define results and results chains.
How results are defined in the UN at the country level
Language matters. I have argued elsewhere that the terminology used in Results-Based Management is dysfunctional, largely because the jargon (Outputs, Outcomes, Impact, Objectives, Purpose) can mean many different things, and because, in the context of development programming, terms used for results are intended to mean something different from what they mean in day-to-day usage. What works in almost all project contexts, however, is to focus on change as the defining characteristic of a result. This is a word and a concept that works in most languages, and appeals to people’s desire for common-sense terminology.
For most bilateral donors, whatever the specific terms they use, completed activities -- often referred to as Outputs -- are not sufficient for reporting purposes. While bilateral project managers are obviously required to report on completion of activities, the real emphasis in project reports is expected to be on if and how these activities are contributing to significant changes in the short to mid-term -- changes to knowledge, attitudes, policy or professional practice.
In other words, results.
Some agencies refer to these results as Outputs, Outcomes and Impacts. Others refer to them as Objectives, or Purpose, or as Immediate, Intermediate and Ultimate Outcomes. But whatever the terms, the focus is clear: “Tell us about how the project or programme is contributing to change, not just about how you spent the money”.
The problem for bilateral and national agency partners trying to hold UN partners to reasonable standards of results-based management lies, I think, in the vast number of documents dealing with results in the UN context; the ambiguity of the UN definitions of results; and the confusion about how results chains for projects relate to results chains at the country programme level for different agencies.
At the UN Development Assistance Framework level, results from different agencies are essentially being aggregated, and as the January 2010 UNDAF document "Standard Operational Format and Guidance for Reporting Progress on the UNDAF" [22 p.] made clear, the UNDAF report should be “focused on reporting results at a strategic level….”. [Update: That UNDAF guide no longer appears to be easily accessible on the UNDAF website, but was apparently replaced in 2011 by a longer UNDG Results-Based Management Handbook.]
Unfortunately, terms that define results for aggregations of projects at the strategic level do not necessarily work at the individual project level.
So, how did the UN, in the UNDAF and UNDG guides and technical briefs, deal with results?
UN Results Chains
Results chains describe the sequence of, and the nature of the links between, activities, completed activities, and near-term, mid-term and long-term results. Most practitioners agree that assuming a direct cause-and-effect relationship between activities and results is unreasonable, given the wide range of intervening variables that occur in real life, but that results chains describe the general sequence by which activities can contribute to change -- or results.
The several UN documents reviewed here and in the previous post variously refer to results chains (sometimes in the same document) as:
Activities→Outputs→Outcomes→Impacts
Activities→Outputs→Agency Outcomes→UNDAF Outcomes→National Priority
Activities→Outputs→Country Programme Outcomes→UNDAF Outcomes→National Priority
Activities→Outputs→Agency Outcomes→Country Programme Outcomes→UNDAF Outcomes→National Priorities
“Activity Results”→Outputs→Outcome→UNDAF Outcome→National Priority
UN Agency Outputs→UNDAF Output→National Outcome→National Goal
Looking at these, it is no wonder that there are differences among implementing agencies in how results are explained for projects and programmes in the UN system.
UN Outcomes
Most UN agencies base their own definitions of results on the 2003 harmonized UNDG Results-Based Management Terminology [3 p.], which grew out of the OECD/DAC Glossary of Key Terms in Evaluation and Results-Based Management [37 p.]. “Outcomes,” the harmonized terminology states, “represent changes in development conditions which occur between the completion of outputs and the achievement of impact.”
Of the hundreds of documents available at the UNDG website, the most frequently referenced for elaboration on RBM terms are four technical briefs produced in 2007: on Outcomes, on Outputs, on Indicators, and on Assumptions and Risk. [Update: The Word file for these briefs was removed from the UNDG website, but is still available, as of June 2018, in a cached version on Google.]
The 2007 Technical Brief on Outcomes [7 p.], the apparent foundation for many of the other UNDG documents on Results-Based Management, explained that UN country teams have two separate, but linked, types of Outcome at the country level:
- UNDAF Outcomes
- Country Programme Outcomes -- which incorporate individual UN agency Outcomes. These are not as clearly defined in these documents as are UNDAF Outcomes, but they appear to be seen as the changes to things such as policy or legislation, needed to facilitate long-term institutional or behaviour change. Bilateral donors might see these as mid-level results or Intermediate Outcomes, achievable to some degree over the period of a project.
So, for example, two Country Programme Outcomes -- the adoption or passage of Human Rights legislation, and then adequate budgeting for its implementation -- might, it was hoped, lead to longer-term UNDAF Outcomes of improved human rights in the country.
As the useful checklist in this Technical Brief on Outcomes noted on page 4, a Country Programme Outcome “…is NOT a discrete product or service, but a higher level statement of institutional or behavioural change.” The same checklist adds that a Country Programme Outcome should describe “a change which one or more UN agencies is capable of achieving over a five year period.”
[Editorial note, January 2012: This document was replaced in February 2011 with the updated Technical Brief on Outcomes, Outputs, Indicators and Risks and Assumptions [21 p.], which maintains some, but not all, of the principles of the 2007 brief.]
All of this is fairly easy to understand and, as long as the assumptions underlying these intended results are monitored, these definitions should open the door for UN agencies to collaborate with other donors and with national governments on solid Results-Based Management and the reporting of results. Many bilateral aid projects also have a five-year term, so it would be reasonable to expect Outcomes to occur in that period.
However, this approach is not always applied, agency by agency, to UN results reporting at the project level. The 2009 UNDP Handbook on RBM [221 p.] (updated in 2011), while it has many very useful components, says of the scope of project evaluations that the focus should be on:
"...Generally speaking, inputs, activities and outputs (if and how project outputs were delivered within a sector or geographic area and if direct results occurred and can be attributed to the project)* "[p. 135] .The footnote in that quote acknowledges, however, that some large projects may have Outcomes that could be evaluated. And while later the Handbook says of project reporting that it should include “An analysis of project performance over the reporting period, including outputs produced and, where possible, information on the status of the outcome “ [p. 115] it is clear that at the project level, the priority is on reporting of Outputs.
The UNDP Handbook has a very good discussion of problem identification and stakeholder involvement in the development of a results framework which, it says, “can be particularly helpful at the project level” [p. 53]. But while the UNDP Handbook reiterates the importance of attention to results at the country level, this is less obvious at the UNDP project level:
“Since national outcomes (which require the collective efforts of two or more stakeholders) are most important, planning, monitoring and evaluation processes should focus more on the partnerships, joint programmes, joint monitoring and evaluation and collaborative efforts needed to achieve these higher level results, than on UNDP or agency outputs. This is the approach that is promoted throughout this Handbook.” [p. 14]
The problem with this is that, if attention to results from the component parts of a development programme (i.e. the projects or activities) is missing, and if project results are not properly reported, then the foundation for country-level reporting will be, at best, hypothetical.
On the other hand, a revised draft ILO RBM Guide [34 p.] noted that:
“Some mistakenly think that outputs are ends in themselves, rather than the various means to ends. RBM reminds us to shift our focus away from inputs, activities and outputs—all of which are important in the execution and implementation of work—and place it instead on clearly defined outcomes....” [p. 17][June 2018 update: This draft is no longer available, but a new ILO RBM and M&E Manual was produced in 2016.]
Confusion Over UN Agency Outputs
It is at the Output level that the real confusion starts, and this, I think, in turn undermines attempts to get some UN agencies to think about, or to report clearly on, results at the project level.
The 2003 document UNDG Results-Based Management Terminology -- at least in terms of Outputs -- improved upon the earlier OECD/DAC Glossary of Key Terms in Evaluation and Results-Based Management, when it defined Outputs as “The products and services which result from the completion of activities within a development intervention.”
The original OECD/DAC definition of Outputs was “The products, capital goods and services which result from a development intervention; may also include changes resulting from the intervention which are relevant to the achievement of outcomes.”
The introduction of “completion of activities” in the UNDG definition, and the absence of any mention of “changes”, opened the possibility that the UN would have a definition of Outputs that could help it discriminate between activities and products on the one hand (completed training, study tours, texts produced) and results on the other (increased understanding or changed attitudes). This would be compatible with the view of many of the major bilateral donors, which see Outputs as completed activities or products -- a necessary step in achieving results, but not themselves results.
Unfortunately, the 2007 UNDAF Common Country Assessment guidelines [76 p.], which are no longer available, complicated the issue by defining Outputs as:
“The specific products, services, or changes in processes resulting from agency cooperation”
In the 2007 Technical Brief on Outputs [11 p.], later available as part of an integrated package of technical briefs on RBM, there is yet another definition -- and the line between necessary products and the results they contribute to is further blurred:
“Outputs are deliverables. They normally relate to operational change: changes in skills or abilities, the availability of new products and services. They are the type of results over which managers have a high degree of influence. Failure to deliver outputs is, on the face of it, a failure of the programme or project.”
In the Checklist for validating Outputs, this same document says that “The output is a new product or service, new skill or ability that can be developed and/or delivered by one UN agency working with its partners”.
The 2009 UNDP Handbook provides some conflicting examples of what Outputs are. In some cases they are clearly completed activities or products:
- “Study of environment-poverty linkages completed.”
- “Police forces and judiciary trained….”
But in other examples, there is the hint of the changes (results) that could come out of completed activities:
- “Systems and procedures implemented and competencies developed…” [p. 59]
The last example combines, as an Output, a completed activity and a learning result.
The core of the UN RBM problem
The Issues Note on Results Based Management in UNDAFs [11 p.] analysed a number of results chains and came to the conclusion that they were often illogical, with Outputs more complex and difficult to achieve than supposedly subsequent, and more advanced, Outcomes. Downgrading the complexity of Outputs makes sense, in this context, at the country level, where many results are being aggregated, but when applied to the project level, it may provide an excuse for very limited reporting on results.
And this is the core of the problem.
If we must be limited to the simplistic three-level results chains, and if jargon like “Outputs” must be used (and I am not sure it must), then at the country level, saying that changes in skills or abilities – real development results – are part of the Output, may be legitimate. This may be particularly necessary if we are aggregating results from a large number of projects and trying to fit all of their multiple results into a three-stage description, where the only terms available are Outputs, Outcomes and Impact.
But why combine two different concepts -- completed activities and the changes these produce -- under one label? Why not separate the completed activities from the change? The problem of confused labels leading to inadequate reporting, it seems, must be laid at the door of a harmonization process that permitted core RBM definitions to become -- and remain -- intellectually vapid.
When we allow activities and results to be mixed and labeled both as “Outputs” at the project level, and then tell project managers that their primary reporting responsibility is for Outputs, we provide two potentially dysfunctional things:
- A disincentive for timid or uncertain UN agency leadership to take responsibility for achieving anything risky in terms of change -- real results -- focusing instead on the logistics of organizing activities and delivering products;
- An excuse for these same agencies, if they so choose, to limit themselves to generating data and reporting on the “product or service” delivery part of the definition of Outputs, rather than on genuine changes -- results -- or the lack of them.
Neither response is valid, however, even in the context of the UN RBM guidelines. The same UN Technical Brief on Outputs, in fact, clearly argues against such timidity -- and this is important for anyone who wants to hold UN agencies to higher standards of project reporting:
“You may be tempted to list things like workshops and seminars as outputs. After all, they are deliverable and some workshops can be strategic if they gather decision takers in one room to build consensus. But, in most cases, workshops and seminars are activities rather than outputs. And remember that outputs are not completed activities – they are the tangible changes in products and services, new skills or abilities that result from the completion of several activities.”
So, the intention was honest, and indeed some UN agencies do make a genuine attempt to report on real and tangible changes or results. ILO, for example, had, in its 2005 self-directed learning module on results-based management [131 p.] (no longer readily available), a framework that viewed Outputs in part as a result. However, in what appears to be a new draft ILO RBM guide developed after 2007, the organization changed the emphasis. While the definition of Outputs in this new ILO RBM guide was standard, combining activities and some degree of initial change, the document distinguished more clearly between completed activities and results, and more particularly between Outputs and results.
“RBM is also significant in that it represents a shift away from the narrow focus on inputs, activities and outputs, which are factors internal to organizations. RBM moves the focus outwards towards results, which are the external changes that organizations are working to achieve and the fundamental reason why organizations exist to begin with.” [p. 4]
ILO also has its own version of country programme results -- in its Decent Work Country Programmes -- where it insists that reporting at the project level focus on results and not just on completed activities.
“At the ILO, outcomes are developed: at the organization level and appear in the [programme and budget]; at the country level and appear in [country work programmes]; and at the project level and appear in various project reports and evaluations.” [ILO draft RBM Guide p. 7].
Any ambiguity there may have been in the former ILO approach to Outputs has been reduced substantially with the injunction to “Remember that targets are connected to outcomes and indicators, and never to outputs.” [p. 13].
By 2011, version 2 of the ILO RBM Guide made it clear that Outputs were not results:
"Outcomes are significant changes (policies, knowledge, skills, behaviors or practices, etc.)that are intended to occur as a result of actions taken by constituents with the Office’s support, whether independently or in collaboration with other partners.
ILO outcomes state changes that are expected to occur as a direct result of the ILO interventions.They correspond to real-world results to which the ILO’s contribution is direct and verifiable, for which it can be reasonably held accountable, and against which is performance is assessed and reported." [p. 5]
Other agencies, however, persist -- particularly at the project level -- in reporting only on how many people they have trained, how many guides they have produced, how many meetings have been convened. While it might be argued that the UN’s focus is at the country level, and thus Outputs must incorporate some element of change (and not be confined to completed activities), it is precisely at the level of change that project reporting is not, in actual practice, consistently taking place.
The most ironic example of this I have seen was a multi-donor project on aid effectiveness that focused on improving RBM capacity in the national aid coordinating agency. Managed by a UN implementing agency, the project year after year -- and despite bilateral pleas for reports on results -- continued to report on how many people had been trained, how many workshops held, how many reports produced. It never reported on indicator-based evidence of performance management capacity improving, or even that there had been some change in understanding or attitudes towards performance management. Both results could have been demonstrated with little real effort. However, the impression left by its failure to respond to Steering Committee requests for data on change, was that the implementing agency simply was not motivated to report on results.
While UNDP’s policies and procedures for closing a project may say that a final report “should look at sustainability of the results, including the contribution to related outcomes (and the status of these outcomes) and capacity development”, it is reasonable to question how this could be done in any meaningful way without the project actually collecting data on Outcome indicators. [Update 2018: The original link to this document was removed from the UNDP website, but the reference can, as long as the document remains available, be found on page 80 of a policy paper on procedures for national implementation of UNDP-supported projects - PDF download.]
Conclusions: Responsibility for results reporting
Project managers may not, in some agencies, be responsible for achieving results (change, not just completed activities) at the Outcome level, but they should be held accountable for managing for, and reporting on, development results – at the very least attempting to influence results. But if we are to take management for development results seriously, and to report on this process, we need to monitor progress by looking at indicators of change – of results -- whatever label they are given.
There is wide variation in how UN agencies treat Results-Based Management for projects, with some reporting on the actual results of their work with partners, while others limit themselves to the mundane details of how they delivered activities. This variation may in part be a matter of confusion in the terminology about what they are responsible for. Clarifying this at the Output level, by clearly distinguishing between completed activities and real results, is a necessary first step in improving the results culture in UN agencies.
Poor implementation of Results-Based Management must also be seen, however, as a matter of leadership. It is the UN agency heads who determine, through what they reward and what they ignore, whether managers will focus on generating and reporting results or settle for just reporting on completed activities. And it is the UN agency country leadership that sets the tone for whether project managers will take results reporting seriously.
Obscure results terminology has just allowed idle or unfocused UN agency leadership to take the path of least resistance, and report on completed activities rather than on results.
The bottom line:
The 2009 bilateral donor assessments of UNDP [46 p.] and UNICEF indicate, as I noted in my previous post, that bilateral aid agencies’ perceptions of how UN agencies manage for development results are more critical than country partners’ views. But as the bilateral donors provide a huge amount of assistance through these agencies, it would be wise for the UN agencies to take the criticisms of how they manage for results into account. As DFID begins its multilateral spending review, careless results reporting practices may finally come under serious scrutiny from the bilateral aid agencies.
Next: The next two posts will look at bilateral aid agency results chains, as reflected in SIDA, DANIDA, AusAID, USAID, DFID, CIDA and EuropeAid documents.
_____________________________________________________________
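GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks. For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website. This post was edited on August 27, 2010, January 13, 2012, July 2014 and June 2018 to update links.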