Greg Armstrong --
As bilateral donors consider offloading responsibility for delivering aid programmes to multilateral agencies, it pays to examine what standards these UN agencies use for results reporting. Two earlier posts looked at the relatively weak results reporting in many UN agencies, and questioned why the bilateral agencies don’t hold the UN agencies to higher standards. This third post looks at how three major bilateral donors define their results frameworks.
Level of Difficulty: Moderate to complex
Primarily useful for: Project managers, national partners
Length: 23 documents, 1,337 p.
Most useful: SIDA, AusAid and DANIDA guides to the LFA
Limitations: Mounds of bureaucratic language in many of the bilateral documents make it difficult to identify and take effective guidance from potentially useful material.
[Some links updated, April, 2012, June 2012 and June 2018 - The aid agencies remove documents regularly from their own websites, but readers may sometimes be able to find them at other sites, with some diligent online search. One site to try, if some of the above links do not work, is the Project Management for Development Organizations Open Library Site, which archives a lot of useful material.]
Who these materials are for
Project managers, evaluators and national partners trying to understand how SIDA, AusAid and Danida define results and frame their approaches to RBM.
Background – Confusion in UN results definitions
In my previous two posts, I examined how UN Results-Based Management frameworks can provide the rationale for some agencies to avoid reporting on results, and to focus instead simply on describing their completed activities. To a large extent this confusion arises out of the ambiguous terminology used in Results-Based Management – words that mean different things to different agencies even as they agree to use them in common. This is complicated further in the UN case when agencies apply the same terms to near-term results at the project level as are applied at the aggregated country level. As I noted in the second post, vague definitions for results, even when they are ‘harmonised’, too frequently seem to lead unmotivated or inattentive agency heads and project managers to the path of least resistance: project reporting on completed activities and not results.
This third and a subsequent fourth post in this series briefly survey how the results frameworks of 7 bilateral aid agencies compare to those of the OECD/DAC and UNDAF. This review is only of those agencies where information, in English, could be obtained in reasonable time from their own or associated -- and publicly accessible -- websites. Those already familiar with the discussion of the problems in the UN framework may want to skip directly to the review of the bilateral agencies.
I am assuming, for the moment, a general agreement that “cause and effect” is not a reasonable description of what is intended by a results chain, but rather that it is a notional concept of how activities can contribute to intended results. In this context, this post examines how results chains are presented in documents from the
- OECD/DAC,
- SIDA,
- AusAid, and
- Danida.
USAID, DFID, CIDA and EuropeAid results frameworks will be analysed in the final post in this series.
OECD DAC RBM
The OECD DAC results chain is a common link between the bilateral and UN agencies, and it looks like this:
Activities→Outputs→Outcomes→Impact
"Outcomes", as used here, corresponds roughly to what the bilateral agencies work towards as development changes or results. It is at the Output level, however, that ambiguity and problems occur. The approved OECD/DAC RBM terminology for Outputs is “Extrants/Produit” in French and “Producto” in Spanish. In both cases the intention is clear, and stated this way there is nothing obviously wrong with the results chain.
But the definition of (the English) Outputs provided in the OECD/DAC glossary has two elements:
First: “The products, capital goods and services which result from a development intervention;”
and then: “changes resulting from the intervention which are relevant to the achievement of outcomes.”
It is this second clause -- on “changes” -- that blurs the distinction between products or completed activities on the one hand, and results on the other.
And it is this essential lack of clarity in the original OECD/DAC definitions that is now reflected in the UN agency definitions for Outputs. The definition offers two choices -- Outputs as completed activities, or Outputs as real results (changes) -- and by not distinguishing clearly between the two, it has given unmotivated UN project directors an excuse to report simply on completed activities.
Bilateral Aid Agency Results Frameworks
Many of the bilateral donors reflect this same ambiguity in how they define Outputs, some specifying them as just completed activities or products, while others include the additional possibility of near-term change.
Nevertheless, whether activities and products are labelled as completed activities or as Outputs, for some bilateral aid agencies, they are not considered sufficient to constitute results reporting in results based management terms.
Thus, for example, an activity that trains 500 health workers may have Outputs (completed activities or products) of “training materials” and 500 “trained workers”. But completion of the training would not be considered a “result” by the tougher bilateral RBM standards.
Instead, most agencies would look for reports of viable results at the primary or near-term level only where it was clear what difference the training made. In this context, the result of the activity would be visible if it could be demonstrated that a substantial number of the 500 health trainees
- learned something they did not know before,
- changed their attitudes, or
- were now working more effectively.
In the longer term, results would presumably also be sought in more systemic types of change -- in the effectiveness and relevance of the professional development system, in health services delivery policy and practice, and eventually in the health status of the population.
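To make the distinction concrete, the training example above can be sketched as a small data model. This is a hypothetical logframe of my own, in Python, not any agency's actual format:

```python
from dataclasses import dataclass

@dataclass
class ChainEntry:
    level: str        # "Output" or "Outcome" -- the OECD/DAC labels
    statement: str
    indicators: list

# Hypothetical logframe rows for the 500-health-worker training example above.
chain = [
    ChainEntry("Output", "500 health workers complete training",
               ["number of workers trained",
                "sets of training materials produced"]),
    ChainEntry("Outcome", "Trained workers apply improved practices",
               ["% of trainees scoring higher on a post-test",
                "% of trainees observed using the new practices at six months"]),
]

# Under the stricter bilateral reading described above, only the
# Outcome-level rows count as results reporting.
results = [entry for entry in chain if entry.level == "Outcome"]
```

Filtering on level is the whole point: the Output rows describe delivery, and only the Outcome rows describe results.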
The SIDA Results Framework
Length: 7 publications, 507 p.
Most useful sections: SIDA Guidebook on the LFA
The 2004 SIDA Summary of the Theory Behind the LFA [35 p.], which I reviewed several months ago, presents the SIDA results chain this way:
Activities→Results/Outputs→Project Objective/Purpose→Development objectives
This LFA guide describes Outputs as “actual, tangible results that are a direct consequence of the project’s activities”. This, on the surface, seems similar to the vague UN definitions of Outputs sometimes being products and at other times low-level results.
While the SIDA guide goes further in saying: “The outputs/results are a description of the value of the services/products produced by the project within the framework of what the project stakeholders can guarantee”, the examples the Guide provides are ambiguous. Some cite completed activities as Outputs e.g.:
Activity: “Repair old water points”; Output: “50% of existing water points…repaired”
On the other hand, other examples cite actual results as Outputs:
Activity: “Train in hygiene”; Output: “Hygienic habits of the target group improved”
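The difference between the two kinds of Output statement is mechanical enough to caricature in code. The heuristic below is entirely my own invention, not anything from SIDA's guidance; it simply flags completion-style verbs against change-style verbs:

```python
import re

# Crude heuristic, my own: statements built around a completion verb read
# as delivered activities; statements built around a change verb read as
# results. Real logframe review obviously needs human judgement.
COMPLETION = re.compile(
    r"\b(conducted|delivered|produced|built|repaired|trained|distributed)\b",
    re.IGNORECASE)
CHANGE = re.compile(
    r"\b(improved|increased|reduced|strengthened|adopted)\b",
    re.IGNORECASE)

def reads_as_change(statement: str) -> bool:
    """True if the statement describes a change rather than a completed activity."""
    return bool(CHANGE.search(statement)) and not COMPLETION.search(statement)

# The two SIDA examples quoted above:
reads_as_change("50% of existing water points repaired")         # completed activity
reads_as_change("Hygienic habits of the target group improved")  # change
```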
This may be why a 2008 SIDA evaluation of Policy Guidance and Results-Based Management for the SIDA education sector [183 p. -PDF] noted in its comments on two programmes that:
“Even when civil society or university capacity building outcomes/ outputs are defined in log frames, performance indicators are frequently unclear and the results chain and causal relationship from activity, output and outcome are unclear.” [p.41]
A 2008 evaluation of SIDA Human Rights and Democratic Governance project results [44 p. -PDF] noted the ambiguity in how results were reported among projects – some reporting completed activities as results, under the “Output” label, and others reporting on Objectives.
“Training is by far the most frequently mentioned output; it is an output of nearly half of all the projects covered by the sample. Apart from this, the list of outputs includes, but is not limited to: policies, guidelines, studies, publications, information, seminars, study tours, infrastructure, theatre productions, and funding. Some may find that outputs, such as those found in the evaluations and studies on SIDA’s support to HR&DG, are not results but an expression of the different activities initiated with support from Sida.
That is true, but they are, nevertheless, the most tangible and direct manifestations of the interventions. And, as mentioned above, listing these as “results” is in line with SIDA’s and DAC’s usage of the concept.” [p. 11]
This report also noted that
“Rather surprisingly, the study team found that only four of the reports made a clear distinction between outputs and outcomes. In other words, the general picture of the 31 reports in the sample is that they apply this terminology rather imprecisely and give limited room for drawing any conclusions whether projects and programmes differentiate between different result levels.”
This is really not dissimilar to the general thrust of a review of “The Management of Results Information at SIDA” [18 p.] produced in 2001, which argued that since information on
“programme expenditure, activities and outputs are covered in reports on programme progress, SIDA should make sure that counterparts’ annual reports focus on information about programme outcomes and impact.” [p. 15].
The suggested format for this report allocated roughly 80% of reporting space to the results (Outcomes and Impacts) and only about 20% to Outputs.
But although there is a move to strengthen RBM in SIDA, the OECD/DAC 2009 peer review of SIDA [123 p.] noted that “at the time of the peer review visits, many staff remained unclear what results-based management really entails in practice.” [p. 16]
The peer review also noted that
“SIDA’s very detailed manual on managing individual projects and programmes focuses strongly on planning and approval processes but provides no guidance on and makes no mention of results-based management.” [p. 59] [Update, 2018: A new SIDA RBM Handbook produced in 2014 is now available on the SIDA publications site.]
SIDA is not alone in its mixed use of Outputs, of course, and there are suggestions that it may be moving toward greater clarity. A 2007 paper by the SIDA Results Project Group, “Strengthening SIDA Management for Development Results” [20 p.], suggests that the agency wants reports on results, not just activities. One particularly telling suggestion in that paper, in reference to joint projects and partner capacity for RBM, was that “In countries where the capacity to generate and report results information is weak, donors should support capacity building” [p. 9] -- a good suggestion, because projects that build results-based management capacity are rare. And a 2009 study comparing accountability in four aid agencies [84 p. - PDF] found that after criticisms in 2009 by the Swedish Minister of International Cooperation that SIDA was not reporting adequately on results, “the first report on Outcomes” was produced in 2009.
“SIDA is now further expected to include these new results data in its annual reports….Meanwhile the “cascading effect” of this results orientation seems to have altered the functioning of SIDA’s major partners responsible for implementing the projects: a number of interviews with these partners notably refer to the growing budget they allocate to measuring projects results.” [p. 56]
The AusAID Results Framework
Length: 11 documents, 412 p.
Most useful: AusGuide 3.3: The Logical Framework Approach [Update: This document was removed from the AusAid (now DFAT) website since this review was written in 2010, but an archived copy is below].
The AusAID site is not easy to work with, but with some searching it is possible to find three documents -- part of the AusGuide -- relevant to how the agency defines results. The most useful of these is the 2005 AusGuide 3.3: The Logical Framework Approach [37 p.] which illustrates the AusAid results chain as:
Activities→Outputs→Component Objectives/Intermediate Results→Purpose/Outcome→Goal/Impact
The definition of Outputs here follows the standard OECD/DAC definition as: “… the tangible products (goods and services) produced by undertaking a series of tasks as part of the planned work of the activity” and the examples provided make it clear that these are simply completed activities or products:
“…irrigation systems or water supplies constructed, areas planted/developed, children immunised, buildings or other infrastructure built, policy guidelines produced, and staff effectively trained.”
Only the last, “staff effectively trained”, suggests something approaching a result, by implying that it is not enough to train: there must be some measurement of whether the training was done “effectively”. This opens the door to considering change -- how well the staff learned something. None of the other examples provided in the guide make it clear that Outputs are more than delivered actions: a training strategy developed; new courses designed; a series of special courses delivered; books produced or distributed.
A further complication is AusAID’s differentiation between “project Outputs” and “contractible Outputs” described in the 2003 version of AusGuidelines 1 on the LFA [45 p] and referenced briefly in the 2005 version. This leaves the impression that project managers may only be responsible for delivery of completed activities.
But another 2005 document (no longer available), an updated version of its September 2000 “Promoting Practical Sustainability”, noted: “Monitoring and reporting frameworks based on tools such as the logical framework approach should look beyond the contracted activity and output levels and incorporate regular assessment of the movement towards achieving sustainable outcomes” and followed this with some specific recommendations, including that
“The basis of payment should attach payments to outputs or milestones that are largely within the control of the contractor while encouraging the contractor to focus on their contribution to ‘outcomes’.”
In theory, at least, this suggests that reports (and the projects themselves) should address much more than the simple completion of activities.
And a 2006 M&E Framework Good Practice Guide [8 p.] distinguished between Outputs as “products, capital goods and services delivered by a development intervention to direct beneficiaries” on the one hand, and Outcomes as “short-term and medium-term results of an intervention’s outputs.”
More recent publications from AusAid differ in how they address what results are to be reported. The Office of Development Effectiveness, established to monitor results of projects, made no reference in its 2008 Annual Review of Development Effectiveness to the results chain found in the AusGuide. The 2008 review referred instead to the importance of achieving “objectives”, and rating their quality, through the AusAid Quality Reporting System. This apparently involves ratings by “activity” managers of “activity objectives” and “higher level strategy objectives” at entry, and later by independent evaluators, but makes no reference to any standard AusAid results chain or logic model.
Although the website entry for the Quality Reporting System said [when this review was written in 2010] that it “helps to ensure reliable, valid and robust information is available”, the only two guides to this system that I could find -- an overview of the Quality Reporting System [6 p.] and the Interim Guidelines for Preparing Completion Reports [10 p.] -- contained no references at all to indicators, and two references to “objectives” and “key results”, which may or may not be the same thing. Similarly, the ODE’s review of development effectiveness made only three references to indicators in the context of the Australian aid programme, one of which noted that, in support of rather vague “objectives” for Vietnam, “No specific indicators were identified to define the focus or scope of ambition of Australian support meant to meet these objectives.”
The December 2009 AusAid Civil Society Water, Sanitation and Hygiene Fund Guidelines [25 p.], however, clearly identified Outcomes as results -- substantive changes such as “increased access to improved sanitation services” or “improved hygiene behaviour”. The same guidelines defined Outputs as essentially completed activities, such as “provision of technical support” or “facilitation of dialogue”, and provided direction to NGOs applying for funds to describe their approach to “monitoring and evaluation of Outcomes”. [Editorial Note, June 2012: A subsequent February 2011 evaluation confirmed the focus on Outcomes as substantive changes.]
All of this is frustrating for the outsider, because it is clear from the work AusAid was doing on contribution analysis between 2005 and 2009 that the agency was genuinely trying to find practical means to document progress towards real results. As one 2007 report on the use of contribution analysis in the Fiji Education Sector program [35 p.] noted, where previously the Program’s indicators were primarily at the Output level, after using this approach “the new indicators subsequently provide information on progress towards and contribution to outcomes…”. [p. 33] This report also provided an example of a results chain somewhat different from that described in the 2005 AusGuide:
Activities→Outputs→Immediate Outcomes→Intermediate Outcomes→End Outcomes.
This was evidently not a one-shot effort, because the terms of reference for an AusAid evaluation in May 2010 made specific reference, in describing research methods, to “…a basic contribution analysis and counterfactual assessment ahead of the in-country visit.”
The 2008 Peer Review of Australian Development Assistance [117 p.] stated that “Performance reporting processes have now been brought together into one coherent system” but it is difficult for the average person, among the vast array of available documents on AusAid and associated websites, to find the coherence.
There probably are guides on the AusAid intranet providing clearer guidance on whether managers should now be reporting on progress towards real results at the Outcome level, but based on what is easily and publicly available, what results AusAid requires of projects, or what data it requires in support of these – whether it expects reports on completed activities and products, or on genuine results -- remains unclear.
The DANIDA Results Framework
Length: 5 documents, 418 p.
Most useful: Guide to the Logical Framework Approach
It is very difficult to get a comprehensive and updated picture of how DANIDA currently understands the results chain. The Danida website has what looks like a practical and potentially useful guide to the Logical Framework Approach [148 p.] but it was produced in 1996, and it remains to be seen if the results chain defined there as
Activities→Outputs→Immediate Objective→Development Objective
is still valid.
The Danida definition of Outputs in this document: “Outputs are the tangible, specific and direct products of activities which largely are within the control of project management” is, nevertheless, very similar to what the OECD/DAC came up with 6 years later.
The examples provided, such as “Awareness campaign about hygiene conducted”, make it clear that outputs are seen as completed activities and products, not changes. Changes (results), in the Danida results framework, are at the Objective level statements like “Population has adequate hygiene practices”.
Somewhat ambiguously, the suggestion in the document is that project managers are responsible only for Outputs, not for Objectives, while at the same time project evaluations are to assess whether the logic of the project was valid, and whether the Outputs did contribute to the Objectives.
In this sense, the “results chain” expectation is that a project or programme will eventually report on how its Outputs contributed to Objectives. The November 2009 DANIDA Analysis of Programme and Project Completion Reports 2007-2008 [76 p.] lists in its annexes the format for these reports, which includes an assessment of
“…the extent to which the programme/component has achieved the general objectives as defined in the partner programme document, and discuss the contribution by Danida to achieving the objectives”.
It is the Project and Programme Managers who must do this reporting, so we would expect that this might generate some interest in collecting data not just on Outputs or completed activities, but on the Objectives (results).
But in the 2009 version of the Ministry of Foreign Affairs Guidelines for Programme Management [87 p.], while there were some passing references to Immediate Objectives in the Template for Semi-Annual and Annual Reports, the emphasis is clearly on details about Outputs -- or completed activities. [Update: The 2009 version is no longer available, but similar references can be found in the 2011 version of the Guidelines for Programme Management.]
The Ministry of Foreign Affairs Grant Administration Guideline Project status report formats for NGOs in 2010 asked for
“…an account of the project objectives, indicators related to objectives, preliminary outcomes and assessment of the potential of the project for realising the project outcomes(s) established.”
Indicators in these reports were required for Outputs, but they were also, apparently and somewhat ambiguously, to refer to Objectives and “Outcomes”. [Update: An April 2011 version of this document referred to the requirements as “project objectives, indicators related to objectives, preliminary results and assessment of the potential of the project for realising its objectives”. A February 2016 DANIDA Guidelines for Programmes or Projects up to 37 Million DKK refers to the project-level results chain as moving from Engagement Outputs to Engagement Outcomes and then to Impacts [p. 20]. The 2018 Guidelines for Programmes or Projects refers to a results chain moving from Outputs to Outcomes and Objectives.]
On the whole, then, it is not really clear from these sources to what extent DANIDA pushes for reports on results rather than completed activities.
It could be that as Denmark decentralizes responsibility for aid management and aid reporting, not just to country offices but also to Danish missions to multilateral agencies, the definitions and interpretations of what results must be reported on have become more diffuse.
The 2007 OECD DAC peer review of Denmark’s aid programme [103 p.] suggested that “Denmark could consider further rationalising this reporting system, as it involves many different tools and may be time consuming given embassies’ staffing constraints.” [p. 14]
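Before turning to the bottom line, it may help to lay the four results chains quoted in this post side by side. This is a plain-data sketch: the level labels are quoted from the guides reviewed above, but the side-by-side alignment is my own reading, not an official mapping by any of the agencies.

```python
# The four results chains quoted in this post, one list per agency,
# ordered from activities up to the highest-level result.
CHAINS = {
    "OECD/DAC": ["Activities", "Outputs", "Outcomes", "Impact"],
    "SIDA (LFA guide, 2004)": ["Activities", "Results/Outputs",
                               "Project Objective/Purpose",
                               "Development objectives"],
    "AusAID (AusGuide 3.3, 2005)": ["Activities", "Outputs",
                                    "Component Objectives/Intermediate Results",
                                    "Purpose/Outcome", "Goal/Impact"],
    "Danida (LFA guide, 1996)": ["Activities", "Outputs",
                                 "Immediate Objective",
                                 "Development Objective"],
}

# Every chain starts from Activities; it is only the labels above that
# level -- and what counts as an "Output" -- that differ.
for agency, levels in sorted(CHAINS.items()):
    print(f"{agency}: {' -> '.join(levels)}")
```

Even this crude comparison shows the problem discussed above: the bottom two levels look harmonised on paper, while the definitions behind the word "Outputs" diverge.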
The bottom line: Holding UN agencies to account
These three agencies -- SIDA, AusAid and DANIDA -- have a clear interest in increasing the proportion of their aid going through multilateral agencies, and are making real efforts to improve delivery of programmes. Anecdotal evidence suggests that they take results reporting seriously in practice. But given the ambiguity of their own definitions of results -- definitions that often conflict across documents from the same agency -- SIDA, AusAID and DANIDA would have some difficulty holding UN agencies to higher results reporting standards.
Next post: DFID, CIDA, EuropeAid and USAID
For further reading, and more original documents on SIDA, AusAID, DANIDA and the OECD DAC see:
- The Denmark Ministry of Foreign Affairs Aid Management Guidelines
- The AusAid publications website [Update: now DFAT publications]
- The SIDA publications page (difficult to navigate)
- OECD DAC peer reviews of aid agencies
_____________________________________________________________