

Tuesday, August 15, 2017

Evaluability Assessments and Results-Based Management: 8 Guides

by Greg Armstrong

Evaluability Assessments answer the question: Is there enough available information to justify the time and cost of a full-scale evaluation? Done early enough in implementation, they can also identify basic design problems and guide remedial action.

One old study set the stage for seven recent detailed guides which synthesize earlier work and provide useful advice on whether, when and how to undertake an evaluability assessment, or to use other approaches to assessing project or programme design integrity.

Evaluability Assessment Guides

Level of difficulty: Moderate
Length: 3-72 pages
Primarily useful for: Evaluation managers, RBM specialists doing evaluation assessments
Most useful: DFID Guide to Planning Evaluability Assessments
Most interesting:  Program Management and the Federal Evaluator (1974)
Limitations:  These guides tell us what needs to be done, but they require people with the process skills to do it all.

The History of Evaluability Assessments

Evaluability assessments, known in some cases 40 years ago as pre-assessments or exploratory evaluations, have long been used in public health research, where the term "evaluable" refers to "patients whose response to a treatment can be measured because enough information has been collected", as well as in education and justice (PDF). The term also has academic antecedents in the testing of mathematical propositions, reaching far back into the 19th century.

The Foundation Evaluability Assessment Document

The foundation evaluability concepts for social programmes were, from what I can see, initially presented in a 1974 Urban Institute study, one of many emerging from implementation studies of the Johnson administration's Great Society programs. It was published as "Program Management and the Federal Evaluator" in the Public Administration Review, and included in a 1977 volume, Readings in Evaluation Research, available through Google Books.
Title page of "Program Management and the Federal Evaluator"
The Foundation Evaluability Assessment article
It advocated looking at issues which are at the heart of current guides to evaluability assessment, and for that matter, at the heart of solid results-based design:
  • Whether there is a clearly defined problem addressed in the design
  • Whether the intervention is clearly defined
  • Whether the short- and longer-term results are defined clearly enough to be measurable
  • Whether "the logic of assumptions linking expenditure of resources, the implementation of a program intervention, the immediate outcome to be caused by that intervention, and the resulting impact" are "specified or understood clearly enough to permit testing them"
  • Whether managers are capable of using, and motivated to use, performance data for concrete management decisions.
While there are many other guides to evaluability assessment which are much more detailed and of more practical utility today, the most important core concepts are here in this 1974 study, and this short, 43-year-old article is still worth reading for its insights on clarity of language, assumptions and results.
Leonard Rutman's later 1980 book Planning Useful Evaluations: Evaluability Assessment set out a detailed approach to dealing with all of these issues.

Changes in Evaluability Assessment Utilization

Evaluability Assessments started out as ways of improving the design of evaluations, to increase the chances they would be useful to the people funding and implementing activities. In recent years, with a substantial increase in the number of evaluations of international development projects, many agencies, including UNICEF, the World Bank, UNDP and numerous bilateral donors, have incorporated evaluability assessments into the project cycle: after design, during implementation, and before a decision is made to pay for a full-scale evaluation.

Many Evaluability Assessment Guides have been produced in recent years, but here are just a few worth noting.

1. The DFID Guide to Planning Evaluability Assessments, by Rick Davies

The DFID Guide to Planning Evaluability Assessments (PDF) (2013, 56 pages), produced by Rick Davies, who has a host of useful evaluation documents and websites to his credit, does a detailed job of synthesizing the literature, providing suggestions on when and how to proceed with an assessment, and on what the possible consequences could be for project or programme design changes.

"In an ideal world projects would be well designed. One aspect of their good design would be their evaluability. Evaluability Assessments would not be needed, other than as an aspect of a quality assurance process closely associated with project approval (e.g. as used by IADB). In reality there are many reasons why approved project designs are incomplete and flawed, including:
  • Political needs may drive the advocacy of particular projects and override technical concerns about coherence and quality.
  • Project design processes can take much longer than expected, and then come under pressure to be completed.
  • In projects with multiple partners and decentralised decision making a de facto blueprint planning process may not be appropriate. Project objectives and strategies may have to be "discovered" through on-going discussions.
  • Expectations about how projects should be evaluated are expanding, along with the knowledge required to address those expectations.
.... In these contexts Evaluability Assessments are always likely to be needed in a post-project design period, and will be needed to inform good evaluation planning... 
…Many problems of evaluability have their origins in weak project design. Some of these can be addressed by engagement of evaluators at the design stage, through evaluability checks or otherwise. However project design problems are also likely to emerge during implementation, for multiple reasons. An Evaluability Assessment during implementation should include attention to project design and it should be recognised that this may lead to a necessary re-working of the intervention logic." [p. 9]
The DFID guide goes through every step of planning and executing an evaluability assessment:

  • When evaluability assessments are appropriate
  • Who should conduct the assessment – the types of expertise required for evaluability assessments
  • How to contract evaluability assessments
  • How long the assessments could take
  • How much it will cost to do an evaluability assessment
  • Different processes required for an evaluability assessment
  • Detailed lists of questions to be dealt with when assessing project design, assessing availability of indicator data, and stakeholder participation in the evaluability assessment
  • The types of reports which should come out of an evaluability assessment
  • The risks of undertaking evaluability assessments.

Checklist for Assessing the Project Design in Evaluability Assessment
Project Design Checklist

The report also includes annexes summarizing the different stage models proposed by various researchers and agencies for conducting an evaluability assessment, and even a draft Terms of Reference for an evaluability assessment, something which seems to have been used almost verbatim by many aid agencies since.

2. The UN Office on Drugs and Crime Evaluability Assessment Template

Checklist of questions on a UN Office of Drugs and Crime Evaluability Assessment Template
Evaluability Assessment Template - UN Office on Drugs and Crime

Increasingly, these assessments are being used early in the implementation period as a means of cross-checking the validity of the original design, with the possible purpose of enabling mid-course design changes. As an evaluability assessment template produced by the United Nations Office on Drugs and Crime puts it:
"The overall purpose of an evaluability assessment is to decide whether an evaluation is worthwhile in terms of its likely benefits, consequences and costs. Also, the purpose is to decide whether a programme needs to be modified, whether it should go ahead or be stopped.
The evaluability assessment is appropriate early in the programme cycle - when the programme is being designed but has not yet become operational. A second opinion on a programme and the strength of its design and logic is only worthwhile at this early stage - when something can be done to remedy any weaknesses."
This brief 3-page checklist may prove too restrictive in its decision-making path to be applied literally to evaluability assessments in other fields, but the questions it asks about the credibility of the design, data collection systems, and the utility of an evaluation for management are worth reviewing, as is the advice to use theory of change workshops to test logic models and programme theories.
Used this way, evaluability assessments are quality control activities, in effect very early mid-term reassessments of the Results-Based Management process.

3. Evaluability Assessment for Impact Evaluation 

(2015 – 24 pages) produced by Greet Peersman, Irene Guijt and Tiina Pasanen for the Overseas Development Institute. 
This guide provides "guidance, checklists and decision support" in a simple, easy-to-read format, expanding on the checklists provided by Rick Davies in the 2013 DFID guide, but aimed primarily at those who want to use the assessment not for project or programme design, but prior to deciding whether to undertake a full-scale impact evaluation. It provides a series of easy-to-read checklists on designing and conducting an assessment, with some practical examples of the resources required for an evaluability assessment.

Methods Lab -ODI examples of time required to conduct evaluability assessments
Level of Effort required for an Evaluability Assessment

Guides for Assessing Evaluability in Conflict Situations

Four guides deal with a very practical set of questions about whether to undertake evaluations in conflict situations.

4. The CSO Evaluability Assessment Checklist: Working Draft (2017, 13 p.) was produced by Cheyanne Scharbatke-Church for the U.S. State Department's Bureau of Conflict and Stabilization Operations. While it does not appear to be available on their website, it is available on the American Evaluation Association's website, which has a large number of other very useful resources.
This is a short, easy-to-read checklist of key questions to consider before deciding whether and how to do an evaluation in conflict situations. It takes the checklist produced in 2013 by Rick Davies for DFID and expands the content of some of its items, in language intended to be easier for programme managers and others who are not evaluation specialists to understand, raising practical questions to consider when assessing the costs and benefits of evaluations.
CSO Evaluability Assessment Checklist examples about practical questions on the institutional context for evaluability assessments
Practical questions for assessing evaluability in conflict situations

In a similar vein, the CDA Collaborative  has a number of interesting guides on whether to use evaluability assessments in peacebuilding and conflict situations, or alternatively, to use what is called programme quality assessment.  Many programmes which are not focused specifically on peacebuilding or conflict resolution could benefit from considering the alternative approaches:

5. Evaluability Assessments in Peacebuilding Programming (2012 – 21 p.), written by Cordula Reimann, lists 63 criteria for assessing whether evaluability in peacebuilding situations is high, medium or low.

Chart showing criteria for assessing evaluability of design issues
Assessing evaluability of programme design
6. An Alternative to Formal Evaluation of Peacekeeping (2012 – 34 p.) by Cordula Reimann, Diana Chigas & Peter Woodrow compares the utility of, and resources required for, three approaches: formal evaluation, Program Quality Assessment, and internal Reflection Exercises, for organizations in different situations and with different needs. People working in areas not related to conflict could also find the Program Quality Assessment model a useful alternative to a full evaluability assessment.
Chart comparing  advantages of reflection exercise, program quality assessment and formal evaluation
Comparing Reflection, Program Quality Assessment and Evaluation
7. Thinking Evaluatively In Peacebuilding Design, Implementation And Monitoring (2016 – 72 p.) brings both of these together in a longer, more detailed discussion of how to weigh the different approaches in choosing the right evaluative option for organizations in different situations.
Chart comparing options for evaluative approaches
Criteria for choosing evaluation options

Conclusion: Using Results Based Management in Evaluability Assessments

Evaluability Assessments apply basic concepts of Results-Based Management to determining, as the original 1974 study proposed, if the original project or programme design: 
  1.  Addresses a clear problem, 
  2.  Has an intervention strategy which is logical, and shared by stakeholders, 
  3.  Has access to information for results indicators which will describe or measure progress against the original problem, and which are likely to be useful to funding and decision-making authorities.  
Some evaluability assessments have been limited to document review, but in my experience these can be sterile, almost academic undertakings, unlikely to provide solid information on stakeholder agreement with the logic of the intervention. 
But if field work is recognized as a necessary component of the evaluability assessment process, the basic components of Results-Based design will be essential in conducting it.
Getting to the stage where we understand whether there is enough coherence in a programme or project to evaluate it means using many of the same skills and processes of stakeholder consultation which are the key to solid Results-Based design:

  • Identifying stakeholders
  • Getting stakeholder agreement on actionable problems
  • Identifying with stakeholders possibly multiple causes of a problem
  • Agreeing on which causes will be the target of the intervention
  • Clarifying assumptions about what interventions work and what is appropriate in the context of the intervention
  • Getting stakeholder agreement on clear short, mid-term and long-term desired results
  • Identifying internal and external risks
  • Identifying practical indicators to verify progress on results, and collecting baseline data on these
  • Getting agreement on data collection and data reporting responsibility and format
  • Getting stakeholder agreement on what inputs, and activities will be necessary to achieve results
  • Putting this together into a coherent theory of change with all of the stakeholders

If we can successfully engage with stakeholders in these processes, we have a reasonable chance of conducting a useful evaluability assessment.  

The bottom line:  The earliest of these studies outlined the basics, and recent guides provide the details on how to do it, but we still need the people with process skills to make evaluability assessments more than just a buzzword.

Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website

Saturday, April 08, 2017

The Global Affairs Canada Results-Based Management Guide

Greg Armstrong

[Updated June 2019]

The 2016 RBM Guide produced by Global Affairs Canada is an essential tool for anyone implementing Canadian aid projects, and useful for anyone else seeking to design a results-based development project.
Global Affairs Canada results chain

Level of difficulty: Moderate
Length: 105 pages (plus appendices)
Primarily useful for: Managers of Canadian aid projects – or anyone involved in project design, regardless of the funding source
Most useful: Reporting on Outcomes, pp. 87-92
Limitations:  Some elements are of use only in project design, but most can be used in implementation.
The Canadian aid agency, formerly known as CIDA and now part of Global Affairs Canada, adopted Results-Based Management in 1996. In 2001 a useful and user-friendly 97-page guide to using RBM in writing a project implementation plan was produced for CIDA by Peter Bracegirdle. I reviewed that RBM guide several years ago, and it is still available from Appian consulting and several other sites.

Despite changes to CIDA results terminology in 2008, and the posting of issue-by-issue guides on the CIDA, later DFATD and Global Affairs websites under the general title Results-Based Management Tools at (CIDA / DFATD / Global Affairs Canada): A How-to Guide, the resources available to people trying to use Results-Based Management in both the design and implementation of Canadian projects were limited. Trainers had to paste together different documents on logic models, indicators and risk obtained from the website to produce a coherent, if somewhat jargon-laden, ad hoc RBM guide of roughly 45 pages.

Many people continued to use Peter Bracegirdle's 2001 PIP Guide [A Results Approach to the Implementation Plan] as the most effective of the CIDA/DFATD/GAC guides up until 2016, not just for developing implementation plans at project inception but, with adaptations for terminology changes, as an aid to annual work planning.

But in 2016 a new group within Global Affairs Canada, the Results-Based Management Centre of Excellence, produced a comprehensive and very practical new 105-page Results-Based Management guide, under the title Results-Based Management for International Assistance Programming: A How-to Guide. The new Guide is also available in French as La gestion axée sur les résultats appliquée aux programmes d’aide internationale : Un guide pratique.

While the new GAC RBM guide includes a lot of material used since 2008, it also has a substantial number of new clarifications, which make it a much more practical RBM tool than previous versions.

[The GAC RBM page now also contains a draft 2018 Results Reporting Guide for Partners (PDF) and a number of checklists and tip sheets on developing, assessing or reviewing theories of change, logic models, indicators in general, gender equality results and indicators, and other topics.]

Who This RBM Guide is for
This document will be of use beyond its primary intended audience, originally staff of Global Affairs and those working with them on project design. While some of the background information describing the relationship of this guide to other Canadian government policies will be of little use or interest to anyone outside the Canadian government, there is a lot of material here which could help implementing agencies and partners working on Canadian-funded projects to work more effectively. And it is easy to see, from the discussions on problem identification, theory of change, risk and other topics, how this guide will be useful to people designing projects for any agency, regardless of the funding source.

Results Terminology

The Global Affairs approach to Results-Based Management has been, since 2008, an improvement over that of many other agencies, limiting the labelling of results to “outcomes”. While the term “objectives” is in peripheral evidence in this document, it never appears in the functional tools such as the GAC Results Chain, the Logic Model or the Performance Measurement Framework. There is no confusion here with results being described as purposes or goals, terms which some agencies use almost interchangeably along with Outputs, Outcomes and Results, something that often leads to genuine confusion as implementing agencies, partners and beneficiaries try to describe results and distinguish them from activities.
For Global Affairs Canada, as for CIDA before it, all results are changes: not completed activities, as some U.N. agencies confusingly label low-level results, but changes in the short term in capacity, understanding, skills, or access to services. At higher levels, results are seen as changes in the behaviour, practice and performance of change agents or of the long-term beneficiaries. All of these changes are in theory designed to contribute to even longer-term changes in important life issues such as income, food security, health, security, status of women, levels of suffering or human rights.

The Global Affairs Canada Results Chain

The results chain, in English, for Canadian aid projects has, since 2008, looked like this:
Diagram showing the Results Chain used by Global Affairs
The Global Affairs Canada Results Chain - English

It is interesting to note that in the French-language version of the Global Affairs Canada RBM guide, what are called Immediate Outcomes, Intermediate Outcomes and Ultimate Outcomes in English are, in French, just “results”.
The GAC results chain in French - differences in wording from English
The Global Affairs Canada Results Chain - French
The differences between the English and French reflect the Treasury Board of Canada Results-Based Management Lexicon, which standardizes the results frameworks for Canadian government agencies.

I do not find the addition of "Outcomes", instead of just labelling them results, to be helpful.
As someone who works regularly to help people understand RBM in other languages, defining a result as a “change” is something that can be easily translated into any language, not just for government officials or field workers, but for villagers and other beneficiaries. But "Outputs" and "Outcomes" are both words used in English in many different ways, which causes problems of understanding even for native English speakers working on RBM, including those in donor agencies. In some other languages, while “change” is always understood, special terms have to be devised to describe Outputs or Outcomes. As I have argued elsewhere, clear language is always preferable if we want people to actually use Results-Based Management in practice. I doubt, given the organizational context, that there is anything GAC RBM specialists can do about this, however.

Outputs - not Results

This version of the RBM guide provides improved operational clarity in the definitions of what are not results – inputs, activities, and particularly the products of activities – clearly labelled as “Outputs”.
Outputs are described as “Direct products or services stemming from the activities of an organization, policy, program or project.”    
Those who have examined or worked with the results terminology used by U.N. agencies will note the difference between this and the common definition of Outputs still used by many U.N. agencies [my emphasis added]:
"Specific goods and services produced by the programme. Outputs can also represent changes in skills or abilities or capacities of individuals or institutions, resulting from the completion of activities within a development intervention within the control of the organization. " [Results-Based Management in the United Nations Development System, 2016, p. iii] 
 "Outputs are changes in skills or the abilities and capacities of individuals or institutions, or the availability of new products and services that result from the completion of a development intervention." [United Nations Development Assistance Framework Guidance, Feb 2017, p. 27] 
In practical terms, the confusion caused by mixing products and actual changes in capacity into one common category has meant that only the most serious U.N. agency managers have actually reported on changes in capacity, while their less "ambitious" colleagues have satisfied themselves, although not their bilateral partners, by reporting on completed activities (numbers of people trained, handbooks produced, schools built) as real results. This has proven to be a real source of frustration for bilateral donors contributing to U.N. agency activities, because many bilateral agencies, like Global Affairs Canada, DFID, the Australian aid agency and others, need to report on changes, such as increased skills, better performance, increased learning by students, or improved health, security or income, and not just on activities completed.

Results Level hierarchy

The results, at three levels of Outcomes, are organized in a Logic Model.

Immediate Outcomes (or Résultat immédiat)

Immediate Outcomes are, for Global Affairs Canada: 
“A change that is expected to occur once one or more outputs have been provided or delivered by the implementer. In terms of time frame and level, these are short-term outcomes, and are usually changes in capacity, such as an increase in knowledge, awareness, skills or abilities, or access* to... [services] ...among intermediaries and/or beneficiaries.” * Changes in access can fall at either the immediate or the intermediate outcome level, depending on the context of the project and its theory of change. 

Intermediate Outcomes (Résultat intermédiaire) 

Defined as 
"A change that is expected to logically occur once one or more immediate outcomes have been achieved. In terms of time frame and level, these are medium-term outcomes that are usually achieved by the end of a project/program, and are usually changes in behaviour, practice or performance among intermediaries and/or beneficiaries." 

Ultimate Outcomes (Résultat ultime) 

Defined as 
"The highest-level change to which an organization, policy, program, or project contributes through the achievement of one or more intermediate outcomes. The ultimate outcome usually represents the raison d'être of an organization, policy, program, or project, and it takes the form of a sustainable change of state among beneficiaries."
Among the many useful small changes to these definitions is the admonition that such long-term changes should not refer to generic changes in a country’s circumstances (such as improved GDP), but should deal with real changes in the lives of real people, in health, learning, security and other areas which can be demonstrated with indicator data.

RBM Tools: Logic Model, Output-Activities Matrix, Performance Measurement Framework 

CIDA in 2008 moved from the familiar Logical Framework, which combined results, indicators, assumptions and risk in a visually (and often intellectually) confusing manner, to a disaggregation of its main elements into three distinct tools:

A Logic Model  

Based on a theory of change exercise, this visually illustrates how different elements are intended to combine to contribute to short-term, medium-term and long-term changes, as this example from a 2015 Request for Proposals shows:
Example of a Global Affairs Canada Logic Model
Example of a Logic Model

A Performance Measurement Framework

Current GAC Performance Measurement Framework Template

This presents indicators, targets, and data collection methods and schedules for different levels of results, as this 2013 example illustrates:

Example from a CIDA project of a Performance Measurement Framework, showing the result, indicator, baseline data, target, data source, responsibility for data collection
Example of a Performance Measurement Framework

A Risk Framework 

This identifies risks, likelihood of occurrence, potential effect on the project, and strategies to mitigate them.

Global Affairs Canada Risk Assessment Tool

A table for risks, reference to the result in the logic model, and risk response
Global Affairs Risk Table

RBM tools and templates

The combined templates for the Logic Model and the Output-Activity Matrix, and the separate Performance Measurement Framework, can, within some limitations, simplify the mundane if not the intellectual tasks of distinguishing between and recording the links between Activities, Outputs and Outcomes in the Logic Model, and of recording agreements on indicators. The positive side of these templates is that they standardize what is produced, and make it difficult to inadvertently omit or change the wording of results as we move from a Logic Model to the development of activities and indicators.
Outcome and Output statements entered into the GAC Logic Model
Shows how the information on Outcomes and Outputs is transferred to a table for Activities
Outcome and Outputs from the Logic Model transferred to the Outputs and Activities Matrix
The negative side of these templates, form-filling PDF files which restrict reformatting, is that they can be difficult to work with when the forms are projected onto a screen and used as the basis for discussion in large Logic Model and indicator workshops, where using the suggested “sticky notes” is not practical. In those situations reformatting is often necessary to accommodate changes as the discussion occurs, and to add new columns and notes reminding participants how things have evolved and what still needs to be done. This is apparently not possible with these forms.

This could be handled in additional text after a workshop, but it is best to get these things on record quickly, while the discussion is taking place. In these situations I have found word processing programmes such as Microsoft Word or Google Docs easier to work with than PDF or spreadsheet formats.

An additional factor is that some work is required, if you are using Chrome for example, to disable the built-in PDF viewer before these documents can be downloaded, even if you own Acrobat.
The templates for these tools are not part of the actual GAC RBM Guide itself, at least not as of this writing, but links are provided, either in the text or at the Global Affairs website, to the Logic Model, Performance Measurement Framework and Risk Table downloads. If you get the message above, you might be able to get around it by right-clicking on the link and downloading the document, but there is no guarantee this will work.

Improved operational clarity

All of the basic tools remain essentially the same as they were in 2008, but the improvement over earlier CIDA guides is the increased clarity in this document about how to use the Logic Model, Output-Activities Matrix and Performance Measurement Framework in practical terms, in project design, implementation, monitoring and results reporting.

The 2001 PIP Guide remains a useful tool, as it had more detail on some design issues such as the Work Breakdown Structure, activity scheduling, budgeting and stakeholder communication plans. But the new Guide, building on material developed after 2008 and adding new examples, contains useful new clarifications throughout the document. These deal, among many other things, with:

Distinguishing between Outputs and Activities.  

This sounds mundane, but there has been confusion in some Logic Models about whether Outputs were just completed activities or something more. In that approach, an activity might be “Build wells” and the Output would be “Wells built”, something which is of no use at all in helping project managers mobilize and coordinate the resources and individual activities necessary to really put the wells in the ground. I have always found the CIDA (2001) Output-Activity Matrix to be a useful bridge between the theory of the Logic Model and the need for concrete focus in work planning. This document makes that link, and the link to results-based scheduling, clearer, and the template for the Logic Model automatically populates the matrix with Outputs, preparatory to figuring out what activities are necessary to achieve them.
Illustrates how Outputs transferred from the Logic Model are broken down into underlying activities
An example of the Outputs-Activities Matrix after Activities are added

Examples of how to phrase Outcomes in specific terms (syntax)

The GAC RBM framework has several criteria for developing precise Outcome statements, reflecting the fact that these are supposed to represent changes of some kind, for specific people, in a specific location, and the RBM guide provides illustrations of two ways this can be done:

A table showing how to phrase Outcome statements
Syntax Structure of an Outcome Statement - Global Affairs Canada

The Guide also provides examples of strong and weak Outcome statements, with suggestions on how they can be improved.

A table listing weak Outcome statements with the problems, and how to rephrase them as strong Outcome statements
Examples of strong and weak Outcome statements
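The syntax the Guide describes can be made concrete with a small sketch. The function below is my own illustration, not a template from the GAC Guide: it simply assembles an Outcome statement from the elements the syntax calls for, a direction of change, what is changing, for whom, and where.

```python
# Illustrative only: the parameter names are my own labels for the kinds
# of components an Outcome statement combines, not GAC terminology.

def outcome_statement(direction, what, who, where):
    """Assemble an Outcome statement from its component elements."""
    return f"{direction} {what} among {who} in {where}"

# A short-term Outcome phrased as a change for specific people
# in a specific location:
print(outcome_statement(
    "Increased", "use of safe drinking water",
    "rural households", "the three target districts"))
```

The point is simply that every element has to be present before the statement counts as a precise Outcome, rather than a vague aspiration.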

Results Reporting Format

The Guide provides a useful new format for results reporting.  In the past different projects have reported in a wide variety of ways, often forcing readers to wade through dozens of pages of descriptions of activities, in a vain attempt to find out what the results are.  This suggested new format puts results up front, in a table, emphasizing indicator data, with room for explanations in text, below.
Example of how results can be reported in table form
Suggested Results Reporting Format - emphasizing progress against indicators and targets
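As a rough sketch of what putting results up front means in practice, the snippet below prints a small indicator table first, leaving narrative for afterwards. The field names and figures are my own illustration, not the Guide's template.

```python
# Hypothetical indicator data for one Outcome. In a results-first report,
# this table leads, and explanatory text follows below it.
results = [
    {"indicator": "% of households using an improved water source",
     "baseline": 40, "target": 75, "actual": 62},
]

def results_table(rows):
    """Render indicator data as a simple results-first table."""
    header = "Indicator | Baseline | Target | Actual"
    body = [f"{r['indicator']} | {r['baseline']} | {r['target']} | {r['actual']}"
            for r in rows]
    return "\n".join([header] + body)

print(results_table(results))
```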
These and other additional tips can be found in Section 3 – Step by Step Instructions on results-based project planning and design (p. 66-85) and Section 4 – Managing for Results during Implementation (p. 86-92), but others are spread throughout the document, and for that reason it is useful to read the whole document, even for users familiar with past CIDA/GAC documents.


This is a good, practical RBM guide, but having a good guide is one thing, and getting people to use it, or to deal with the implications of what it means for agency operations, is another.  I see two areas where further improvements could be made: some could be addressed informally, and some, given procedures in the Government of Canada, are perhaps beyond the scope of the GAC RBM group’s control.

1. Dealing with the Implications of RBM for operations and funding

I have seen very small civil society organizations face lengthy processes of data collection and report revision to comply with donor agency RBM requirements for relatively inexpensive projects. But at the same time donor agencies themselves - and this means most donors - often do not deal realistically with the implications of their own guidelines for project budgets.

Baseline data

Take baseline data collection, for example.  The GAC Guide sections on Indicators and the Performance Measurement Framework (p. 52-64) are generally quite practical, and make the very valid point that baseline data for indicators must be collected before targets can be established and results reported on.  I agree completely that this is the most useful way to proceed, if the time and budget are allocated to make it possible.  As the GAC guide says about baseline data (I have added emphasis):
"When should it be collected? 
Baseline data should be collected before project implementation. Ideally, this would be undertaken during project design. However, if this is not possible, baseline data must be collected as part of the inception stage of project implementation in order to ensure that the data collected corresponds to the situation at the start of the project, not later. The inception stage is the period immediately following the signature of the agreement, and before the submission of the Project Implementation Plan (or equivalent). "[p. 60]
In a rational process this would in fact be the situation.  But the reality is that for projects funded by GAC and many other donors, after two or three years of project design and approval processes, both the donor and the partners in the field want to start actual operations quickly.  The amount of time allocated by donors and partners for inception field trips by implementing agencies, and the budget allocated to support baseline data collection, are too limited to make baseline data collection for all indicators during the inception period feasible in all but the most unusual cases.
A typical inception field trip might last 3-4 weeks, rarely longer, and during this period a theory of change process has to be initiated with all of the major stakeholders, an existing logic model tested and perhaps revised, a detailed work breakdown structure and risk framework developed, institutional cooperation agreements negotiated, and detailed discussions on a Performance Measurement Framework completed with a multitude of potential stakeholders.  As the GAC guide notes:
"As with the logic model, the performance measurement framework should be developed and/or assessed in a participatory fashion with the inclusion of local partners, intermediaries, beneficiaries and other stakeholders, and relevant Global Affairs Canada staff." [p. 58]
Some of these indicator discussions alone, where an initial orientation is required, and where there are multiple stakeholders with different perspectives and different areas of expertise involved, can take 20 or 30 professional staff one or even two weeks in full-time sessions to reach initial agreement on what are sometimes 30 or 40 indicators.  In some cases baseline data are available immediately, and that is one important criterion in choosing between what may be equally valid indicators.
But in many cases, the data collection must be assigned to the partner agencies in the field, who know where the information is and how to get it.  All of this means that a second round of discussions must be undertaken, to discard those indicators for which baseline data are unavailable or just too difficult to collect, and to agree on new indicators. And, as the GAC guide quite correctly notes:
"The process of identifying and formulating indicators may lead you to adjust your outcome and output statements. Ensure any changes made to these statements in the performance measurement framework are reflected in the logic model."  [p. 81]
The partners, meanwhile, have their existing work to continue, and rarely see baseline data collection as their most important operational priority, given the political and institutional realities they face in doing their normal work.
I have participated in several design and inception missions, and I cannot remember a case in which baseline data for all indicators were actually collected before the project commenced.  And at mid-term it is not unusual for an audit of the indicators by a monitor to find that 30-40% of them still have no baseline data, even after two or three years of project operation.
All of this could be avoided if more money and more time – up to six months perhaps – were allocated to the inception period, with an emphasis on establishing a workable monitoring and evaluation structure, and actually funding baseline data collection.  That means that when a donor agency emphasizes participatory development of indicators, during an inception period, it should be prepared to provide the resources of time and money necessary to make this practical.

2. Limiting the Logic Model to three levels

The GAC logic model has three results levels - for short-term, medium term and very long term changes.
This is standard for most agencies.  But, of course, only two of these levels are actually operational and susceptible to direct intervention during the life of the project: Immediate Outcomes in the short term (1-3 years on a 5-year project) and Intermediate Outcomes, which should be achieved by the end of the project.  The Ultimate Outcome level is the result to which the project, along with a host of other external agencies, including the national government and other donors, may be contributing.
In real life, a Logic Model which actually reflects the series of interventions, from the changes in understanding which are necessary for a change in attitudes, to changes in decisions or policies, and changes in behaviour or professional practice, will go through a minimum of 4 to 5 or even more stages, where needs assessments and training of trainers or researchers lie at the beginning of the process, before we get to field implementation of new policies or innovations.
I have worked with partners in the field where, during the theory of change analysis at the design stage, up to 8 different levels were identified, with assumptions, interventions and purported cause and effect links between these levels, before ever getting to the ultimate long-term result.  It is impractical, the donors would argue, to have even a 5-level Logic Model, and this would indeed require extra work on indicators.   But while the Global Affairs RBM guide does give a nod, on page 48, to the idea of “nested logic models”, something I have worked on with partners, these can be more complicated to present and to understand than a 4 or 5-layer Logic Model.

Some partners have decided to maintain their own more detailed, multi-level logic models, and to present a simplified version to the donors, because the whole purpose of these tools is not primarily reporting to donors, but helping managers determine what interventions are working and what changes are needed.  That is why the process is called Results-Based Management, and not just results-based reporting.  Having produced a more detailed, informal Logic Model, these partners can, when an evaluator, a Minister, or a new donor representative has difficulty seeing how a simple two-layer Logic Model can actually attribute results to interventions, produce the real Logic Model and explain the relationships.

It is unlikely that any donor will agree to a 4 or 5-level Logic Model, but it would be useful, as this Guide will be revised, to include a section illustrating the process of nesting Logic Models. 

The Bottom Line
This new Global Affairs Canada Results-Based Management guide is an absolutely necessary tool and reference for anyone working on Canadian aid projects – and it is a practical, very useful resource for anyone who wants clarity on the process of results-based project design.  I will keep the old 2001 PIP Guide on hand, however, for its still useful detail, and user-friendly format, as a complement to the new RBM guide.


Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website

Saturday, January 14, 2017

A Useful Introductory RBM Guide for Long-term Planning

Greg Armstrong

Updated August 2019

Monitoring and Evaluation: An Approach to Strengthen Planning in Cambodia provides a useful introduction to the application of RBM in planning.

Level of difficulty: Moderate
Length: 55 pages
Primarily useful for: Those new to the complexities of the use of RBM in long term planning
Most useful:  Detailed guidance on results, indicators and targets at different levels of government operation, p. 26-45
Limitations:  Some of the links have expired and the document can be difficult to find on the Cambodian government website.

Strategic Planning

We rarely see national strategic plans, 5- or 10-year plans, in well-established, institutionalized democracies with a diverse population, because this diversity is reflected, in a functional, institutionalized democracy, in its electoral outcomes.  As we have seen recently, unless long-term plans are built on a consensus on values and methods, sustaining policy change will be difficult in the face of changing electoral results.

But organizations in all countries, where the purpose is more focused and internal diversity more limited, often do undertake such plans.  We can see this in a host of strategic plans for health, education, food security, gender, environmental sustainability, transportation and other areas, both in international aid agencies and in organizations in individual countries where, at the national level, planning itself may be the chaotic byproduct of democracy.

Where long-term national plans are undertaken,  it is often in countries with a legacy of central state planning, and in many cases, I suspect, these plans are developed as a tool to explain to external funding agencies how money can be usefully applied to problem solving, and how a country’s strategic plan is compatible with an aid agency’s long term priorities.

Such planning is obviously very complex, and finding the tools to help bring some order to what could be a chaotic and intimidating process, is important.  It is here that results-based management can be particularly useful. It will not shorten a planning process, but it will make it more rational, provide a productive path to follow, and, if applied intelligently, will increase the chances for the achievement of results.

The Cambodian guide to integrating M&E in long-term planning

In 2012, the Royal Government of Cambodia established a National Working Group on Monitoring and Evaluation to consider how M&E could be usefully applied and integrated with the country’s 2014-2018 planning cycle.  Sarthi Acharya, working with this group as a UNDP advisor, produced this summary guide to how the major M&E concepts – all of them important elements in results-based management – could be used to facilitate and assess the planning process.

What makes this a useful tool is not its originality.  As the document itself notes:

“This is not a research paper. It is a contextualised primer and its main audience are policy makers and programme evaluators in the Royal Government of Cambodia. The reader is expected to look for the meaning and application of the concepts and approaches presented here rather than search for originality in the research.”  [p. 5]

While some – but not all – of the illustrations and tables have been derived from other existing work and sites, such as Tools4dev, The Monitoring and Evaluation News, academic sources or aid agency RBM guides, the references to all of these are provided in this guide for those who wish to go deeper.  Some of these references have links to the original documents, but not all of the links still work, because links often expire as documents move. It is possible, however, to find many of the original documents online, and I have provided links to many of these, and to other RBM guides and M&E handbooks, at the end of this post.

Many of the most interesting parts of this guide are, in any case, original illustrations of how concepts could be applied specifically to the target audience for this document.

Visual illustrations of how a theory of change is applied to issues in Cambodia
Theory of change applied to Cambodian issues
Click to enlarge

Those who want a comprehensive discussion of establishing an RBM system can read Kusek and Rist’s 247-page Ten Steps to a Results-Based Monitoring and Evaluation System, published in 2004 by the World Bank.  And for those who want a hands-on, step-by-step, detailed guide to project planning, Peter Bracegirdle’s 2001 RBM guide A Results Approach to Developing the Implementation Plan is still useful.

This Cambodian planning guide was never intended for wide distribution, and although it can be downloaded (for now) from the link at the top of this post, it is buried deep in the Cambodian Ministry of Planning website.  But it could be a useful tool for many policy makers and managers in other countries, people who do not necessarily have the time or the inclination to go to original sources, or to spend time on the detailed guides cited above. It will be useful to professionals and managers who just want an overview of the major elements of results-based management in the development of long-term plans. Given its purpose, the examples used to illustrate the processes are certainly specific to Cambodia, but they present issues and potential programmes in fields such as school education, water and sanitation and rural poverty alleviation which arise in many countries.

And when it comes to organizing the presentation of results indicators, linking the broader aspects of RBM (national goals, policies and actions) to a specific programme structure, the guidance here is detailed enough to be of practical value.

Two templates show how to list results and indicators for a national government and align them with results and indicators for Ministries
Templates for aligning National and Ministry Results
[click to enlarge]
The guide provides a number of templates for the organization and alignment of results and indicator data at each of these levels, accompanied by guidance on how to complete the templates.

The point here is that the target audience, while primarily Cambodian, is clearly also people who want to know how to fit different approaches to RBM together: people who want something practical they can apply, something which will introduce them to, or remind them of, how the different elements of results-based management can be used.

The document has what appear to me to be three basic sections, the first two of which are likely to be of the most utility to planners in other countries:

  • An initial 25-page overview (pages 1-26) of basic issues in RBM and planning, with some examples from the Cambodian situation
  • An interesting set of guides and tables (pages 26-45) to facilitate systematic indicator data collection, analysis and reporting
  • A concluding 8-page section discussing the specific indicators and results in the 2014-2018 M&E framework for the Cambodian National Social and Economic Development Plan. 

An Overview of Basic RBM Tools

Two tables showing on the left a results framework with arrows pointing from the results to corresponding boxes in the logical framework on the right
Comparing a results framework and a logical framework
[Click to enlarge]

The first 25 pages of this guide provide introductory summaries on a number of topics:

  • How log frames, results frameworks (and what some agencies call logic models or conceptual models) relate to each other
  • The Theory of Change – and its relationship to log frames and planning
  • Very brief case studies of how a theory of change could be used to assess problems in education, sanitation and poverty alleviation in using examples from Cambodia and Laos
  • The links between programme structures and results frameworks
  • A summary of how a national plan can incorporate work at the Ministry, programme, sub-programme and activity levels
  • A short summary of differences between monitoring and evaluation.

Detailed guidance on organizing results, indicators and targets

More detailed is the document’s guidance, in pages 26-45, on the approach used in Cambodia to organize and present data for intended results, indicators and targets at 4 levels of government operation within the national plan.

The casual reader might think these will be of only limited interest because of their specific links to Cambodia, but this approach of using tables to organize the reporting, or variations on it, could be adapted for use elsewhere.

Included here are tables and guidance on:
  • National goals, planned actions, macro indicators and targets, with notes on issues in data collection
  • Ministry level goals, indicators, targets, achievement, budgets  and actual expenditures
  • Programme-level objectives - and how they link to Ministry and national goals,  indicators, targets, achievements, budgets and actual expenditures
  • Sub-programme objectives – and how they link to the programme, number of activities, indicators, targets, achievements, budgets and actual expenditures
  • Activity-level objectives, how they link to the sub-programme, with indicators, targets, achievements, budget and actual expenditures, for each activity, and spaces for comments on issues related to gender, environment, income distribution and technical details of the activity.

3 tables showing how results, indicators and targets can be planned and monitored
Formats for planning and monitoring results, indicators and targets
Click to enlarge
Anyone who has to contend with reports which begin with discussions of the activities, burying actual results information far down in often inaccessible tables and meandering text, will appreciate this approach of putting results up front and leaving the details on activities for later description, for those who are interested.  It is not always necessary to use formal tables such as those presented in this guide, and the groups I work with often end up using a similar approach in text.

A template with evaluation questions about both the programme process and its outcomes
Process and Outcome Evaluation template
[click to enlarge]

But people’s attention to detail in indicator data collection, organization and presentation often deteriorates over time with the competing pressures of day-to-day implementation, and for an enterprise as large and complicated as the Cambodian national plan, the tables are probably necessary to get any consistency of data at all across multiple ministries, programmes and activities.

In the context of this ideal vs. real world implementation of RBM, it is interesting, but not completely surprising to me, to note that the actual National Strategic Development Plan produced subsequent to this guide in 2014 does not have this level of detail.  There are a large number of national and sector indicators in the plan, but none that I could find at the Ministry or subprogramme level.  As the chapter on Monitoring and Evaluation notes:

"There are several capacity gaps related to M&E in all line ministries and agencies.
  • Line ministries and agencies do not have adequate capacity to formulate SMART (Specific, Measurable, Achievable, Realistic and Time-bound) indicators for their sectors.
  • Line ministries and agencies do not have adequate capacity to collect and analyse data for measuring their indicators.
  • The RGC does not have a National M&E System to monitor and evaluate the progress of implementation of NSDP and the implementation of all projects carried out by line ministries and agencies in the Three-Year Rolling Public Investment Plan." [p. 218]

M&E Orientation Guidelines for the NSDP produced a year and a half after the plan itself was published make it clear that these problems had to be addressed.

"The existing Results Framework for NSDP implementation requires additional work to be able to assess performance. Even though it includes indicators, baselines, intermediate and final targets, it does not include indicators of efficiency (value for money), cause and effect relations to establish the contribution path, or an indication of the evaluation agenda. This will require an articulation of various instruments at planning, programming and budgeting levels at institutional level and across institutions; as well as an enhanced ability to collect relevant, timely, and accurate administrative data to support the analysis for reporting." [p. 6]
These M&E Guidelines suggest that work was planned to develop the capacity of Ministries to collect and report on the kind of data suggested here.  It will be interesting to see whether this actually occurs, or whether this level of application of RBM to planning will have to await subsequent plans.

The bottom line:  Although it remains to be seen if the RBM tools presented in this guide will be applied in action in Cambodia, this document provides a useful introduction to basic RBM tools for planning, and some practical guidelines on data presentation, which could be applied not just in national planning but in programme and project planning in many countries.

Related Resources on Results-Based Management and M&E


Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website
