Monday, January 25, 2010

CIDA’s Practical Guide to Planning Large Development Projects

-- Greg Armstrong -- Last updated April 19, 2023

A Results Approach to Developing the Implementation Plan: A Guide for CIDA Partners and Implementing Agencies


Download it before it disappears. Peter Bracegirdle’s Guide to developing the Project Implementation Plan is a logical, carefully structured tool for planning large projects. It can be useful not just for people working on CIDA projects, but for anyone trying to put together the pieces of a complex project or programme.
Cover page of the old CIDA PIP Guide


Level of difficulty: Moderate
Primarily useful for: Planning large projects
Length: 98 pages
Most useful sections: Examples illustrating how each component of project planning works.
Minor Limitations: Some CIDA-specific RBM jargon, and working through these processes in real life takes considerable time.


CIDA’s user-friendly, comprehensive (and now withdrawn) project planning guide

This document was nine years old when I first reviewed it in 2010, and although some of its terminology has been superseded by new RBM terms the Canadian aid agency CIDA adopted in 2009, it is still one of the most useful guides to planning large projects - for any donor - currently available. While a lot of other RBM guides from large organizations come across as bureaucratic and boring, Peter Bracegirdle's writing, presentation and illustrations make this a much more accessible, user-friendly guide than any other I have encountered.


But it is only barely available. It is no longer directly downloadable from the CIDA (now GAC) web site, and it is not available even from sites such as the Monitoring and Evaluation News, which carry most donor documents, even “out of print” ones. A Google exact-word search for the Guide in 2022 returned 242 hits, many of them bibliographic references by other donors pointing to a CIDA page that has itself expired. In theory you might once have been able to get a copy by sending an email to CIDA’s Performance Management Division, but that is no longer possible.


I have an old PDF copy of this guide, and a couple of hard copies, and you can still get it from several sites including Businessdocbox, but if all else fails, here is an archived copy.


CIDA changed first into DFATD and then into Global Affairs Canada under different governments, and the agency has been, and continues to be, revamping its results-based management terms and guides; the document was removed from the CIDA/DFATD/GAC website pending adaptation to the new terminology. I would not count on the PIP guide (PIP for Project Implementation Plan), as it is known, ever being reintroduced. And although there are some useful new RBM guides being produced by Global Affairs Canada, this old PIP guide is easier to read than many subsequent guides, and worth looking for, because it gets down into the dirty details of how we can plan in practical terms for a project - not just for the Canadian aid agency, but for any organization.


Who this Guide is for


This is not a guide I would suggest for small projects. It is long and will take time to work through - time that may not be a good investment if a project is short-term or low-cost. But for anybody starting out on the planning process for a longer (two years or more), complex, or expensive project, regardless of who is funding it, this guide can help put the sometimes daunting process of building and then fielding a complex project into a usable, coherent framework.

I know of one international lawyer who saw this document for the first time in April 2023 and is happily using it in the design of a rule of law project.

While the guide has some CIDA-specific jargon, its logical and (everything is relative) user-friendly structure can help produce a functional, understandable plan for most large projects. I know several project managers who still refer to it as a guide to sorting out their work plans and reports.


This was one of the most important of a series of documents on results-based management produced by and for CIDA roughly ten years ago. The people most likely to benefit from using the PIP guide, therefore, were obviously those working on CIDA projects. Some of the terms are now dated. Global Affairs Canada has moved away from the pure Logical Framework Approach (the LFA), for example: it now calls its results “immediate, intermediate and ultimate outcomes”, uses a logic model instead of a results chain, and has a more complex risk identification process. But the logic in this original PIP Guide is still sound, and could easily be adapted to the current terms used by Global Affairs Canada or, for that matter, to the work of other agencies.


This guide has considerable potential utility for anybody trying to turn a general project design into a functioning on-the-ground project, for people who want to, or have to, burrow down into the details of planning. It is a tool that should be used with a group, or in multiple sessions, with key project implementation staff, partners and, for some components, with stakeholders.


I avoided using this guide for the first several years after it was published, simply because it was so long (97 pages) and, I assumed, too complex for the projects I was working on. That is still true where projects are under $500,000, but for large projects I was wrong.


When I did use it, in a planning workshop on a justice project with a UN agency that was not required to use the CIDA format, we found that while working through the process took considerable time, it was not really intellectually difficult. The Guide helped focus discussions, and tease out the logical implications of how we were putting the project together. I have since used it on two other projects, in governance and environment, and I regret not using it earlier.


Format: Quick Notes on Project Planning

A table organizing information required and roles and responsibilities for different stages in project design, project management, project implementation, project information management, and finalizing the project implementation plan
Organizing tasks for developing the Implementation Plan
(click to enlarge)

This is not a Results-Based Management guide by itself; it assumes that readers have at least an introduction to RBM. But it takes the logical elements of RBM -- problem, results, resources and activities -- and ties them together in a usable format, with easy-to-understand illustrations and examples.

Each of the 22 units, in six categories, is covered in two or three pages, including one or two paragraphs on key concepts, a list of clear questions to focus group work, and a practical example of how the questions and framework can be applied in a project.


There are, in total, 111 questions that can be used to focus discussion. The first three units are specific to the CIDA project implementation plan process in particular, but there are, in the remaining 18 sections, probably at least 80 or 90 questions that could usefully be examined for any project.


This is not deep, not revolutionary, but it focuses attention on what we need to know if we are planning a project likely to have any practical impact, and what we need to do if, in light of changing circumstances, we need to revise the project design.
A table organizing information required for project design, missing information, possible sources of information and responsibility for data collection.
Framework for revising the project design
(click to enlarge)

This guide assumes the basic project design (the conceptualization of the general need and direction) has been completed, and that now the reader is tasked with doing something to bring ideas into implementation.


How long will it take to use it?

My experience has been that with a relatively small group of people -- ten key staff planning a two-year, single-country project worth about $2 million -- at least a week is required to work through, even in a cursory manner, the issues outlined here. That built the foundation for the project, but much more work still had to be done to nail down the baseline data and flesh out the details of the operational and reporting tasks. A project of that size would probably require, therefore, at least a month of full-time work to do this properly. Unit 5 of this guide, for example, has ten key questions about the development problem, and examining them carefully with key partners could easily take two or three days.


On a larger project ($10 million over 5 years) with multiple partners, three weeks of work was not sufficient to cover all of the territory, but the basic structure of the project -- implementable, although requiring much follow-up -- was in place after going through the process. For a project of that size, another two months of serious, concerted attention would be needed to get baseline data for the indicators, and through this process eliminate the impractical ones.



Resistance to spending money on project planning


Many donors and implementing agencies do not encourage “long” planning periods, but this is ultimately short-sighted. A reluctance to spend time and money on rigorous planning just means spending double the time later on monitoring and remedial design.



In CIDA’s projects, and unhappily in those of the successor agencies DFATD and GAC, for example, while a project implementation planning “mission” to the field might be scheduled for a month, it can often take up to a year (or more) for the actual plan to be approved. This is, in large part, because insufficient time and money is budgeted or spent at the beginning on examining the logic of the project, the way that logic relates to operations, and, at its most basic, on collecting baseline data.


When it later becomes clear that indicators have not been tested, and that the logic of the management structure or funding arrangements is vague or confusing, the result is often budget and programming delays, and endless rewrites of the plan.


It does not seem unreasonable to me that for $2 million, a month or two should be spent on serious planning, and two or three weeks each year on reporting. And for $10 million or more, three or four months of serious attention at the beginning, and a month a year thereafter for internal monitoring, can save months of wasted time and perhaps millions of dollars on unfocused activities. OK, it may cost $200,000 to put together a competent plan for a $10 million project, but what is that -- 2% of the total project cost, to focus the effective spending and reporting of the other 98%?
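
For anyone who wants to see the arithmetic laid out, here is a trivial sketch using the post's own figures (nothing here comes from any official CIDA or GAC budgeting guidance):

```python
# Back-of-envelope check of the planning-cost argument above,
# using the illustrative figures from the post.
plan_cost = 200_000        # cost of putting together a competent plan (USD)
project_cost = 10_000_000  # total project budget (USD)

share = plan_cost / project_cost
print(f"Planning share: {share:.0%}; remainder for implementation: {1 - share:.0%}")
# -> Planning share: 2%; remainder for implementation: 98%
```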


There are, in some cases, examples of projects costing well over $50 million where no clear attempt to examine the logic was ever undertaken, until critical evaluations raised the uncomfortable questions that should have been asked many years, and many millions of dollars, earlier.


For consultants who do both planning and monitoring, donors skimping on planning should not be a problem -- because they will get the work later, anyway, as donors and executing agencies scramble to retrieve the mess created by the rush to implementation.


Key Questions for Effective Results-Based Project Planning


The first three chapters introduce where the Project Implementation Plan sits in the CIDA structure, and those not working with CIDA can probably safely skim them.

The other major sections of the guide include units, with key questions and examples, on:
  • Assessing information requirements for practical planning
  • Defining the development problem
  • Clarifying the logical framework
  • Reach and beneficiaries
  • Risk analysis
  • Incorporating cross-cutting themes, such as gender or the environment
  • Sustainability strategies
  • Defining a management structure
  • Clarifying partner roles and responsibilities
  • Specifying oversight processes
  • Relating results to activities and work tasks (the work breakdown structure)
  • Using scheduling to focus attention on assumptions behind activities and results
  • Relating results to budgets
  • Developing internal monitoring, risk management and communication processes
None of these is exciting, but in the process of working through each, some very interesting ideas about results, assumptions and processes can emerge from a workshop. The Output-Activity Matrix has been my favourite tool for facilitating detailed project planning. This tool, I have found, genuinely focuses attention on the resources needed and how they link to results, and that is useful in the context of any project (a rough sketch of the idea follows the figure below).


A table linking each anticipated result to specific requirements for resources.
Output-Activities Matrix
(click to enlarge)
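
To make the idea concrete, here is a minimal sketch of an output-activity matrix as a simple data structure. The field names and the sample entry are my own hypothetical illustration, not taken from the Guide (Python 3.9+):

```python
# A hypothetical, minimal sketch of an Output-Activity Matrix: each expected
# output is linked to the activities, responsibilities and resources needed
# to produce it. Names and sample content are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Activity:
    description: str
    responsible: str       # who carries the activity out
    resources: list[str]   # inputs: staff time, budget lines, equipment

@dataclass
class Output:
    statement: str         # the completed-activity result
    activities: list[Activity] = field(default_factory=list)

matrix = [
    Output(
        statement="20 district officers trained in case management",
        activities=[
            Activity(
                description="Design and deliver two five-day workshops",
                responsible="Training coordinator",
                resources=["trainer fees", "venue", "participant travel"],
            ),
        ],
    ),
]

for output in matrix:
    print(output.statement)
    for act in output.activities:
        print(f"  - {act.description} ({act.responsible}): {', '.join(act.resources)}")
```

Even at this toy scale, the point of the matrix is visible: an output with no activities, or an activity with no resources, shows up immediately as an empty field.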
Limitations: This is a summary of basic processes. It does not explore any of these ideas in detail, and genuinely working through these issues will take considerable time and the participation of all of the stakeholders, something the author emphasizes.


The bottom line: This guide won’t do the work for you, and it won’t implement the project, but it will help you define a rational structure for a large or complex development project. You may decide to use only part of it, but there is a real logic to the sequence here. Taking each part seriously and working it through, does, it turns out, make sense.

_____________________________________________________________



GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website



Monday, January 11, 2010

The LFA Debate: Making the LFA participatory

-- Greg Armstrong --


SIDA's LFA Papers:


3. The Logical Framework Approach - an Appreciative Approach


This is not a complete guide to participatory use of the LFA, but it could help focus attention, in the initial needs assessment stage of project development, not just on problems but on opportunities. Much of what is said here was old news in adult education 80 years ago, but good ideas do occasionally merit repeating.

[Edited to update links June 2018]

Level of difficulty:  Moderate
Primarily useful for:  NGOs looking for participatory needs assessment techniques
Length:  24 pages
Most useful section: Guidelines on running a workshop, p. 12-20.
Limitations: Academic jargon may distract the reader; no discussion of indicators or reporting


The LFA Debate (part 3)


Prepared by the SIDA Civil Society Center and published in 2006, this is the third in a series of publications produced by SIDA dealing with different perspectives on how, and whether, to use the Logical Framework Approach in development programming.


The first document of the three I have reviewed was a summary of the basic elements of the LFA, published in 2004.  The second was a discussion of how civil society organizations critiqued the use of the LFA.


Who this document is for:



This document is rooted in the experience of a Swedish civil society organization (the Swedish Pentecostal Mission’s development cooperation agency) working in Africa, and is apparently aimed at project designers and planners working in NGOs. The guidelines for conducting a workshop, and the suggested time allocations for different components, suggest the document is aimed at organizations with fairly small projects.


Overview - Useful, but not new



This document proposes that “Appreciative Inquiry”, an approach to research which focuses on how participants perceive their own strengths, problems and opportunities, be applied to the Logical Framework Approach used by most development agencies. The model presented here focuses on planning, not evaluation, and is based in part on field trials in Niger, Nicaragua and Tanzania in 2005.


While the ideas have some utility, the use of academic jargon to label what is essentially participatory planning is likely to discourage some people from reading it who might otherwise find some reasonable ideas here.


Not that the ideas are necessarily new. Much of what is said here was old news in modern adult education 80 years ago: start with the learners, build on their strengths, help them explore their own potential. John Dewey, Eduard Lindeman, Moses Coady and Roby Kidd, among many others, have been making the same case, give or take a few academic flourishes, for the past 100 years. But good ideas do occasionally merit repeating.



Principles of Appreciative Inquiry




The basic premises of Appreciative Inquiry, as summarised in Appendix 2 of this short document, appear to be that:


1. The language we use affects our perceptions of ourselves and our communities. “By changing our language, e.g. by talking about opportunities and strengths instead of weaknesses and threats, we can alter our mental frame of reference, and thus our reality.”


2. The process of inquiring about a situation (needs assessments, for example) affects the situation itself:
“Change begins the moment we start to ask questions and study someone’s experiences and perceptions, and the questions we ask determine what we will find.”
[This is described in more general approaches to qualitative research as an “interaction effect”, something with which anthropologists and most other professionals applying qualitative research methods will be familiar.]


3. The way people talk about their situation is a narrative, and interventions -- research and action -- should take particular note of what people say as the basis for understanding.


4. The expectations people have about the future -- their “vision” or preconceptions -- affect the actions they take, and contribute to, or bias, that future:
“Studying our preconceptions and expectations about the future, and formulating desirable  visions of the future on the basis thereof, will help us to take positive, action-oriented steps in our lives.”
5. Building a positive attitude in discussions is important for change:
“Positive, affirmative premises are needed to build and maintain the forces of change at a deeper level. The more appreciative our starting premise, the more successful and sustainable our efforts to bring about meaningful change and development.”



Applying Appreciative Inquiry to Project Planning




In applying these rather general concepts to project planning, this document is quite specific about the context in which it suggests appreciative inquiry can be useful, particularly in reference to SIDA guidelines on working with civil society organizations:


1. The views of the poorest people should influence decision making about what will affect them (presumably including development projects)


2. There should, at the end of the intervention or series of interventions, be long-term improvements in the situation of the participants


3. The capacity of local groups and civil society organizations should be strengthened by interventions.


The Logical Framework approach, in theory, although often not in practice, begins with stakeholders identifying a problem and its causes, then specifying results which would presumably improve the situation.  And, taken in this context, there is really nothing surprising in what  this document proposes to do with the basic steps in the Logical Framework Approach.   


This paper was intended as an integration of what are basically participant-centred methods with the Logical Framework Approach, so it is reasonable that the basic steps of the LFA would be incorporated, but adapted slightly. And where the concepts might appear a bit vague in the text, the document presents, in Appendix 1, a nine-page outline for a workshop. This outline lays down a set of prescriptions about how to use the approach - detailed enough to specify group size, and even the amount of time, in hours and minutes, to be allocated to each stage.


Steps in the Appreciative LFA process


This document breaks the usual LFA process into nine steps, and although there are some slight variations in sequence, and in how the steps are combined, they are not significantly different from what the SIDA 2004 LFA summary paper described, from what Philip Dearden described in his 2005 guide to using the LFA, or from what most agencies now use. My comments, if any, are noted in brackets.


Step 1 - Identification of necessary participants  [One component of stakeholder analysis in other LFA formats]


Step 2 - Situation assessment:  A Description of the current situation -- both what works well, and what does not work, or works poorly.


Step 3 -  Analysis of what the consequences are of this situation -- undesirable effects, and any useful effects that participants do not want  to endanger or undermine through programming.  In this stage too, the document includes a discussion of what the participants would like to change. 


[Steps 2 and 3 together are sometimes addressed, in the usual LFA process, as problem identification - although again, this “appreciative inquiry” approach emphasises identification of opportunities as well as problems]


Step 4 - Analysis of the factors affecting the situation [This often happens in the assessment of the causes of problems in other LFA approaches, and is frequently included in the problem identification stage. Here, again, this document also emphasises analysis of the causes of successes, not simply the causes of problems]

In this stage, several questions are asked:



  • What things, people or processes are working well - supporting positive aspects of the situation, or mitigating negative aspects of the situation?
  • What factors, similarly, are contributing to negative or undesirable effects of the situation?
  • What is the relationship between these factors - do they interact?  
  • Which of these factors - the ones leading to positive or negative results - should a project or programme concentrate on?


Step 5 -  Analysis of assumptions and internal conditions necessary for success.  Here two questions are asked:


  • Based on the analysis of factors, which groups are most important to change?
  • What strengths and resources do these groups bring with them, and how can they be effectively applied?
[It is clear - and refreshing to see - that this paper takes the discussion of assumptions seriously. Accounting for assumptions  is implicit in most LFA formats, but rarely directly addressed in practice.  In the accompanying workshop guide, in Appendix 1, two of the nine pages are given to descriptions of how to work through a discussion of assumptions in the group]



Step 6 - Identification of the project goal and “deliveries” (deliverables).


  • Based on the analysis of problems, strengths and critical factors affecting both, what changes should the group focus on? 
[This looks like a mid-term result, achievable by the end of the intervention -- what OECD-DAC defines as a medium-term effect, or Outcome -- essentially, however, just a longer-term change to which the project hopes to contribute.]


  • What will the project “deliver” to contribute to this change?
The example provided is: “Local decision-makers and leaders in civil society organisations (CSOs) have been educated on, and understand children’s rights”
[This seems to combine what in other circumstances would be Outputs - completed activities: “decision-makers…have been educated….” - with short-term results: “…decision makers….understand children’s rights.”]
  • What strengths - identified in step 4, during the analysis of factors - can different participants bring to these activities and results?

  • What are different groups prepared to take responsibility for?

  • What are the key factors we need to consider to get results? [This seems, although it is unclear, to correspond to the risk assessment or risk analysis in other LFA terminology]


Step 7 -  Overall Goal of the Project 

  • To what longer-term result is the project likely to contribute?
[This seems clearly to be the equivalent of the OECD/DAC “Development Objective”, CIDA’s “Ultimate Outcome”, or, in more basic terms, simply the long-term change to which the project is intended to contribute.]



Step 8 -  Resources and division of labour


  • What resources can different participants, including the donor, bring to the activity?
  • What organizational capacity needs to be improved?


Step 9  -  Action Plan 

[This, combined with step 8, seems to correspond to the activity development or activity design component of other LFA processes.]

  • What concrete needs do we have; how will labour be divided; what are the deadlines, and methods of reporting?
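
For readers who want the correspondence at a glance, here is the mapping suggested by my bracketed notes above, condensed into a simple structure. The labels are my own paraphrase, not the SIDA paper's:

```python
# My condensed paraphrase of how the nine "appreciative" steps appear to map
# onto conventional LFA components, as noted in the brackets above.
# The labels are illustrative, not taken from the SIDA paper.
appreciative_steps_to_lfa = {
    "1. Identify participants":             "stakeholder analysis (one component)",
    "2. Assess the current situation":      "problem identification (plus strengths)",
    "3. Analyse consequences":              "problem identification (plus useful effects)",
    "4. Analyse contributing factors":      "cause analysis (of successes as well as problems)",
    "5. Assumptions / internal conditions": "assumptions analysis",
    "6. Project goal and 'deliveries'":     "outputs and short-term results",
    "7. Overall goal":                      "development objective / ultimate outcome",
    "8. Resources and division of labour":  "activity and input design",
    "9. Action plan":                       "work planning and reporting arrangements",
}

for step, lfa_equivalent in appreciative_steps_to_lfa.items():
    print(f"{step:<40} -> {lfa_equivalent}")
```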

Greg Armstrong’s Comments:  



1. Adult Education and Project Planning



The overall idea of putting more emphasis on strengths, and not just problems, seems to me to be constructive, but it is not particularly new, as I noted in the introduction to this review. Putting the label “appreciative inquiry” on the process of starting with strengths does not necessarily make it innovative, but nor does it invalidate the utility of starting with how people see their situation, problems and opportunities.

What putting this academic label on the process could well do, however, is discourage field-oriented practitioners from reading this, and therefore from learning something useful, because nobody wants to get bogged down in academic jargon. Results-based management itself has enough jargon without adding a new layer of academic terminology. 

If academics really insist on applying their considerable expertise to practical issues, they would be well advised to reduce the jargon. But that may be an unrealistic hope. If they don’t eliminate the jargon, however, what can we expect next:


  • Transformative Learning and the LFA?  
  • Critical Self-reflection and the LFA?
  • Reflective Dialogue and the LFA?
  • Conscientization and the LFA?
  • The Dialogical LFA?


2. Needs Assessments and RBM




Needs assessments have always, in theory, started with an attempt to understand the strengths and problems of individuals, communities, institutions or countries, and then have attempted to assess available but underused resources, to build on strengths and minimise problems. 


The appreciative inquiry approach explained in this document makes that process of assessing strengths and weaknesses explicit, and that is probably useful, because over time, there has been a tendency to focus on problems.  Analysing what works, and trying to build that into a programme is essentially what adult education does - building on our strengths to overcome our problems.  



3. Sequencing Results and Activities



The general sequence suggested here - at least on the identification of problems and strengths, and how they play into a potential definition of needs and the structuring of a project - is reasonable, but not revolutionary. Steps 6-9, however, are somewhat problematic for me. It would seem to me that focusing on the broader change - the long-term result (step 7) - might logically precede the definition of the immediate project results (step 6), which might eventually contribute to a desirable long-term change.


The danger of focusing first on project results is that short-term changes might not really be relevant to a long-term need. Some universities, for example, have in the past been notorious for assuming that the result of every project should be more graduates. While that might serve the university’s immediate need (particularly in terms of job security for professors), it will not always contribute to the solution of a longer-term development problem.



Limitations: Indicators and reporting


This document is intended as a discussion of planning, not implementation or reporting. So it might be unfair to criticise it for not discussing reporting. In fact, judging from the guidelines for the workshop, it would probably be fair to say that the approach is aimed primarily at improving the situation analysis stage of the LFA, not the concrete activity development needed to finalize a project. Limited to that stage in the development of a project, the document makes some useful contribution to the process of defining needs.


The paper does at least suggest, however, that it is integrating the participatory process not just with needs assessment but with the whole Logical Framework process, and if the LFA, or results-based management in general, are to have any utility for aid agencies, reporting has to be addressed.


If a participatory approach to planning is used, the implications of this for reporting need to be discussed in detail if the participatory approach is to have any credibility with field workers or donor agencies.


Indicators - the evidence that will tell us if we are making progress on results - are an essential (and often criticised) element of practical results-based management. Although the word “indicator” is mentioned 54 times in this document, the complex process of identifying practical indicators is never actually discussed. There is no discussion of what the group process of identifying an indicator might look like, no discussion of the potential problems such an approach might present in practice, and, particularly strange coming from advocates of a participatory approach, no discussion of the strengths of group identification of indicators.


The bottom line: This report provides a reminder of the utility of engaging stakeholders in real discussions about the problems and strengths of their community during the project planning process. Although in practical terms there is little here that is startlingly new, those interested primarily in the situation analysis, needs assessment and problem identification stages of RBM and the LFA process could probably make use of these approaches. As a guide to the whole LFA process, however, it is unlikely to be as useful as the 2004 SIDA publication summarizing the LFA process.


Because of its limited scope this paper does not resolve the debate on the utility of the LFA. Of the three papers prepared for SIDA between 2004 and 2006 on the LFA, it is probably the first, on the theory behind the LFA, that is likely to be of the most practical use to field-based development practitioners.

_____________________________________________________________



GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website



Sunday, January 03, 2010

The LFA Debate: How Implementing Agencies View the Logical Framework

-- Greg Armstrong --


SIDA's LFA Papers:

2. The Use and Abuse of the Logical Framework Approach


This document reviews the many problems NGOs encounter in using the Logical Framework Approach.  

Level of difficulty:  Moderate to complex
Primarily useful for: Agency Planning and Evaluation groups, monitoring and evaluation specialists
Length:  28 pages
Most useful section:  Discussion of the LFA as a conceptual framework (p. 12-14)
Limitations:  Lacks a snappy concluding summary, and the findings are indicative, given the limited sample.


[Edited to update links, June 2018]

The LFA debate 


The Logical Framework is often associated with results-based management, and even when agencies dispense with the formal Logical Framework matrix, they almost always maintain some variation, such as a Performance Measurement Framework, as part of the planning and evaluation process. For the past ten years there has been an active debate on whether the Logical Framework Approach and its associated matrices are helpful or harmful to understanding and improving development interventions, and much of this debate can be found in the Monitoring and Evaluation News archives and in academic research.

There are, however, three documents produced for one donor agency -- the Swedish International Development Agency (SIDA) -- between 2004 and 2006 that seem in part to reflect something of the debate over whether, or how, to use the Logical Framework Approach in the design and management of development interventions.

The first paper I reviewed, on December 27, 2009, was a summary of the theory behind the LFA, originally published in 2004. The next post will review a SIDA paper, published in 2006, that proposes the application of appreciative inquiry to the LFA.

This post reviews The Use and Abuse of the Logical Framework Approach (PDF), by Oliver Bakewell and Anne Garbutt of the organization INTRAC (the International NGO Training and Research Centre) in the U.K. The document itself, once, but no longer, available from both MandE and from INTRAC itself, carries the SIDA logo; interestingly enough, it is not available directly from SIDA, and is not listed as one of the LFA-related documents on the SIDA publications site.


Who the Report is for


This report is not intended as a guide to practice. It is a review of how different organizations, donors and NGOs view the use of the LFA. While many of the documents on the INTRAC website are oriented to practitioners, this one will probably not be of practical use to field officers, although it could inform discussion within aid agencies about whether and how to use or adapt the LFA.


The report builds on a review of the literature (14 documents produced between 1990 and 2003), a survey of 18 agencies, and interviews with a selection of key informants in those agencies. Officials of three donors - DfID, CIDA and FAO - are included as sources, along with those from 15 NGOs and consulting agencies.


How the LFA is used by Implementing Agencies


The Logical Framework Approach in general is intended to tie together consultation with stakeholders, problem identification, assumptions, results definition and indicator development.
  • Some of the 15 implementing agencies responding to this study reported using the Logical Framework matrix, but without going through the consultative process.
  • Others reported using the general approach, including consultation with stakeholders and working through the relationship between interventions and results, but without using the matrix.
In theory, the whole process is supposed to relate not just to planning a project, but to managing inputs and activities for results, to reporting on results, to monitoring, and to evaluation. Yet in reality, the authors note, "...in most cases the LFA is only explicitly used at the planning stage."


Problems with the Logical Framework Approach


Problems with participation and ownership


There are a couple of definite problems related to participation.  The report makes the point in several places that really incorporating genuine stakeholder participation in the Logical Framework process can be so time-consuming that many organizations minimise or abandon it.  

"It seems likely," this report notes, "that the individual's attitude to the LFA is likely to be closely related to how useful it is for their work rather than the nature of the organisation." 
But, the report suggests, a potential downside of genuine early participation in the design of a project is that ownership of the results, indicators and procedures becomes so strong that nobody wants to make the adjustments that are almost always necessary during project implementation.
"Some NGOs which had invested the time and effort in participatory planning reported finding that the resultant logical framework was such a valuable artefact, representing so many hours of negotiation to reach consensus, that it became very difficult to contemplate making further revisions as the project continued."

Strategies for coping with the LFA


The study found three primary roles reported by implementing agencies, primarily NGOs, in using the LFA with their own stakeholders (smaller agencies or implementing groups):
  • Facilitator: NGOs sometimes work as facilitators of the process, helping their own stakeholders work through problem identification, results and indicator development, but this requires a huge amount of time.
  • Translator: NGOs sometimes serve as translators of existing planning processes, turning them into different variations of the LFA depending on the formats required by different donors.
  • Buffer: Sometimes implementing agencies simply act as a buffer between recipients and donors, selling the project with the LFA, which is later ignored during implementation.
The problems with implementing agencies acting as translators of the stakeholders' interests, or as buffers between the stakeholders and donors, are:
a) Donors are divorced from the process, reducing their understanding of both the programme and the organizations involved; and
b) The process of translating stakeholders' own analyses into a donor-acceptable Logical Framework matrix can be very complex. "The process of logframe construction" one northern NGO respondent is quoted as saying in the report, "appears difficult for many international staff, even those with PhDs."


The LFA's adaptability - a myth


The adaptive nature of the LFA - that, in theory, it can be changed as time and circumstances require - is a myth, according to this report.
"A major problem is the requirement in the LFA to work out the programme logic including identifying indicators from the outset, and the tendency, in practice, for that logic to become fixed in the matrix. In theory, the logical framework can be revised through the programme cycle and changes made, at least to the output level. In practice, this rarely happens....The rhetoric of flexibility and learning which is suggested by the theoretical application of the LFA does not work out in practice."

The LFA in Monitoring and Evaluation


Another significant criticism of the Logical Framework Approach reflected in this document is that donors often have little involvement in the participatory development of the original Logical Framework matrix with stakeholders. Consequently, they often have little understanding of the thinking behind the process, but later insist on using it as the basis for evaluations.


"They then take the resultant logical framework", the report says, "to evaluate a project or programme and ask (usually external) evaluators to use the indicators as the benchmarks for assessing the work." 


The LFA as a conceptual framework


The strength of the LFA is almost universally accepted as being its requirement for systematic thought about the relationship of activities to problems and results. On the other hand, the report concludes, it is too linear in its logic, leaving little room for unintended consequences, or the complexity of factors that may lead to results. 

Assumptions and Risk in the LFA


One very short, but important element of the report relates to assumptions:
"...one respondent argued that the management of risk and coping with the unexpected is critical for the success (or failure) of most development initiatives, and the risks and assumptions column is therefore the most important part of the logical framework matrix. However, it is usually the part taken the least seriously as it is the last column - more time is spent on outcomes and indicators. 'Risks are almost always poorly analysed and just put in for completeness' sake."

Conclusions about the LFA

 

The major strength of the LFA, in this report, seems to be that it forces us to think through the logic of what we are doing: the relationship between the problem, activities and results. But it is easy, the donors and NGOs seem to agree, to get locked into unrealistic expectations and assumptions.
"A simplistic characterisation of the prevailing attitudes to the LFA runs as follows: donors insist on it, while NGOs use it under sufferance. All recognise that it has many weaknesses, but there is a common view that despite these weaknesses, it is the best of a bad bunch of options available for planning and monitoring development work. Hence it carries on being widely used against all objections."
As an alternative, the report suggests:
"Rather than assessing whether we are delivering activities and outcomes in accordance with our grand theory set out in the logical framework, we should assess the theory. We may start with a set of expected activities and results, but as events unfold, the project or programme needs to respond, change its theory and revise its expectations. The emphasis should be on setting up systems for monitoring the impacts of our work, changing directions accordingly and monitoring those changes in direction. We need to constantly think how what we do relates to the goal, but we also have to hold that theory loosely and be prepared to learn and change."
The report suggests that implementing agencies should not be tied to the originally defined Outputs (usually, in donor-speak, the completion of activities) but should be free to make adaptations to reach the longer-term agreed upon results.  This would be the practical manifestation of genuinely "results-based" management. 


Learning Lessons from the Logical Framework Approach


The real issue here, the report suggests in its conclusion, is that lessons should be learned from development interventions, and they should be heeded both by those implementing projects and those planning and funding them.
 "Rather than criticising NGOs that carry out activities which then fail to contribute to the goal - i.e. that do not conform to the initial model - their sanctions should be reserved for those who fail to learn from this experience and carry on regardless."


Greg Armstrong's Comments on 9 Main LFA Issues:


1. The Logical Framework Matrix without Participation


Many organizations turn the use of the LFA from an "approach" which involves extensive participation, into mere Logical Framework analysis, which can be done, although without much beneficial effect, by an individual or small group. 

When the Logical Framework Approach degenerates to this level, when it becomes simply an individual or small group filling in the boxes in the matrix, the utility of the approach, for management, in my experience, diminishes to the point of nonexistence. The Logical Framework matrix then becomes a convenient tool for a donor agency to use as it summarizes and sells its projects to decision-makers, but without the grounded discussion that implementing agencies, field workers and aid recipients can bring to the process, it is likely to leave a development intervention that in practice will bear little relationship to the design, or the problems that the project hoped to address. 

2. Ownership and utility of the LFA


Among the organizations I have worked with, it is indeed those which have found a way to use RBM in general, and the LFA in particular, to clarify their own thinking that have used it most consistently and most successfully. Those organizations that have used the LFA because they have been forced to do so rarely internalize it or sustain the process.

These would be the organizations this report describes as those that "... will prepare logical frameworks and jump through various LFA hoops when it is necessary to satisfy donors." 

3. Participation and delays in the development of the LFA - the role of donor agencies


It is not uncommon for the participatory component of project design within a donor agency to take up, perhaps, 4-5 months, and some donors shrink at that prospect, fearing design delays. But this fear is a fantasy -- the delays, when they occur, are rarely attributable to genuine stakeholder participation, but instead to the time required within the donor agency to manipulate the data coming from the consultative process, to cope with the donor's internal political wrangling over shifting priorities, or with the demands of changing development fashion.

These bureaucratic issues can delay project development for up to three years. By that time, of course, the initial consultations among stakeholders may well be irrelevant, the original problem may have changed, and the design can easily have become outdated.

So, pinning the blame for project delays on solid participatory needs assessment, results definition and indicator development seems to me to be a feeble attempt by donors to evade responsibility for those delays.


4. Complexity and Jargon in the Logical Framework process



This report points out, quite correctly, that thinking through the Logical Framework process can be complex, and the authors quote one source as saying the process is difficult for many international staff, “even those with PhDs”.

Working with NGOs, government organizations, private sector implementing agencies and universities, what I have seen is that it is precisely the people with PhDs, especially those working in universities, who have the biggest problems in working through the process. 

Getting a doctorate often involves the acquisition of sector-specific thinking patterns and jargon, something many of us know from personal experience. But I am convinced that the major reason the Logical Framework Approach and RBM in general are viewed as difficult is because the language used in these processes is obscure, and for most field workers unrelated to the reality of what occurs on the ground, in development projects. 

A simplified approach can overcome a lot of barriers to effective use of the Logical Framework Approach and to results-based management in general, and while this has proved to be very successful with NGOs and government agencies, the greatest resistance to using simple, plain language in the process is often met when working with universities. 



5. Is the Logical Framework Adaptable?


In my experience it is not universally true that the Logical Framework is too rigid to be adapted. I have seen at least four projects over the past five years where indicators, and sometimes results statements, were changed mid-way through the project, after some sophisticated discussion among project stakeholders and the donor. These were all projects worth between $5 million and $10 million, in fields as diverse as parliamentary development, environmental management and economic policy capacity development.


6. Using the LFA for Evaluation: Donors' short attention span


It is definitely true, as this report suggests, that in many donor agencies there is often little institutionalized memory of the discussion process underlying development of the Logical Framework for individual programmes. 

Sometimes the donor representatives do conscientiously participate in these early project design discussions with stakeholders. But even when this occurs, delays in project approval, and the shifting of the original donor staff to new programme areas, often mean that with new staff, the donor agency does not have any continuous or embedded memory of the thinking processes underlying results statements, discussion of assumptions, indicators, or risks. 

Implementing agencies, however, usually do have this memory. For the implementing agency - provided it is not just another donor agency (as when bilateral donors fund projects implemented by UN agencies) - the senior staff often remain with the project, and have adapted to changing circumstances with new strategies, and perhaps new ideas about what the results should be.


7. Baseline data: The weak foundation of many indicators 


On the other hand, it seems to me that the claim of rigidity in applying indicators is actually the reverse of reality in many cases. 

It is unfortunately rare for donors to actually use the indicators in the Logical Framework as the foundation for an evaluation. If the indicator development discussions are done right, if realistic indicators are chosen by stakeholders and permitted to be adapted over time - as I have seen done on several occasions - then using the agreed-upon results and indicators as a starting point for the evaluation is reasonable. But there are two important issues here:
  • Baseline data is often never collected at all on many large projects.  This undermines the utility of developing indicators, which could, if they were tracked, help us learn if we have a reasonably defined result, and help us test the assumptions in our results chains.
  • Many donors simply don't appear to be interested in whether there is baseline data or not. Many don't seem to notice if any baseline data has been collected, or don't really care if they do notice. Often the donor agencies just don't want to rock the contractual boat, or to be perceived as predatory by expecting any hard-nosed accountability.
I cannot remember the last time I saw a donor agency tell implementers that funding would be affected if baseline data were not collected.   It is reported that DfID may be moving to require baseline data early in project implementation, and while there is some fear that it will make the process of evaluation and planning rigid, I think it is a good idea.  

While rigidity is obviously not desirable, it seems to me that the greater risk is that public funds will be spent without any clearly articulated idea of why they are being spent, and whether results are likely to be realised.  The threat, eventually, without some genuine accountability for results, will be to the perceived legitimacy of aid programmes, by those footing the bill.

The casual attitude of some donors towards baseline data, however, is consistent with one of the key points of this study -- that the Logical Framework Approach, (and by extension, it seems to me, results-based management) -- is being used by donors primarily for planning, but not really for management. 



8. The LFA and attribution of results




It is interesting that it is the implementing agencies in many cases that seem to buy into the "cause and effect" theory of activities leading to results -- at least if the results appear to be achieved.  It is donors, increasingly, that are questioning the attribution of results solely to funded development interventions.  In Canada, for example, for several years, the Treasury Board, the Auditor General and more recently CIDA have cautioned against making exorbitant or unrealistic claims for the successes of development interventions -- and have called for a more subtle and nuanced discussion of how projects and programmes contribute to long-term results. 


9. Unexamined Assumptions in the LFA




The quote in the report to the effect that “…the risks and assumptions column is therefore the most important part of the logical framework matrix.... However, it is usually the part taken the least seriously as it is the last column - more time is spent on outcomes and indicators” is perceptive.

It is after all, our assumptions that we are testing when we undertake development projects or programmes:


  • Do we share assumptions about what the problem is?
  • Do we share assumptions about cause and effect in development interventions - what works and what does not?
  • Do we understand (and share) the assumptions we are making about the behaviour of other actors, if our interventions are to contribute to results?
  • Are we prepared, and permitted by the donor, to act to adapt or terminate the project or programme, if we learn that our assumptions about the problem, our assumptions about the logic of our intervention, or our assumptions about the situation and other actors are wrong?
Many donors do deal, superficially, with risk -- but often it is treated as an afterthought, in terms of whether an occurring risk should trigger management changes.

Assumptions are given even less attention -- rarely detailed or treated seriously as a factor affecting an intervention. In some cases, where RBM frameworks change, the discussion of assumptions drops out of the discussion completely.

If, as the report suggests, the major strength of the Logical Framework Approach is that it "...does force people to think through their theory of change", then clarifying the assumptions beneath these espoused theories, and revisiting them as theories-in-use as the project advances, should be an important part of the life of a development intervention.



Limitations to the report

There are three limitations to the utility of this report:

1. Some organizations, such as UNDP, play roles at different times as both the donor and the implementing agency, with very mixed results.  It would have been interesting to get the UNDP view on the questions the authors were exploring.

2. This paper does not provide a snappy summary of the problems or potential advantages of the LFA.  Readers who want that, however, can get it from several documents on the MandE website.

3. It remains to be seen, based on the sample size, if the findings are generalizable, or relevant to issues such as programme-based approaches, but the authors do not make any claim for this.

The bottom line: This is not a guide to practice; it is a report on how some organizations use the LFA. The paper is not arguing for some radical alternative to the Logical Framework Approach, but it does suggest that a more flexible approach to its use would be useful. It could serve as background to the discussion on the appropriate uses of results-based management for development, primarily for planners and aid theorists.




_____________________________________________________________

GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website



