An Introduction to Multi-Agency Planning Using the Logical Framework Approach
A practical guide to using the LFA in project planning and management.
-Reviewed by Greg Armstrong-
[Review Updated March 2016, July 2018]
Level of Difficulty: Easy-to-moderate
Primarily useful for: Project field managers
Limitations: More detail on baseline data would have been useful
Length: 58 pages
Most useful sections: p. 20-24 (risks and assumptions); p. 49-58 (LFA examples)
[Image: Problem tree]
This document, prepared in 2005 by Philip Dearden, Head of the University of Wolverhampton's Centre for International Development and Training, is a practical, hands-on guide to the use of the Logical Framework Approach in project planning. It is placed in the context of Project Cycle Management, but it is clearly relevant to results-based planning and management, which also uses the Logical Framework -- even if, in reality, many RBM practitioners stop with the "Framework" and forget the "Approach". In fact, there is not much difference between Project Cycle Management and the many forms of results-based management that incorporate the Logical Framework -- except that Project Cycle Management emphasises stakeholder involvement in a way that RBM does in theory, but often ignores in practice.
Who the Guide is for
While I think anybody generally interested in results-based management could get some benefit from this guide, it will be most useful for project field managers and others trying to design a practical consultation process aimed at clarifying results, results chains and indicators.
Clear Language RBM
Most of the guide is easy to understand, using clear language to set out the basic steps in using the logical framework approach - something that should be central to the use of RBM. Where the UNIFEM guide, reviewed earlier, was essentially an introduction to RBM terms, this guide goes further, walking the reader through seven "core questions" beginning with a stakeholder analysis, and ending with a discussion of indicators.
Basic RBM Concepts
Philip Dearden and his colleagues have a long history of working in the field with development practitioners, and this is reflected in the way this document is written. It uses a simple, largely jargon-free approach to explain seven necessary stages in the Logical Framework Approach (LFA).
Participatory RBM Processes
The first section ("Who are we?") establishes the participatory nature of the Logical Framework Approach, something that really should apply to results-based management overall. Many of the criticisms I have seen and heard of RBM are that it is too top-down and technocratic in nature, but in fact it need not be this way.
[Image: Stakeholder analysis]
Bringing stakeholders into the process -- from problem identification, through testing of assumptions, to identifying potential results and focusing on the most useful activities to reach them -- should be an essential component of RBM. In practice, however, because it takes time, effort, and money, participatory processes are often viewed as annoying distractions by many implementing agencies and donors. Talk about participation is cheap, but practice takes commitment.
Problem Identification
In a section called "Where are we now?", the guide discusses how to identify both the problem that the project or programme will address, putting it into context through a problem tree, and the potential strengths in the situation on which the project can build.
Maintaining a focus on the underlying problem is important. Many projects I have seen, bilateral and multilateral, essentially abandon the problem as the underlying foundation of the intervention once the activities are designed, and the projects become activity-focused enterprises.
Clarifying Results-Chains
The section "Where do we want to be?" focuses on choosing among broad results, then finding the short-term and mid-term results likely to contribute to what we want in the long run.
The term "Outputs" has been used differently in the past among different donor agencies - for some meaning essentially "completed activities" and for others "short-term results". In this section, Outputs are essentially defined as short term results - for example -- improvements in community capacity to manage activities and resources -- although later in the discussion of indicators, it labels them as "deliverables".
Defining Activities to reach results
The section called "How do we Get There?" ties the development of activities to the earlier analysis of problems.
And it really is important to go through the first three stages before identifying activities, because keeping a clear idea of what the problem is, and of the logical sequence of results that might address it, tests the logic and soundness of any proposed activity. Unfortunately, in practice, many organizations start with activities and then try to find a problem that might possibly be used as a justification for funding what they have already decided to do.
Risk Assessment and Assumptions
The section called "What may stop us getting there?" explains risk analysis -- discussing what potential problems could derail the project -- and the need to redesign activities to minimise those risks. This section also deals with the testing of assumptions: how they need to be clarified and explicitly addressed in the design of activities, and how to identify the assumptions underlying the logic linking short-term, mid-term and long-term results.
I found this section of the document very helpful. In my experience, the great under-valued component of results-based planning has been the casual and obscure manner in which assumptions are often handled in logical frameworks and, more importantly, in the discussion process which should precede the completion of the Framework. Seriously focusing on what stakeholders and implementing partners assume about the relationships between activities and results, and about the underlying conditions necessary for solving problems, can reveal profound differences of opinion among these groups: not just about the political, social or economic conditions necessary to make a project work, but also about what types of interventions are likely to be most effective, about our underlying, and often unspoken, theories of learning and development, and even about what the basic problems are that the development activity will purportedly address.
Spending more time on clarifying these assumptions at the project design stage can prevent a lot of problems, and save considerable time and money during implementation, as, invariably, the different perspectives slowly start to surface. Of course, it is also important to reassess these initial assumptions at regular intervals as a programme or project evolves. When this is encouraged by donor agencies it makes it easier to make constructive adjustments to project management, perhaps even to the project's design, while always keeping an eye on the problem to be addressed. But this is something on which many donors, and most implementing agencies really don't want to spend time.
Indicators and Data Collection
Sections six and seven of the Guide ("How Will we Know if We've Got There" and "How do we Prove it?") deal with indicators and data collection. The indicator development discussion makes a useful distinction between indicators for completion of activities, and indicators for results.
The section on data collection deals with sources of data, and means of data collection. I think more space could usefully have been given to this, because the big problem in most indicator development is the impracticality of collecting data for many proposed indicators.
LFA Checklist
The concluding section provides a checklist of 29 issues against which to assess the utility of the Logical Framework for the project. Using the checklist without having reviewed the earlier text is probably feasible, but is likely to be considerably less useful than applying it after taking the time to review the rest of the document.
The appendices (p. 37-58) provide a glossary of terms, a list of advantages and disadvantages of using the Logical Framework for project management, a description of the Project Management Cycle approach, and a brief discussion of the purpose of monitoring and evaluation. But the most useful of the appendices may be the nine pages of examples of Logical Frameworks for the three children's projects in Sheffield, showing in detail how the indicators for these projects related to assumptions, results, and activities.
Limitations of the Guide
While the data collection discussion in the Guide does mention baseline data, it does not discuss it in any depth. This is the one area of the guide in which I think more detail would have been useful, even for beginners. It is something on which all projects, and all donor agencies, need to spend more time at the beginning of the indicator development process.
Collecting baseline data is necessary not just for telling us if we have results -- whether anything has changed -- but also for testing whether we can actually collect the data for the indicators we have agreed on. Yet I have rarely seen international development projects, funded by any donor, where baseline information is actually collected even within the first year. Often -- and this is no exaggeration -- the baseline information is never collected, or it is collected retrospectively, or simply faked, three, four or five years after the project begins. This makes a mockery not just of the whole concept of "results-based" management; it also means that in a very substantial number of cases, only after the project has been implemented for years does the realization hit home that many of the indicators are completely useless. If donors really took RBM seriously, and not just as window-dressing for their management committees or Auditors-General, they would insist that genuine baseline data collection -- and the consequent redesign of indicators -- be completed before project activities are funded.
The bottom line: Overall, this is a practical and reasonably straightforward Guide to the development of Logical Frameworks as part of the planning, monitoring and evaluation process. It is likely to be useful for many project field managers in government, private sector or civil society implementing organizations.
More resources:
The Centre for International Development and Training now also offers an online course on RBM.
_____________________________________________________________
GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks. For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website.