Designing Logic Models - Review by Greg Armstrong
[Links updated 2018]
This website houses a large number of articles, some of them quite complex, on how to construct logic models using proprietary software.
The Outcomes Theory Knowledge Base
Level of Difficulty: Complex
Primarily useful for: RBM specialists, academics
Length: 50-60 web pages
Most useful sections: Articles on evaluation
The “Outcomes Theory Knowledge Base” is the title given to a compilation of more than 50 articles on what the author refers to as Outcomes Theory -- what many of the rest of us refer to as RBM, or management for development results. Most of the articles focus on how to use visual Logic Models for project management.
Who this is for
Readers will need to sift through 50+ articles, all written by Paul Duignan, to find what they need. Although there is a lot of repetition across the articles, some of them could be useful to three groups:
- Those interested in learning how a visual approach to results, through the development of logic models or outcome models can clarify results discussions.
- Those who want an overview of some broad issues in evaluation.
- Those interested in an academic analysis of how results are viewed in a broad conceptual format.
For most field project managers and host-country counterparts, the utility of many of the articles will be limited by the relatively dense language used to explain some common-sense ideas (for example, see the article “Problems faced when monitoring and evaluating programs which are themselves assessment systems”). Some simpler summaries of logic model development are, however, available at a related commercial website, EasyOutcomes.org.
While there is considerable overlap in the ideas discussed in the more than four dozen articles originally included in what Google refers to as a “Knol”, or a “unit of knowledge”, the reader who takes the time to work through them will find some useful material.
The Utility of a Visual Logic Model
By my estimate, at least 30 of the articles focus on the advantages to project managers, evaluators and monitors of using a visual approach to managing for results - Outcome Models, Logic Models or other visual representations of the relationship between activities and results. The core of these articles (although each puts it in a slightly different context) is a set of common-sense ideas that many RBM trainers, planners or evaluators will recognise from their own experience. In planning, monitoring and evaluating for results, these articles suggest, we should:
- Focus on results, not activities - and label results as “Outcomes”.
- Use a visual logic model to clarify results. This makes it easier to see the relationships between activities and different levels of results than is possible using a Logical Framework.
- Distinguish, in the logic model, between a) results and indicators for which an agency is directly responsible in the near term, and b) higher level results, for which attribution to the intervention (for success or failure) is not clear.
- Hold managers responsible for two primary tasks: a) Achieving results for which there are clear indicators and a reasonably clear and accepted causal relationship between activities and results; and b) Managing for development results at a higher level, in part by collecting and reporting on indicator data on results for which there is less certainty of attribution.
- Frame contracting, monitoring and evaluation within the context of the results, activities and indicators identified in the visual logic model.
Most of these articles also suggest that the proprietary software (DoView), sold through a related website, can help us do all of these things more efficiently and creatively than we can by relying on tables in word processing software. Taken together, these four dozen articles also appear to form a help file for users of that software.
Evaluation issue summaries
At least ten of these articles focus specifically on evaluation. While the author obviously thinks that the visual logic model would assist in focusing evaluation questions, the articles go beyond this, and some provide what could be, for those looking for quick summaries, useful overviews of major evaluation issues.
Among those articles that could be useful to readers, whether they use the author’s software or not, are:
- “Impact/outcome evaluation designs and techniques” which discusses in broad terms seven different research approaches to impact and outcome evaluations;
- “Impact Evaluation - When it should and should not be used”;
- “Distinguishing evaluation from other processes (e.g. monitoring, performance management, assessment, quality assurance)”.
Greg Armstrong’s analysis
Key Resources on Evaluation and RBM?
While many of the articles on the logic model and on evaluation are useful, the article on “Key Outcomes, Results Management and evaluation resources” provides fewer useful links than the average reader might expect from someone of the author’s experience.
The descriptive summary says it contains “A summary list of key outcome theory related resources for working with outcomes, results management, evaluation, performance management, outcomes-focused contracting and evidence-based practice”.
“Aha!” I thought, “just what people who want to learn about RBM should have - ideas from the UN, DFID, CIDA, SIDA, universities, government agencies, think-tanks, trainers and NGOs.” This could have been very useful to professionals seeking user-friendly tools on evaluation and RBM.
A quick review, however, shows that it contains, at least at this writing in 2010, just 14 links - all of them to one of eight of the author’s own websites, including his blog and Twitter feed, and all with links to the sale of the logic modelling software. The author obviously has a history of work in evaluation, and presumably knows of other useful sites.
Links to other relevant sites, such as the Monitoring and Evaluation News or the Centers for Disease Control's Evaluation Working Group resources, would have been helpful to people looking for useful tools.
The list of references on Outcomes Theory similarly includes 33 articles, all written by this author. Many of them are probably useful, but a broader net might have brought in ideas from the work other people have done on similar or related topics.
The Value of Logic Models
While there is some overlap in the content of these different articles, the basic point being made here is valid - using a Logic Model diagram as the focus for discussion can indeed, as I have found recently in workshops in Vietnam, Cambodia, Indonesia and Thailand, clarify differences of perception over results and assumptions about cause and effect, and can energize discussions on project design and evaluation.
In one of the articles on this website, dealing with the value that evaluation using a visual logic model can add to governance and policy making, the author writes:
“Outcomes models need to be able to be used in all parts of the decision-making process. In order for them to be able to be used in this way, their visualizations needs to be portable across different media so that they can be used whenever and wherever they need to be used. For example, they should be able to be developed and used in real-time during meetings with high-level stakeholders, printed out in a report, and reproduced on an intranet or the internet. Meeting this criteria requires using appropriate software and laying out an outcomes model in a way that ensures that it is portable”.
Software Limitations for Logic Model Development
I facilitate workshops on developing results frameworks, logic models and indicator assessment several times a year, in almost all cases in countries where English is at best a second language, where internet access is often unstable, and in some cases where electrical power is unreliable.
I have not used the DoView software, to which many of these articles link, in such workshops, but I can see from its description that it could be helpful, particularly in facilitating logic model development workshops. Given that this software was developed specifically with results chains in mind, it could have an advantage over other visual mapping software of a similar nature, such as Xmind, Vue or SmartDraw, among many others. Like those other programmes, however, the software promoted here has limitations which would diminish its utility for facilitators working in the situations I work in.
At the end of a Logic Model development workshop, one important deliverable is a draft Logic Model, and possibly an indicator assessment framework, that participants can take back to their many different offices - in different countries, provinces or cities - and distribute widely to their own colleagues and networks for further critique and possible alteration.
The price of the DoView software is not high - roughly $35 per copy, cheaper than others that can run to several hundred dollars - but obviously not as cheap as Vue, Xmind or others that are free. Even the free programmes have a problem with accessibility and portability, however. Having used any of these programmes to engage people in a dynamic discussion of results, what do you do next, when they want to continue the discussion with their own partners? Do you ask them all to download and install the programmes?
It is not clear to me that the Logic Model diagrams from any of the visual mapping programmes I have seen can actually be edited with standard, commonly available word-processing software such as Microsoft Word, OpenOffice Writer, or Google Docs. While Logic Models produced with DoView, Xmind, Vue, SmartDraw and many other similar programmes can be viewed in those programmes, or alternatively in PDF or on the web, and can be pasted into word processing documents, in most cases they cannot be edited by people who do not have the software in which the diagrams were originally produced -- making downstream participation very difficult.
For those programmes which are web-based, some editing can be done on the internet, but accessibility does not rest in “the cloud” for people in places where internet access is not always reliable. The bottom line is that the utility of all of these mapping and diagramming programmes is limited where it is impractical to install specialised programmes on dozens of different computers.
If portability really is the criterion for assessing all of these programmes, then the priority should be not just the ability to view the results in a PDF file or on the internet, but the ability of partners to critique and edit the models.
No Painless Performance Indicators
Another issue is that none of these programmes, with these limitations, will be easy to link to the other half of the results discussion - in many ways the most time-consuming portion of results-based planning - the assessment of the utility of indicators.
As anyone who has worked through the indicator development process knows, it can take days for project partners, working in groups, to sort through potential indicators, testing them for validity, for the existence of baseline data, for the availability and accessibility of reporting data, for the existence of appropriate research skills, and for the time required for data collection and analysis.
While several of the articles in the Outcomes Theory Knowledgebase refer to the tongue-twisting “Non-output attributable intermediate outcome paradox” and make a reasonable point about attribution, none of them makes the job of assessing indicators any easier, any faster or any more accessible for partners.
The Outcomes Theory Knowledgebase website has many “how to” videos, hosted on YouTube, aimed primarily at helping people use the proprietary software. One of these is titled “Painless Performance Indicators: Using a Visual Approach”. This got my hopes up!
But, foiled again: what the video demonstrates is that if you have already done all of the hard work on indicators, having completed this assessment, you can use the software to insert a reference to the indicator in the Logic Model. I am sure this is useful (although it can also be done with word processing programmes and hyperlinks), but the point is that inserting indicators in a visual model is not the painful part of indicator development.
For the time being, until something new develops, I will be sticking with the basic word processing programmes which allow a facilitator to work with participants to develop a logic model (albeit without some of the ease of the mapping software) and then link and integrate it with an indicator assessment worksheet as indicators are being proposed, tested, rejected, modified and accepted. But I continue to live in hope, and may revisit the issue of software later.
The bottom line: "The Outcomes Theory Knowledge Base" includes articles with some useful arguments in favour of using a visual logic model approach, and some quick summaries of evaluation issues, but there is no magic bullet here.
Other resources on Logic Models:
- The University of Wisconsin's detailed and engaging online self-study module Enhancing Program Performance with Logic Models
- The W.K. Kellogg Foundation's Logic Model Development Guide.
- Using PowerPoint to develop Logic Models, from the Centre for Community-Based Research.