

Thursday, June 30, 2011

Podcasts and RBM - 2: Audio Podcasts from the BBC, Australian Broadcasting Corporation and National Public Radio

Greg Armstrong --

Radio programmes from the BBC, the Australian Broadcasting Corporation and National Public Radio provide stimulating insight into how other people work with and interpret indicators and results. The second of three posts on podcasts surveys the programmes available from these three broadcasters that are relevant to Results-Based Management discussions.


Level of Difficulty: Moderate-complex, but entertaining
Primarily useful for: Anyone who wants or needs fresh insights on results and indicators
Length: Usually varies from 15 minutes to an hour (MP3 format)
Limitations: Audio podcasts are difficult to reference, and follow up, compared to other media.

Who these programmes are for:

Some project managers dread discussions of indicator data, possibly because they rarely collect the data. But many project stakeholders, and those managers who do take the process of indicator development and data collection seriously, often seem energized by the discussions. Indicator development discussions reveal the different priorities stakeholders have, and what they think results really mean; they challenge participants to think critically and creatively about data sources, data validity and the practicality of data collection. For these people, audio podcasts -- available free on the internet -- can provide thought-provoking insights into both results and indicator development.

This is the second of three posts on audio podcasts.  The first post, originally published May 28, 2011 dealt with the advantages, disadvantages, and mechanics of downloading, listening to, and using podcasts for RBM.  This post surveys results-relevant podcasts available from the BBC,  Radio Australia, and National Public Radio.  The final post will review one programme, BBC’s More or Less, in more detail.

Thousands of podcasts to choose from

There are hundreds of possible programmes, and thousands of individual episodes, available for listening free on the internet. The BBC website alone had 262 separate programmes available in January 2011, and by June had 288 programmes with material available for download. Of these, roughly half were in categories such as music, comedy, sports, religion or children's programming; 122 programmes fell into the BBC's "factual" category in January 2011. By June 2011 there were 133 factual programmes listed, and most of these had dozens, sometimes hundreds, of individual episodes available either for download or for listening online -- some news programmes, in multiple languages, others about consumer affairs, arts, literature, economics or history.

The BBC Radio 4 website itself suggests that there are over 9,000 episodes of different programmes, just in this "factual" category, available for listening in one of its formats, listing them alphabetically and by genre. Compared to the BBC podcast homepage, which organizes the available programmes into more recognizable categories, the 9,000 available episodes may seem like a huge and unfathomable number to wade through -- but these Radio 4 episodes are worth skimming. Some individual episodes buried there -- such as the interesting 2008 Peer Review in the Dock -- do not appear to be listed on the podcast page.

But this blog is about Results-Based Management and, given that we all have limited time available for listening, the following are suggestions of some of the programmes I think are worth listening to for useful -- but also entertaining -- insights into the kind of work we do when we think about results and how to describe, measure or report on them:

Results-relevant Podcasts from the BBC


BBC's More or Less: Behind the Stats is by far the programme with the most direct link to indicators and results-based management of any I have found. Each 24-minute episode usually includes 3-4 issues, all of which are directly relevant to how results and indicators can be interpreted. 22 individual episodes dating back to September 2010 are available for download as I write this, and 82 more going back to 2005 are available for listening online. I will review More or Less in more detail in my next post.

Thinking Allowed, a half-hour programme focusing on social science research, currently has a total of 228 episodes available -- 40 of them in downloadable MP3 format, dating back to September 2010. The Thinking Allowed archives include broadcasts going back as far as January 2007, available through the BBC iPlayer.

Documentaries is by far the most prolifically accessible of all the BBC podcasts. It had 88 24-minute episodes available, all downloadable, at the end of June -- and this is just for 2011. Another 660 downloadable programmes in MP3 format are in the archive from 2007-2010. Finding these archived materials is not perfectly intuitive, but you can get access to them by going to the BBC factual/history link where, among many other programmes, the Documentaries for 2007, 2008, 2009 and 2010 are listed.

Material World, a BBC science programme, had 41 half-hour episodes available for downloading the last time I looked, and 350 more in the archives, for which the listener will require either RealPlayer or another media player, such as VLC, capable of handling RealPlayer files.



Results-relevant Podcasts from the Australian Broadcasting Corporation (ABC Radio)



ABC Radio's Counterpoint, which is second only to BBC's More or Less in its relevance to results discussions, delivers weekly one-hour programmes, and also "unbundles" the components -- breaks the programme up into shorter segments which can be downloaded or listened to individually. Thus, you might want to listen to just that part of the Counterpoint February 14, 2011 broadcast on the "decline effect" -- or why much apparently validated published research can't be trusted -- but not those parts of the same broadcast dealing with Australian politics, limits to online publishing freedom, or the morality of long-term debt.

Hiding the pigs…


The only quibble I have with Counterpoint's unbundling is that the segment titles do not always tell you what a segment is actually about.

For example, the June 6, 2011 episode of Counterpoint included three components: "You've got to be rich to work for free", "Hunters, the real conservationists" and "David Burchell: Anger, politics and the new media". Looking at these, you might not (and I did not) expect that one dealt with the fascinating issue of how Australians are trying to deal with 23 million highly intelligent feral pigs that are roaming the country. I'm not sure this has anything to do with RBM, but it's interesting! When I pointed this out to my colleague Anne Bernard, who first led me to Counterpoint, and who listens to every episode in its entirety, her reply was "Armstrong, you have the attention span of a gnat! Just download the whole programme!"

But if, like me, your attention span argues against downloading an hour of material just to find out about the pigs, you can, as I did, just download the feral pig segment.

And if the pigs don't interest you, there are a number of other topics of potential relevance for results and indicator discussions in recent available podcasts of Counterpoint:



  • The December 6, 2010 episode dealing with the relationship between  expenditures on education and educational outcomes


  • The June 20, 2011 broadcast which contains two interesting indicator-related segments: one on how the quality of data collection instruments can affect data quality, and with it our conclusions about results; and another on what data tell us about the relative contributions to project results and organizational success made by senior management, mid-level project managers, and creative personnel.

Ockham’s Razor is another ABC programme which takes a slightly more academic approach to issues, with individuals making presentations on simple truths behind complex issues, rather than being interviewed.  The programme has 240 13-minute episodes going back to January 2006 available for download, and a large number of transcripts for programmes as far back as 1997. These include, among much else, discussion of the difficulties of relating and working with a bizarre field of indicators for earthquake prediction, how simple language and basic math can bring policy debates into perspective, and why effective and simple solutions to policy issues are not implemented.

As an example of how many episodes are available, and on what variety of topics, a search for "evidence" in the Ockham’s Razor archives, produces a list of several hundred presentations.

A Results-Relevant Programme from National Public Radio



Radiolab, a programme from U.S. National Public Radio, produces a one-hour episode every two weeks, and these can be downloaded as one entire episode, or you can choose, as with ABC's Counterpoint, to download component parts of the episode, lasting 10-30 minutes each. The format is much more story-telling than that of "More or Less" or "Counterpoint", and while dramatic liberties may sometimes be taken with the narratives, there is a lot of interesting material here. In total, by the end of June 2011, there were 46 one-hour episodes available. Not all of them are obviously related to results or indicators, but they are all worthy of attention. Two of the episodes I found particularly interesting were:
  • The June 2010 Radiolab episode "Oops", which tells three stories about unintended and very negative results growing out of projects with only good intentions, and
  • The October 2010 Radiolab episode on "Cities", particularly the component called "It's Alive?", which describes how speed (of talking and walking) can be used as an indicator of city culture, and how physicists have used walking speed to predict city size, average income, crime rates and a number of other variables related to the culture of different cities.


The bottom line:


BBC’s "More or Less" and ABC’s "Counterpoint" provide a good starting point for anyone wanting a little entertainment with their results and indicator discussions. There are dozens of other programmes out there that I haven’t covered, and no doubt many more that I am not even aware of.  Many of these may be of interest to you, and may also, as an incidental byproduct of your attention, provide new ways of looking at results and indicators.


Further listening

Referenced here:

BBC: More or Less: Behind the Stats
BBC: Thinking Allowed
BBC: Documentaries
BBC: Material World 
ABC: Counterpoint 
ABC: Ockham’s Razor
NPR: Radiolab
NPR: Krulwich Wonders

Other radio programmes of potential interest include:

BBC: The Reith Lecture Archives 
BBC: Start the Week, with Andrew Marr
BBC: Four Thought
BBC: File on Four 
Podcasts from the Guardian
Podcasts from the Scientific American 



_____________________________________________________________



GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks. For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website.



This post edited July 4, 2011 to update links.

Monday, May 30, 2011

Podcasts and RBM 1: How to use audio podcasts to reinvigorate thinking on indicators and results

Greg Armstrong --

Indicator discussions don’t have to be boring.   A wide range of audio podcasts, easily accessible to listeners throughout the world, are available online from the BBC, ABC and NPR.  This is the first of three posts surveying audio programmes available online, and relevant to results-based management.


Level of Difficulty: Moderate-complex, but entertaining
Primarily useful for: People who don’t have experience with downloading podcasts
Length: Usually varies from 15 minutes to an hour (MP3 format)
Limitations: Audio podcasts are difficult to reference, and follow up, compared to other media.

Who this post is for:

This is the first of three posts dealing with how, and which audio podcasts can be useful for people working on results frameworks and indicators.  This introduction explains why podcasts can be useful, what their limitations are, and how to use them. This post is intended primarily for people who do not already know how to get access to, or use podcasts.  

The second post in this series will survey the broad range of podcasts available from the BBC and Radio Australia, and one programme from U.S. National Public Radio. The third post will review in more detail one particular BBC programme, More or Less, which always has something useful to say about indicators.

Those readers who already know how to subscribe to, or download podcasts may wish to skip this post, and move on to the next two.

Why use audio podcasts in RBM? Because (gasp!) RBM can be boring

All of us who work with international development projects or with Results-Based Management are familiar with the reams of paper, log frames, risk management frameworks and charts generated in results and indicator discussions. These can put even the most enthusiastic proponents of results-based management into a coma.

But there is a range of very entertaining material available for listening that can reinvigorate interest in how results and indicator data can be manipulated, misrepresented, forged, and in some inspiring instances, creatively interpreted -- and not just in politics or development assistance, but in daily life. For this, the audio podcasts available for download or online listening from a number of sources are a useful and energizing source of not just learning but entertainment for people who work with results and indicator data on a regular basis.

Podcasts provide an escape from the drudgery of reading about RBM


I am a late adopter, someone slow to embrace new technologies, after an early, expensive and ultimately futile adoption of the Betamax in 1975. While my closest colleague has for many years been downloading not just music but documentaries and fiction to her MP3 player, I only grudgingly started to do so a few months ago, when I began an exercise regime that put me in a boring environment for an hour a day. Music doesn't provide the escape for me that it does for many people, and I wanted to use the time productively. Early misguided attempts at reading while exercising produced unintended (but in hindsight predictably disastrous) results.

My  colleague pointed me  to the BBC website  and its literally hundreds of podcasts; I continue to use it, but also moved on from there to the Australian Broadcasting Corporation’s smaller but worthwhile set of documentaries and then to the National Public Radio site.  Now I find that an hour of exercise is intellectually productive and, best of all, entertaining.  These podcasts provide insights on how other people deal with results  and indicators in the real world, challenging my understanding of issues, and ways of thinking about them, and providing me with alternative approaches to data analysis, often things that I had skipped over, or forgotten in my daily reading.  Some of these programmes are engrossing enough that I double my exercise time to complete them.


The drama of results and indicator discussions


The major attraction of using audio podcasts, for me, is this entertainment value. It is rare that a discussion, even on apparently boring topics related to results or indicators, will make its way to an audio podcast on any of the major radio networks unless there is an interesting or unusual twist to it. These programmes are often presented in a way that will stimulate the listener intellectually or emotionally, sometimes reawakening a dying interest in how to use data productively.

Listening to politicians or pharmaceutical manufacturers  twist data, and then face the challenge of someone who knows enough to ask pointed and challenging questions, is much more interesting than reading the same discussion in a journal, a newspaper or online.  Debates on issues such as health services, school quality, risk, crime, disastrously unintended results, and a wide range of other topics, can generate new ideas for people working with results frameworks, and struggling to recognize, generate or interpret convincing indicator data.  

Some programmes such as WNYC’s Radio Lab  deliberately dramatise the discussions to keep listeners involved, and that approach is effective. But most programmes benefit simply from focused questions, good editing, and the energy and passion of the people they are interviewing, to keep a listener’s interest.  Many of them remind me of the best indicator discussions in a project context, when stakeholders understand the importance of indicators for defining results and activities that are important to them, and look forward to and passionately engage in the discussions about what they mean.

Podcast length

Most of the podcasts I listen to are about 30 minutes long. Some programmes, such as ABC's Counterpoint, are an hour long, but listeners can download individual components of the episode, which might vary from 10 to 30 minutes in length. Some sites, such as Scientific American, have podcasts that last only a minute or two, while others last roughly 15 minutes.


Difficulties in referencing or sharing podcast data


The primary disadvantage of using podcasts as a source of new ideas is that it is very difficult to footnote or bookmark the programmes. Only a few podcasts provide transcripts of their audio programmes and, among those which do, such as ABC's Ockham's Razor, even fewer make use of the primary advantage of the internet -- web links.

With paper,  we can footnote references, drawing attention to individual words, sentences or ideas,  and move back through an article to check consistency or the spelling of a name or an organization.

With electronic data, available on websites, we can provide links, from a blog such as this, so readers can jump to original or alternative sources, to document or challenge an idea, and readers can easily supplement ideas by using search engines.

But if you download a podcast, and you find a startling new idea you want to reference, while you are, for example, jogging, climbing stairs, lifting weights, or walking, how do you do it?  I tried carrying a notebook and jotting down the ideas, but this is distracting and sometimes dangerous if you are exercising.
And it doesn’t work well in the rain.

In these cases the only useful way to actually use the podcast as a source of potential learning, and as a reference for other people (at least as far as I know), is to listen to it on a computer, then go to the podcast home page to note the web links to the individual podcast, and sometimes to note the running time of the particular quote within the podcast. Then, too, we can check the website's home page and links for supplementary information.

So, while I now often start listening to BBC's excellent More or Less as I exercise, for example, I often end up listening to it again in front of a computer, where I can pause the programme, make notes, rewind, or fast-forward to relevant sections of the discussion -- or jump to the web where I can seek supplementary information. I will be reviewing More or Less in greater detail in the third post in this series, and the difficulty of referencing individual stories in a programme will be illustrated more clearly in that post.

In any case, I assume everyone who reads this blog will have access to a computer – so it should be possible to go directly to some of the sites and programmes I list in my next post, and listen to them online.


The Mechanics of accessing and listening to podcasts


The most common format for podcasts is MP3. While not providing great sound for music (so I am told), this format is, certainly to my impaired hearing, good enough to deliver an audible conversation, debate or discussion. MP3 players such as the iPod all include software to play podcasts automatically. You can spend a lot of money on MP3 players if you want to, but there are perfectly serviceable models, such as the one I use, available in most countries for roughly $20 U.S. And if you decide you want to listen to these on the computer, any reasonably modern computer with a media player, such as the ubiquitous Windows Media Player, Apple's iTunes or one of the many free alternative media players, will automatically start playing these programmes once you click on them.

There are also other formats, sometimes proprietary, used by individual websites. The BBC, for example, while making podcasts for almost all of its programmes available in MP3 format when they are first put on the site, has a few that can only be listened to online with its BBC iPlayer. This requires an updated Flash player, and I have had uneven success in using it where internet connections are slow. Some older archived BBC episodes, from 2005 or earlier, may only be available in the RAM format, which requires RealPlayer or an alternative; these will still start automatically on most computers when you click on the file. A few of these BBC radio programmes, primarily music, are restricted in places like Canada by the BBC licensing of its products.

Listeners can also subscribe to audio programmes through aggregators such as iTunes or Google Reader, or directly through links on the podcast webpage. Episodes can then be delivered automatically to the computer or MP3 player. Personally, I prefer to select the individual programmes, read the background, and download them myself, but many people prefer the convenience of automatic delivery.
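For readers comfortable with a little scripting, the "automatic delivery" that aggregators provide is easy to reproduce by hand: a podcast is simply an RSS feed whose entries carry MP3 attachments ("enclosures"). The short Python sketch below is only my illustration of that mechanism, not anything supplied by the broadcasters; the feed address is a placeholder you would replace with the one listed on a programme's podcast page, and it assumes the third-party feedparser library is installed.

    import urllib.request
    import feedparser  # third-party library: pip install feedparser

    # Placeholder address -- substitute the RSS feed URL shown on the programme's podcast page.
    FEED_URL = "https://example.org/podcasts/some-programme/rss"

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:3]:  # the three most recent episodes in the feed
        for enclosure in entry.get("enclosures", []):
            if enclosure.get("type") == "audio/mpeg":  # the MP3 attachment for the episode
                filename = enclosure["href"].rsplit("/", 1)[-1] or "episode.mp3"
                print("Downloading", entry.get("title", "untitled episode"), "->", filename)
                urllib.request.urlretrieve(enclosure["href"], filename)

Aggregators such as iTunes do essentially the same thing on a schedule, which is what makes automatic delivery to your computer or MP3 player possible.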

There are, then, as far as I know, two primary ways of listening to some of this excellent material:

a) With earphones, downloaded to an MP3 player or smartphone, or
b) Through your computer, by clicking the appropriate link.

Most websites work with all of the major web browsers, but there are sometimes minor differences in how you download a podcast using Google Chrome, Internet Explorer, Opera, Firefox or Safari. Most of the time, on most sites, the link to the specific audio programme or episode you want will give you fairly straightforward instructions, either to click to download or to click to listen at your computer. Where it doesn't, left-clicking will usually play the programme immediately on your computer, while right-clicking will often download the whole programme for listening later, either on the computer or on an MP3 player.


Time limits on available programmes


Websites vary widely in how long they  will keep  an individual podcast publicly available.  Some do it as a matter of policy for  three months, others for a month, some only for a week and some, such as NPR’s Krulwich Wonders  -- basically a written blog -- appear to have only monthly episodes in audio, but many more episodes as written blog posts that can be read later.

In the case of the BBC podcasts, the length of time differs depending on the programme.  Some are available for download for years, others for only a week, after which they may completely disappear, or be available only for immediate listening at your computer, but not for download.

As the BBC's podcast website help page explains it:
 “But please don't forget that once you have downloaded a podcast episode, it is yours to keep forever and will not expire. Unfortunately, if you missed an episode and didn't download it within the period of availability we are not able to send you a copy.”
So, if you find something that is only potentially interesting, it is worthwhile downloading it first to a computer for further review, and then moving it, if you want, to an MP3 player, or simply keeping it for later listening. The files, because they are relatively low-fidelity recordings of conversation, do not take up as much space on the computer as higher-fidelity music files would.

What topics are available?

In my next post I will provide an overview of some of the most interesting radio programmes available, that are relevant to Results-Based Management.

The bottom line:

Podcasts can be difficult to work with as references, but they are stimulating additions to the dense written material we work with regularly, and can be a useful additional tool for people who think about and work with results and indicators.


Further reading on how to listen to podcasts:

BBC podcast help 
ABC podcast help
Apple iTunes podcast help
Advice on buying MP3 players



_____________________________________________________________


GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks. For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website.




This post edited to update links July 2, 2011

Friday, December 31, 2010

26 lessons about RBM from the 1990s remain valid today

[Updated November 2018]
Greg Armstrong --


Lessons learned about RBM in the last century remain valid in 2018.

Implementing Results-Based Management: Lessons from the Literature – Office of the Auditor-General of Canada

Level of difficulty: Moderate
Primarily useful for: Senior Managers of partner Ministries and aid agencies
Length: Roughly 18 pages (about 9,000 words)
Most useful section: Comments on the need for a performance management culture
Limitations: Few details on implementation mechanisms

The Office of the Auditor-General of Canada deals with the practical implications of results-based management, or with the failure of agencies to use RBM appropriately, as it conducts performance audits of a large number of Canadian government agencies. The Auditor-General's website, particularly the Audit Resources section, holds several documents under "Discussion papers" and "Studies and Tools" that are a reminder that many of the lessons learned fifteen years ago about RBM remain relevant today.


Who this is for:
26 Lessons on RBM - Reviewed by Greg Armstrong 


The paper Implementing Results-Based Management: Lessons from the Literature  provides a concise and relatively jargon-free summary of lessons from practical experience about how to implement results-based management. Its purpose, as the introduction notes, was to “assess what has worked and what has not worked with respect to efforts at implementing results-based management”.   It is shorter and more easily read than some of the useful but much longer publications on RBM produced since, and could be useful to agency leaders wanting a reminder of where the major pitfalls lie as they attempt to implement results-based management and results-based monitoring and evaluation systems.

The lessons reported on here about implementation of results based management remain as valid today as they were in 1996 when a first draft was produced, and in 2000, when this document was released.


Lessons learned about RBM in the last century remain relevant in 2010


Many of the lessons described briefly here are derived from studies of the field activities of agencies from North America, Europe, and the Pacific, going back at least twenty years. The 2000 paper is based on reviews of 37 studies on lessons learned about RBM, themselves published between 1996 and 1999, and builds on the earlier study, referenced briefly here, which reviewed 24 more studies produced between 1990 and 1995.

More recent reviews of how RBM or Managing for Development Results is -- or should be -- implemented in agencies such as the United Nations, including Jody Kusek and Ray Rist's 2004 Ten Steps to a Results-Based Monitoring and Evaluation System and Alexander MacKenzie's 2008 study on problems in implementing RBM at the UN country level, build on and elaborate many of the points made in these earlier studies, moving from generalities to more specific suggestions of how to make operational changes.

The 2000 paper from Canada's Office of the Auditor-General lists 26 lessons on how to make RBM work, and many of them repeat and elaborate on the lessons learned earlier. The lessons on effective results-based management as they are presented here are organized around three themes:

  • Promoting favourable conditions for implementation of results-based management
  • Developing a results-based performance measurement system
  • Using performance information

A brief paraphrased summary of these lessons will make it obvious where there are similarities to the more detailed work on RBM and results-based monitoring and evaluation done in subsequent years. My comments are in italics:



Promoting Favourable Implementation Conditions for RBM


1. Customization of the RBM system: Simply replicating a standardised RBM system won’t work. Each organization needs a system customized to its own situation.

  • The literature on implementation of innovations, going back to the 1960’s confirms the need for adaptation to local situations as a key element of sustained implementation.

2. Time required to implement RBM: Rushing implementation of results-based management doesn’t work. The approach needs to be accepted within the organization, indicators take time to develop, data collection on the indicators takes more time, and results often take more time to appear than aid agencies allocate in a project cycle.

  • Many of the current criticisms of results-based management in aid agencies focus on the difference between the time it takes to achieve results, and aid agencies’ shorter reporting timelines.

3. Integrating RBM with existing planning: Performance measures, and indicators, should be integrated with strategic planning, tied to organizational goals, and management needs, and performance measurement and monitoring need high-level endorsement from policy makers.
  • Recent analyses of problems in the UN reporting systems repeat what was said in articles published as long ago as 1993.  These lessons have evidently not been internalised in some agencies.
4. Indicator data collection: We should build management systems that support indicator data collection and results reporting and, where possible, build on existing data collection procedures.

5. Costs of implementing RBM: Building a useful results-based management system is not free. The costs need to be recognised and concrete budget support provided from the beginning of the process.

  • This is something most aid agencies have still not dealt with. They may put in place substantial internal structures to support results reporting, but shy away from providing implementing agencies with the necessary resources of time and money for things such as baseline data collection.

6. Location for RBM implementation: There are mixed messages on where to locate responsibility for coordinating implementation of RBM. Some studies suggested that putting control of the performance measurement process in the financial management or budget office "may lead to measures that will serve the budgeting process well but will not necessarily be useful for internal management". Others said that responsibility for implementation of the RBM system should be located at the programme level to bring buy-in from line managers, and yet another study made the point that the performance management system needs support from a central technical agency and leadership from senior managers.

  • The consensus today is that -- obviously in a perfect world -- we need all three:  committed high level leadership, technical support and buy-in from line managers.

7. Pilot testing a new RBM system: Testing a new performance management system in a pilot project can be useful before large-scale implementation – if the pilot reflects the real-world system and participants.

8. Results culture: Successful implementation requires not simply new administrative systems and procedures but the development of a management culture, values and behaviour that really reflect a commitment to planning for and reporting on results.

  • Fifteen years after this point was made in some analyses of the implementation of results-based management, the lack of a results culture in many UN agencies was highlighted in the 2008 review of UN agency RBM at the country level, and the 2009 UNDP handbook on planning, monitoring and evaluating for development results reiterates the old lesson that building this culture is still important for implementation of results-based management.

9. Accountability for results: Accountability for results needs to be redefined, holding implementers responsible not just for delivering outputs, but at least for contributing to results, and for reporting on what progress has been made on results, not just on delivery of outputs.

  • The need to focus on more than just deliverable outputs to make results-based management a reality was mentioned in some articles in the early 1990s, and reiterated in OECD documents ten years later, yet it remains an unresolved issue for some aid agencies, which still require reports just on deliverables, rather than on results.


10. Who will lead implementation of RBM: Strong leadership is needed from senior managers to sustain implementation of a new performance management system.

  • This remains a central concern in the implementation of results-based management and performance assessment. Recent reviews of aid agency performance, such as the evaluation of RBM at UNDP, show that strong and consistent leadership, committed to and involved in the implementation of a new RBM system, remains a continuing issue.

11. Stakeholder participation: Stakeholder participation in the implementation of RBM  -- both from within and from outside of the organization – will strengthen sustainability, by building commitment, and pointing out possible problems before they occur.

  • There is now a general acceptance – in theory – of the need for stakeholder participation in the development of a results-based performance management system but, in practice, many agencies are unwilling to put the resources – again, time and money – into genuine involvement of stakeholders in analysis of problems, collection of baseline data on the problems, specification of realistic results, and ongoing data collection, analysis and reporting.


12. Technical support for RBM: Training support is needed if results-based systems are to be effectively implemented, because many people don’t have experience in results-based management. Training can also help change the organizational culture, but training also takes time. Introducing new RBM concepts can be done through short-term training and material development, but operational support for defining objectives, constructing performance indicators, using results data for reporting, and evaluation, takes time, and sustained support.

  • A fundamental lesson from studies dating back to the 1970s on the implementation of complex policies and innovations is that we must provide technical support if we want a new system, policy or innovation to be sustained -- we can't just toss it out and expect everyone else to adopt it and use it.
  • Some aid agencies have moved to create internal technical support units to help their own staff cope with the adoption and implementation of results-based management, but few are willing to provide the same technical support to their stakeholders and implementation partners.


13. Evaluation expertise: Find the expertise to provide this support for management of the RBM process on a continuous basis during implementation. Often it can be found within the organization, particularly among evaluators.

14. Explain the purpose of performance management: Explain the purpose of implementing a performance management system clearly. Explain why it is needed, and the role of staff and external stakeholders.

Auditor-General of Canada web page, on lessons learned about implementing RBM

Developing Performance Measurement Systems



15. Keep the RBM system simple: Overly complex systems are one of the biggest risks to successful implementation of results-based management. Keep the number of indicators to a few workable ones but test them, to make sure they really provide relevant data.

  • Most RBM systems are too complex for implementing organizations to easily adopt, internalize and implement. Yet, they need not be. Results themselves may be part of a complex system.  But  simpler language can be used to explain the context, problems and results, and jargon discarded, where it does not translate -- literally in language but also to real world needs of implementers and ultimately the people who are supposed to benefit from aid.


16. Standard RBM terms: Use a standard set of terms to make comparison of performance with other agencies easier.


  • The OECD DAC did come up with a set of harmonized RBM definitions in 2002, but donors continue to use the terms in different ways, and, as I have noted in earlier posts, have widely varying standards (if any) on how results reporting should take place.  So simply using standardised terms is not itself sufficient to make performance comparisons easy.


17. Logic Models: Use of a Logic Chart helps participants and stakeholders understand the logic of results, and identify risks.

  • Logic Models (as some agencies refer to them) were being used, although somewhat informally, 20 years ago, in the analysis of problems and results for aid programmes. Some agencies such as CIDA [now Global Affairs Canada]  have now brought the visual Logic Model to the centre of project and programme design, with some positive results. The use of the logic model does indeed make the discussion of results much more compelling for many stakeholders, than did the use of the Logical Framework.

18. Accountability for results: Make sure performance measures and reporting criteria are aligned with decision-making authority and accountability within the organization. Indicator data should not be so broad that they are useless to managers. If managers are accountable for results, then they need the power and flexibility to influence results. Managers and staff must understand what they are responsible for, and how they can influence results. If the performance management system is not seen as fair, this will undermine implementation and sustainability of results based management.


19. Credible indicator data:   Data collected on indicators must be credible -- reliable and valid.   Independent monitoring of data quality is needed for this.

  • This remains a major problem for many development projects, where donors often do not carefully examine  or verify the reported indicator data.

20. Set targets:  Use benchmarks and targets based on best practice to assess performance.

  • Agencies such as DFID and CIDA are now making more use of targets in their performance assessment frameworks.

21. Baseline data:   Baseline data are needed to make the results reporting credible, and useful.

  • Agencies such as DFID are now concentrating on this. But many other aid agencies continue to let baseline data collection slide until late in the project or programme cycle when it is often difficult or impossible to collect.  Some even focus on the reconstruction of baseline data during evaluations – a sometimes weak and often ultimately last-ditch attempt to salvage credibility from inconsistent, and unstructured results reporting.
  • Ultimately, of course, it is the aid agencies themselves which should collect the baseline data as they identify development problems.  What data do international aid agencies have to support the assumptions that first, there is a problem, and second that a problem is likely to be something that could usefully be addressed with external assistance? All of this logically should go into project design. But once again, most aid agencies will not put the resources of time and money into project or programme design, to do what will work.

Using Performance Information



22. Making use of results data: To be credible to staff and stakeholders, performance information needs to be used – and be seen to be used. Performance information should be useful to managers and demonstrate its value.

  • The issue of whether decisions are based on evidence or on political or personal preferences remains important today, not just for public agencies but, as it has been recently argued, for private aid.


23. Evaluations in the RBM context: Evaluations are needed to support the implementation of results based management. “Performance information alone does not provide the complete performance picture”. Evaluations provide explanations of why results are achieved, or why problems occur. Impact evaluations can help attribute results to programmes. Where performance measurement is seen to be too costly or difficult, more frequent evaluations will be needed, but where evaluations are too expensive, a good performance measurement system can provide management with data to support decision making.

  • Much of this is more or less accepted wisdom now.  The debate over the utility of impact evaluations, primarily related to what are sometimes their complexity and cost, continues, however.

24. Incentives for implementing RBM: Some reward for staff – financial or non financial – helps sustain change. This is part of the perception of fairness because “accountability is a two way street”. The most successful results based management systems are not punitive, but use information to help improve programmes and projects.

25. Results reporting schedule: Reports should actually use results data, and regular reporting can help staff focus on results. But "an overemphasis on frequent and detailed reporting without sufficient evidence of its value for public managers, the government, parliament and the public will not meet the information needs of decision-makers."


26. Evaluating RBM itself: The performance management system itself needs to be evaluated at regular intervals, and adjustments made.


Limitations:

 This study is a synthesis (as have been many, many studies that followed it) of secondary data, a compilation of common threads, not a critical analysis of the data and not, itself, based on primary data.

It is only available, apparently, on the web page, not as a downloadable document. If you print it or convert it to an electronic document, it runs about 18 pages.

The bottom line:

The basic lessons about implementation of RBM were learned, apparently, two decades ago, and continue to be reflected throughout the universe of international aid agency documents, such as the Paris Declaration on Aid Effectiveness, but concrete action to address these lessons has been slow to follow.

This article still provides a useful summary of the major issues that need to be addressed if coherent and practical performance management systems are to be implemented in international aid organizations, and with their counterparts and implementing organizations.


Further reading on Lessons learned about RBM



OECD’s 2000 study: Results-based Management in the Development Cooperation Agencies: A review of experience (158 p), summarizes much of the experience of aid agencies to that point, and for some agencies not much has changed since then.

The World Bank's useful 2004, 248-page Ten Steps to a Results-Based Monitoring and Evaluation system written by Jody Kusek and Ray Rist, is a much more detailed and hands-on discussion of what is needed to establish a functioning performance management system, but it is clear that some of their lessons, similar to those in the Auditor-General's report, have still not been learned by many agencies.

John Mayne's 22-page 2005 article Challenges and Lessons in Results-Based Management summarises some of the issues arising between 2000 and 2005. He contributed to the earlier Auditor-General's report, and to many others. [Update, June 2012: This link works sometimes, but not always.]

The Managing for Development Results website, has three reports on lessons learned at the country level, during the implementation of results-based management, the most recent published in 2008.

The 2009 IBM Center for the Business of Government’s 32-page Moving Toward Outcome-Oriented Performance Measurement Systems written by Kathe Callahan and Kathryn Kloby provides a summary of lessons learned on establishing results-oriented performance management systems at the community level in the U.S., but many of the lessons would be applicable on a larger scale and in other countries.

Simon Maxwell’s October 21, 2010 blog, Doing aid centre-right: marrying a results-based agenda with the realities of aid  provides a number of links on the lessons learned, both positive and negative, about results-based management in an aid context.



_____________________________________________________________




GREG ARMSTRONG
Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website
