

Thursday, March 01, 2012

Indicator data visualizations: The Guardian Datablog and Data Store


“Facts are Sacred”, The Guardian Datablog says, and the Guardian challenges us to examine those facts carefully, as its recent story on the "3 Little Pigs" illustrates.

While creative visualizations of indicator data undoubtedly make the data more interesting, without an informed explanation of what the information means, visualizations can often mislead. The Guardian Data Store and the Guardian Datablog provide an example of how journalists bring images, indicator data and intelligent analysis together.

 [Edited to update links, June 2018]
Level of Difficulty: Moderate to Complex
Primarily Useful for: Aid agency communications managers, creative programme reporting staff
Most useful: The Development Data directory
Limitations:  The raw data searches work only intermittently, and the Kindle format reduces the utility of the Ebook.

Who this is for

Not everyone will be able to produce the kind of data visualizations presented by the Guardian, but we can all benefit from the examples the news site provides of where to get data, how to present it, and what kinds of questions are necessary to determine the reliability and validity of indicator data.

Those who have the resources to actually produce this kind of visualization will probably not be working at a project level.  But aid agency communications staff, and some large agencies at the country level, may have the intellectual and financial resources to do what the Guardian data team does.

Background - data visualization

Many development workers, project managers, monitors and evaluators, wading through dense international development project reports and evaluations, may wish that more attention were paid to making indicator data more understandable.

The late Hans Rosling, for example, is well known for creative presentations on indicators.
These are intended, as the Gapminder Institute  puts it, to “unveil the beauty of statistical time series by converting boring numbers into enjoyable, animated and interactive graphics”.  But doing this at the project and programme level in international development can be challenging.

And as entertaining as animations, graphs or pictures are, by themselves they  can be misleading.  Hans Rosling  does more than provide the graphics, of course. He interprets what the data and the graphics can tell us, in clear, compelling language.

The Gapminder Institute is not alone, however, in presenting – and interpreting - indicator data in a compelling manner.

The Guardian Data Store and Datablog

News websites reach millions of people a day, and among these the most creative in obtaining and using data in compelling visualizations is by far The Guardian.  The Guardian newspaper’s online site, with more than 29 million unique visitors in December 2011 alone, is the fifth most visited newspaper site, and among the most respected sites on the internet.

The most interesting part of the Guardian site (aside from the 3 Little Pigs ad), for many of us working in international development, is the Guardian Datablog, and the associated Data Store, its directory of all of the statistics the Guardian uses as it reports the news.  This includes a World Government Data search, a Development Data search, examples of featured data visualizations and a link to an electronic version of Facts are Sacred, a new book by Simon Rogers, one of the Guardian’s news editors, on how The Guardian collects and presents data. He is also editor of both the Data Store and the Datablog.

The Guardian itself, in its main economic, political, health or education pages, publishes mainline news stories. What the Datablog does, as far as I can see, is highlight the stories making innovative use of publicly available data, explain where the data come from, and then challenge readers to question it, or do more with the information. In some cases the Guardian Datablog appears to produce its own visualizations from raw data sources, but in most, it seems, the Datablog team provide a link to, or a variation on, another agency’s visualization – and then they provide The Guardian’s explanation of what the information means.

Scope of the Guardian’s data

The Manchester Guardian, one of the UK’s oldest newspapers, was founded in 1821 and established its online site almost 175 years later, in 1995. The Guardian Datablog was established in 2009 to explore what editor Simon Rogers refers to as data journalism – journalism which mines available data looking for hidden or emerging stories.

As he points out, data journalism is not new – good journalists have been using obscure data as the basis for breaking news for centuries.

The Datablog, between January 15, 2009 and July 15, 2011, when it produced a summary of all of its data journalism, listed 1,407 articles or blog posts, including links to all of the available underlying data in spreadsheet format.  It is unclear why that information has not been updated since it was posted in 2011, but by my very rough estimate as I write this in March 2012, there have been approximately 490-500 additional posts between August 2011 and February 28, 2012.  This brings the total to approximately 1,900 articles - all with some form of graphic – tables, charts, static or interactive, simple or complex – and all challenging us to use the underlying data ourselves in whatever way makes sense to us.

This is a mind-boggling number of analyses, given the detail involved - over 10 such articles a week for two and a half years – and it does not necessarily represent the true extent of the work The Guardian has done.

As just one example, a story about malaria by The Guardian's data editor, referenced in the Datablog on February 3, 2012, was preceded and followed within a week by at least three other stories on malaria: in the main section of the Guardian (by the health editor), in the Guardian Weekly, and in the New Review section of the Guardian’s sister publication, the Observer.

Old data - new presentations

An interesting illustration of both the history of data journalism, and of how the Guardian gets - and uses - data can be found in a Datablog post of September 26, 2011. The first edition of the Guardian, in 1821, had a story on school funding and attendance, based on leaked data, and in 2011 the Guardian Datablog used the same data, repackaged using today’s technology.

The original story - as we would expect within the constraints of the day’s technology – used print, and tables.  But the 2011 review presents the same data using the free IBM ManyEyes software in a form which allows users to manipulate the data, sort it and see the results in different formats.

1821 presentation of indicator data - click image to enlarge
Copyright – Guardian News & Media Ltd. 
1821 data, 2011 visualization - click image to enlarge
Copyright – Guardian News & Media Ltd.
1821 data, 2011 visualization - click image to enlarge
Copyright – Guardian News & Media Ltd.

And going further, the Datablog provides the data in spreadsheet form, so readers can download it, and analyse it using any programme they think will produce interesting variations on both presentation and insight.
1821 data in a 2011 spreadsheet -  click image to enlarge
Copyright – Guardian News & Media Ltd.
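Once downloaded, a spreadsheet like this can be explored with any tool the reader likes. As a minimal sketch in Python with pandas, here is the kind of re-sorting the interactive 2011 version offers in the browser (the districts, figures and column names below are invented for illustration, not taken from the actual 1821 data):

```python
import pandas as pd

# Hypothetical sample rows; the real Datablog spreadsheet has many more
# entries and different column names.
rows = [
    {"district": "Salford",    "schools": 14, "scholars": 2460},
    {"district": "Manchester", "schools": 35, "scholars": 7160},
    {"district": "Eccles",     "schools": 6,  "scholars": 980},
]
df = pd.DataFrame(rows)

# Sort districts by number of scholars, largest first.
by_scholars = df.sort_values("scholars", ascending=False)
print(by_scholars.iloc[0]["district"])
```

For a real analysis, the `rows` literal would be replaced by `pd.read_csv()` or `pd.read_excel()` pointed at the downloaded file.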

Indicator data in context

But visualizing the data is one thing and explaining it is another. I have seen project reports with fine graphics, but confusing narratives, and incomprehensible results.

Putting data in context is something the Guardian does well in most of its reporting, using, for the most part, plain, clear language to explain what the charts and graphs mean.    In this particular post on education in 1821, the importance of the story, and the context, are explained in an accompanying analysis to the 2011 Datablog post.

It is worth noting that the url – the web address – for the Datablog is a subset of the news url: "", and its editor is a news editor of the Guardian.

“Data journalism is not graphics and visualisations. It's about telling the story in the best way possible. Sometimes that will be a visualisation or a map (see the work of David McCandless  or Jonathan Stray).
But sometimes it's a news story. Sometimes, just publishing the number is enough.
If data journalism is about anything, it's the flexibility to search for new ways of storytelling. And more and more reporters are realising that. Suddenly, we have company - and competition. So being a data journalist is no longer unusual.
It's just journalism.”

Finding the indicator visualizations relevant to development

But with almost 2,000 Datablog examples of data visualization – and probably several thousand accompanying news stories - how do you find those related to development?  Most of the stories and visualizations are in fact UK-oriented.  But there are dozens with direct relevance to development workers, project managers and aid agencies.

Unfortunately, for a site so focused on data accessibility, narrowing down the huge amount of material to specific development topics with data visualizations is not always intuitive on the Guardian site.

Readers going to the Data Store site, which provides links to all of the data resources, will see at the top a list of the 5 or 6 most recent posts from the Datablog, and a link to older posts.  Those older posts are listed at roughly 15 per page, going back apparently as far as 2009, which would mean clicking manually (or digitally) through 125 or so pages.

Given the nature of what is there, I can recommend it as an entertaining manner of passing time, but for readers looking for something specific, it is far too time consuming to be workable.

The Guardian data choice: News stories, indicator visualizations or raw data?

The Guardian site provides access to data in at least three ways.

1. Searching for development news

Searching for news stories relevant to development issues is relatively easy:  From the main Guardian page  clicking on "Development" takes the reader to the Global Development section.

This has a dedicated search box on the right, at the top.

Copyright – Guardian News & Media Ltd.

Hundreds of news stories can be obtained on development topics through this search.  Some of these include maps, charts or interactive visualizations of underlying data, although most do not.

2. Raw indicator data

Raw indicator data - with no accompanying interpretation or visualization - is also available, from time to time, for those who want to sort and interpret the data themselves.  There are search boxes for both World Government Data and for Global Development Data on the site. Both were difficult to use at the beginning of 2012, and while you may get lucky and get data now, as I did earlier today, access is not consistent.

It is rare to find news editors, or their teams, on an international site such as this who actually reply to readers' comments, let alone to their emails.  But a major strength of the Guardian is the responsiveness of the data team.  After I pointed out my problems with search to editor Simon Rogers, I found within hours that the Guardian data searches did start to work, although that success proved short-lived.  A few hours later, the search boxes were not functional.

13,000 sets of raw indicator data - when the "search" works

When it works, and this seems intermittent, the World Government Data search provides links to raw data, usually in spreadsheets but occasionally in other formats, data which users can download and manipulate for themselves.  There are roughly 13,000 such data sets sometimes available through this Guardian portal, from the US, UK, Australian, Canadian, New Zealand and Spanish governments, on at least a dozen topics such as agriculture, education, environment, health, or population.

The Global Development Data search, which also works only intermittently, is slightly more difficult to find on the Data Store page, but when it does work, it provides links to similar types of data - over 5,000 sets in all, primarily from DFID, the World Bank and the UN Office for Coordination of Humanitarian Affairs.  These are, again, usually data provided in spreadsheet or xml format which can be used (by those with the technical skills) with free software available online, to make new visualizations.
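The step from one of those downloaded data sets to a simple visualization is short with free tools. A minimal sketch in Python, using pandas and matplotlib (the indicator series here is invented for illustration; a real analysis would load the downloaded spreadsheet instead):

```python
import os

import matplotlib
matplotlib.use("Agg")  # render to a file, no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical indicator series of the kind found in the Global
# Development Data sets; values are invented for illustration.
data = pd.DataFrame({
    "year": [2005, 2006, 2007, 2008, 2009, 2010],
    "enrolment_pct": [78.2, 79.5, 81.1, 83.0, 84.6, 86.3],
})

fig, ax = plt.subplots()
ax.plot(data["year"], data["enrolment_pct"], marker="o")
ax.set_xlabel("Year")
ax.set_ylabel("Primary enrolment (%)")
ax.set_title("A minimal indicator chart from a downloaded data set")
fig.savefig("enrolment.png")
print(os.path.exists("enrolment.png"))
```

This is the static end of the spectrum; interactive versions of the same chart need tools like the Many Eyes service mentioned above, or a JavaScript charting library.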

MDG data

What I found particularly interesting about the way this is organized is that, when the data search is functional, we can sort the indicator data by agency - but also, 151 of the data sheets are sorted by their relevance to the different Millennium Development Goals, and 80 in terms of their relevance to Millennium Development Targets.

There are, so far, no links that I could find to other donor agencies such as AusAid, CIDA, or USAID, but these may be added as those agencies, some of which do make data available on their own sites, see the utility of the Guardian links.

It is genuinely frustrating, however, to find such a potentially useful tool that is also this unreliable.

3. Indicator data visualizations

Getting to the lighter and more entertaining part of the data search – the development indicator data visualizations – is more time consuming, but also more reliable than the raw data search. There is no direct search box for such visualizations, but there are at least 4 ways to find the stories and the data visualizations that may be of interest to readers:

A Graphics Link  - For readers who don’t want to look at spreadsheets but just want the stories and more interesting graphics, 694 of these can be found at the graphics link.

A Directory - Readers can go to the bottom of the Data Store page, where there is a form of directory: a list of 8 categories, covering 49 sub-topics – and, apparently, all 1,900 posts since 2009.

Click image to enlarge
Copyright – Guardian News & Media Ltd.

Under “World”, for example, there are 10 sub-categories, one of which is “Development Data”.  Clicking on that will take us to 6 pages with roughly 80 stories, including dozens of articles and graphics posted just since November 2011.

There are hundreds more available also, from the period prior to November 2011.

An Alphabetical list - Readers can go to the A-Z data search at the top of the Data Store.

Copyright – Guardian News & Media Ltd.

This link gives us a list of roughly 180-200 topics.  Some development-related topics are listed here, alphabetically, but others are not.

“Literacy”, for example, is listed, providing 5 stories, one of which is directly relevant to development work, the rest focused on the UK.  

The Malaria stories, however, are not listed alphabetically here, but are included among 70 other stories listed under “health”, most of which are focused on the UK.  So, using this method requires a bit of attention and some lateral thinking.

A Data spreadsheet - The fourth method of finding data stories and visualizations, at least as far as I can see, is to go to the July 2011 post “All of our data journalism in one spreadsheet”, which I referred to at the top of this review. With its 1,407 posts, this has more than enough to keep anybody busy, and entertained, for weeks.

If it were up to date, it would be even more useful.

What is particularly useful about this database of stories, even in its current state, is that readers can sort the available stories
  • By title, alphabetically,
  • By whether they have downloadable spreadsheets with the original data (over 1,100 do have the original data), 
  • By the number of times the story was referenced by users of Twitter in their “retweets” to others,
  • By the number of online comments.

The default order runs from most recent (August 2011) to earliest (January 2009), but once re-sorted, the list is not easily restored to that order.
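Those sorts can also be done offline, once the spreadsheet is downloaded. A sketch in Python with pandas, using invented rows and assumed column names (the real index has 1,407 entries and its columns are not named like this):

```python
import pandas as pd

# Hypothetical slice of the "all of our data journalism" index.
index = pd.DataFrame({
    "title":    ["Malaria deaths", "UK schools 1821", "US debt ceiling"],
    "has_data": [True, True, False],
    "retweets": [312, 95, 540],
    "comments": [44, 12, 210],
})

# Keep only posts with downloadable spreadsheets, most-retweeted first.
popular = (index[index["has_data"]]
           .sort_values("retweets", ascending=False))
print(popular["title"].tolist())
```

Because the filtering happens in our own copy, re-sorting back to the default date order is a one-liner, which the online version does not offer.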

Links to the original visualizations

Where the Guardian does make use of other people’s graphics and visualizations the Datablog provides links to the original sources.

Going to those original sources can sometimes provide more arresting and more interactive data visualizations than the Guardian itself presents.

As just one example, take the post on malaria.  The Guardian graphic is interesting but static, although the story also includes an interactive table.  One of the sources for the underlying data is the University of Washington’s Institute for Health Metrics and Evaluation, and the Institute's different visualizations are all interactive, and much more engaging than the version of the malaria graphic found on the Guardian site.

I will explore this and other such sites, in a subsequent review.

Putting a critical eye to data – a challenge, not a limitation

The Guardian makes most of the data it works with available to its readers in spreadsheets. But as the Guardian itself notes about the underlying data for its World Government Data Store,

“Please keep in mind that the data provided in the Guardian's World Government Data API is aggregated from the source sites we are tracking and is provided on an "as is" basis. That means we do not check the accuracy or completeness of the data, nor are we able to grant you a licence to use it. If you wish to republish any of the data, you will need to check that such reuse is permitted by the source site, by following the link guidelines and usage terms and conditions on each site. You are solely responsible for what you publish.”

I have been described by one frank colleague as someone working with "a distinct spreadsheet dysfunction", and I do not think I am alone in this affliction among those who will be looking at the Guardian site. Even given the resources of the Guardian, there are occasionally problems – either with the way the information is explained or, perhaps, simply in how people like me understand (or fail to understand) the explanations.

Take, for example, the August 2, 2011 post on the U.S. debt ceiling.  It makes some interesting points about who has been in power during the periods when the ceiling was raised, but actually looking at the underlying data, I found at least two things confusing about it, which may, however, have been clear to other readers:

  • There are references to what the net debt will be in 2013 and 2014, but it was unclear to me where those dates came from.  I assume they were from budget projections, but the fact that the figures in the table for the debt ceiling – and GDP, for that matter – in 2014 are substantially lower than in 2011 is information that could, one would think, justify some explanation.  Clicking on the link to download the full spreadsheet, however, takes us not to data on the debt ceiling but to a spreadsheet on who holds US debt – the subject of a different, November 22, 2011 Datablog post.
  • The second, more minor issue was a matter of proof-reading.  One line in the post refers to the fact that “Ronald Reagan increased the debt ceiling by 23 times”, which would have put the debt ceiling somewhere near 40 trillion dollars when he left, when what I think was meant was that he increased it on 23 separate occasions (from roughly $2 trillion to $4 trillion).

    This, of course, is the point of the challenge that the Guardian Datablog puts at the end of most of its posts:

    “You can download the full data below. What can you do with it?”
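Taking up that challenge on the smallest possible scale, the debt-ceiling arithmetic above can be checked in a few lines of Python (the dollar figures are the post's rough numbers, not independently verified):

```python
# The post's rough figures, in trillions of dollars.
start = 2.0   # approximate ceiling at the start of the Reagan years
end = 4.0     # approximate ceiling at the end

# Reading "increased by 23 times" literally multiplies the ceiling:
literal = start * 23
print(literal)  # 46.0 -- "somewhere near 40 trillion dollars"

# Whereas 23 separate increases from $2tn to $4tn imply, on average,
# a much smaller step each time:
per_increase = (end - start) / 23
print(round(per_increase, 3))
```

A two-minute check like this is exactly the kind of reader scrutiny the Datablog invites.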

    Limitations – Ebook formatting

    I started this post a month ago intending to review the Kindle version of Simon Rogers’ book, Facts are Sacred: The Power of Data, which explores in more detail than the web pages how the Guardian gets and uses data.  The book costs roughly $4, a small price given the contents, but while it is interesting, it is not nearly as useful as the Datablog and Data Store sites, which are, after all, free.

    The limitations of the book are not derived from the information, but appear to be inherent to the Kindle, and perhaps other Ebook, formats.  I have yet to meet anyone working in international development who has a Kindle – and only one who uses an iPad or Apple desktop. Of the roughly 90 subscribers to this blog, most of whom are in Asia, Africa or Latin America, only two appear to access it from a device using an Apple operating system.  But Kindle and iPad versions were, in February 2012, the only two formats in which this Ebook was available.

    I know people who do use the Kindle in North America, for recreation, and some of them think highly of it for these purposes.  Readers like me, who don’t have the Kindle device, but want to read the Facts are Sacred book can, indeed, as I did, download from the Amazon website free software which will permit us to read a Kindle publication on our desktop computer, after we have purchased it. But I found that the formatting, the relative paucity of links in this version of the book, the number of dead links, and the difficulties in copying text if we want to reference material for research or reporting, all make the Kindle format of very limited utility to those of us who want to use it for professional, rather than entertainment purposes.

    There are publishers, however, who make Ebooks available in PDF format.  Although, as Simon Rogers writes, the PDF format may be “the worst format for data known to humankind”, my experience is that PDFs are superior - to the Kindle format at least - for those reading reports for professional purposes. For Luddites like me, being able to see and work with recognisable pages and consistent formatting is comforting.

    While the iPad version of this particular Ebook may be more interactive than the Kindle version, very few people working in the field on development projects can afford one, and most, in any case, use Windows-based computers.

    Some publishers produce Ebooks that can be used by anyone with a computer. O’Reilly Books, for example, produces Ebooks in formats which can be viewed on the iPad, the Kindle, personal computers, smartphones or other devices, using both formats specific to those devices and PDF format.  Readers can annotate, copy and print the material, and get updates to outdated books. The prices for any individual publication are significantly higher than most of those available for the Kindle on Amazon’s site – but we can spend $15 for an Ebook from O’Reilly and read it on our existing computer, or pay $150-$300 for a Kindle or iPad, then pay $3 for the book, and face the Kindle- and iPad-specific problems of trying to use it for reference.

    Nevertheless, aside from the issues of layout and links, the Facts are Sacred book makes some interesting points about how the Guardian obtains and interprets data. If it comes out in a more usable format, it would be a useful companion to the Data Store and Datablog sites.

    Given the Guardian’s commitment to open data, it would be helpful to see this available in PDF (even behind a paywall if necessary) or in another open format, so people with limited resources can read it.  It is welcome news that the book will be available in paper, but it would be even better if a more accessible electronic version were produced.

    The bottom line

    Anyone who wants to see how creatively indicator data can be presented will enjoy the Guardian site, and could easily spend hours exploring what is available. But the process of transforming mundane data sets into dynamic interactive presentations requires the kind of resources (of time, technical expertise and money) that most individuals and small organizations do not have.

    In any case, as the Guardian sites make clear, visualizations are not the end of the story.  Checking data for validity, reliability and context is as essential to journalists as it is to all of us as we try to make our reporting credible.


    Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website

    Tuesday, January 31, 2012

    Online RBM Training -2: The UN's Programme Performance Assessment in Results-Based Management

    --Greg Armstrong --

    The UN’s Programme Performance Assessment in Results-Based Management is a jargon-laden, time-consuming and partially out-of-date production that, whatever its original merits, is too frustrating to be useful today. There are more productive ways to spend your time.

    Level of Difficulty: Moderate to Complex
    Primarily Useful As: Difficult to say - a history of UN RBM jargon?
    Most useful: Section 4-5 on data collection and reporting
    Limitations: Out-of-date RBM language, jargon-laden, boring format


    Online RBM training is unlikely to provide the hands-on practice and the opportunities for group-process learning which are the foundation of effective Results-Based Management training and implementation, but some courses are definitely better than others.

    After my December 15, 2011 post describing the University of Wisconsin’s effective interactive online results-based management course, I was asked by one reader for my opinion of other online RBM courses. The University of Wisconsin’s Logic Model course is, in my opinion, excellent, making the most of the interactive opportunities available in the absence of live training.

    On the other end of the spectrum, however, is an online course available from the UN Office of Internal Oversight Services: Programme Performance Assessment in Results-based Management.

    Google search results - Online Results-Based Management Training
    CLICK image to enlarge

    If we do a plain word search for "online results based management course" (without the quotes) on Google's .com search, we will get, depending on the day, more than 80 million hits, and on page one of this search – at least as of January 31, 2012 – the first and third results shown are for this specific UN RBM course.

    Those search results may vary slightly given country-specific versions of the search engines, but Microsoft's Bing and Yahoo search return several million results on this topic too, again with this UN RBM course ranking very high in the search results.

    The highest ranked result --with a 2011 date-- is UNESCO’s Open Training Portal, and if we click on the UNESCO link, it takes us to what appears to be the home of the course, the UN’s Office of Internal Oversight Services, which is number 3 on the search list.  The course also appears as a link in some other UN documents.

    So - how does this highly-ranked course stand up to scrutiny?

    A disappointing use of the web

    For the general development community, this online RBM course is not worth the time it will take to complete.  With over 150 screens, it could easily take 3-4 hours, and most of us could make much better use of written manuals to get the same, and more detailed, information in a shorter period.

    For UN staff it is possibly of greater use than for the general public, other implementing agencies or national partners, but there are, even here, some limitations. It may appear unfair to criticise the lack of utility of this course for the general development community, when it was intended primarily as an in-house United Nations training tool when it was originally designed.  The problem is that, for any user, UN or not, there is little that is compelling in the way this is put together.

    Not an interactive course

    This RBM course makes no use of the engaging possibilities of interaction that the University of Wisconsin used to advantage in an online Logic Model course designed at least 3 years before this UN course was published.

    At the very lowest level of potential interactivity, the menu on the UN course is extremely limited.  Readers can skip to one of 4 “sessions”, but to move to sub-themes inside them - after quitting and then rejoining the course, for example - laborious clicking is required, slide by slide, forwards or back to potentially interesting topics.

    There is some interesting material here, but nothing you can’t find more easily in printed handbooks or electronic files. The content of this course is basically material taken from other available online RBM manuals and documents.

    The exercises, where they occur, simply ask the viewer to download Microsoft Word documents and complete them.

    The few tests of skill, such as matching technical terms such as “Impact”, “Objective” or “Indicator” with definitions either require the viewer (and I hesitate to call anyone using this a learner) to print out lists and match them up with a pen, or require a series of simple “either-or” choices that are too simple to be useful.

    "Interactive" pen and paper
    CLICK image or link to enlarge

    This course, then, is basically just a 152-slide PowerPoint presentation. There is nothing wrong with putting PowerPoint presentations online, if they are creatively put together.  A good PowerPoint can be engaging, applying the limited but still useful possibilities for animation that PowerPoint has, so that ideas are sequenced and highlighted.

    Interesting substantive points about data collection and reporting

    The section on data collection (section 5, slides 101-126) and the section on reporting results using the Story Pyramid (roughly slides 138-146) actually have some useful material, but both could have been much better had the material been presented in a more engaging and interactive manner.

    The slides on qualitative and quantitative data, and causality have some interesting ideas, for example.

    But even here, what we see in this screenshot is exactly what we get – the whole page is dropped in front of the viewer in a single load: no drop-downs, no sequencing, no animation, no emphasis – essentially just what we would get, with much less trouble, if we read one of the many PDF manuals or Word documents on results-based management available from UN sources.

    Discussing qualitative and quantitative data collection - static format
    CLICK image or link to enlarge

    A substantial amount of the text in these screens is made from photographs of text – jpg files inserted into the slide.  Pasting text like this is a bad idea even for PowerPoint presentations where viewers have a big screen, and a facilitator to wake them up with a joke or two – but for a static, online view, it makes reading difficult.

    Jargon and out of date RBM terms

    This course slips into academic jargon from time to time, and many of the 152 slides are text-heavy.  References to the “post hoc ergo propter hoc fallacy” and the “dichotomy between quantitative and qualitative research” may show the erudition of the writers, but do nothing for intercultural adult learning.

    Another potential problem is the outdated UN RBM jargon. Take the definition of "Outputs" for example:

    Outdated results definitions
    CLICK image or link to enlarge

    For a number of years the UN has been moving, very slowly, towards definitions of Outputs which see them more as a result – a real change in capacity – rather than just the completed activities that so many field-level projects appear to settle for in their reporting. The current (as of 2011) UN definition of "Output", for example, from the UNDG RBM Handbook, looks like this:

    “Outputs are changes in skills or abilities, or the availability of new products and services that must be achieved with the resources provided and within the time-period specified. Outputs are the level of result where the clear comparative advantages of individual agencies emerge and where accountability is clearest. As stated in section 1.7, UNDAF results should be formulated in change language.” [p.13]

    Even the 2007 UNDG technical brief on Outputs, written within a year of the apparent first appearance of the UN Office of Internal Oversight Services online RBM course, had a more complete view of how Outputs should be treated in reporting:

    "You may be tempted to list things like workshops and seminars as outputs. After all, they are deliverable and some workshops can be strategic if they gather decision takers in one room to build consensus.  But, in most cases, workshops and seminars are activities rather than outputs. And remember that outputs are not completed activities – they are the tangible changes in products and services, new skills or abilities that result from the completion of several activities. "   [p.2 - italics added]

    The same statement is included in the 2011 updated Technical Briefs on Outcomes, Outputs, Indicators, and Risks & Assumptions, in the brief on Outcomes [p. 5], so the idea of accountability for genuine results is not disappearing.

    But while some in the UN are conscientiously trying to move staff towards reporting on real results, this programme performance assessment course defines Outputs as essentially completed activities – and if people take this definition as current, it will undermine UN attempts to move towards genuine results reporting.

    The Black Hole in UN Results Reporting

    The problem with much of the UN results framework, from the bilateral donors’ point of view, is that the ambiguity over whether an Output – the primary level of reporting for individual UN projects – is a completed activity or a genuine change in capacity has given unambitious UN project managers an excuse for easy reporting on completed activities.

    Meanwhile, the bilateral donors who actually provide the money for most of these projects have to report on real results.  This UN results-based management course uses the older definition of an Output, and because it carries the continued imprimatur of the Office of Internal Oversight Services, it reinforces the laziest side of UN reporting.

    This leaves a great black hole in the results chain, with reports on completed activities at the project level at one end, a long conceptual leap to country results at the other, and no assessment of the quality or utility of the intervening completed activities.  If UN agencies, like their bilateral counterparts, genuinely want to test assumptions about which activities and theories of change actually work – and which do not – then there is no excuse for not assessing the immediate results – changes in understanding, attitudes, behaviour and professional practice – that the activities are intended to produce.

    Repeating the most limited interpretations of what is expected in terms of results does nobody any service when that approach is reinforced by an online course that ranked, at least in January 2012, so high in internet search results.

    Similarly, the course devotes substantial attention to “Accomplishments” instead of what the UN agencies have all, for several years, referred to as “Outcomes”.

    What do you do with an out of date course?

    This RBM course is clearly out of date.  There is a reference on the last substantive slide (#151), for example, to an exercise using 2002-2003 data, and to potential forthcoming work on a 2004-2005 report.

    Using 2003 data

    The case could be made, I suppose, that the course was originally put online in 2005, so as times change we should not expect too much.  But there are multiple yearly search listings for the course, with somebody apparently updating them so that they appear current in search engine results.  If you do a date search for "Programme Performance Assessment" on Google, for example, you will find listings for this course (as if it had been updated) for 2005, 2006, 2008 and 2011.

    Search results for the online course 2005-2011

    I compared the March 2005 version of the course with the 2011 version and could find – before I dropped off into a semi-comatose state – no difference between them.

    It is worth noting that the UN Office of Internal Oversight Services' own search reference lists this course as updated only as far as 2008 – although it is unclear to me, in comparing the versions, what – if anything – was changed between 2005 and 2008.

    The more recent October 2011 listing is a reference from the UNESCO Opentraining site, which describes the course this way:

    “The United Nations has adopted a results-based approach and in this, programme performance assessment is an essential component. It is an invaluable tool for programme managers to achieve the twin goals of accountability and effectiveness. The present training is designed to allow learners to work at their own pace in order to acquire the skills they will need to implement programme performance assessment.”

    While the programme performance assessment approach in general may be "an invaluable tool", this course, unfortunately, is not.

    Missing links – an indicator of website utility

    On the whole, the entire course, whatever potential it may have had in 2005, is too frustrating to be useful today.

    Even where potentially interesting documents are referenced, the links are sometimes dead.  For example, section two references a manual on “How to Design and Carry Out Data Collection Strategies for Results-Based Budgeting”.  If you have to do results-based budgeting, as some of us do, this could be quite useful – but the link referenced there does not work.

    And it is not just that the PDF file may have been moved – that can happen to any of us.  The link does not work because it is not written in a valid format: a critical “dot” has been left out of the HTML for the link.  This looks the same in the 2005 version of the course as it does in the version online today.  But even if you correct the link, it is still out of date, and there is, in fact, no downloadable document that I could find, even with considerable time spent on the search.

    It is a bit mind-boggling that nobody has noticed this – or any of the other missing links – in the six or seven years this course has been online, presumably intended for use by people who would want to download the documents.  That the same broken links exist in the still-available 2005, 2008 and 2011 versions suggests that in all likelihood nobody has clicked on them in years, or pointed the problem out to the webmaster.  All of us who put work on the web occasionally find we are left with expired links, and most of us try to review our sites every few months to test and update them.  Any competent web analytics programme can tell webmasters when a link is broken, but evidently nobody responsible for this online RBM course much cares.
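    Malformed links of this kind are easy to catch automatically. As a minimal sketch – the URLs below are hypothetical illustrations, not the actual links from the course – a few lines of Python can extract every href from a page and flag any that lack a proper scheme or a dotted host name, which is the kind of failure a missing dot produces:

    ```python
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkExtractor(HTMLParser):
        """Collect the href value of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def is_well_formed(url):
        """A link passes if it has an http(s) scheme and a dotted host name."""
        parts = urlparse(url)
        return parts.scheme in ("http", "https") and "." in parts.netloc

    # Hypothetical page: the second href is missing the dots in its host name,
    # so it fails the dotted-host check.
    page = ('<a href="http://www.un.org/depts/oios/">OIOS</a>'
            '<a href="http://wwwunorg/rbb-manual">RBB manual</a>')
    parser = LinkExtractor()
    parser.feed(page)
    for link in parser.links:
        print(link, "OK" if is_well_formed(link) else "MALFORMED")
    ```

    A real site check would also need to request each well-formed URL and confirm it returns a page rather than a 404 – but even this syntactic pass would have caught the missing-dot error discussed above.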

    As indicators of website utility go, this one suggests that very few people are actually viewing the course in any detail – and, again, that those managing the site have little interest in whether it works properly or not.

    So, if the course is out of date, there are basically three choices here:

    a) make it clear, with a little simple editing, that this is a 2005 document and may be out of date – fair warning to the user;
    b) just take it offline; or
    c) replace it, one hopes, with something genuinely interactive, or at least up to date.

    The Office of Internal Oversight Services website

    It should not, perhaps, be surprising to find such outdated material on the UN Office of Internal Oversight Services' website.  The UN Internal Evaluation Division section of the website says that
    “IED is committed to providing timely, valid and reliable information from an independent perspective that can be used to strengthen the Organization.”
    But, while some OIOS reports are up to date, others are not – in their own terms – at all "timely".  At least in January 2012, the latest available programme performance report was for the 2006-2007 year.

    The real indicator for me of how OIOS approaches updating the site, however, is the section called “What’s new?”  When I reviewed it in January 2012, I found that only one of 11 references was to any date after 2009.  That one reasonably recent link was to a 2011 fraud alert.

    Aside from that, here is what the OIOS website considers new:

    • An undated page on risk management, with at least one non-functioning link;
    • A March 2009 version of the Internal Audit Division’s Audit Manual; and
    • An advance notice (!) for an upcoming workshop to mark the 15th anniversary of the Office of Internal Oversight Services – in October 2009.

    Indicators for completed activities - not results

    Even where there is more recent material, such as in the Investigations Division monthly performance indicators, the documents are dispiriting if we are looking for any clear guidance on results, not just on completed activities.

    There are 26 monthly reports there, ranging from October 2009 to November 2011.  Not one of them has indicators about results:

    • No indicators about the quality of investigations;
    • No indicators about sanctions applied to violations;
    • No indicators about the resolution of investigations; and
    • No indicators about the long-term changes to which the investigations might have contributed.

    All of the indicators refer to completed activities – number of cases received, number processed, age of cases, the kind of things that are interesting indicators of process – but not of results.

    So, with this limited type of reporting, and with such a casual approach to keeping materials up to date, it should really be no surprise that this organization's online RBM course is outdated.

    A possible alternative RBM course for UN staff

    Now, it may be that this course has been superseded for UN staff by the online course Measurement for Effective RBM, but if that is the case, it is not at all clear from any of the UN websites.

    In any case, for the general public – those of us working with NGOs, governments or other donors – that RBM course is not accessible.  Users must have an "official UN system organization e-mail”, must be nominated (it is not clear by whom), and there is the small matter of the fee – US$1,850 per person.

    That course may well be more interactive and more entertaining than the Programme Performance Assessment online course – but for the price, it should be!

    The bottom line:

    If the creators  - or managers - of this online RBM course can’t spare the time to update it, why should you spare the time to read it?

    If you want an introduction to the processes of thinking about results that remains valid today, I still recommend the University of Wisconsin’s online RBM course – Enhancing Program Performance with Logic Models.  Although it was created three years before the UN course – ten years ago, as I write this – it makes much more creative use of what the internet can offer for interactive training, its links work, and it additionally provides a complete PDF version of the course as a supplement.

    And if you want to match this understanding of the processes of RBM with an understanding of the RBM terms the UN agencies are using, then instead of wading through the Programme Performance Assessment in Results-Based Management course, a much better use of your time would be to review the many documents available online at the UNDG RBM site.  You will find conflicting information in the documents available there, because the UN itself has evidently not yet sorted itself out on results-based management, but at least you will be getting the latest thinking on the subject from people who are trying to make RBM work in the UN context.

    [Edited to update links February 13, 2012]


    Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website
