

Monday, March 29, 2010

Applying RBM to Policy

--Greg Armstrong -- 

[Edited to update links and content August 2016]

Can policy making and policy advice be assessed by the same performance assessment standards and RBM methods that are applied to programmes? Mark Schacter’s views have evolved over the past five years.

Level of Difficulty:  Moderate to complex
Primarily useful for:  Policy makers, performance-management divisions
Length: 2 papers, 42 pages
Most useful section: What constitutes “good policy advice” p. 7-8 in “The Worth of a Garden”
Limitations:  Focused primarily on senior officials, unlikely to be useful to field workers, unless they are in a policy development project.
Mark Schacter's web page

Who this is for

Senior policy makers and performance management officials in the public service will recognise the issues Mark Schacter raises in all of his writing on RBM. Targeted primarily on Canadian public officials, the issues he raises are relevant to public servants everywhere. The policy discussions, however, are less likely to meet the needs of development field workers, or project managers, unless they are working specifically on policy development projects.

A decade of RBM analysis

Mark Schacter has been working on results-based management, training people on performance measurement, and writing about it, since at least 1999. While this is certainly not the only thing he does, his writing on RBM has been prolific and influential. The CIDA/Global Affairs Canada revised RBM terminology, formally adopted in 2008, for example, bears a striking resemblance to that used by Mark Schacter in his 2002 paper “Not a Toolkit: Practitioner’s Guide to Measuring the Performance of Public Programs”.

First at the Institute on Governance, and more recently as a freelance consultant, he has published [at least 40] articles since 1998, focused on performance measurement and RBM. Some of these can be found on the Institute on Governance web site, but more are available directly on one of his own web sites: Mark Schacter Consulting.

[Editorial note March 2016: This review does not cover a number of new articles which can be found at the Mark Schacter publications page.]

Should Policy be judged by the same RBM standards as Projects or Programmes?

While the paper “Not a Tool Kit” provides a useful summary of the steps and issues in RBM for the public service, particularly the discussion of tradeoffs on indicators, the focus of this review is on how he treats policy making in the RBM context.

Two of his articles demonstrate how Mark Schacter's views shifted between 2002 and 2006 -- although, I think, only marginally -- on the important issue of whether standard performance measurement processes can, or should, be applied to policy development and to public servants’ role in providing advice.

Is Policy Unique?

A strong advocate of the intelligent application of RBM to the management of public programmes, Schacter reviewed, in his 2002 article “What Will Be, Will Be: The Challenge of Applying Results-based Thinking to Policy”, the standard arguments against applying results-based management to policy: that policies are intangible things, subject to a variety of influences such as politicians’ short-term political needs, and that there is often a huge time lag between policy development and any chance of seeing concrete results. The conclusion, for those who take this view -- and I have heard it expressed recently -- is that policy is therefore “unique”, and that tracing the effect of advice on the success of policy is therefore in some way “unfair”.

Schacter took the view, in this 2002 article, that intangibility and complexity are not unique to policy but occur often in programme implementation. He appeared to take the view that while these things present challenges to people attempting to assess performance, they also provide an opportunity to use performance measurement, and the critical examination of a logic model, to clarify assumptions and test the understanding of the intended results, implied or clearly stated by policy.

Evaluation and Performance Measurement

The link between the views Mark Schacter held in 2002, and those he expressed in 2006, is the role of evaluation in assessing the effectiveness of policy. Performance measurement, he wrote in 2002, looks at where we are today, and tries to assess how likely we are to achieve long-term results, by looking for evidence that we are making progress against shorter-term results. 

Evaluation, on the other hand, assesses not just whether results have been achieved, but whether they were the most appropriate results, why results were, or were not achieved, and whether alternative means of achieving them would have been more appropriate. [p. 17]

[2016 Edit:  In 2011 Schacter added a new paper - Tell Me What I Need to Know: A practical guide to program evaluation for public servants, which makes some useful distinctions between the policy requirements of monitoring and evaluation.]

The case for performance measurement of policy

Performance measurement, he wrote in 2002, has its limitations, particularly given the usual lag between policy development and achievement of long-term results. But, he continued:
“Sometimes a less-than-perfect instrument is, under the circumstances, the best one for the job at hand. Performance measurement is indeed a “second-best” instrument – but a very useful instrument nonetheless….Citizens have no less a right to be informed about the performance of policies than of programs. In order to explain and justify the allocation of resources to …any policy (or program) you need to have a way of connecting what you are doing now with where you want to be in the long term. This connection needs to be clear and must make sense not only in the minds of the people responsible for the policy, but also in the minds of external stakeholders (citizens, civic groups, private sector operators, politicians, etc.).
Performance measurement helps you make that connection. It helps you tell a believable and compelling story about why a policy was conceived in the first place, and whether or not it appears to be on the right track.” [p. 24-25]

The case for evaluation of policy

By 2006, in a paper for Canada’s Treasury Board, “The Worth of a Garden: Performance Measurement and Policy Advice in the Public Service”, Mark Schacter had apparently come to the conclusion that measuring policy performance in the short term might in fact be too challenging, and that emphasis could more productively be placed on longer-term evaluation.

He outlined two options for using performance measurement on a regular basis to assess progress towards policy results. The first is to assess what he called the “process and outputs standards for policy advice”; the second is essentially what he advocated in his 2002 article: to assess progress toward achievement of immediate and intermediate outcomes -- whether policy advice was accepted and implemented. The conclusion he came to in 2006, however, differed from his earlier view:
“Low-specificity organizations and tasks pose especially difficult problems for performance measurement – problems so significant that it may be impractical (if not impossible) to apply standard performance measurement in a way that yields useful results. This does not mean that one should not attempt to assess the quality of a policy shop’s performance. But it does suggest that evaluation may be worth considering as a better tool than performance measurement for this particular task. Evaluation, though closely related to performance measurement, differs from it in ways that may provide a better fit with the subtleties and ambiguities of the policy-advice process.” [p. 11]

Greg Armstrong’s comments:

Are performance assessment and evaluation mutually exclusive?

At no point did Mark Schacter advocate abandoning the assessment of policy units’ performance. He has always maintained that at some point policy functions have to be assessed.

What is unclear to me, however, is why the assessment options -- regular performance assessment and eventual evaluation -- appeared to be regarded as mutually exclusive. [2016 edit: His 2011 paper - referenced above, sees them as complementary elements on an "evaluation continuum".]

It seems to me that combining a) an assessment of the quality of policy advice, and the processes which lead to it, with b) an assessment of interim results, and c) a longer-term evaluation, is a reasonable (if obviously not perfect) way of helping policy advisors, policy makers, legislators, and those who fund them, to understand the progress they are making toward long term results.

By 2008, Schacter was writing about other performance assessment issues, and in How Good is Your Government: Assessing the Quality of Public Management [2008], policy was mentioned only once, in passing. One indicator he proposed in that article for assessing the efficient management of resources, however, was that “Results-based performance information is used routinely as a basis for continuous improvement of program/policy performance.” [p. 5]

This suggests that he had not given up completely on the contribution to the policy function of regular performance assessment.

How RBM applies to Policy Projects in International Aid

It is important to note that none of Mark Schacter’s writing, at least between 2002 and 2006, focused on whether performance assessment could be applied to building the capacity to provide policy advice.

If, as I contend, there is a role for performance assessment in assessing progress on policy in general, there is surely a much clearer role for it, and for RBM in general, in planning, implementing and assessing results for international aid projects which focus on the development of capacity for policy research, policy formulation, and legislative capacity.

Mark Schacter maintained in 2006 that the provision of policy advice is essentially an output -- a completed activity. While a case could be made that this is true for some policy functions in the context for which he was writing -- and even that is not completely clear to me -- it is not true for policy capacity development. Improved quality of the policy making process, and improved quality of the advice provided, are both clearly interim results in capacity development terms, and therefore worth assessing on a regular basis.

In the 2006 paper, Schacter outlined the commonly accepted criteria for assessing the quality of the policy advice process, adopted in part from studies in Australia and New Zealand:

  • The timeliness of the advice for decision-makers
  • Relevance of the analysis to the current realities faced by decision-makers
  • Stakeholder consultation underlying the proposed policy
  • Clarity of purpose (essentially - does the policy itself rest on a solid logic model)
  • Quality of evidence, and the link between evidence, policy and purpose
  • Balanced range of alternatives and viewpoints reflected in the analysis
  • Presentation of a range of viable options
  • Clarity in presentation
  • Pragmatic assessment of the potential problems of implementing the policy.

All of these, with some work, could form the basis for useful performance indicators for policy capacity projects or programmes, and in many cases they have been used for this purpose. Certainly, as Mark Schacter observed, indicators relevant to these issues would provide qualitative data -- subjective in nature, and time consuming to collect.
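As a purely hypothetical illustration of how such criteria might become a repeatable indicator -- the rating scale, criterion names, and equal weighting below are my own invention, not drawn from Schacter's papers or from any actual project -- a simple scoring rubric could look like this:

```python
# Hypothetical rubric: rate each policy-advice criterion on a 1-5 scale
# and combine the ratings into a single score. The criterion names echo
# the list above; the scale and equal weighting are illustrative only.

CRITERIA = [
    "timeliness",
    "relevance",
    "stakeholder_consultation",
    "clarity_of_purpose",
    "quality_of_evidence",
    "balance_of_viewpoints",
    "range_of_options",
    "clarity_of_presentation",
    "implementation_assessment",
]

def advice_quality_score(ratings: dict) -> float:
    """Average the 1-5 ratings across all criteria; fail if any are missing."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Example: one reviewer's ratings for a single policy brief.
sample = {c: 3 for c in CRITERIA}
sample["timeliness"] = 5
sample["quality_of_evidence"] = 4

print(round(advice_quality_score(sample), 2))  # prints 3.33
```

Tracking scores like this across successive policy briefs would yield the kind of shorter-term performance data discussed above, while the underlying ratings remain exactly what Schacter warned they would be: qualitative and subjective.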

But, in my experience, qualitative data are not necessarily any more time consuming to collect than quantitative data, and certainly not less valid if the intention is to assess the quality of the policy formulation process.

The bottom line:

Policy development is, indeed, sometimes an uncertain process, but there are ways of improving it, of building capacity and of assessing this capacity. Mark Schacter’s articles on the role of performance assessment in policy clearly outline the challenges, but also deliver some reasonable suggestions on how to deal with them.

Further reading:

[2016 - Other very useful more recent papers can be found on Mark Schacter's website, including several on evaluation, monitoring, the use of performance dashboards and risk assessment.]



Greg Armstrong is a Results-Based Management specialist who focuses on the use of clear language in RBM training, and in the creation of usable planning, monitoring and reporting frameworks.  For links to more Results-Based Management Handbooks and Guides, go to the RBM Training website
