The views expressed in this presentation are the views of the author/s and do not necessarily reflect the views or policies of the Asian
Development Bank, or its Board of Governors, or the governments they represent. ADB does not guarantee the accuracy of the data included
in this presentation and accepts no responsibility for any consequence of their use. The countries listed in this presentation do not imply any
view on ADB's part as to sovereignty or independent status or necessarily conform to ADB's terminology.
The Perils of Performance
Measurement
Olivier Serrat
2015
Measuring What Matters
The need to sell the idea that management
improves things means that "SMART" performance
indicators—customarily aligned in a results chain
linking inputs, activities, outputs, and outcomes to
impact—proliferate. (Supposedly, pre–post
comparisons can then be made to assess the
relevance, efficiency, effectiveness, sustainability,
and impact of endeavors.) But, not everything that
counts can be counted and not everything that can
be counted counts. What is more, people—
responding to incentives—will often do what
measurement summons them to; so, measure the
wrong things and you will almost certainly
encourage the wrong behaviors.
"SMART"
indicators
are to be
specific,
measurable,
achievable,
relevant, and
time-bound.
SMART But Not So Smart
• Conflicting definitions of performance indicators abound.
In their shortest yet most stringent and frequent
expression, indicators are a numerical measure of the
degree to which an objective is being achieved. (So, they
are prone to merge with objectives and effectively
become targets.) Others consider them a discernible
change or event that provides evidence that something
has happened, be that an output delivered, an
immediate effect occurred, or a long-term process
observed. To such discerning interpreters, indicators do
not offer proof so much as reliable clues that the change
or event being claimed has actually happened or is
happening: rather, evidence from several indicators will
make a convincing case for claims being made.
First
SMART But Not So Smart
• Complex issues of cause-and-effect are seldom
considered. Obviously, performance indicators can only
pertain to matters that an agency controls. But, agencies
never command much and usually settle for second-best
indicators that afford enough control for their purposes.
This reality is intrinsic to all human endeavors, especially
those that touch political decision making or aim to
spark social change. Consequently, interest has grown in
approaches to planning, monitoring, and evaluation of
outcomes and their metrics that consider actor-centered
development and behavioral change, continuous
learning and flexibility, participation and accountability,
as well as non-linearity and contribution (not attribution
and control).
Second
SMART But Not So Smart
• The dimensions of performance mentioned earlier—
namely, relevance, efficiency, effectiveness,
sustainability, and impact—intimate that there can be no
single assessment of accomplishments overall.
Performance is an amalgam of dimensions, some of
which may conflict. Measuring it calls for an appropriate
basket of benchmarks, developed with full knowledge of
their interrelationships.
Third
SMART But Not So Smart
• Performance measurement must have a purpose—it can
never be an end in itself. Robert Behn notes that distinct
reasons for engaging in it are to budget, celebrate,
control, evaluate, improve, learn, motivate, and
promote. Manifestly, no single metric is appropriate for
all eight objectives. Therefore, practitioners had better
consider the managerial purpose(s) to which
performance measurement might contribute—these,
alas, being ordinarily to budget and control—and how
they might best deploy an informative blend of measures
anchored in context. Only then will they be able to select
valid yardsticks with the characteristics necessary to help
meet each purpose, directly and indirectly, concentrating
on what matters most.
Fourth
SMART But Not So Smart
• Many other things besides performance indicators are
needed to improve achievements (after the indicators
have been recognized for what they are, namely,
individual links in a results chain). The other requisites
include Board, Management, and staff who are focused
on meeting the explicit and latent needs of clients,
audiences, and partners; leadership and commitment to
developing and extending products and services; and a
culture of openness in which personnel are encouraged
and willing to question why they do what they do.
Fifth
Why Measure Performance?
Budget: On what programs, projects, or people should my organization spend the public's money?
Celebrate: What accomplishments are worthy of the important organizational ritual of celebrating success?
Control: How can I make sure that my staff is doing the right thing?
Evaluate: How well is my organization performing?
Improve: What, exactly, should who do differently to improve performance?
Learn: Why is something working or not working?
Motivate: How can I motivate line staff, middle managers, nonprofit and for-profit collaborators, stakeholders, or citizens to do what is needed to raise performance?
Promote: How can I convince political superiors, legislators, stakeholders, journalists, or citizens that my organization is doing a good job?
Matching Measure to Purpose
Budget: Efficiency measures—specifically, outputs or outcomes divided by inputs
Celebrate: Periodic and significant performance targets that, when achieved, provide people with a sense of personal and collective accomplishment
Control: Inputs that can be regulated
Evaluate: Outcomes—combined with input, output, and process measures—that also appreciate the effects of exogenous factors
Improve: Inside-the-black-box relationships that connect changes in operations to changes in outputs and outcomes
Learn: Disaggregated data that reveal deviations from the projected
Motivate: Near real-time outputs compared with production targets
Promote: Easily understood aspects of performance about which people care
Transforming Performance
Measurement
Cynics argue that performance measures are
seldom used to make decisions. Yet, they do have
effects, if only through the suspicion that actions,
e.g., sanctions or rewards, might be based on such
information. People will work out which behaviors
and related activities are rewarded and then
endeavor to perform or "game" these, often to the
exclusion of things not rewarded. In general, moral
codes and professional standards should suffice to
prescribe right action. Next, measures must better
match purpose. Lastly, what matters most are
intangible sources of value, such as human,
relational, and structural capital, that challenge
traditional, technical approaches.
Earnings can be pliable as putty when a charlatan heads the company reporting them.—Warren Buffett
On Performance Leadership
Good performance cannot
be compelled, commanded,
or coerced. Most
professionals are self-motivated but intrinsic drive
must be channeled skillfully
to excite, engage, and
energize. Therefore,
performance measurement
must restrain demotivators,
e.g., office politics, and
build motivators, e.g.,
fairness, so people may
strive to do their best.
In an environment of positive
accountability, collaboration,
truth-telling, and learning would
be rewarded, not just hitting all-too-often senseless targets. It is
more a matter of helping
managers manage, not making or
letting them manage. The better
practices that Robert Behn
recommends relate to (i) creating
the performance framework, (ii)
driving performance
improvement, and (iii) learning to
enhance performance.
Better Practices to Ratchet Up
Performance
Creating the
Performance
Framework (What
would it mean to
do a better job?)
• Practice 1: Articulate the organization's mission:
proclaim—clearly and frequently—what the
organization is trying to accomplish.
• Practice 2: Identify the organization's most
consequential performance deficit: determine what
key failure is keeping the organization from achieving
its mission.
• Practice 3: Establish a specific performance target:
specify what new level of success the organization
needs to achieve next.
• Practice 4: Clarify your theoretical link between target
and mission: define (for yourself, at least) the mental
model that explains how meeting the target will help
accomplish the mission.
Better Practices to Ratchet Up
Performance
Driving
Performance
Improvement
(How can one
mobilize people?)
• Practice 5: Monitor and report progress frequently,
personally, and publicly: publish the data so that every
team knows that you know (and that everyone else
knows) how well every team is doing.
• Practice 6: Build operational capacity: provide your
teams with what they need to achieve their targets.
• Practice 7: Take advantage of small wins to reward
success: find lots of reasons to dramatize that you
recognize and appreciate what teams have
accomplished.
• Practice 8: Create "esteem opportunities": ensure that
people can earn a sense of accomplishment and thus
gain both self-esteem and the esteem of their peers.
Better Practices to Ratchet Up
Performance
Learning to
Enhance
Performance
(How must one
change to do even
better?)
• Practice 9: Check for distortions and mission
accomplishment: verify that people are achieving
their targets in a way that furthers the mission (not in
a way that fails to help or actually undermines this
effort).
• Practice 10: Analyze many and various indicators:
examine many forms of data—both quantitative and
qualitative—to learn how your organization can
improve.
• Practice 11: Adjust mission, target, theory, monitoring
and reporting, operational capacity, rewards, esteem
opportunities, and/or analysis: act on this learning,
making the modifications necessary to ratchet up
performance again.
Further Reading
• ADB. 2009. Learning from Evaluation. Manila.
www.adb.org/publications/learning-evaluation
• ADB. 2009. Understanding Complexity. Manila.
www.adb.org/publications/understanding-complexity
• ADB. 2010. The Perils of Performance Measurement. Manila.
www.adb.org/publications/perils-performance-measurement
• ADB. 2010. Bridging Organizational Silos. Manila.
www.adb.org/publications/bridging-organizational-silos
• Robert Behn. 2003. Why Measure Performance? Different
Purposes Require Different Measures. Public Administration
Review. September-October. Vol. 63, No. 5, pp. 586–606.
Further Reading
• Robert Behn. 2006. Performance Leadership: 11 Better
Practices That Can Ratchet Up Performance. IBM Center for
the Business of Government.
Quick Response Codes
@ADB
@ADB Sustainable
Development Timeline
@Academia.edu
@LinkedIn
@ResearchGate
@Scholar
@SlideShare
@Twitter