Making sense of social impact metrics and measurements

Author: Naki B. Mendoza, Devex Impact
Published Date: 29 June 2015

Social enterprises the world over are bursting at the seams with innovations to tackle poverty and promote social good, but precisely how to measure the impact of their initiatives remains a recurring question.

The wealth of data amassed from an impact evaluation exercise is indeed informative, yet understanding how it applies to a specific social investment can be downright daunting. How and when should social impact data be interpreted? What does all that data say about a particular investment? And when does lots of data become too much data?

The topic took center stage at the Aspen Network of Development Entrepreneurs’ annual gathering last week in Washington, D.C. Understanding that no one-size-fits-all set of metrics can evaluate social enterprises across the board, researchers and business leaders instead homed in on best practices to alleviate so-called metrics angst — the anxiety that sets in when a culture of data collection overwhelms an organization’s development objectives.

“The danger is that you get so focused on the ‘how’ of what you are measuring, that you are distracted from what you are actually finding,” said Mike McCreless, director of impact and strategy at Root Capital.

To avoid such angst, research and donor organizations have rolled out various sets of principles meant to fine-tune how organizations choose which data to collect and evaluate.

Innovations for Poverty Action, a research organization that has carried out more than 200 impact evaluations, presented its approach called the Goldilocks principles — a reminder that data systems should neither be too big nor too small, but just right.

It challenges organizations to evaluate their data against four criteria and determine if it is:

● Credible — does the data reflect what you intended to measure?
● Actionable — will you change course if the data calls for it?
● Responsible — is your data collection within your means?
● Transportable — can you apply what you have learned elsewhere?

The idea is that pointed questions can weed out which metrics an organization chooses to use in its impact assessment.

“It means saying no sometimes to data collection,” IPA’s Delia Welsh said. “Sometimes organizations struggle with what they want and what their external stakeholders want.”

The discord between funder and recipient priorities is a common tension.

“For too long, monitoring and evaluation has been about donors and what they want to collect,” said Venu Aggarwal, an impact evaluator for the Acumen Fund.

A donor entity itself, Acumen instead proposes to upend the traditional model of impact evaluation by basing its impact assessment on its recipients’ needs. It’s often as simple as asking chief executives of recipient businesses how they define success and what data they need to make decisions.

“The company needs to be engaged and invested,” Aggarwal said. “Company buy-in is really important to collect quality data that is actionable.”

Acumen has presented a new model of “lean data” principles that are characterized by two big shifts in monitoring and evaluation. The first is a shift from a donor-recipient relationship of compliance to one of collaboration, and the second is a shift from what it calls upward accountability to downward accountability.

Such changes can take time to implement.

For example, it took seven years for India-based NGO Educate Girls to reach this equilibrium with its donors. Through government and private foundation funds, the organization applies a customized curriculum and advocacy program to close the gender gap in primary school enrollment across India.

The program’s success in bringing 80,000 girls to school since 2007 and issuing the first development impact bond has helped rewrite the equation with its donors.

“Instead of them telling us how to use a report, we tell them how we calculate impact and how they should read it,” Zeeshan Samrani, Educate Girls’ senior program manager, told Devex. “It took seven years to build that trust and they know we are a high impact organization.”

For social enterprises such as Educate Girls, the metrics for social impact are more cut-and-dried. Enrollment rates and learning outcomes measured by national standardized tests can be quantified with relative ease. Enterprises that aim to improve farmer livelihoods, for example, can also measure their impact using metrics on agricultural yields, price premiums or crop quality. But impact assessments of broader outcomes, such as overall livelihood gains, which require measuring a wider set of social factors, can prove more challenging.

Ultimately, metrics should be driven by the goal of a project, company or intervention, rather than by an overanalysis of all the factors at hand. With impact assessments, the familiar adage that what you measure is what you get continues to ring true.