How We’re Building GG Rewards Together

Next week GlobalGiving will be launching the new GG Rewards Program. Here’s a post by Marc Maxmeister that provides a sneak peek into the work that’s gone into conceptualizing, building, and launching the program. 

_______

GlobalGiving’s goal is to help all organizations become more effective by providing access to money, information, and ideas.

That is a lofty, aspirational goal. To everyone else, it might look like all we do is run a website that connects donors to organizations. But internally, I serve on a team that has met every week for the past three years to pore over the data and find efficient ways to help organizations become more effective. We call ourselves the iTeam (i for impact).

GlobalGiving’s iTeam. We try not to take ourselves too seriously.

It is hard to move thousands of organizations in one shared community forward. We use gamification, incentives, and behavioral economics to encourage organizations to learn faster and listen to the people they serve, in whatever corner of the world they happen to operate.

Before 2014 we used just six criteria to define “good,” “better,” and “best.” If an organization exceeded the goals on all six, they were Superstars. If they met some goals, they were Leaders. The remaining 70% of organizations were permanent Partners – still no small feat. Leaders and Superstars were first in line for financial bonuses and appeared at the top of search results.

In 2014 we unveiled a more complete effectiveness dashboard, tracking all the ways we could measure an organization on its journey to Listen, Act, Learn, and Repeat. We believe effective organizations do this well.

But this dashboard wasn’t good enough. We kept tweaking it, getting feedback from our users, and looking for better ways to define learning.

What is learning, really?

How do you quantify it and reward everyone fairly?

The past is just prologue. In 2015, GlobalGiving’s nonprofit partners will earn points for everything they do to listen, act, and learn.

[Image: the Listen, Act, Learn, Repeat (LALR) cycle, explained for 2015]

This week I put together an interactive modeling tool to study how GlobalGiving could score organizational learning. When organizations do good stuff, they should earn points. If they earn enough points, they ought to become Leaders or Superstars. But how many points are enough to level up? That is a difficult question. We worked with our nonprofit partner Leadership Council to get their ideas, and we also created some data models to help us decide.
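
To make the mechanics concrete, here is a minimal sketch in Python of how a points-and-levels model like this could work. The activity names, point values, and level cutoffs are invented for illustration; they are not GlobalGiving’s actual scoring rules.

```python
# A minimal sketch of a points-and-levels model. Every name and number here
# is a placeholder, not GlobalGiving's real scoring rules.

POINT_VALUES = {
    "posted_project_report": 5,
    "responded_to_donors": 3,
    "collected_community_feedback": 10,
    "acted_on_feedback": 15,
}

LEVEL_CUTOFFS = [("Superstar", 100), ("Leader", 50), ("Partner", 0)]

def total_points(activity_counts):
    """Sum the points an organization earned for what it did to listen, act, and learn."""
    return sum(POINT_VALUES.get(activity, 0) * count
               for activity, count in activity_counts.items())

def status(points):
    """Map a point total to a status level using the cutoffs above."""
    for level, cutoff in LEVEL_CUTOFFS:
        if points >= cutoff:
            return level
    return "Partner"

org = {"posted_project_report": 8, "responded_to_donors": 12, "acted_on_feedback": 2}
print(total_points(org), status(total_points(org)))  # 106 Superstar
```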

Here is the data: the current distribution of scores for our thousands of Partners, Leaders, and Superstars looks like this:

[Chart: histogram of learning points under the default model]

How to read this histogram

On the x-axis: total learning points that an organization has earned.

On the y-axis: number of organizations with that score.
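
As a rough illustration, an overlapping histogram like this can be drawn with one semi-transparent series per status level. The point totals below are synthetic numbers chosen only to give the chart a similar shape; the real distribution comes from our partner data.

```python
import matplotlib.pyplot as plt
import numpy as np

# Synthetic point totals for illustration only.
rng = np.random.default_rng(42)
scores = {
    "Partner": rng.normal(40, 15, 1000),
    "Leader": rng.normal(70, 15, 400),
    "Superstar": rng.normal(100, 15, 150),
}

for level, points in scores.items():
    plt.hist(points, bins=40, alpha=0.5, label=level)

plt.xlabel("Total learning points earned")
plt.ylabel("Number of organizations")
plt.legend()
plt.show()
```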

There are three bell curves for the three levels of status. Notably, these bell curves overlap. That means some organizations that were Superstars under our old definition of excellence are not so excellent under the new set of rules. Other Partner organizations are actually far more effective than we thought; they will be promoted. Some of the last will be first, and some of the first will be last.

The histogram shown mostly reflects points earned from doing those six things we’ve always rewarded. But in the new system, organizations are also going to earn points for doing new stuff that demonstrates learning:

[Chart: new learning points added to the model]

And that will change everything. “Learning organizations” will leapfrog over “good fundraising organizations” that haven’t demonstrated that they are learning yet.

[Chart: old vs. new learning-points model]

Not only will different organizations level up to Leader and Superstar status; everyone’s scores will also likely increase. We’ll need to keep “moving the goal posts.” Otherwise the definition of a Superstar organization will be meaningless.
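
One hypothetical way to keep moving the goal posts without picking arbitrary numbers each year would be to peg the Leader and Superstar cutoffs to percentiles of that year’s score distribution. This is only a sketch of the idea, not a description of how we will actually set the thresholds:

```python
import numpy as np

def recalibrate_cutoffs(points, leader_pct=70, superstar_pct=90):
    """Reset the Leader/Superstar cutoffs to fixed percentiles of this year's scores,
    so the labels keep their meaning even as everyone's totals rise."""
    return {
        "Leader": float(np.percentile(points, leader_pct)),
        "Superstar": float(np.percentile(points, superstar_pct)),
    }

# Example: as scores drift upward year over year, the cutoffs follow.
this_year = np.random.default_rng(0).normal(loc=120, scale=30, size=2000)
print(recalibrate_cutoffs(this_year))
```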

The reason this is a modeling tool and not an analysis report is that anyone can adjust the weights and rerun the calculations instantly. Here I’ve increased the points organizations earn for raising money relative to listening to community members and responding to donors:

[Chart: points model weighted toward fundraising]
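
In code terms, the heart of the modeling tool is small: the weights live in one place, and rescoring every organization under a different set of weights is a single function call. The activity names and numbers below are invented for illustration, not our real weights.

```python
def rescore(activity_counts_by_org, weights):
    """Recompute every organization's point total under a given set of weights."""
    return {org: sum(weights.get(activity, 0) * count for activity, count in counts.items())
            for org, counts in activity_counts_by_org.items()}

# Default weights vs. a fundraising-heavy variant (illustrative values only).
default_weights = {"funds_raised_per_100usd": 1, "community_feedback": 10, "donor_response": 5}
fundraising_weights = {"funds_raised_per_100usd": 5, "community_feedback": 2, "donor_response": 1}

orgs = {
    "org_a": {"funds_raised_per_100usd": 80, "community_feedback": 2, "donor_response": 4},
    "org_b": {"funds_raised_per_100usd": 10, "community_feedback": 9, "donor_response": 12},
}

print(rescore(orgs, default_weights))      # org_b outscores org_a
print(rescore(orgs, fundraising_weights))  # the ranking flips
```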

This weighting would run contrary to our mission. So obviously, we’re not doing that. But we also don’t want to impose rules that would discount the efforts organizations have made to become Superstars under the old rules.

So I created another visualization of the model that counts up gainers and losers and puts them into a contingency table. Here, two models are shown side by side. Red boxes show the number of organizations that would move up or down a level under each model:

[Table: contingency table of status changes between the old and new models]
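
Once both models have assigned every organization a status, a contingency table like this is straightforward to compute. Here is a sketch using pandas with a toy five-organization dataset; the column names are hypothetical.

```python
import pandas as pd

# One row per organization, with its status under the old and new models.
df = pd.DataFrame({
    "old_status": ["Partner", "Leader", "Superstar", "Partner", "Superstar"],
    "new_status": ["Leader", "Leader", "Partner", "Partner", "Superstar"],
})

table = pd.crosstab(df["old_status"], df["new_status"])
print(table)

# Off-diagonal cells are organizations that would change level; the cell we
# most want near zero is Superstar -> Partner.
rank = {"Partner": 0, "Leader": 1, "Superstar": 2}
movers = (df["old_status"].map(rank) != df["new_status"].map(rank)).sum()
print("organizations changing level:", movers)
```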

We’d like to minimize disruption during the transition. That means getting the number of Superstars that would drop to Partner as close to zero as possible. It also means giving everybody advance warning and clear instructions on how to demonstrate their learning quickly, so that they don’t drop status as the model predicts. (We’ve talked this over with representatives from our Project Leader Leadership Council to get ideas about how to best do this.)

This is a balancing act. Our definition of a Learning Organization is evolving because our measurements are getting more refined, but we acknowledge they are a work in progress. We seek feedback at every step so that what we build together serves the community writ large, and not just what we think is best.

We’ll share more about the launch of our GG Rewards platform next week. This post is just the story of how we used data and feedback to get where we are, and of the lessons we’ve learned along the way.

Lessons:

  • Fairness: It is mathematically impossible to make everybody happy when we start tracking learning behavior and rewarding it.
  • Meritocracy: We will need to keep changing the definition of a Superstar organization as all organizations demonstrate their learning, or else the label will become meaningless: the best organizations would be indistinguishable from average ones.
  • Crowdsourcing: The only fair way to set the boundaries of Partner, Leader, and Superstar is to crowdsource the decision to our community, and repeat this every year.
  • Defined impact: We can measure the influence of our system on organizational behavior by comparing what the model predicts with what actually happens. We define success as every organization earning more points each year than in the previous year, and as the overall scores continuing to follow a roughly normal distribution (a bell curve).
  • Honest measurement: I was surprised to realize that without penalties for poor performance, it is impossible to see what makes an organization great.
  • Iterative benchmarking: We must reset the bar for Leader and Superstar status each year if we want it to mean anything.
  • Community: We predict that by allowing everyone a say in how reward levels are defined, more people will buy into the new system.
  • Information is Power: By creating an interactive model to understand what might happen and combining it with feedback from the community, we are shifting away from what could be a contentious process and toward one that can inspire a stronger community.

We were inspired by what others at the World Bank and J-PAL did to give citizens more health choices in Uganda. The “information is power” paper finds that giving people a chance to speak up, on its own, doesn’t yield better programs (the participatory approach). Neither does giving them information about the program alone (the transparency approach). What improves outcomes is a combination of a specific kind of information with true agency – the power to change the very thing about a program that, based on their interpretation of the data, they believe isn’t working.

The model I built can help each citizen of the GlobalGiving community see how a rule affects everyone else, and hence understand the implications of their choice as well as predict how they will fare. If we infuse this information into a conversation about what the thresholds for Partner, Leader, and Superstar ought to be each year (e.g. how much learning is enough?), this will put us in the “information is power” sweet spot – a rewards paradigm that maximizes organizational learning and capacity for the greatest number of our partners.

I predict that giving others this power (to predict and to set standards) will lead to a fairer set of rules for how learning is measured and rewards doled out. It ain’t easy, but it is worthy of the effort.

Marc Maxmeister

Marc Maxmeister is a PhD neuroscientist who helps coordinate the GlobalGiving Storytelling project, an experiment to provide all organizations with a richer, more complex view of the communities they serve. His title reflects our focus on learning from experiments. He was formerly a Peace Corps Volunteer in The Gambia (1999-2001) and did a Fulbright research project around the impact of computers and the Internet on rural education in West Africa. He loves to teach, and has taught graduate-level Neuroscience at Kenyatta University in Kenya and Python to middle school students in London, UK. He blogs at chewychunks.wordpress.com and is the author of several books, including Ebola: Local voices, hard facts (2014).
