No one likes worrying about the overhead costs associated with nonprofit work, and rightly so! For years, overhead ratio has been one of the only metrics donors could use to compare philanthropic choices. More recently, conversations like The Overhead Myth have pointed out that the world’s best businesses need operating capital to innovate and succeed, so why should nonprofits be any different? Even though better measures of impact and effectiveness are increasingly available and accepted, a typical donor’s natural reaction when a percentage comes up in a conversation about nonprofit fees is to interpret it as an overhead ratio. And most donors still don’t like overhead.
For us at GlobalGiving, this presents a challenge. While we retain a 15% fee on donations through our website, our actual administrative overhead ratio is around 2%. Despite testing several different ways of demonstrating and explaining the difference between our fee and our overhead, we still get lots of questions about our fees from users who assume that the two are the same. To help fix this, we recently asked ourselves: what if it’s not the explanation text that’s the problem, but how users are experiencing and processing the information it contains?
For inspiration, we turned to the world of cognitive psychology. In his famous Thinking, Fast and Slow, Nobel laureate Daniel Kahneman describes how we all have two systems at work in our brains. System 1 is our intuitive, quick-reacting, subconscious mind, while System 2 is analytical, logical, and methodical. He mentions a 2007 study that tried to use the interaction between these two systems to improve scores on the “cognitive reflection test”. This short quiz consists of questions that seem simple at first but have a “wrinkle” that makes them more complex than they appear (try them yourself). Half the participants in the study took the test normally, while the other half took it under a cognitive load, meaning the questions they received were printed in a lighter font that made them slightly harder to read. The researchers found that the second group performed much better on the test, presumably because the cognitive load caused their analytical System 2 processes to take over from their more reactionary System 1 minds. Once in this “more logical” frame of mind, they were much better equipped to tackle the tricky problems.
After reading about this study, I wondered if we could replicate the results on GlobalGiving to help donors process the explanation of our fee and the accompanying invitation to ‘add-on’ to their donation to cover this fee on behalf of their chosen nonprofit. Our hypothesis was that donors usually use System 1 when thinking about our add-on ask: they quickly assume that the 15% represents overhead, and so they’re less inclined to donate additional funds to cover it. But if they engage System 2 and process the text more analytically, hopefully they’ll find the explanation more convincing and be more likely to add-on. To find out, we planned a simple test in which a subset of users would be randomly chosen to see a slightly modified version of the add-on page during checkout. This page would have exactly the same text, just shown in a slightly lighter font that, we hoped, would trigger the cognitive load and drive extra add-on contributions.
The plan made sense in theory, but we had to be careful as we put it into practice. First, we needed to make sure that the random assignment process, made possible by our Optimizely A/B testing framework, was running correctly and that all the data we would need to analyze the results was logged properly in our database. Even more importantly, we have an obligation to our nonprofit partners to make sure we’re doing everything possible to maximize the funds they can raise by offering a seamless website experience for donors. If this experiment caused users in the treatment group to become less likely to complete their donation, we’d need to know right away so we could stop the test.
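In production, Optimizely handled the random assignment for us, but the underlying idea is worth illustrating. A minimal sketch of the kind of deterministic bucketing an A/B framework performs is below; the experiment name and user IDs are hypothetical, and this is not our actual setup. The key property is that the same user always lands in the same bucket, so a returning visitor never flips between variants mid-experiment.

```python
import hashlib

# Illustrative only -- not our Optimizely configuration. Hashing the
# experiment name together with a stable user ID gives each user a
# repeatable bucket from 0-99, which we split 50/50 into variants.
def assign_variant(user_id: str, experiment: str = "addon-font-test") -> str:
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable 0-99 bucket for this user
    return "lighter-font" if bucket < 50 else "control"

# The same user always sees the same variant on every visit:
assert assign_variant("user-123") == assign_variant("user-123")
```

Because assignment is a pure function of the user ID, no extra state needs to be stored to keep the experience consistent; logging the assigned variant alongside each checkout event is then enough to analyze the results later.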
We set up a pilot study where we closely monitored whether the cognitive load caused by the change in font color would cause potential donors to leave the checkout process prematurely. We also kept a close eye on post-donation survey feedback to see if anyone mentioned the changed font color. Fortunately, there was no difference in donation rates or feedback during this initial test, and we felt comfortable continuing with the larger experiment, which ran for two weeks at the end of July (just before the launch of our new website). In the end, we collected results from about 700 eligible users.
So what did we find? 49.4% of our control group chose to contribute towards the fee, compared to 56.8% of users who saw the lighter font. This suggests that users really were engaging their System 2 brains while processing the request for an additional donation. But it would be premature to declare success without additional analysis; specifically, we wanted to make sure there wasn’t another explanation for the difference in add-on rates.
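Before any covariate adjustment, a quick sanity check on the raw difference is a two-proportion z-test. The sketch below assumes roughly equal arms of ~350 users each (the article only reports ~700 eligible users in total, so the split is an assumption):

```python
import math

# Two-proportion z-test on the raw add-on rates. Per-arm sample sizes
# (~350 each) are an assumption; only the ~700 total is known.
n_control, n_treat = 350, 350
p_control, p_treat = 0.494, 0.568

# Pool the two groups to estimate the standard error under the null
# hypothesis that both arms share the same true add-on rate.
pooled = (p_control * n_control + p_treat * n_treat) / (n_control + n_treat)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_control + 1 / n_treat))
z = (p_treat - p_control) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

Under these assumed arm sizes, the unadjusted difference comes out right around the conventional 0.05 threshold, which is why the covariate-adjusted analysis below matters.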
For example, it’s possible that users who were new to GlobalGiving would be less familiar with our fee and therefore less likely to want to add-on to their donation to offset it. Similarly, donors contributing during a matching campaign might be especially inclined to make sure that the most money possible went to their favorite organization and, as a result, would add-on more often. So, in our analysis, we statistically controlled for these factors, along with the size and geographic origin of each donation, to get our purest estimate of the effect of the cognitive load.
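One standard way to control for factors like these is a logistic regression of the add-on outcome on the treatment indicator plus the covariates. The sketch below is entirely synthetic: the data, the single binary covariate (new vs. returning donor), and all coefficients are made up for illustration, and this is not our actual model or data. It simply shows the shape of the adjustment: after fitting, the treatment coefficient is the effect estimate "holding the covariate constant".

```python
import math
import random

# Synthetic illustration of regression adjustment -- not real data.
random.seed(42)

def simulate_donor():
    """One hypothetical donor: features (intercept, treated, is_new) and outcome."""
    treated = random.random() < 0.5   # random assignment, as in the experiment
    is_new = random.random() < 0.4    # hypothetical covariate
    # Assumed true log-odds: treatment +0.5, new donors -0.5, baseline 0.
    logit = (0.5 if treated else 0.0) - (0.5 if is_new else 0.0)
    p = 1 / (1 + math.exp(-logit))
    x = (1.0, 1.0 if treated else 0.0, 1.0 if is_new else 0.0)
    return x, int(random.random() < p)

data = [simulate_donor() for _ in range(2000)]

# Fit logistic regression by plain gradient ascent on the log-likelihood.
beta = [0.0, 0.0, 0.0]  # intercept, treatment effect, covariate effect
for _ in range(300):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        p = 1 / (1 + math.exp(-sum(b * xi for b, xi in zip(beta, x))))
        for j in range(3):
            grad[j] += (y - p) * x[j]
    beta = [b + 0.5 * g / len(data) for b, g in zip(beta, grad)]

print(f"adjusted treatment effect (log-odds): {beta[1]:.3f}")
```

With the covariate in the model, the treatment coefficient recovers something close to the assumed +0.5 log-odds effect even though new and returning donors add-on at different baseline rates; that separation of effects is the point of the adjustment described above.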
The final result was a 7.8 percentage point increase in add-on rates with a p-value of 0.046. In other words, if the lighter font truly had no effect, we would expect to see a difference at least this large only about 4.6% of the time by chance alone. If we extrapolate this increase to the whole site, we expect we’d see around $27,000 in additional funding for our project partners over the course of a year. That may not sound like much in the context of the $35M+ that will be donated through the site in 2015, but it’s not a bad return for our partners for just changing a font color!
These are exciting results that suggest a new way of thinking about how we present our fee, but there’s still plenty of work to be done. Longer runtimes and larger sample sizes would give us even more confidence in our results and let us explore other potentially important factors, like seasonal effects. Thinking about how to integrate these results into our new website also presents opportunities for follow-up experimentation as we continue to Listen, Act, Learn, and Repeat on behalf of our nonprofit partners around the world.
Special thanks to my classmates Vincent Chio and Wei Shi in the UC Berkeley Masters of Information and Data Science program for their help with this analysis and to Kevin Conroy for his support throughout the project.