Part 1: Testing Upworthy’s Technique
I spend a lot of time on the Internet, and I’m always looking for inspiration. In late 2013 I clicked a link to an article on Upworthy, a site always at the forefront of testing stickiness and psychological hooks. When the page loaded, I was presented with this statement:
After clicking “I Agree” (because who wouldn’t?), I was asked to sign up for the site’s mailing list.
This type of interface element is called a “modal dialog,” and I generally find them annoying, distracting, and intrusive. In this case, however, the dialog made me smile, and even got me to click just to see what would happen.
That experience stuck with me, and over the next few weeks I designed a way to take advantage of a similar psychological cue on GlobalGiving.
Our initial experiment (a minimum viable product)
We’re always looking for ways to build our newsletter list, and I saw an opportunity to let visitors to our nonprofit partners’ project pages subscribe to their quarterly email updates. I liked the fact that I was giving users a way to express their interest in and support for the project even if they weren’t ready to give monetarily.
Because of my dislike for modal dialogs (unless the message is critical), I looked for a way to catch the user’s attention without monopolizing it. Eventually I decided to have a small, unobtrusive box slide out from the page’s lower corner a few seconds after it loaded and make a similar appeal. Here’s my first attempt:
If the user chose to click “I Agree,” I offered them the opportunity to sign up for the project’s email updates and for GlobalGiving’s newsletter. I launched it on the site and saw a modest level of signups, so I knew I was onto something. But wanting to make the most of the opportunity, and doing my best to live our “Never Settle” core value, I decided to try to maximize the signup rate by testing a few of the assumptions I had made.
Variable 1 (Teaser): The point of this experiment was to catch people’s attention with friendly language and a statement nobody could possibly disagree with, then make the “big” ask of signing up for our mailing lists (using what’s known as the Foot-in-the-Door technique). Hopefully, by first agreeing with the modest statement, the user would feel more inclined to sign up. But would people feel misled by this? Was I undermining their trust by posing a seemingly harmless question, then asking them for their email address? Would people fail to understand that they were being offered the option to sign up for the mailing list, and ignore the coy initial question completely? I decided to test dropping the pretense and just offering the user the ability to sign up for the mailing lists directly.
Variable 2 (Language): Relatedly, how would the playful language affect the user’s understanding of the offer? Would they be more likely to sign up if the language were more straightforward? I decided to test the playful wording above against the direct but rather flat “Sign up for updates about this project?” followed by the options “OK” and “No.”
Variable 3 (Timing): Finally, the amount of time I had chosen to wait before showing the offer was more or less arbitrary. I didn’t want the offer to pop up immediately, in order to give the user some time to digest the information they came for in the first place, but could the delay be too short, causing the offer to surprise, jar, or overwhelm the user? Could it be so long that the user would already have left, or decided to take another action? I decided to test pauses of four and eight seconds before showing the offer.
I divided our audience into eight segments, and showed each segment a unique combination of those three variables. Over the course of about a month, we showed the offer nearly 80,000 times, and gained more than 1000 new subscribers.
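Three binary variables give 2 × 2 × 2 = 8 combinations, which is where the eight segments come from. As a sketch (the post doesn’t show GlobalGiving’s actual implementation), one common way to assign visitors is to hash a stable visitor ID into one of the eight cells, so the same visitor always sees the same variant:

```python
import hashlib

# The three binary variables from the experiment: show the teaser
# statement first, use the playful wording, and wait 8 vs. 4 seconds.
# (Names here are illustrative, not from the original code.)
VARIANTS = [
    {"teaser": teaser, "playful": playful, "delay_seconds": 8 if long_delay else 4}
    for teaser in (True, False)
    for playful in (True, False)
    for long_delay in (True, False)
]

def assign_variant(visitor_id: str) -> dict:
    """Deterministically map a visitor to one of the 8 cells.

    Hashing (rather than a random choice per page view) keeps each
    visitor in the same cell for the life of the experiment.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]
```

Because SHA-256 output is effectively uniform, each cell receives roughly an eighth of the traffic.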
Analyzing the results, I learned that more people signed up for the mailing lists if they saw the “teaser” first; the Foot-in-the-Door technique worked! That was the only statistically significant result the experiment produced; waiting eight seconds slightly outperformed waiting for four, while the two different wordings were nearly a statistical toss-up.
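A significance check like the one behind that conclusion — is the signup rate in the teaser cells reliably higher than in the no-teaser cells? — is typically a two-proportion z-test. The counts below are illustrative placeholders; the post reports ~80,000 views and 1,000+ signups overall, not the per-cell breakdown:

```python
from math import erfc, sqrt

def two_proportion_z_test(success_a: int, total_a: int,
                          success_b: int, total_b: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) for the difference between two rates."""
    p_a = success_a / total_a
    p_b = success_b / total_b
    # Pooled rate under the null hypothesis that both cells share one rate.
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Illustrative counts only, not the experiment's actual data:
z, p = two_proportion_z_test(700, 40_000, 500, 40_000)
```

With a p-value below the usual 0.05 threshold, the difference would be called statistically significant; the timing and wording variables presumably failed this same check.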
My team and I decided to make the offer permanent with “This project is doing great work” showing up after eight seconds.
Part 2: Tweaking the results
A few months went by, and our communications manager came to me with two observations about the offer. First, she explained that many of our website visitors are learning about these projects and their work for the first time, so they would not be equipped to weigh in on whether or not a project is doing “great work.” This might make them less likely to respond to the prompt. Second, the statement parses oddly: the projects themselves aren’t doing any work at all; it’s the people who work at our nonprofit partner organizations who do the work.
She suggested an alternative wording that would remedy these issues: “This project is important.” We fired up another test.
There was no statistical difference in the signup rates between the cells with the two language variations after a month…or two months…or three months. After three months and 300,000 views, the two wordings were at a statistical dead heat, so we ended the experiment. We decided to stick with the “important” language, if for no other reason than better grammar.
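One way to interpret a null result like this is to ask how small a difference the test could plausibly have detected. A rough minimum-detectable-effect calculation, assuming the 300,000 views split evenly into two cells and an illustrative baseline signup rate of about 1.3% (the post doesn’t give the per-cell rates):

```python
from math import sqrt

def minimum_detectable_effect(baseline_rate: float, n_per_cell: int,
                              z_alpha: float = 1.96, z_power: float = 0.84) -> float:
    """Approximate absolute detectable difference between two equal-size
    cells at 5% significance and 80% power (standard z multipliers)."""
    se = sqrt(2 * baseline_rate * (1 - baseline_rate) / n_per_cell)
    return (z_alpha + z_power) * se

# Illustrative: 300,000 total views split into two cells of 150,000.
mde = minimum_detectable_effect(0.013, 150_000)
```

Under these assumptions the test could detect roughly a 0.1-percentage-point absolute difference; anything smaller would likely read as a dead heat, which is consistent with ending the experiment when it did.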
Even failed experiments (those without significant results) can result in learning, so I was prepared to accept these results and go forward with the knowledge that neither a project doing “great work” nor being “important” was more persuasive to our users in terms of convincing them to sign up for a mailing list. Perhaps there is other wording that would be more persuasive; perhaps there is a more effective UI treatment. Opportunities for further experimentation abound.
Part 3: The twist ending
And that’s where things would have ended, except there were other effects to consider. Newsletter sign-ups are not the primary goal for our partners’ project pages; ultimately, we want to help our partners receive donations. I had an inkling that our intervention might have some effect on users’ donation rates, so I compared the funds raised in each of the two cells, and the results speak for themselves:
Sure enough, users who are asked whether or not they agree that a project is “important” donate nearly 10% more money than those who are asked if they agree that it is “doing great work”! This was a statistically significant result.
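Unlike signup rates, funds raised per cell is a comparison of means, so a rate test doesn’t apply. A common sketch for this kind of comparison is Welch’s t-test, which tolerates the unequal variances typical of donation amounts (the actual analysis behind the result above isn’t described in the post, and with heavy-tailed donation data a real analysis might prefer a non-parametric test):

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(sample_a: list[float], sample_b: list[float]) -> float:
    """Welch's t statistic for two samples with possibly unequal variances.

    With cell sizes in the tens of thousands, the statistic is close to
    normal, so |t| > 1.96 roughly corresponds to p < 0.05 (two-sided).
    """
    n_a, n_b = len(sample_a), len(sample_b)
    var_a = stdev(sample_a) ** 2
    var_b = stdev(sample_b) ** 2
    return (mean(sample_a) - mean(sample_b)) / sqrt(var_a / n_a + var_b / n_b)
```

Each sample here would be the list of per-view donation amounts (mostly zeros) for one wording cell.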
I walked away from this experience happy that I had found a way to increase donations, and humbly surprised that the biggest gain came from a source I hadn’t even considered. This experiment, well over a year in the making, serves as a reminder that by continually testing we can continually improve, and that we should always remain open to positive effects from unexpected places. Plus, it’s always a good idea to use correct grammar.