How fast does gasoline go?

(Reading time: 3m, 5s)

I sent you a letter yesterday that may have made it seem like measuring the impact of design research is simple or obvious, which isn’t always the case. Sometimes it’s satisfying to send a short email that makes things seem simpler than they are. But measuring the impact of research is important and deserves more consideration.

Sample size, the number of variables, and the context of the experiment can all confound our ability to measure the impact of a design intervention.

Let’s look at some examples of how you might use research to inform design interventions, starting with cases where impact is easier to measure with confidence and moving toward situations where it’s harder to measure or assign value to any particular intervention.

Easier: We’ve got some evidence that suggests people are having trouble finding the information they need on your website. We run a series of tests to see whether users can complete tasks using the current website’s navigation — checking where visitors get lost, where they double back, and how long it takes them to complete each task. We make changes based on those tests and then test again to verify that the new navigation reduces the time it takes users to complete tasks. In this case, we’re gathering information from multiple sources to assess the problem (triangulation), and we’re changing just one system (variable). Finally, because pretty much all users interact with the navigation, it’s easier to gather enough data to measure with confidence.
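
If it helps to see what “measuring with confidence” can look like in practice, here’s a minimal sketch in Python. The task times below are made up, and I’m assuming SciPy is available; the point is only the shape of the before/after comparison.

```python
# Hypothetical before/after task times, in seconds, for one task.
# A real study would use larger samples and check the data first
# (outliers, skew), but this is the shape of the comparison.
from scipy import stats

old_nav = [48, 62, 55, 71, 66, 59, 74, 52, 68, 63]  # current navigation
new_nav = [41, 45, 38, 52, 47, 44, 49, 40, 43, 46]  # revised navigation

# Welch's t-test: is the drop in average task time more than noise?
t_stat, p_value = stats.ttest_ind(old_nav, new_nav, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the improvement is unlikely to be chance,
# and because nearly every visitor touches the navigation, gathering
# samples this size (and much larger) is realistic.
```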

Harder: User testing suggests that people who visit the membership page on your site may be having trouble comparing benefits. We think emphasizing the most popular membership levels will help people decide and buy online. We make the change, and testing shows that people seem to understand benefits more easily, but because there aren’t many people purchasing online to begin with, it’s harder to say with confidence that the intervention had the intended effect.
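
Here’s the same kind of sketch for this harder case, again with made-up numbers and SciPy assumed: even if the observed purchase rate doubles after the change, a test on this few buyers can’t rule out chance.

```python
# Hypothetical membership purchases before and after emphasizing
# the most popular levels. The rate appears to double, but the
# absolute number of buyers is tiny.
from scipy import stats

before = [4, 496]  # 4 of 500 visitors purchased (0.8%)
after = [8, 492]   # 8 of 500 visitors purchased (1.6%)

# Fisher's exact test is appropriate for small counts like these.
odds_ratio, p_value = stats.fisher_exact([before, after])
print(f"p = {p_value:.2f}")
# The p-value comes out well above 0.05, so we can't confidently
# attribute the bump to the intervention rather than to chance.
```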

Much harder: Making lots of changes at once across different contexts. The California Symphony’s transformation in recent years, which I wrote about a few weeks ago, is a good example of this.

The symphony’s executive director, Aubrey Bergauer, restructured the organization around the audience’s journey and made many changes over the course of just a few years. Lots of good things happened: subscription revenue increased 71%, for example, and donations increased by 41%.

When I wrote about the symphony earlier this month, I focused on how the symphony merged marketing and development, changing when and how employees approached various audience segments. That change alone is a big one with several moving parts. If you’re trying to measure impact on organizational goals — to assign value to any one intervention — things are already getting difficult.

Now, mix in the changes Bergauer made elsewhere based on the surveys she ran. For example, she threw out longstanding rules and norms for attendees by allowing people to bring drinks to their seats and use their phones during the concert. The symphony’s website says, “This isn’t your grandma’s orchestra”:

[Image: California Symphony’s new rules]

Consider this article, where Bergauer describes how they stopped soliciting donations from first-year subscribers and “renewal rates skyrocketed because of it.” But she also says the symphony began greeting new subscribers with a gift left on their seat: a CD recording of the orchestra that isn’t commercially available, waiting for them when they arrive.

With so many changes happening at once, how can we know which one is affecting renewal rates?

You might be thinking, “Who cares! It’s working, isn’t it?”

I agree, but it’s harder to replicate success if we throw up our hands and uncork the champagne without trying to understand which interventions had the greatest impact.

Maybe it’s more important to adopt underlying principles than any one tactic.

In the case of the California Symphony, I think one of those principles is that you have to act on what you learn from studying your audience. It’s impossible to measure the impact of research if you don’t follow up by implementing changes.

I keep thinking about that question I wrote about yesterday — “How would we even measure the impact of interviews?”

That’s like asking “How fast does gasoline go?”

Gasoline doesn’t go at all unless you put it in your car, start it up, and drive. It’s the same for design research. The first principle has to be that we’ll test design research insights IRL — otherwise, the organization runs on fumes.

Thanks for reading,

Kyle