How do you measure the impact of design research?
(Reading time: 1m, 32s)
“How would you even measure the results of interviewing constituents?”
Sometimes I’ll get a question like that from an executive director.
Before I answer, I touch my forehead to make sure I don’t have a horn growing out of my head or a third eye that’s causing her to speak to me as if I’m from another planet.
I explain that design research can generate insights to inform an organization’s communications and products, and it can also be used to measure the impact of those efforts. For example, if user interviews lead you to try a new value proposition in your website and newsletter campaigns, analytics might help you measure how those changes impact behavior. In other words, you’d measure the results the same way you’d measure the results of any other decision — did this have a positive or negative influence on a particular organizational goal?
It’s good that they’re thinking about measurement and impact, but there’s something lurking behind the question that’s unsettling. I’ve been struggling to put my finger on it.
Let’s try a thought experiment.
Imagine a world where most museums are interviewing a handful of constituents every few months. Imagine this habit of interviewing different audience segments were just part of the culture.
A hornless, two-eyed creature comes along and asks museum leaders why they conduct interviews. They shrug and say, “We started interviewing before I became the director,” or “The board wanted to interview patrons.”
But the interviews are rarely used to inform decisions — the transcripts wind up in desk drawers, collecting dust.
The hornless stranger asks, “Have you ever thought about surveying your audience?”
Silence. Then: “How would you even measure the impact of surveys?”
Do you see what I mean? It’s an odd question.
There’s an assumption that surveying is the end goal. As if the research activity is the product or should automatically produce results.
You wouldn’t measure the impact of surveys. You’d measure the impact of the decisions and changes you make to the product based on the surveys.
I understand where the question comes from, to an extent. You want to invest in methods that are more likely to produce advantageous results than those that are less likely to do so.
But imagine a world where many museums aren’t using audience research to inform their decisions or are just doing the bare minimum. They aren’t studying their audiences in any systematic way. They’re planning programs and events based on what other museums are doing. The content they develop comes from staff — internal discussions of what they’d like to share and what people might like.
Walking around in that world, you might scratch your horn and wonder, “How are they measuring the impact of not studying their audiences?”
Thanks for reading,