Developing services for museums with museums
(Reading time: 3 minutes, 47 seconds)
When I talk with people who are in some stage of redesigning their organization’s website, I ask:
“What are you basing your design decisions on?”
Then they might share a story about a complaint from a board member or the CEO. There's something about the site's design or functionality that this individual doesn't like. So whoever redesigns the site chases these personal preferences, hoping for better outcomes.
Alarm bells go off in my head when I hear that. There’s always a chance that the individual’s complaint speaks to a real problem in the design that’s negatively impacting a business goal — but probably not. And it’s unlikely they’re operating in an environment where user research will be a welcome check to that single stakeholder’s intuition.
Or they might say they’re doing some competitive research. “We’re looking at the websites of other organizations like ours to see how they’re handling things.”
That makes sense; it's good to run competitive analyses regularly, just as you would in the for-profit sector. Competitive analysis can surface new ways to handle old problems. But it's possible that the people you're looking to for answers are also looking to you for answers. We assume others have their act together, but if you traded places for a day, you'd likely find they're facing all the same uncertainties you are. The very solution you're about to adopt because they're using it may be the one they're hoping to replace when they evaluate similar organizations.
Finally, people who are redesigning their org’s website might say something along the lines of “We’re trying to look at the site from the perspective of a new visitor to improve user experience.”
That’s rare music to my ears. I take it as a signal we might be a good fit to work together. There may be a shared understanding that empathy is a tool for better outcomes.
The only problem with that approach is that the person trying to put themselves in the visitor's shoes is a professional with years of experience that isn't easily washed away. And even if you're new to an organization and bring some fresh perspective, your memory of using the website for the first time is fuzzy and fades a little more every day.
Treat every hunch as a hypothesis
All of these approaches can be valid starting points. Even the CEO’s implacable desire to have the entire website consist of auto-rotating carousels can be treated as a hypothesis. (Here’s why that’s a bad idea, by the way.)
Cultural institutions seem more likely than most organizations to adopt that sort of objective approach. Maybe it comes from so many years of pressure to prove to outsiders that they're making an impact. The same forces that have led them to rely so heavily on quantitative metrics have also made them open to evidence-based decision making. They seem to have consideration for the audience baked into their DNA, too. Plenty of companies would give little thought to the first-time visitor's experience.
The problem is, at least when it comes to digital content and design systems, museums don’t often seem to go beyond that admirable attempt to imagine what it’s like for other people. They don’t seem to test the hunches and hypotheses that rise to the surface. (I know I’m generalizing from what I’ve seen. Please let me know if you’re an exception.)
From heuristics to testing
That’s why I’ve been developing my upcoming Live Website Evaluation service to include some basic user research methods.
The goal of a live eval is to help museums identify how they can improve their website to create a better user experience and increase conversions using the resources they have today.
I had initially planned for the live evaluation to consist of a 30-minute video call. I’d gather some initial information from the client and then use that to help guide the heuristic evaluation.
But after listening to museum decision-makers, I decided it would be helpful to add some user testing. Testing with actual first-time visitors adds another layer of meaning to the evaluation. Testing invariably uncovers unforeseen issues and can go a long way toward validating or disproving internal assumptions.
Now I’m considering opening it up to other methods as well, depending on the initial information I gather.
For example, if a museum believes that visitors are struggling to find the information they need for a specific task, it may be better to conduct a tree test with 50 people than to run user testing. Combined with analytics data, the results would put them in a much better position to decide whether to invest resources in restructuring the site's information architecture than they were before the evaluation.
Introducing a bit more data diversity would allow a museum to get specific answers to more kinds of questions in a short amount of time.
So, I'm still trying to determine the scope of the service. It's fun work: co-creating a fixed-scope engagement with the same people I hope to serve in the future.
I’m still learning based on input from a handful of museums, which means there may still be more to be gained by testing with more museums. If you’re interested in participating — free of charge — just hit reply and let me know.
Thanks for reading,