Looking for user research answers in a user experience study

(Reading time: 4 minutes, 49 seconds)

This morning, I took out the garbage, cleaned up after the raccoons who made a midnight snack of last night’s dinner, attached my headphones to my skull, put Julie Byrne on repeat shuffle, and started writing this email to you.

Looking through my ever-growing list of writing topics, I noticed a link to a study by Craig MacDonald called Assessing the user experience (UX) of online museum collections: Perspectives from design and museum professionals along with some notes (emphasis added):

Studies show that online museum collections are among the least popular features of a museum website, which many museums attribute to a lack of interest. While it’s certainly possible that a large segment of the population is simply uninterested in viewing museum objects through a computer screen, it is also possible that a large number of people want to find and view museum objects digitally but have been discouraged from doing so due to the poor user experience (UX) of existing online-collection interfaces.

Ok — let’s freeze time right there.


When I read those opening sentences, I thought — well, in hindsight, I realize that a cluster of assumptions swirled in the back of my head. From here, I expected to read about a study that would describe:

  1. the results of talking to people about whether they are interested in online museum collections.
  2. researchers’ observations of how people use museum websites that have (and perhaps don’t have) different sorts of online collections — for example: internal search patterns, workflows, intent and behavior, and comparisons of these attributes between different kinds of visitors.
  3. the results of usability testing on museums’ online collections.

You can’t flat-out ask people if they like online collections or not because you may not get a reliable answer. But you can get to an understanding of whether people want to use online museum collections to achieve certain outcomes through interviewing.

(Related: You know that tired quote by Henry Ford — “If I had asked people what they wanted, they would have said faster horses”? The next time you hear someone use it, try to figure out if they’re arguing for user research or if they’re trying to avoid research. It can go either way.)

So let me summarize that first bit the way I heard it in my head when I read it:

“Museum folks have some evidence that people are not interested in online museum collections — but maybe people aren’t interested because online museum collections are a pain to use.”

My first question is: Who are ‘people’?

Is an online museum collection for everyone? Do online collections need to provide value to everyone to justify the resources required to support them? (I don’t know, and I’ll bet the answer is “it depends”, but I’d start there, and I do think that knowing how to approach a problem in a beneficial way is sometimes more important than knowing the answer.)

Then I might wonder what jobs people hire an online collection for.

To find out, you could start by talking to different sorts of people who visit museum websites and use (or don’t use) online collections. You don’t have to ask them whether they like online collections — for active users, find out why they visited the collection on a specific day, what they did with the information, where they went next, what other places they visited online to get that “job” done, and so forth. A website intercept with a tool like Hotjar could help you recruit people from just that portion of the website.

I’d combine interview insights with analytics data to find out how much usability and interest contribute to usage among different kinds of visitors.
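If you keep those interview and analytics notes as data, the segment-level comparison is simple arithmetic. Here’s a minimal sketch in Python — the segment labels and session records are hypothetical stand-ins for whatever your analytics export and interview tagging actually produce:

```python
from collections import defaultdict

# Hypothetical data: one record per website session, tagged with a visitor
# segment (e.g. from interviews or an intercept survey) and whether the
# session reached the online collection.
sessions = [
    {"segment": "researcher", "visited_collection": True},
    {"segment": "researcher", "visited_collection": True},
    {"segment": "casual", "visited_collection": False},
    {"segment": "casual", "visited_collection": True},
    {"segment": "educator", "visited_collection": False},
]

def collection_usage_by_segment(sessions):
    """Share of sessions per segment that touched the online collection."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for s in sessions:
        totals[s["segment"]] += 1
        if s["visited_collection"]:
            hits[s["segment"]] += 1
    return {seg: hits[seg] / totals[seg] for seg in totals}

print(collection_usage_by_segment(sessions))
# → {'researcher': 1.0, 'casual': 0.5, 'educator': 0.0}
```

A table like that won’t tell you *why* a segment avoids the collection — that’s what the interviews are for — but it shows you where interest and usage diverge, which is exactly the gap the study’s opening paragraph speculates about.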

You could even layer in some surveys if that’s your jam. What I did not expect was for the study to go on to try to answer this question by establishing a UX assessment rubric for online museum collections.

Now let’s unfreeze time …


… and get back to the article (emphasis added):

This paper describes the creation and validation of a UX assessment rubric for online museum collections. Consisting of ten factors, the rubric was developed iteratively through in-depth examinations of several existing museum-collection interfaces. To validate the rubric and test its reliability and utility, an experiment was conducted in which two UX professionals and two museum professionals were asked to apply the rubric to three online museum collections and then provide their feedback on the rubric and its use as an assessment tool. This paper presents the results of this validation study, as well as museum-specific results derived from applying the rubric.

It seems like we skipped past the part where we study user behavior and try to talk with people to understand their needs, and we went straight to improving the UX by developing a heuristic framework for online collections.

The paper concludes with a discussion of how the rubric may be used to improve the UX of museum-collection interfaces and future research directions aimed at strengthening and refining the rubric for use by museum professionals.

But why would any museum invest resources in improving the UX of their online collection if they haven’t answered the question of whether UX is the problem?

To be fair, it is helpful to have a set of heuristics to use when evaluating the UX of online museum collections. I was expecting a user-research-based answer to an initial question — but that’s a different study.

And I was interested to find this comparison of how museum professionals and UX professionals perceive the relevance of the rubric dimensions:

Perceived relevance of rubric dimensions by participant type, Craig MacDonald

See how UX folks value uniqueness of virtual experience, integration of social features, and personalization far less than museum folks? Those were the three characteristics that I found myself most skeptical of while reading the study.

Web content ideally provides unique value, but if you try to create a “unique experience” you often wind up abandoning the conventions that people rely on to complete the task at hand. Integration of social features means — adding social share icons and comments? No one uses the former and you’ll wish no one used the latter. And, in my view, personalization would be far down the list of priorities, at least until there’s some proven desire or need for personalization by the people who use online collections.

See, every time I begin to dig into one of these dimensions, I wind up asking:

Does that matter to the people who use online collections? In what context or scenario does it matter?

The study didn’t set out to answer that question, so it shouldn’t be judged in those terms, but I think it’s an interesting question all the same.

Do you use online museum collections? Does your museum have an online collection? How do you evaluate that content? Let me know in a reply.

Thanks for reading,


Kyle Bowen