Blackbaud whistles past the graveyard

(Reading time: 3 minutes, 46 seconds)

Yesterday, I mentioned noticing that few museums seem to use patron experience as a criterion when assessing software investments. Let’s dig into that a little more today.

Museums often look for all-in-one software solutions. That’s understandable. The closer you can get to a single solution for ticketing, donation management, membership, and all the related CRM functions the better. Fewer moving parts for you to have to deal with, right? 

But the software isn’t only used by employees.

Patrons use it to buy tickets, memberships, gift memberships, make donations, and so forth. But patrons aren’t a part of the buying decision. Museums weigh many factors when deciding what software to buy, and sometimes usability is a consideration, but it seems very few museums test the public-facing elements of a software solution with the outside world before buying.

Does that situation sound familiar?

It may remind you of the traditional procurement process for so many IT services. A company sells a solution to an IT director, which is used by employees — not the IT director. The IT director evaluates the software based on price and ease of deployment, which is how she will interact with the product. The software is deployed to employees who rue the day they were saddled with The Solution.

When organizations purchase a CRM without testing with all users — including the public — they’re following the same model.

It’s important to let patrons have some representation in the process because, unlike employees who have less choice and have to use what’s given to them, patrons can always choose not to participate.

This isn’t unique to museums. When I think of the nonprofits I’ve worked with that have introduced new software systems with some public-facing component — applying for a job, making a donation, becoming a member, etc. — the only ones that have included usability testing in the process are those that I’ve persuaded to do so.

It’s easy to forget that a core goal of the switch to a new system is to get people to take some sort of action that benefits the organization.

If a software solution promises to be a significant improvement for your workflow but will likely reduce your conversion rate — the share of visitors who complete a transaction on your website — by 20%, is that a trade-off you’d be willing to make?

Maybe you would say that’s acceptable. If there are trade-offs to be made, maybe employee productivity or sanity is more important to you. That’s ok.

The thing is — few decision-makers seem to have the information they need to evaluate that potential trade-off. How could they know if they don’t test with real people?

In fairness, I know some museums do consider usability as a factor in their decision. But I don’t think many in that minority are evaluating usability in anything resembling objective terms. Staff look at the public-facing elements and judge whether they’re usable — to themselves. But that’s like a lawyer reading over an app’s terms and conditions to see if the average user would easily understand them.

It’s incredibly hard for us to evaluate our content and design systems. We’re too close to the subject to see all the quirks.

When evaluating a new software system, it’s easy to forget that employees aren’t the only users and that a core goal of the software is to facilitate conversions, because:

  • Employees are so fed up with whatever system they’ve been using to get their work done that they become over-focused on the post-conversion processes they’re trying to improve.

  • Employees aren’t aware that they can test with users, or don’t know how to begin doing so.

  • There may be a lack of awareness around the real impact of usability on visitor behavior. Again, we become familiar with the quirks and pains of the systems we use and assume others will forgive them as well. In reality, many people won’t.

I spoke with one museum marketing director recently who was aware of how low many people’s tolerance is for usability issues and inconveniences online (I’m paraphrasing):

“I know some people will pick up the phone and call us if they can’t get things done online, but I also wonder how many won’t bother.”

She got it. But she didn’t seem to be aware that it was possible to test with real people to find those issues that might be driving people away.

You may be thinking: “Look, we have to make compromises. We can’t afford to pursue some ideal user experience.”

I’m sure the idea of adding another element to an already complicated and exhausting process doesn’t excite you, but I’m not suggesting organizations should only consider the patron’s experience. It may not even end up being your top priority — you have to weigh internal needs, cost, and so forth.

But if you make patron experience a factor in the decision-making process, it’s more likely that the decision you make will yield better outcomes for the metrics you measure most, like donations and enrollment.

(If anyone from one of these software companies were to read this, I wonder if their blood would run cold at the thought of prospective customers running user tests on their product to compare it to others. Few seem to be optimizing for UX.)

What do you think about including conversion metrics as a factor in choosing software? Hit reply and let me know.

Thanks for reading,

Kyle

Kyle Bowen