Breaking the Myth of Big Click Energy

Why measuring clicks to evaluate usability isn’t meaningful and what you can measure instead.

Author: Audrey Bryson
Illustrator: Chip Martinez
May 2024
Design, UX

After more than 10 years in interaction design, I had hoped that requests to count clicks, or to reduce the number of clicks a user needs to complete a task, would become increasingly rare.

But this is one usability myth with unique staying power.

Stakeholders have always clamored for numeric rules. And why wouldn’t they? Numeric rules are easy to measure and understand, and they leave little room for nuance. People want to know answers to things like:

  • How many clicks is too many for this task?
  • What is the maximum number of items we should display at once?
  • How long will our users wait before abandoning our site?

For stakeholders, quick numbers are far more desirable than the timeless UX answer, “it depends.” There is no single answer to any of these questions, and even if there were, the answers would be useless without context.

Design doesn’t exist in a binary state in which you experience either total failure or complete success. It exists on a spectrum, given the sheer number of variables that contribute to its evaluation. Elements such as functionality, aesthetics, usability, accessibility, and user experience all sit on a continuum.

The choices designers make often fall within a range rather than being confined to two discrete options. Determining which designs are more successful than others depends on which problems have been prioritized and which solutions carry the fewest trade-offs.

One way we can consider trade-offs in usability is by examining the different types of mental processing users encounter during tasks. So let’s take a look at how to rethink that click myth and start gaining better insights.

Each mental process requires a different level of effort

Humans have a finite amount of mental resources. In fact, most usability work focuses on reducing the mental resources, or loads, required of users to complete tasks. We can categorize these loads in a few ways:

  • Cognitive: When the user has to recall an item from memory, perform a mental calculation, or otherwise really consider their next step.
  • Visual: When a user perceives something on a screen.
  • Motor: When the user performs a physical action like typing or clicking.

For humans, cognitive loads require the most resources, while motor loads require the least. Biologically, our bodies are inclined to conserve as many mental resources as possible. This is why we find time-consuming or tedious tasks frustrating.

Let’s say a user has to click 15 times to complete a task. That sounds like a bad experience. But if each click is logical and requires minimal thought, the user likely won’t perceive the task as difficult. On the other hand, a task that requires only 3 clicks but forces the user to think hard between each one could feel far more frustrating.

This is why usability professionals consider clicks to be one of the least important metrics. Instead of asking how many clicks or steps a path should take, we should investigate how easy it is for users to get where they’re going.

In design decisions, we often make trade-offs in order to reduce the amount of cognitive and visual processing that users perform within an interface. If adding a few clicks means the user doesn’t have to think as much, that’s a worthwhile trade-off.

Better ways to measure user experience

So how do we know we’ve reduced cognitive and visual loads, and what kinds of research activities can we use to capture these metrics? Below are a handful of research activities that can reveal whether your users are overwhelmed and help you target specific problems.

Usability tests

In-person or remote usability testing is set up to capture both quantitative and qualitative data. Qualitative insights come from observing user behavior and gathering quotes and anecdotes about how well a user performed a specific task.

This format is also perfect for asking additional questions about a product or feature since the user is fully in context and won’t have to recall details or hypothesize about what they would do. Use this method to discover where the issues are in a product.

Some quantitative insights gathered from usability testing include task success and time on task. These metrics are great for creating benchmarks to test against at a later date.
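
To make those benchmarks concrete, here’s a minimal Python sketch of how task success rate and time on task might be summarized after a round of testing. The session data and layout are hypothetical, not the output of any particular research tool.

```python
from statistics import mean, median

# Hypothetical results from one task in a usability test:
# each tuple is (task_completed, seconds_on_task).
sessions = [
    (True, 48.2), (True, 61.0), (False, 95.4),
    (True, 40.7), (True, 52.9), (False, 120.3),
]

# Task success rate: share of participants who completed the task.
success_rate = sum(done for done, _ in sessions) / len(sessions)

# Time on task is usually reported for successful attempts only.
successful_times = [secs for done, secs in sessions if done]

print(f"Task success rate: {success_rate:.0%}")
print(f"Median time on task: {median(successful_times):.1f}s")
print(f"Mean time on task: {mean(successful_times):.1f}s")
```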

The usability test is the gold standard of user research, but it often takes more resources to prepare, run, and analyze than other available methods.

System usability scale

The system usability scale was one of the first systematic ways of assessing user satisfaction. It’s a set of 10 questions, each with five response options, that produces a score between zero and 100. Since it’s been used for over 30 years, you can pull from a vast data set to compare against your product.

For instance, a score of 68 puts your product in roughly the 50th percentile of usability. The scale is considered a quick and inexpensive way to gauge the perceived usability of a product. It’s a great tool for comparing versions of an application and gathering benchmark metrics, but it won’t uncover specific usability issues the way a usability test will.
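
As a reference point, SUS scoring follows a fixed formula: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the total is multiplied by 2.5 to land on the 0–100 scale. Here’s a small Python sketch with a made-up set of responses:

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses, each between 1 and 5")

    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical respondent, fairly positive overall.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```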

Heatmaps and eye tracking

While usability testing and the system usability scale focus on overall usability, heatmaps and eye tracking are ways to evaluate the visual load. Heatmaps can show where users click, what they’re looking at, how far they scroll, and even how long they spend on a particular page.

As a result, they can show where users focus on the wrong elements or spend too much time focusing on something irrelevant. This data can tell you that the visual language of your product isn’t communicating well or that users spend too much time thinking about their next move.
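
To illustrate how that click data gets aggregated, here’s a rough Python sketch that bins recorded click coordinates into a coarse grid, which is the core idea behind a click heatmap. The coordinates and cell size are made up, and real analytics tools handle this aggregation for you.

```python
from collections import Counter

# Hypothetical (x, y) click coordinates captured on a 1200x800 page.
clicks = [(102, 64), (110, 70), (620, 410), (635, 402), (648, 415), (1150, 760)]

CELL = 100  # bin clicks into 100x100-pixel cells

# Count clicks per grid cell; dense cells are the "hot" areas of a heatmap.
heatmap = Counter((x // CELL, y // CELL) for x, y in clicks)

for (cx, cy), count in heatmap.most_common(3):
    left, top = cx * CELL, cy * CELL
    print(f"Cell ({left},{top})-({left + CELL},{top + CELL}): {count} clicks")
```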

Similarly, eye-tracking heatmaps can be a great way to quickly collect information that tells you what the user is doing. Bonus: A complementary usability study can tell you why users behave the way they do.

Ditch the big click energy

While requiring fewer clicks or interactions from your users is not a bad goal, it should be one of the last criteria you consider. Reducing cognitive and visual loads should always take priority when evaluating a product’s usability.

One golden metric to tell you everything you need to know about usability might sound too good to be true — because it is. A combination of methods and metrics often provides a more comprehensive understanding of usability issues and opportunities for improvement.

When we seek out insights from usability tests, the system usability scale, and heatmaps, we go beyond a single number to make a user’s experience that much easier.
