User research is at the core of the user-centered design process. Without it, you’re flying completely blind in your product development, or as Bruce Tognazzini once put it: you’re “throwing buckets of money down the drain.” But knowing which research method to use isn’t always easy. There are so many different ways to gather feedback from users to uncover unmet needs and opportunities for product improvements. In this article I’d like to explore some of those methods with the hope of making the decision-making process a little bit easier.

Let’s start by clarifying the difference between needs and features.

We often make the mistake of equating product features with user needs.

If you’ve ever used a household appliance you’ll know that this isn’t the case. Have you ever used more than one or two of the preset cycles on your washing machine? And how many different ways do you need to toast your bread? The evolution of household appliances is a perfect example of what happens when features are equated with value. We don’t need more ways to wash our clothes. We might need faster or quieter ways, sure. But as we know, more isn’t necessarily better. And that’s when users sometimes take matters into their own hands.

[Image: Source: Reddit, “My buddy dad-proofing his remotes”]

When the first reviews and usage statistics for Facebook Home started appearing, John Gruber used a phrase that stuck with me: “It’s a well-designed implementation of an idea no one wants.” Hyperbole aside, this is what happens when features (cover feed, friends filling the screen, chat heads, app launcher…) are mistaken for user needs (why would people want to replace their phone’s operating system with an app?).

The distinction between features and needs is important but can be difficult to spot. That’s where user research comes in.

Research methods for gathering user needs are powerful because they rely more on observation and deduction than gathering answers to a bunch of predetermined questions. But before we get into the different methods we can use to make better products, we need to take a little detour to define some basic research terms.

Qualitative Research and Quantitative Research

First, we need to distinguish between quantitative research and qualitative research. With quantitative approaches, data tends to be collected indirectly from respondents, through methods like surveys and web analytics. Quantitative research allows you to understand what is happening, or how much of it is happening. With qualitative approaches, data is collected directly from participants in the form of interviews or usability tests. Qualitative research helps you understand how or why certain behaviors occur.

Market Research and User Research

We also need to make a distinction between market research and user research. Both are important, but they serve different purposes. Market research seeks to understand the needs of a market in general; it is concerned with things like brand equity and market positioning. Attitudinal surveys and focus groups are the bread-and-butter tools for market researchers, who are tasked with figuring out how to position a product in the market. These methods are very useful for understanding market trends and needs, but they won’t help you very much when it comes to the design of your product.

User research, on the other hand, focuses on users’ interactions with a product. It is concerned with how people interact with technology, and what we can learn from their wants, needs, and frustrations. Those are the methods we’ll focus on in this section.

There are many ways to classify different research methodologies, but my preference is for a classification that lines up the different methods with the outcomes required by different phases of the product development process. In that approach, there are three classes:

  • Exploratory Research
  • Design Research
  • Assessment Research

User Research Methods: Which to Use When

1. Exploratory Research

Exploratory research is most useful when the goal is to discover the most important (and often unmet) needs that users have with the products and services around them. The goal here is to find out where there are gaps in the way existing products solve users’ problems. New product or feature ideas often develop out of these sessions.

Here are some of the methods that fall into this class:

  • Ethnography. This is a technique long used in anthropology that has only recently found its way into the toolkit for research on interactive products. It involves going to users’ homes or offices and observing how they use your products in their natural environment. It allows the researcher to see users in action where they feel most comfortable. Ethnography is all about observing and listening. It is generally not task-based like usability studies, but follows a loose interview script with the goal of uncovering needs and insights that users are unable to articulate. I have an extreme positive bias towards ethnographic research, especially in contrast to focus groups (of which I’m not a fan at all, but that’s a subject for a different post). I have seen first-hand how ethnography sparks innovation: it shows how users invent their own workarounds for the limitations of software, which in turn reveals opportunities for product improvements. When it comes to exploratory methods to help with product strategy and roadmaps, there simply is nothing better than a good ethnographic study.
  • Participatory design. Another favorite, this technique brings users together to solve design problems in a way that makes sense to them. The purpose is not to take users’ designs and implement them, but to find out which elements and activities are most important to them. My preference is to run these as dyad sessions, where two users work together on a design. This forces both participants to be active, and you learn as much from their conversation with each other as from the designs they put together. The usual process is to provide users with a blank page or basic framework and cut-outs of various elements that could go on a page, then watch and listen as they make trade-offs and design decisions about what should go on the page based on how they would use the product. This technique is especially helpful for guiding interaction design because it gives a glimpse into users’ thought process as they work through the site.
  • Concept testing. This is a good way to gather feedback on an approach before wireframes or prototypes are created. Storyboards or comics are great artifacts to use for this kind of testing, since they take the interaction and visual design out of the process and gather feedback from users on the process you intend to design. Although mostly done in person with 6–8 users, there are also tools for large-scale concept testing, such as Invoke. Below is an example of a storyboard one of our researchers at eBay used during early concept testing for one of our products:

[Image: storyboard used during early concept testing at eBay]

2. Design research

Design research helps to develop and refine product ideas that come out of the user needs analysis. Some of the methods include:

  • Usability testing. This is, of course, the most well-known user research method, and what most people default to for any kind of design feedback. Task-based usability testing in a lab is a fantastic tool, but it has become a little bit too much of a “when all you have is a hammer…” method. Usability testing should be used to uncover usability problems with a proposed interaction design. It should not be used for feedback on visuals, finding out which design users prefer, or uncovering new product ideas. There are other techniques that are much better suited to those tasks. Usability testing involves 1-on-1 sessions with users where the researcher observes them as they perform assigned tasks. This kind of direct observation is a great way to understand what users would actually do on the site (as opposed to what they say they would do), as well as to uncover the reasons why they do what they do.
  • RITE testing. Rapid Iterative Testing and Evaluation (RITE) is a very specific form of usability testing, but I wanted to call it out because it is my preferred way of testing. It involves a day or two of focused usability testing, followed by a design cycle where the feedback is immediately injected into the design before the next round of testing. Running several of these cycles quickly means your outcome isn’t a bloated PowerPoint deck with a bunch of recommendations; your outcome is a better design that incorporated user feedback in real time. As the debate continues on how UX can be more involved in Agile development, this technique should become increasingly important since it fits in perfectly with the Agile mindset.
  • Desirability studies. Invented by researchers at Microsoft (see this Word document where they outline the approach), this has become another favorite technique of mine when I need feedback on the visual aspects of a site (not so much the interaction), and specifically which visual approach users like more when there is more than one alternative. It involves a survey sent to a large number of users in which they are asked to rate one of the proposed design alternatives using a semantic differential scale. The survey is run as a between-subjects experiment, meaning each user sees only one of the proposed designs, so that they are not influenced by the other design alternatives. The analysis then clearly shows differences in the visual desirability of the proposed design alternatives.
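To make the between-subjects analysis concrete, here is a minimal sketch in Python. All of the ratings and the 7-point scale anchors are hypothetical; a real study would have far more respondents per design and would likely use a statistics package rather than a hand-rolled test.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical ratings on a 7-point semantic differential scale
# (e.g., "dull" = 1 ... "exciting" = 7). Between-subjects design:
# each participant rated only ONE of the two alternatives.
design_a = [6, 5, 7, 6, 5, 6, 7, 5, 6, 6]
design_b = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / sqrt(vx / len(x) + vy / len(y))

print(f"Design A mean: {mean(design_a):.1f}")   # 5.9
print(f"Design B mean: {mean(design_b):.1f}")   # 4.1
print(f"Welch's t: {welch_t(design_a, design_b):.2f}")
```

Because each participant rated only one design, an independent-samples test such as Welch's t is the appropriate comparison here; a paired test would only apply in a within-subjects design where everyone saw both alternatives.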

3. Assessment research

Assessment research helps us figure out if the changes we’ve made really improve the product, or if we’re just spinning our wheels. This class of research is often overlooked, but it’s a crucial part of the product development cycle. Comparing before/after metrics with statistical significance requires larger sample sizes, so these methods are mainly quantitative in nature. Methods include:

  • Product surveys. The humble survey remains an effective way to assess how design changes are affecting user perceptions of the site. Different from most market research surveys you receive (and delete) in your inbox, these surveys deal specifically with user perceptions of the interaction and design. A survey is not always effective as a standalone research study, since response rates below 5% introduce considerable bias, but if you run surveys over time and control the sample so that the bias remains constant, they can be a very good tool to measure the success of your design changes.
  • Online user experience assessments. Another favorite, these tools allow researchers to gather real-time click-through data as well as subjective user feedback. They use a proxy or a browser download to give users tasks on a site and ask them questions about the experience while their activity is being tracked. This often produces a mountain of data, which can be overwhelming and is not always used effectively if there isn’t enough time or resources available to analyze it.
  • Analytics. Web analytics need no introduction, but there are several tools specifically aimed at user experience, with my favorite being Clicktale. It provides data on how users interact with forms, including what error messages they receive, how much time they spend in each field, the last field they were on before closing the form, scrolling data, and the list goes on. It’s a great way to improve form conversion.
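To illustrate the before/after comparison that assessment research depends on, here is a minimal sketch of a two-proportion z-test in Python. The conversion counts are entirely hypothetical; the point is only to show how sample size feeds into a significance check before you declare a design change a success.

```python
from math import sqrt, erf

# Hypothetical form-conversion counts before and after a redesign.
before_conv, before_n = 412, 5000   # 8.24% conversion
after_conv,  after_n  = 471, 5000   # 9.42% conversion

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test for a before/after comparison."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

def two_sided_p(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

z = two_proportion_z(before_conv, before_n, after_conv, after_n)
print(f"z = {z:.2f}, p = {two_sided_p(z):.3f}")
```

With 5,000 sessions on each side, this hypothetical 8.2% → 9.4% lift comes out significant at the 5% level; with only a few hundred sessions the same lift would not, which is exactly why assessment methods lean on larger, quantitative samples.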

So that’s an overview of user research methods—there are many more, but I wanted to focus on the ones I’ve found especially useful in my own work.

The real power of user research starts to happen when you combine methods and triangulate results to come up with a product strategy that takes a variety of quantitative and qualitative insights into consideration. If you’re interested in learning more about this, Catriona Cornett’s Using Multiple Data Sources and Insights to Aid Design and Bill Sellman’s Why Do We Conduct Qualitative User Research? are good posts on the topic.

And with that, please go forth and research!