Conducting Product Usability Studies

Usability studies are a critical part of developing any new product or feature. As a product manager, overseeing usability studies is essential to ensure you are building something that meets customer needs and provides a good user experience.  

In this comprehensive guide, we will cover everything you need to know as a product manager about conducting product usability studies, including:

  • What is a usability study and why it’s important
  • Different types of usability studies
  • Creating a test plan: determining goals, what to measure, number of test users
  • Recruiting test participants 
  • Preparing and setting up for a usability test session  
  • Writing a test script and facilitator guide  
  • Observation: capturing feedback during the sessions  
  • Debriefs and analysis after the sessions
  • Using the findings to influence product decisions and development 


Let’s get started!

What is a Usability Study?  

A usability study involves observing real users interact with a product (whether it’s an existing product, prototype, or design concept) to identify usability issues and collect feedback. The goal is to uncover areas where users are struggling so that those pain points can be addressed before launching a product to market.

During a usability study session, participants are given real tasks to complete using the product while observers watch them and take notes. The tasks should represent actions target users would take when using the product being tested.  

In addition to having users complete specific tasks, usability testing also includes asking questions about users’ perceptions of and satisfaction with the product.

A few key things usability studies aim to uncover:

  • How easy or difficult the product’s features are to use 
  • Where users are confused or encountering obstacles
  • If the product’s navigational elements make sense to users
  • How satisfied users feel about various aspects and functions of the product  

Benefits of Conducting Usability Studies

Conducting usability studies provides many benefits throughout the product development process, including:

  • Catch issues early before significant development resources have been invested
  • Provide insights directly from target users to influence requirements and design  
  • Identify improvements that make products easier and more intuitive to use
  • Increase customer adoption and satisfaction 
  • Reduce support tickets and costs
  • Build a product that better matches users’ needs and mental models

The feedback and learnings uncovered from usability testing are extremely valuable to product managers and development teams in understanding target users’ pain points and steering the product roadmap.

It’s always cheaper, easier, and less resource-intensive to make changes earlier in the product development process. Usability testing helps you catch issues while it’s still early enough to adjust course.



Types of Usability Studies

There are several different types of usability studies that can be conducted depending on what phase of product development you are in and what specific insights you hope to uncover. The main types include:

Exploratory Usability Testing

Exploratory usability testing is typically done early on when building wireframes or an MVP product prototype. The goal is to quickly identify usability issues, get broad user feedback, and gain directional insights to improve on the initial concepts.

Assessment Usability Testing 

Assessment usability studies involve testing a more polished working software product in later stages of development or after an initial launch. Assessment testing aims to benchmark the product’s ease of use against other solutions, identify deeper usability issues, set goals for improvement targets, and continue optimizing the user experience.

Comparative Usability Testing

In a comparative usability test, participants use two or more competing products so you can compare designs and gather perception metrics. Comparative testing gives you insight into your product's strengths and weaknesses and areas for differentiation from competitors.

Moderated vs Unmoderated Testing 

In a moderated usability test, a trained facilitator guides participants through tasks and asks follow-up questions in real-time during the sessions (either conducted remotely or in person). 

Unmoderated testing involves users completing tasks on their own without any guidance or interaction – typically by providing remote access to a prototype. Unmoderated testing is faster and cheaper but doesn’t allow for follow-up questions.

Both approaches have pros and cons to consider when deciding on your approach.

Hallway Testing

Hallway usability testing is an informal test where random people such as coworkers are pulled aside and asked to complete quick tasks or provide feedback. Hallway tests can provide early insights but results may not represent real target users.

No matter which format you choose, the most important thing is gaining actionable feedback from users representative of your target customer audience.

Creating a Test Plan 

The first step in conducting effective usability studies is developing a test plan and getting very clear on your objectives, metrics, methods, and logistics. Important components to include in your usability test plan include:

Objectives 

What insights are you hoping to gain from the usability study sessions? What product features or areas will you have users test? Get very specific in determining your research objectives and list them out.  

Success Metrics

What data and metrics will show that the test met its goals? Common metrics include:

  • Task success rate – % of users that can complete critical tasks 
  • Task time – average time to complete key tasks
  • Net Promoter Score (NPS) – likelihood-to-recommend rating
  • Number/type of errors encountered
  • Subjective feedback scores on ease of use 

Set a benchmark target for each metric, based on past data or industry research, so you can determine whether the usability testing meets your success criteria.
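
To make these metrics concrete, here is a minimal Python sketch of how results from a few sessions might be rolled up into a task success rate, an average task time, and an NPS. The data structure and field names are purely illustrative, not a prescribed format.

    # Illustrative only: hypothetical results for one critical task across three participants.
    task_results = [
        {"participant": "P1", "completed": True,  "seconds": 95},
        {"participant": "P2", "completed": False, "seconds": 210},
        {"participant": "P3", "completed": True,  "seconds": 120},
    ]
    nps_answers = [9, 6, 8]  # each participant's 0-10 "how likely to recommend" answer

    # Task success rate: share of participants who completed the task.
    success_rate = 100 * sum(r["completed"] for r in task_results) / len(task_results)

    # Average completion time, counting successful attempts only.
    times = [r["seconds"] for r in task_results if r["completed"]]
    avg_time = sum(times) / len(times)

    # NPS: % promoters (scores 9-10) minus % detractors (scores 0-6).
    promoters = sum(s >= 9 for s in nps_answers)
    detractors = sum(s <= 6 for s in nps_answers)
    nps = 100 * (promoters - detractors) / len(nps_answers)

    print(f"Success: {success_rate:.0f}%  |  Avg time: {avg_time:.0f}s  |  NPS: {nps:.0f}")

With a real study the same calculations simply run over more participants and more tasks; the point is to decide up front which numbers you will compute and compare against your benchmarks.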

Participant Recruiting 

Determining your test participant criteria is crucial for useful results. Your participants should match as closely as possible to the real target users of the product. Outline:

  • Demographic criteria   
  • Technical proficiency requirements
  • Other qualifying traits and attributes 

Recruit at least 5 users per core user segment. Budget additional incentive money to recruit qualified participants that conform to the ideal user profile.

Test Environment

Will testing be conducted in person or remotely? Specify any equipment, software, and materials needed to administer the usability test sessions. 

Tasks & Scenarios

Outline the specific tasks and usage scenarios you’ll have testers complete during sessions to simulate real-world workflows. Prioritize testing tasks that are critical for engagement and retention. 

Session Script & Facilitator Guide  

Creating a detailed script and question guide for facilitators ensures consistency across sessions. The guide should cover:

  • Introduction script 
  • Task scenarios
  • Probing questions to gather feedback 
  • Closing questions on the overall experience

Analysis Plan

Highlight how data will be processed and analyzed to uncover insights and align to the original objectives. This includes reviewing:

  • Task success rates
  • Severe or recurring usability issues
  • Feedback themes and user suggestions
  • Competitive benchmarking metrics 

With a comprehensive test plan developed, you can confidently proceed to the participant recruitment and testing execution phase.

Recruiting Test Participants

Your participant recruitment strategy can make or break the success of your usability study. Without getting the right participants that match your core target audience, the feedback may not lead to meaningful product improvements.

Here are some best practices for recruiting high-quality test participants:

Leverage Screener Surveys 

Create a short screening survey to send to prospective testers to ensure they meet all demographic, behavioral, and technical requirements for participation. The screener will confirm the participant belongs squarely in your target customer segment.
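
As a simple illustration, screener responses can be filtered programmatically before you schedule anyone. The criteria below (role, weekly product usage, device) are hypothetical placeholders for whatever your own test plan defines.

    # Illustrative screener filter: criteria and field names are hypothetical.
    responses = [
        {"name": "Alex",  "role": "product manager", "uses_weekly": True,  "device": "desktop"},
        {"name": "Sam",   "role": "student",         "uses_weekly": False, "device": "mobile"},
        {"name": "Priya", "role": "product manager", "uses_weekly": True,  "device": "mobile"},
    ]

    def qualifies(resp):
        """Return True if a respondent matches the assumed target user profile."""
        return (
            resp["role"] == "product manager"            # demographic / role criterion
            and resp["uses_weekly"]                      # behavioral criterion
            and resp["device"] in {"desktop", "mobile"}  # technical criterion
        )

    qualified = [r["name"] for r in responses if qualifies(r)]
    print(qualified)  # ['Alex', 'Priya']

Even if you never automate this step, writing the criteria down this explicitly forces agreement on who actually counts as a target user.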

Provide Incentives

Offer incentives for participation. Best Buy gift cards, online vouchers, and sweepstakes entries can all help attract more willing participants. Incentives should reward participants for their time and generally range from $50-$150 per test session depending on length.

Search Customer Lists  

Check with your marketing or sales team for existing customer contact lists you can source from. Reach out to existing users that match target criteria to gauge their interest. This helps find engaged prospects.

Use Recruiting Platforms

Services like Respondent and Google Surveys make it easy to recruit diverse panels of test participants that match specific demographic filters. Costs range from $100-300 per recruited participant. 

Leverage Social Platforms 

Post requests explaining your ideal tester criteria on LinkedIn groups, Facebook groups, Reddit forums, and relevant online communities frequented by your target audience. Offer incentives!

Avoid Friends & Family 

As tempting as it may be, don’t use close contacts for testing as they likely won’t provide fully candid and unbiased feedback.  

Set a Recruiting Deadline 

Give yourself enough lead time – aim to complete recruitment 1-2 weeks before the planned testing dates to account for scheduling changes, screening disqualifications, etc. 

By leveraging the right mix of recruiting tactics and incentives tailored to your audience, finding participants who fit the ideal user profile is very feasible for most products.

Preparing for a Usability Test Session 

To ensure your usability test sessions run smoothly, proper planning and setup are key. Here are some best practices to follow:

Choose a Session Length

Sessions typically range from 30 minutes to 1 hour max. This provides enough time for a facilitator introduction, 5-8 main task scenarios, probing/follow-up questions, and a closing interview.

Schedule Breaks 

For remote sessions, build in 5-minute breaks every 25-30 minutes to reduce participant fatigue. For in-person tests, provide snacks/refreshments.

Set up Equipment

Confirm screen sharing, microphone, webcam, software tools, and interview materials work properly beforehand through equipment checks. Prepare paper prototypes if applicable.   

Secure a Dedicated Space

For in-person studies, secure a room or dedicated lab space with seating arrangements conducive to observation, plus screen-sharing and device-mounting equipment.

Practice Introductions  

Have facilitators practice reading through the scripted introduction and initial warm-up questions to ensure smooth delivery and consistent messaging around the goals of the test. First impressions matter!

Plan Analysis Tasks

Ensure you have dedicated time scheduled shortly after each session to compile notes, screen recordings, and feedback while details remain fresh. This will save you loads of time when consolidating findings.

With the logistics locked down and a clear session roadmap defined, facilitators will be equipped to effectively guide each usability testing participant through the critical scenarios and tasks.

Writing a Test Script & Facilitator Guide

To enable consistent and valuable participant feedback across testing sessions, a clearly defined script and facilitator guide is essential. This document serves as the roadmap that steers the entire session.  

Key sections to cover in your facilitator guide:

Introduction

  • Welcoming script and warm-up questions 
  • Explanation of test goals and what data will be collected

Priming Tasks 

  • Basic orientation tasks to allow participants to familiarize themselves with the product UI

Critical Tasks

  • Prioritized list of 5-8 key tasks that participants will be asked to complete representing critical user workflows 

Probing Questions

  • Follow-up questions after each task to uncover areas of confusion, specific pain points encountered, or suggestions

Preference Ratings 

  • Quantifiable ratings on perceived effectiveness, ease of use, and likelihood to recommend key features 

Closing Interview

  • Open-ended questions to elicit general impressions on the biggest areas for improvement and strongest likes/dislikes

The facilitator script keeps the session flowing smoothly through each section while methodically progressing participants toward critical feedback opportunities.

Well-crafted probing and closing questions based on the original test objectives are key to eliciting maximum insights. Leave room for facilitators to ask customized follow-up questions as well based on participant comments and behaviors observed.  

Observation: Capturing Feedback During Testing Sessions

The most important job of facilitators during a usability test session is closely observing how users interact with the product and taking thorough notes. This includes tracking:

  • Verbal reactions and quotes  
  • On-screen behaviors and movements  
  • Facial expressions conveying emotion/confusion
  • Hesitations before taking action
  • Deviation from expected workflows

Tools like UserTesting, Validately, and Reframer make capturing test session data easy with features like screen recordings, session transcripts, click maps, and automated video highlight reels showing user reactions.

In addition to digital tools, dedicate 1-2 notetakers to document detailed observational notes on paper templates designed to capture UX friction points and emotional sentiment. 

Prepare your analysis work ahead of sessions by creating templates, documents, and folders where data can be quickly imported and compiled. Discussing feedback while session details are still top of mind for observers leads to far more meaningful conclusions than letting notes languish.

Debriefs and Analysis After Testing

Shortly after completing your slate of usability test sessions, facilitators, notetakers, observers, and other stakeholders need to regroup and align on major themes while the details remain fresh. 

Key debriefing activities include:

  • Share highlights and surprises that stood out from each session
  • Review aggregated metrics and statistics like task success rates 
  • Discuss patterns of issues that repeated across multiple testers
  • Capture compelling participant quotes supporting major findings
  • Brainstorm potential solutions based on user suggestions
  • Prioritize usability issues using a severity rating model (a minimal sketch follows this list)
  • Align on conclusions and recommendations to provide the development team
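
One lightweight way to prioritize is to score each issue by how many testers hit it and how badly it blocked them. The sketch below multiplies frequency by an impact rating; the 1-4 impact scale and the weighting are assumptions, one possible severity model rather than a standard.

    # Illustrative severity scoring: the 1-4 impact scale (1=cosmetic ... 4=blocker) is an assumption.
    issues = [
        {"issue": "Save button hidden below the fold", "testers_affected": 4, "impact": 3},
        {"issue": "Typo on the settings page",         "testers_affected": 2, "impact": 1},
        {"issue": "Sign-up form rejects valid emails", "testers_affected": 3, "impact": 4},
    ]
    total_testers = 5

    for issue in issues:
        frequency = issue["testers_affected"] / total_testers  # 0.0 - 1.0
        issue["severity"] = round(frequency * issue["impact"], 2)

    # Highest-severity issues first: these become the top recommendations.
    for issue in sorted(issues, key=lambda i: i["severity"], reverse=True):
        print(f'{issue["severity"]:.2f}  {issue["issue"]}')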

By thoroughly analyzing and distilling mounds of feedback data into meaningful recommendations supported by customer quotes and metrics, your insights will carry much more weight with executives and inform crucial product decisions.

Using Findings to Improve Products

At the end of the day, the ROI of usability testing comes from implementing fixes and optimizations that improve customer experiences. So what do we do with all these great findings?

Share a Written Report

Document highlights, statistics, graphs, user quotes, suggested solutions, and priorities in a visually impactful presentation. This condenses feedback into consumable insights that provide backing for proposed development investments.  

Show Video Clips 

Nothing tells a more powerful story than video clips of actual users encountering poor experiences. Edit short snippets that showcase painful usability issues to rally stakeholder urgency around fixes.

Influence Requirements 

Add usability findings to product/feature requirement documents to align user testing insights directly with engineering backlogs. Set benchmarks for goals like reducing task time or increasing usage of key features.

Map to User Stories

For development teams practicing agile methodologies, map usability findings to the user stories they impact and use them to weight priorities, making the benefits highly tangible to engineering.

Track Against OKRs

Track progress toward usability and customer satisfaction goals via Objectives and Key Results (OKRs), a framework commonly used in technology organizations to connect user outcomes with broader company goals.

Usability insights mean little if not properly socialized, documented, tracked, and, most importantly, acted upon via product changes. Applied effectively, user feedback positions products to better resonate with target audiences.

Conducting Product Usability Studies – Conclusion & Next Steps

Usability studies provide invaluable customer insights that guide strategic product decisions and UX optimizations. By revealing precisely how end-users interact with products, usability testing identifies friction points that impact conversion, satisfaction, and retention and prevents wasting extensive resources developing unused features.

As product managers, conducting product usability studies should be a top responsibility on our plates. Leveraging the helpful frameworks around research planning, test design, recruiting, analysis, and reporting covered in this guide will set up your next customer study for maximum benefits and impact.  

I hope these usability study guidelines provide a helpful resource to conduct your own successful studies and uncover insights that influence the product roadmap. Be sure to check back soon as we continue sharing frameworks, templates, and best practices to equip aspiring product managers with skills to deliver customer-focused products users love.

