

Remote Testing: Zappos.com Via Validately
Methodology

Sessions were conducted remotely between November 2 and 11, 2016, and ran 10-25 minutes each. All sessions took place on the Validately platform and used three tasks with “think out loud” interview techniques and usability testing methods. My fourth respondent did not think out loud, so those results were not fully incorporated into the study and personas.

 

Four participants, located across the continental United States and representing a mix of genders, ages 28-36, completed the same three tasks and were asked at the conclusion of each task to rate how easily they were able to complete it.

Task Prompts

All participants were given the same tasks, questions, and follow-up questions, along with the same rating scale and space for open comments.

Participant 1 Task Feedback and Recommendations

"Add filters, for example, on Old Navy or Land’s  End website. Let me check out without signing  up."

Participant 2 Task Feedback and Recommendations

“The choice of logging in with Amazon was helpful or another payment or log in method. I appreciated the sort by reviews option.”

Participant 3 Task Feedback and Recommendations

“Sort by size filter is necessary. Add a refresh button because when using filters they aren’t dynamically refreshing my choices. Somehow spotlight reviews, key to purchasing decisions. Check out as guest option requested, again.”

Analytics Breakdown
Follow-Up Question Responses
Summary

In summary, the most insightful gain from my initial foray into remote testing was the discrepancy, and the added nuance, between a participant's self-reporting (what they type in response) and their actual recorded verbalization of the tasks (how they narrate their activity and decision-making process). That is my greatest takeaway as a practitioner and facilitator.

 

I experienced a pronounced difference between Nate Bolt's convincing arguments in favor of remote testing, in theory, and the reality of actually conducting it. Though remote testing has its merits, good old-fashioned face-to-face testing can't be beat. I had trouble recruiting respondents and getting participation that was valuable and valid.

 

I can't imagine a real-world situation in which this approach would be superior, unless the scale and recruitment were much more sizable and effective. The missing magic ingredient? The 'think out loud' prompt: had this been live, moderated remote testing, the outcome and results might have been much different and more acceptable. Without professional or academic experience of my own, I can only speculate.
