

Eye Tracking Study Methodology

The sessions drew from a range of the student population testing the usability of our university website; in this report we focus on the eye tracking data recorded to inform the redesign of our library site access. A usability test is intended to determine the extent to which an interface or app facilitates a user’s ability to complete routine tasks.

 

Sessions were recorded in a lab equipped with Tobii Pro 3.4.5 eye tracking software. To make informed design changes for the new website rollout, we must first discover the pain points of the current site. Findings from these sessions will guide and inform that redesign.

Executive Summary 

The participants were tasked with finding how to suggest a book for the university library to purchase. Overall, none of the participants succeeded except User 1, and even she struggled at the beginning. Interestingly, the first noticeable outcome was the F-shaped scanning pattern, contained primarily in the boxed-in menu “help us” area. The users who landed on this page for the book-seeking task stayed within this bounded area and only glanced at other sections. Simply put, this part of the website is the only area offering any visual guidance to the viewer.

Task and Participant Information
User 2 Heat & Gaze Map

This participant didn’t complete the task and abandoned the site entirely in favor of calling the library directly. This is the worst-case scenario for usability: a user so frustrated that they abandon the task and forgo it altogether. User 2’s gaze was all over the map as he frantically scrambled to locate the information. Initially he stayed around the menu areas that most users gravitated toward; later he hopped around in a scattered F-shaped pattern and navigated into the drop-down menus.

 

There are 9 navigation items across the top menu, 3 tabs to pick from in the menu area, each with 5 sub-navigation options, plus a 3-column layout of 4-item-deep lists to choose from. And this is just the most concentrated area of the web page. This user truly fell through the cracks of the poorly designed information layout and architecture.
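
As a rough illustration of that choice overload, here is a minimal sketch in Python, using the counts above (my reading of the layout, so treat them as assumptions), tallying how many distinct options confront a user in this one region alone:

# Rough tally of the choices packed into the most concentrated
# region of the page, using the counts observed above.
top_nav_items = 9            # items across the top menu
menu_tabs = 3                # tabs in the menu area
subnavs_per_tab = 5          # sub-navigation options under each tab
list_columns = 3             # columns of lists in the menu area
items_per_list = 4           # entries in each list

total_choices = (
    top_nav_items
    + menu_tabs * subnavs_per_tab
    + list_columns * items_per_list
)
print(total_choices)  # 9 + 15 + 12 = 36 competing options

Thirty-six competing options in one region of one page goes a long way toward explaining the scattered gaze plots.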

User 4 Heat & Gaze Map

User 4 began on a different landing page than the other participants, then navigated to the University Library page. She followed suit with the other users, and her gaze went to the menu tab navigation area. She then took a different route than the other participants: she went to the “About Us” and “Ask Us” areas and spent a lot of time scrolling and guessing in the menu bar on the left. She went to the “Circulation” and “Find Books” sections, which made sense but turned out to be another false information lead.

 

Eventually, after searching thoroughly, she proposed a hypothetical alternate solution for this task. She suggested that when you actively search for a specific book and don’t find it, a “request the book” button would appear for you to click. She didn’t complete the task and would have resorted to calling or emailing instead. Interestingly, this participant spent more time on camera with her gaze directed at the moderator.

User 1 Heat & Gaze Map

This participant was our clever and lucky winner. She initially followed the wrong information scent with the false lead of the “Support” navigation. She quickly backed out and decided, “I don’t know, I’d ask my librarian first.” She then found the “How are we doing & Suggestion Box” and successfully completed the task with “Suggest a Resource for Purchase.” Even though she was the only success story, she still rated the task as extremely difficult.

 

She was very focused on the two key areas (Figures 1 and 2) mentioned in the summary. She didn’t hop around much with her eye movements; she homed right in. Still, the task was more of a puzzle or brain twister she thought through, not an ideal user experience. Her success was doubly impressive because the moderator was distracting her with irrelevant questions during both attempts. She managed to navigate not only a poorly designed task and patchy moderation, but also badly organized information architecture.

User 3 Heat & Gaze Map 

This was the participant I empathized with the most. He seemed visibly uncomfortable, and his rapid hopping around the page, across links to other sub-sites and selections within the drop-down menus, left an awkward impression best embodied by his mumbled “I’m not sure.” He started in the same menu tabs area as the rest of the participants, but he clearly fell off the cliff several times, following totally unrelated information scents. Comparing his gaze plot with the standard heat map shows he was “all over the map,” as the saying goes.

 

In this particular instance the gaze activity was rapid-fire and very unfocused. Comparing this session with User 1’s laser focus offers the clearest demonstration of how eye tracking works as a methodology and practice.
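
That “scattered versus focused” contrast can also be put into numbers. Below is a minimal sketch in Python with NumPy (my choice of tooling; the fixation coordinates are hypothetical stand-ins for an export from the eye tracking software) that quantifies gaze dispersion as the average distance of fixations from their centroid, a simple way to compare a scattered session like User 3’s with a focused one like User 1’s:

import numpy as np

def gaze_dispersion(fixations):
    """Average distance (in pixels) of fixation points from their centroid.

    Higher values indicate a scattered gaze; lower values, a focused one.
    `fixations` is an (n, 2) sequence of x, y screen coordinates.
    """
    points = np.asarray(fixations, dtype=float)
    centroid = points.mean(axis=0)
    return float(np.linalg.norm(points - centroid, axis=1).mean())

# Hypothetical fixation exports for two sessions (x, y in pixels).
scattered = [(120, 80), (900, 600), (400, 50), (850, 120), (100, 550)]
focused = [(310, 200), (325, 210), (300, 195), (318, 205), (322, 198)]

print(f"scattered session: {gaze_dispersion(scattered):.1f} px")
print(f"focused session:   {gaze_dispersion(focused):.1f} px")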

User 5 Heat & Gaze Map

This participant began in what seems, at this point, to be the default position: the menu tabs location. He looked all around there and then hopped up to the top right. He went to the “Kent Link Library” account page, attempting to perform the task for real by bringing up a book search. This approach underscores User 3’s input that an actual task, versus a hypothetical one, would be better suited for this usability test. Outside of that practical approach, he spent most of his time wandering and flitting about the many menu areas.

 

He never got anywhere, literally or figuratively. He never got closer to finding any approach or information scent; he hopped around the largest number of areas, and very rapidly. He ended up simply quitting, giving up with shrugged shoulders. No follow-up difficulty question was asked during this session.

Summary of Aggregated Eye Gaze & Heat Maps

Figure 13 is a nicely consolidated, aggregate view of eye tracking at its finest. This cluster highlights what couldn’t be seen any other way than through eye tracking, a luxury that every project should hopefully include.
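
For readers curious how an aggregate view like this is produced, here is a minimal sketch of the usual technique in Python with NumPy and SciPy (my choice of tools; the study itself used Tobii’s own software, and the pooled fixation points here are hypothetical): bin all participants’ fixations into a grid over the page, then blur the counts into a smooth intensity map.

import numpy as np
from scipy.ndimage import gaussian_filter

def heat_map(fixations, width=1280, height=1024, cell=16, sigma=2.0):
    """Aggregate pooled fixation points into a smoothed heat map.

    `fixations` is an (n, 2) array of x, y screen coordinates pooled
    across participants; returns a 2D grid of gaze intensities.
    """
    points = np.asarray(fixations, dtype=float)
    grid, _, _ = np.histogram2d(
        points[:, 1], points[:, 0],            # rows = y, columns = x
        bins=(height // cell, width // cell),
        range=[[0, height], [0, width]],
    )
    return gaussian_filter(grid, sigma=sigma)  # smooth the raw counts

# Hypothetical pooled fixations standing in for all five sessions.
pooled = np.random.default_rng(0).uniform([0, 0], [1280, 1024], size=(500, 2))
print(heat_map(pooled).shape)  # a (64, 80) grid of intensities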

 

Figure 15 really underscores my earlier comment, and the commonality with the Figure 14 heat map: every user made the menu tabs area the site’s home plate, as it were. Additionally, both Figures 14 and 15 make plain the user behavior of being reeled in by image and motion, which we discussed earlier regarding vision.

 

Figure 16 is the embodiment of Jeff Sauro’s observation: “Eye-tracking data can be difficult to interpret and should be used with caution. Just because an element is in a user's visual field doesn't mean they perceive it.”1 This is the visual representation of opening the refrigerator, staring at the ketchup bottle, and not ‘seeing’ it. It is amazing how much territory and text users do not see. On this particular website, the need to filter out information overload may be the cause.

Task Critique

With User 2, the moderator changed the parameters of the questioning. Though the task remained the same, she prompted him, asking “any labels you are looking for right now?” He answered cagily, “request might work? I don’t know.” When this participant failed to complete the task, she asked what he would do at home but, importantly, failed to ask for his difficulty rating. With User 3, the participant failed the task and she asked nothing further, nor for a difficulty rating of the task. It was an odd setup in that it was almost an open-ended scenario, but with a directed task.

 

User 5 also attempted to actually log into his account and take a practical approach. The biggest critique would be to make the task specific and measurable and take the guesswork out of it; given the page layout, the glut of available options, and the information architecture, there is already plenty of built-in guesswork in just navigating it all.

Task Creation

Task 1

For eye-tracking-specific tasks, I would start with the “Quick Jump To” drop-down menu. Why? To see how participants operate off the grid of the menu tab items and the lightbox slideshow home-plate areas. I would give very specific directions and be sure to ask for a difficulty rating in all cases: “Figure out how to request an interlibrary loan for your economics class. As you work I won’t ask you to think out loud, but afterward I will ask you to tell me what you were thinking as you worked, so I can take notes on your process without interference.”
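
To make sure the difficulty rating is never skipped again, a simple post-task script could enforce it. Here is a minimal sketch in Python (the 1–7 scale follows a common single-ease-question convention, which is my assumption rather than anything specified in the original study) for logging each task’s outcome and rating consistently:

import csv
from datetime import datetime

def log_task_result(path, participant, task_id, completed):
    """Prompt for the post-task difficulty rating and append one CSV row.

    The 1 (very difficult) to 7 (very easy) scale is an assumed convention.
    """
    while True:
        answer = input(f"Difficulty for task {task_id} (1-7): ")
        if answer.isdigit() and 1 <= int(answer) <= 7:
            rating = int(answer)
            break
        print("Please enter a whole number from 1 to 7.")
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(), participant, task_id, completed, rating]
        )

# Example: log_task_result("ratings.csv", "User 5", 1, completed=False)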

 

Task 2

For the second task I would ask participants to “shuffle through the numbered lightbox slideshow screens, pick an interesting one, and pursue it.” This would be a more open-ended task, the very thing my critique of the study advised shying away from. In this case, though, by intentionally luring the user away from the repeated grid discovered in the first study, we can build on it and expose another flaw of the original: it didn’t cover how users land on similar alternate pages, or how eye tracking is affected by tasking into a landing page. Including these paths will enrich the data on how users succeed or fail, just as the scattershot approach to the page could have predicted task incompletion in the previous study.

 

 

Task 3

For the final task I would ask them to “go to the page footer, row two, and select an option to explore further.” This was a very neglected area in the first study, and though some of its information appears repetitive, it was where the lucky winner who successfully completed the task in the previous study found her answers. This task is really casting a long fly-fishing line just to see what comes back; basic tasks may only elicit basic results. This is an ideal place to push the limits of a zone that is off the heat and gaze maps, so why not put it back on and see what comes of it.
