- Client Bridgestone Americas Tire Operations (BATO) (Bridgestone and Firestone Tires)
The Americas branch of a multinational auto and truck parts manufacturer
- My Role Lead researcher (contract) — planned and conducted all user research
- Team Collaborated with a UX Designer, a UX Director, and members of the Data and Analytics Team
- Timeline 3 Months
- Key Tools OmniGraffle, Photoshop, Google Sheets, PURE (Pragmatic Usability Rating by Experts) framework, and Adobe Analytics
- Deliverables Summary report of suggested usability improvements (with substantiation and justification), a detailed user flow analysis, and UX benchmarking scores
The Bridgestone and Firestone e-commerce websites were not meeting expectations for traffic and tire purchases. Our team inherited the design of these sites from another design agency and was tasked with discovering which usability and technical issues might be causing the low performance, and with recommending ways to improve the user experience.
Increase online tire sales by identifying the weakest-performing and most problematic parts of each website, then developing recommendations to improve usability, enhance functionality, and increase conversion.
This project consisted of seven steps, with the outputs of each step serving as the inputs to the next. First, I identified the main ways users were likely to use the sites, which our team called “intent cases”. The Data and Analytics Team used those intent cases to define user segments whose behavior could be analyzed in Adobe Analytics. Finally, for each intent case, we audited and reviewed the UX/UI of each site.
The first part of the project focused on determining how the sites were being used and how that usage would be reflected in analytics data, which would serve as an initial baseline.
I developed a detailed set of criteria to serve as a rubric for evaluating the quality of each intent case. Together with another UX professional, I evaluated the usability and e-commerce optimization of both websites. After each intent case was scored, I walked through each flow and noted all UX and UI issues, along with recommendations to improve the design.
The last part of the project focused on defining how we would rate each intent case, then using those rating criteria to evaluate the user experience. Finally, I traversed each site along each intent case’s flow and documented all UX/UI problems and suggested improvements.
Defining User Intent
I was brought on board to develop personas, and use cases/scenarios for those personas, for testing purposes. I quickly discovered that the Data and Analytics Team had a very different understanding of personas and their role in the design process. They wanted to use analytics data and clickstream analysis to create user segments, then design parts of each website for specific kinds of users/customers based on their behavior on each site.
When I asked the team why they wanted to create user segments, they said they wanted to optimize the sites based on how people wanted to use them and what their goals were. I suggested that the way to understand users’ true goals would be to conduct true-intent studies via surveys or pop-up questionnaires on each site, where users could self-report. I was told there was no time or budget for this, so we’d need to find another way.
We didn’t have the time to conduct true intent studies.
Since we could not ask users directly what they were trying to accomplish, I had to use inductive logic to infer probable user intent. To do this, I analyzed each site’s information architecture and the goals it appeared to facilitate. Based on the sites’ navigation, content, and functionality, I came up with seven main intents:
- Educate & Research
- Select Tires (Shopping)
- Find a Store (Local)
- Schedule an Appointment
- Get Support
- Discounts & Rewards
- Register Tires
I was asked to turn each intent into a formally written use case, in order to describe how people might use both sites to complete tasks based on that intent.
After writing a few, I realized that traditional text-based use cases were too rigid and did not easily account for the non-linear, branching paths users might take through each site. They failed to capture the richness of the experience in an easy-to-follow manner, were cumbersome to create, and ultimately didn’t serve the project.
Example use case for the intent of “Customer Shops for Tires (Select Tires)”
I did some research and exploration on various ways to document and visualize how users traverse websites and apps, and based on the needs of the Data and Analytics team, we decided to combine two methods shown below, UI flow shorthand and screen flows. This approach resulted in rich diagrams that were easy to understand and translate into Adobe Analytics queries.
Options provided to stakeholders as alternatives to text-based use cases. We decided on a hybrid of UI Flow Shorthand and Intra-Screen Flows.
Screen Flows (Intent Cases)
Since text-based use cases weren’t working, I recommended that we switch to a visual wireflow format that was more effective and easier to communicate. The team started calling these visual representations of use cases “Intent Cases”.
These intent cases did not capture an exhaustive list of all possible paths a user could take. Rather, they captured the most common or likely path(s) to accomplishing a goal. These primary paths through a system focus attention on the key elements that make up a user’s experience on each site.
- Educate & Research
- Select Tires (Shopping)
- Find a Store (Local)
- Schedule an Appointment
- Get Support
- Discounts & Rewards
- Register Tires
Overview of the 7 screen flows (intent cases)
Detailed example screen flow (Intent Case): Select Tires — Shopping
- Description A prospect / customer wants to discover and select the right tire(s) for their vehicle
- determine what tire options are available and suitable
- have enough information to make a confident and informed purchase.
- End of Life (Time / Warranty): Tire has reached the end of its lifespan (5 years or so). A customer wants to replace them before they are worn out or doesn’t want to use them out of warranty.
- Damage (Accident): A customer’s tire was punctured or otherwise damaged in some way. The car’s handling and/or performance might be degraded as a result.
- Seasonal Changes (Climate): The seasons are changing and the weather conditions are significant enough to warrant specialized tires.
- A customer has found the tire they would like to purchase and is ready to purchase it in person.
- A customer has decided that they want to purchase tires, and is ready to purchase, but needs support or guidance choosing a particular set of tires.
- A customer needs more information than is on the website, and wants to find out more in person
- A customer has enough information needed to know what kind of tire they may want to buy. They are ready to purchase a specific set of tires.
- A customer has enough information needed to come up with a small and manageable set of tires they may want to buy. They are ready to have a focused discussion with a salesperson to decide amongst a few options.
Those wireflow “Intent Cases” were used as inputs (data criteria) to measure usage and conversion rates (behavioral signatures), as determined in the Adobe Analytics tracking system, resulting in a data-driven perspective on how users actually used each website.
I verbally guided the Data and Analytics Team through the branching paths in each “Intent Case”, and they translated the journeys into queries that searched for user activity adhering to those paths. Groups of users who traversed the site in similar ways were considered the same user segment.
For each user segment, the Data Team performed analysis to understand users’ behavior: where traffic came from and where it went throughout the website. This made it possible to determine how successful users were in achieving their goals. Users who reached the final page of an “Intent Case” and took the concluding action were counted as successes; those who dropped off before completing the full “Intent Case” were counted as failures.
This step would also ensure we could track business metrics in the future, after the proposed UX and UI changes were implemented.
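The actual segmentation and conversion reporting was done with Adobe Analytics queries, but the underlying logic can be sketched in a few lines. The sketch below is purely illustrative: the page names, the `SHOPPING_INTENT` path, and the session data are hypothetical, and a real session would be an ordered clickstream pulled from the tracking system.

```python
# Illustrative sketch only: real segmentation used Adobe Analytics queries.
# Page names and sessions below are hypothetical stand-ins.

# An "intent case" modeled as an ordered list of required steps.
SHOPPING_INTENT = ["home", "tire-search", "tire-results", "tire-detail", "find-store"]

def matches_intent(session, intent_steps):
    """True if the session visits the intent's steps in order
    (other pages may appear in between)."""
    it = iter(session)
    return all(step in it for step in intent_steps)

def conversion_rate(sessions, intent_steps):
    """Share of segment sessions that reach the intent's final step.
    A session joins the segment if it starts down the intent's path."""
    segment = [s for s in sessions if matches_intent(s, intent_steps[:2])]
    successes = [s for s in segment if matches_intent(s, intent_steps)]
    return len(successes) / len(segment) if segment else 0.0

sessions = [
    ["home", "tire-search", "tire-results", "tire-detail", "find-store"],  # success
    ["home", "tire-search", "tire-results"],                               # drop-off
    ["home", "support"],                                                   # other intent
]
print(conversion_rate(sessions, SHOPPING_INTENT))  # 0.5
```

The ordered-subsequence check mirrors how the team’s queries matched real clickstreams against an intent case’s path while tolerating detours through unrelated pages.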
Clickstream analysis — *blurred to protect confidentiality
The previous work determined what should be analyzed; the next step was determining how to analyze the user experience.
What to evaluate
The project’s goals required that we evaluate two categories: usability and conversion. There were many possible things we could evaluate, and we considered about a dozen lists of heuristics developed by various UX professionals. Where it made sense, we combined these lists and eliminated heuristics that weren’t pertinent to the project, arriving at a manageable set of criteria. We broke each category, usability and conversion, down into five elements.
Explanation of the rating criteria rubric
Usability of the Experience: Ease of Use, Usefulness, Structure, Discoverability, Refined for Mobile
Optimization for Conversion: Increasing Focus, Clarifying Understanding, Expediting Outcomes, Driving Decisions, Enhanced Experience
How to Evaluate
To keep measurements easy for stakeholders to understand, we wanted ratings that were simple yet provided nuance beyond bad, ok, and good, answering the question “how good is good, exactly?” Scores were calculated by adding up the five element scores for a total category score.
Usability rubric sample — each category and score has a specific definition
Optimization (Ecommerce) rubric sample — each category and score has a specific definition
Rating the UX of each Intent Case
We did not have the time or budget for traditional usability testing, and the elements in the optimization category would be difficult to rate objectively, so I considered how to evaluate these websites as efficiently and effectively as possible given our constraints.
I considered conducting a heuristic evaluation since it would be fast and would capture many glaring issues, but I was concerned about missing problems and the bias I would bring if I were the only one doing the evaluations. I’d need a method that was more thorough and less biased. After reading through both academic HCI literature and UX research blogs, I determined that the best usability inspection method would be “Pragmatic Usability Rating by Experts”, also known as PURE. The PURE method is fast, simple, thorough, and encourages multiple researchers to collaborate and arrive at an average score.
We based our approach on the Pragmatic Usability Rating by Experts framework in order to work within project constraints and achieve reliable, valid results that were quick and cheap to implement.
When I approached the stakeholders about this approach, they were on board, and we hired one additional contractor to help with the scoring. I had already set up the heuristics analysis framework; all I needed to do was explain our process:
- Review the heuristics analysis ratings and definitions
- Follow the flows in each “Intent Case”
- Rate each part of the experience, taking qualitative notes on things we observed that warranted the rating
- Calculate the mean (average) of our combined scores
- Discuss the final score, tweaking it slightly if one of us had a compelling argument based on criteria the other had overlooked
Detailed usability and optimization notes from two UX evaluators — Shopping Intent Case
Final usability and optimization scoring (average of two UX evaluators) — Shopping Intent Case
Summary and explanation of scoring for all Intent Cases
1. Educate/Research Usability: 7 | Optimization: 5 | Total: 12
The article pages have a wealth of helpful information, however, it can be inundating. There are some simple UI components that can help to sort/filter/organize content to make finding relevant information quicker.
2. Shopping Usability: 3 | Optimization: 5 | Total: 8
The “Super Search” bar does not follow typical/common UI conventions. It is too complicated to use easily and requires many steps and much effort from the user. How searches are initiated and modified should be improved, and a more common design pattern should be used to match users’ mental models of search. Since this is the primary way people shop for tires, it is a major roadblock for prospective customers.
3. Find a Store Usability: 7 | Optimization: 6 | Total: 13
The biggest issues in finding a store are the ineffective search and the inefficient, unrefined visual layout. Elements present on the desktop version were missing on mobile, which made the design ineffectual.
4. Schedule Appointment Usability: 7 | Optimization: 5 | Total: 12
Scheduling an appointment is a fairly easy process, but the pacing of the flow is repetitive and poor for users who have already found a store or tire, or entered their vehicle. The initial screen is a roadblock: it should drive appointments first and accounts second.
5. Get Support Usability: 8 | Optimization: 7 | Total: 15
Overall, chat and email worked well. Small cosmetic improvements and better organization would elevate the professionalism and create a smoother user flow, bringing this to parity with other consumer-facing brands.
6. Discount / Rewards Usability: 7 | Optimization: 6 | Total: 13
The experience of finding discounts or redeeming rewards worked fairly well. There are opportunities to drive more engagement and toward conversion outcomes by creating individual coupon experiences and making redeeming rewards easier.
7. Register Tires Usability: 7 | Optimization: 5 | Total: 12
The tire registration process is fairly easy to complete; however, the flow could be improved with better form organization, clearer error messaging, and cues or hints that help users find the correct information while filling out the form.
UI Recommendations (for Usability and Optimization)
To provide specific, tactical recommendations for improvements, I revisited the Bridgestone and Firestone websites, following the “intent case” paths once again. Acting as the user, I tried to complete the goal of each case, taking screenshots and notes on every element that hindered my ability to use the site or make a purchase.
Final scores and summary — shopping intent case
Example desktop UX and UI suggestions for the shopping intent case
Example mobile UX and UI suggestions for the shopping intent case
Key Takeaways and Recommendations
The search functionality is the most problematic part of both websites. Since searching is the main way people shop for tires, it has the biggest impact on sales and should be prioritized above all other changes.
More specifically, the search bar does not follow typical UI conventions. It is too complicated to use easily and requires many steps and much effort from the user. How searches are initiated and modified should be improved, and a more common design pattern should be used to match users’ mental models of search.
Exact site performance metrics are confidential, but after my contract ended, my main stakeholder confirmed the following results with me verbally several months after implementing the suggested changes:
- Overall increase in the number of pages per visit and time spent on site
- Significantly increased tire detail page views and dealer searches
- Increased number of tire searches and calls to dealers (more attributed sales)
- Decreased loading and processing times
“The Experience Optimization Framework, which included the proprietary Expert Review Analysis and Scorecard (lead by Mo) helped our team identify key problems and areas of opportunity across the key flows within the website. This allowed us to focus our optimization efforts on “quick win” items that would be easy to achieve while having the highest impact in the near term. Our team is still using the findings from your work as part of the insights for a full website rethink and redesign in 2020.”
- Steve Shay ECD UX Design at iCrossing
Importance of shared terminology and getting team/client alignment: At the outset of this project, the UX and Analytics Teams were using the same words but had very different ideas of what those words meant. It’s important not to assume everyone is on the same page; make sure everyone involved agrees on how to define the parts of a project.
Clickstream analysis can, in certain circumstances, be more complicated and limited than anticipated: I had assumed that both websites would have code that tracked full clickstream paths for all users. Working with the Data and Analytics Team, I realized it was not possible to track everyone’s path through the entire website, nor would it be especially useful, as users’ journeys can seem erratic, vary wildly, and often venture into areas of the site unrelated to a main goal. I had expected the analytics tools to produce an easy-to-digest summary of common paths through each site, but learned that determining the paths users actually take requires considerably more work.