Beacon Car Insurance App

Overview

Beacon DriveScore is a telematics program that tracks driving behavior through a mobile app and uses that data to inform customers of their personalized insurance rate. Beacon needed a redesign of the existing telematics experience and the introduction of new features, with a major focus on making the DriveScore overview easier to understand, easier to navigate, and easier to trust.

I used a research-driven design process to establish baseline usability metrics, explore two concept directions, validate them through A/B testing, and synthesize a final concept that balanced business goals with customer clarity.

Challenge

The existing experience created friction in a few key areas:

  • Users struggled to understand how DriveScore worked and what actions impacted their score and rate.
  • Planned new features introduced additional complexity, increasing the risk of cognitive overload.
  • Stakeholders needed a validated direction backed by measurable usability outcomes, not opinions.

The goal was to redesign the experience in a way that improved comprehension, reduced friction, and created a foundation for ongoing measurement post-MVP.

Research

Before designing solutions, I focused on establishing a clear baseline and research approach that could support future iteration.

Baseline Metrics and Discovery

I started with stakeholder interviews and user interviews to understand business goals, existing pain points, and the current performance of the app. This work helped define what success needed to look like, both from the business side and customer experience side. From there, I documented:

  • Research background and assumptions to validate
  • Success criteria and KPIs to measure
  • Key questions to answer through testing
  • Target user criteria and screener requirements

Usability Testing

To validate designs for MVP and set benchmarks for post-MVP testing, I planned and ran an unmoderated usability study focused on the DriveScore overview experience. Participants completed task-based scenarios designed to surface friction, confusion, and breakdown points across navigation, comprehension, and trust. I developed the study background, participant expectations, KPIs and success benchmarks, structured task flows, targeted evaluation questions, and screening criteria.

A/B Concept Testing

Based on baseline findings, stakeholder input, and usability best practices, I created two low-fidelity design concepts and tested them both.

  • Graph-forward version: more visual and data rich, designed to emphasize transparency and score comprehension.
  • Minimal version: streamlined and lightweight, designed to reduce cognitive load and accelerate navigation.

I ran A/B testing on the two concepts and captured both quantitative and qualitative data, including where users hesitated, what language or visuals drove confusion, and what increased confidence.

Click here for the lo-fi prototype
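When comparing two concepts on a metric like task completion, it helps to check whether the observed difference is larger than chance would explain. The sketch below shows one common way to do that, a two-proportion z-test; the participant counts are purely hypothetical and are not the study's real numbers.

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two completion rates.

    Returns the z statistic and its two-sided p-value.
    """
    # Pooled completion rate under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (success_b / n_b - success_a / n_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 14 of 24 participants completed the task on
# concept A vs 21 of 24 on concept B
z, p = two_proportion_z_test(14, 24, 21, 24)
print(round(z, 2), round(p, 3))  # → 2.27 0.023
```

With these made-up counts the p-value falls below the conventional 0.05 threshold, which is the kind of evidence that lets a team move from "concept B felt better" to "concept B measurably outperformed A."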

Research Analysis and Decision Making

After testing, I analyzed qualitative and quantitative results across both concepts. These data points allowed me to identify:

  • What performed well and why
  • What introduced friction or confusion
  • What should be combined into a final direction
  • What should be removed entirely
  • Where user outcomes aligned with business needs and where they did not

That analysis directly informed the final concept, which was used to create high-fidelity prototypes and validated again through additional usability testing.

Solutions

The final experience focused on helping users understand the program quickly, navigate confidently, and feel in control of what impacted their score. Key improvements included:

  • A clearer DriveScore overview that supported comprehension
  • A more intuitive information hierarchy that reduced cognitive load
  • Stronger guidance and next steps so users knew what to do and what to expect
  • A balance of transparency and simplicity by keeping the most helpful elements from both concept directions

Impact and Results

  • Increased task completion rate from 61% to 89%
  • Decreased navigation errors by 42%
  • A validated redesign direction backed by baseline metrics, usability testing, and A/B testing results
  • Increased user confidence rating from 3.1 to 4.4 out of 5

Conclusion

This project reinforced the value of research-led design in a space where user trust and clarity matter. By grounding decisions in baseline data, structured usability testing, and A/B concept validation, I was able to move the team from opinions to evidence and deliver a final experience that balanced business needs with customer understanding and product goals.

Click here for the prototype