
HMI for Rideshares for People with Visual Impairments

A human-centered, multimodal HMI (audio + visual) that bridges the accessibility gap in autonomous ridesharing.


Accessibility focused

Team of 4

3.5 months


Project Overview

Imagine you were injured and had no partner or friend to drive you where you needed to go. How would your life be impacted? In the United States alone, 25 million people experience transportation insufficiency stemming from cognitive, sensory, and/or motor impairments. Without reliable, usable transportation, these individuals are left further behind and isolated from society.

 

Autonomous vehicles promise independent mobility, but most HMIs are vision-first. Blind and low-vision riders face barriers during identity verification, trip confirmation, en-route updates, unexpected events, and safe exit. We set out to design a multimodal interface that restores confidence, safety, and autonomy.


Design Challenge

How might we design a human-machine interface (HMI) that promotes independent transportation for visually impaired individuals by integrating their other senses?

Project Details

Role

UX Researcher & Designer

Timeline

September - December 2024

Responsibilities

Research, wireframing, analysis, usability testing, VADER sentiment analysis​

Tools

MATLAB

Figma

Cambridge Disability Simulator

Research & Discovery

We broke the ridesharing process down from start to finish with impairment needs in mind, dividing it into six tasks, and ran a between-subjects usability study to measure trust, navigation accuracy, and satisfaction across those tasks. The six tasks were: identity verification, trip confirmation, driving updates, unexpected events, destination arrival, and exit interaction.


Identity Verification

Speak 4-digit ride code; confirm rider-vehicle pairing.


Trip Confirmation

Destination read back for verbal confirmation or address correction.


Driving Updates

Proactive status: intersections, signals, ETA changes.


Unexpected Events

Special mode explains hazards and evasive actions.


Destination Arrival

3-minute pre-arrival alert; curbside context.


Exit Interaction

Guided, safe, step-by-step disembarkment.
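To make the flow concrete, here is a minimal sketch of the six-stage interaction as a scripted sequence of audio prompts. The prompt wording, function name, and parameters are illustrative placeholders, not the exact script used in the prototype.

# Minimal sketch of the six-stage ride flow studied above.
# Prompt wording here is illustrative, not the prototype's exact script.

RIDE_STAGES = [
    ("identity_verification", "Please say your 4-digit ride code to confirm this is your vehicle."),
    ("trip_confirmation",     "Your destination is {destination}. Say 'confirm' or state a correction."),
    ("driving_updates",       "Approaching an intersection; the light is green. Estimated arrival in {eta} minutes."),
    ("unexpected_events",     "An obstacle is ahead; the vehicle is slowing and changing lanes."),
    ("destination_arrival",   "You will arrive in about 3 minutes. The curb will be on your right."),
    ("exit_interaction",      "The vehicle is parked. The door handle is to your right; the sidewalk is one step away."),
]

def run_ride(destination: str, eta: int) -> None:
    """Walk through the ride stages, announcing each audio prompt in order."""
    for stage, prompt in RIDE_STAGES:
        print(f"[{stage}] {prompt.format(destination=destination, eta=eta)}")

run_ride(destination="401 Main Street", eta=12)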

Design Principles & Prototype

Attention Management

Salience, urgency mapping, and interruption handling.

Perception Optimization

Low access cost, redundancy, progressive disclosure.

Memory Supports

Predictive cues, consistent language, knowledge-in-the-world.

Prototype Stack

  • Rear-seat mounted display for route status & environmental descriptions

  • External speaker with natural voice (Maple) for synchronized audio prompts

  • High-contrast hardware buttons (#F4B300, #B87AFF) for tactile control
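As one illustration of the attention-management and redundancy principles above, the sketch below maps event urgency to a synchronized audio + visual cue for the rear-seat display and external speaker. The urgency levels, earcon names, and the reuse of the button colors on the display are assumptions made for the example, not values taken from the prototype.

# Illustrative urgency-to-cue mapping for redundant audio + visual output.
# Thresholds and cue choices are assumptions, not the prototype's exact values.

from dataclasses import dataclass

@dataclass
class Cue:
    interrupt_audio: bool    # whether the prompt may cut off ongoing narration
    chime: str               # earcon played before the spoken message
    display_color: str       # high-contrast color shown on the rear-seat display

URGENCY_CUES = {
    "info":     Cue(interrupt_audio=False, chime="soft_tick",    display_color="#F4B300"),
    "warning":  Cue(interrupt_audio=True,  chime="double_tone",  display_color="#F4B300"),
    "critical": Cue(interrupt_audio=True,  chime="rising_alarm", display_color="#B87AFF"),
}

def announce(message: str, urgency: str) -> None:
    """Pick the cue for an event's urgency and issue redundant audio + visual output."""
    cue = URGENCY_CUES[urgency]
    print(f"display[{cue.display_color}]: {message}")
    print(f"audio  [{cue.chime}{', interrupt' if cue.interrupt_audio else ''}]: {message}")

announce("Unexpected obstacle ahead; braking.", "critical")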


Experiment Design

Between-subject design (N = 24) with 3 conditions:

1. Non-Visually Impaired (NVI): Multimodal interface

2. Visually Impaired (VI): Multimodal interface

3. Visually Impaired (VIX): Visual-only (no audio)

Measures

Trust, satisfaction, navigation accuracy (Likert; Kruskal–Wallis + post‑hoc).
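The statistics were run in MATLAB; the Python sketch below is an equivalent, illustrative pipeline: a Kruskal-Wallis test across the three conditions, then pairwise Mann-Whitney U tests with a Bonferroni correction as the post-hoc step. The trust ratings are placeholder values, not the study's data.

# Sketch of the between-subjects analysis on Likert-scale trust ratings.
# Ratings below are hypothetical placeholders for illustration only.

from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

trust = {  # 7-point Likert ratings per condition (hypothetical)
    "NVI": [6, 7, 6, 5, 6, 7, 6, 5],
    "VI":  [6, 6, 5, 7, 6, 5, 6, 6],
    "VIX": [3, 2, 4, 3, 2, 3, 4, 2],
}

# Omnibus test across the three conditions
h, p = kruskal(*trust.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise post-hoc comparisons with a Bonferroni-adjusted alpha
pairs = list(combinations(trust, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    u, p_pair = mannwhitneyu(trust[a], trust[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u:.1f}, p = {p_pair:.4f}, significant = {p_pair < alpha}")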

Interviews & Literature Review

Preference for audio‑visual redundancy; clarity & directionality themes. 27 articles reviewed.

Simulation

Cambridge Disability Simulator at 20/200 acuity blur for standardization.

Key Findings

Trust & Satisfaction

VI with audio ≈ NVI; visual‑only significantly worse (p < .05).

Correct Rideshare

Audio restored confidence in vehicle identification.

Navigation

Proactive narration improved route understanding & comfort.

Participant Thoughts

"Without audio prompts, I couldn’t tell if the car was moving or stopping."

— Participant VIX‑03
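Open-ended comments like this one were scored with VADER sentiment analysis (listed under Responsibilities). The Python sketch below shows that scoring with NLTK's VADER implementation, one common option; the study's own tooling may differ, and the second comment is a hypothetical positive example rather than a real transcript excerpt.

# Sketch of VADER sentiment scoring on participant comments.
# "VI-05" and its comment are hypothetical; the VIX-03 quote is from the study.

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

comments = {
    "VI-05":  "The audio updates made me feel confident the car was taking the right route.",
    "VIX-03": "Without audio prompts, I couldn't tell if the car was moving or stopping.",
}

for pid, text in comments.items():
    scores = analyzer.polarity_scores(text)  # neg/neu/pos plus a compound score in [-1, 1]
    print(f"{pid}: compound = {scores['compound']:+.2f}  ({scores})")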

Prototypes & Mockups


Unimpaired vs. impaired views of the screen

Wizard-of-Oz simulation for usability testing


Mockup of the multimodal system for a future fully autonomous vehicle.

Results & Next Steps


Next Steps

  • Field trials with blind and low-vision riders in real AVs across varied road conditions.

  • Introduce haptic and spatial-audio cues (seat vibration, localized speakers) for tri‑modal redundancy.

  • Personalization preferences: verbosity, pace, tone, and cue frequency.

  • Safety sandbox studies for edge cases: reroutes, obstructions, emergency stops.


Laura Weisz

UX Researcher & Designer passionate about accessible, engaging, user-friendly digital experiences that make a meaningful impact on people's lives.
