
Autonomous Rideshare

Imagine you were injured and didn't have a partner or friend to drive you where you needed to go. How would your life be impacted? In the United States alone, 25 million people experience transportation insufficiency stemming from cognitive, sensory, and/or motor impairments. Without reliable, usable transportation, these individuals are left further behind and isolated from society.

Autonomous vehicles promise independent mobility, but most HMIs are vision-first. Blind and low-vision riders face barriers during identity verification, trip confirmation, en-route updates, unexpected events, and safe exit. We set out to design a multimodal interface that restores confidence, safety, and autonomy.

My Role

UX Research

My Responsibilities

Research, analysis, usability testing, prototyping, WCAG 2.1/2.2 compliance

Project Duration

2.5 months


The Idea

A human-centered, multimodal HMI (audio + visual) that bridges the accessibility gap in autonomous ridesharing.

The Process

1. Discovery

2. Define

3. Experiment Design & Prototype/Mockup

4. Results

1. Discovery

Literature Review

The literature review began with keyword searches for terms like "accessible HMI", "accessible rideshare", "visually impaired HMI", "visual impairment design", "fully autonomous vehicles", and "multimodal design". Because fully autonomous vehicles are such a new and rapidly changing topic, we limited results to articles published in 2014 or later to gather the most accurate and up-to-date information. After reading the abstracts, several articles were removed for not being applicable to our topic. Once filtering was complete, 27 articles were identified and read in full.

2. Define

We broke down the ridesharing process with the needs of impaired riders included. From start to finish, the process was divided into six tasks, and we ran a between-subjects usability study to measure trust, navigation accuracy, and satisfaction across them. The six tasks, each detailed in the task analysis below and summarized in a flow sketch after it, were:

  • identity verification

  • trip confirmation

  • driving updates

  • unexpected events

  • destination arrival

  • exit interaction

Task Analysis

Identity Verification


Speak 4-digit ride code; confirm rider-vehicle pairing.

Trip Confirmation


Destination read back and verbal confirm or address correction.

Driving Updates


Proactive status: intersections, signals, ETA changes.

Unexpected Events


Special mode explains hazards and evasive actions.

Destination Arrival


3-minute pre-arrival alert; curbside context.

Exit Interaction


Guided, safe, step-by-step disembarkment.
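To make the sequence concrete, below is a minimal sketch of how the six tasks above could be scripted as a single ride flow for Wizard-of-Oz testing. The phase names, prompt wording, and callbacks are illustrative assumptions, not the actual study materials.

```python
# Minimal sketch of the six-phase ride flow, scripted as Wizard-of-Oz prompts.
# Phase names, prompt wording, and callbacks are illustrative, not study materials.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Phase:
    name: str
    prompt: str                       # spoken and on-screen message for this phase
    needs_confirmation: bool = False  # True when the rider must verbally acknowledge

RIDE_FLOW: List[Phase] = [
    Phase("identity_verification", "Please speak your 4-digit ride code.", True),
    Phase("trip_confirmation", "Your destination is 123 Main St. Is that correct?", True),
    Phase("driving_updates", "Approaching an intersection; your ETA is 12 minutes."),
    Phase("unexpected_event", "Braking for an obstacle ahead; rerouting around it."),
    Phase("destination_arrival", "Arriving in about 3 minutes; the curb will be on your right."),
    Phase("exit_interaction", "The right-hand door is safe to open. Step down to the curb."),
]

def run_ride(speak: Callable[[str], None], confirm: Callable[[str], bool]) -> None:
    """Walk through each phase in order, re-prompting until confirmations are acknowledged."""
    for phase in RIDE_FLOW:
        speak(phase.prompt)
        while phase.needs_confirmation and not confirm(phase.name):
            speak(phase.prompt)  # repeat the prompt until the rider confirms

# During testing, a human wizard could drive this with simple console callbacks:
# run_ride(speak=print, confirm=lambda name: input(f"{name} confirmed? [y/n] ") == "y")
```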

Design Principles & Prototyping

Attention Management

Salience, urgency mapping, and interruption handling.

Perception Optimization

Low access cost, redundancy, progressive disclosure.

Memory Supports

Predictive cues, consistent language, knowledge-in-the-world.
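As a concrete illustration of the urgency mapping and interruption handling named under Attention Management, the sketch below shows one way prompt urgency could gate whether a new announcement interrupts speech already in progress. The levels and rule are assumptions for illustration, not values taken from the study.

```python
# Illustrative urgency-to-interruption mapping; the levels and rule are assumptions,
# not values taken from the study.
from enum import IntEnum
from typing import Optional

class Urgency(IntEnum):
    AMBIENT = 1    # routine status, e.g., periodic ETA updates
    ROUTINE = 2    # expected transitions, e.g., upcoming turns and signals
    IMPORTANT = 3  # confirmations and pre-arrival alerts
    CRITICAL = 4   # hazards, evasive maneuvers, emergency stops

def may_interrupt(incoming: Urgency, speaking: Optional[Urgency]) -> bool:
    """An incoming prompt interrupts speech in progress only if it is strictly more urgent."""
    return speaking is None or incoming > speaking

# A hazard announcement cuts off a routine ETA update, but a second ETA update waits.
assert may_interrupt(Urgency.CRITICAL, Urgency.ROUTINE)
assert not may_interrupt(Urgency.ROUTINE, Urgency.ROUTINE)
```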

Prototype

  • Rear-seat mounted display for route status & environmental descriptions

  • External speaker with natural voice (Maple) for synchronized audio prompts

  • High-contrast hardware buttons (#F4B300, #B87AFF) for tactile control
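Below is a minimal sketch of the audio + visual redundancy behind these components: the same message is shown as high-contrast text on the rear-seat display and spoken aloud at the same step. pyttsx3 stands in for the prototype's actual "Maple" voice stack, and show_on_display is a hypothetical placeholder rather than the prototype's real display API.

```python
# Sketch of audio + visual redundancy for a single status update.
# pyttsx3 is a stand-in for the prototype's actual "Maple" voice stack, and
# show_on_display is a hypothetical placeholder, not the prototype's real display API.
import pyttsx3

ACCENT_YELLOW = "#F4B300"  # high-contrast accent color used on the hardware buttons
ACCENT_PURPLE = "#B87AFF"

def show_on_display(text: str, color: str) -> None:
    # Placeholder for the rear-seat display driver; here we only log what would be shown.
    print(f"[display {color}] {text}")

def announce(text: str) -> None:
    """Present the same message visually and audibly in the same step."""
    show_on_display(text, ACCENT_YELLOW)
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

announce("Arriving in about 3 minutes. The curb will be on your right.")
```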


Design Question

How might we design a multimodal HMI that restores confidence, safety, and autonomy for blind and low-vision riders throughout an autonomous rideshare trip, from identity verification to safe exit?

3. Experiment Design


Between-subjects design (N = 24) with three conditions:

1. Non-Visually Impaired (NVI): Multimodal interface

2. Visually Impaired (VI): Multimodal interface

3. Visually Impaired (VIX): Visual-only (no audio)

Measures

Trust, satisfaction, navigation accuracy (Likert; Kruskal–Wallis + post‑hoc).
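A sketch of how this analysis could be run, assuming Likert-style ratings per participant: a Kruskal–Wallis test across the three conditions, followed by pairwise Mann–Whitney U tests with a Bonferroni correction as one common post-hoc choice (the exact post-hoc procedure is not specified here). The ratings below are synthetic placeholders, not study data.

```python
# Sketch of the analysis pipeline: Kruskal-Wallis across the three conditions, then
# pairwise Mann-Whitney U tests with a Bonferroni correction as one common post-hoc
# choice (the study's exact post-hoc procedure is not specified here).
# The ratings below are synthetic placeholders, not study data.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

trust = {
    "NVI": [6, 7, 6, 5, 7, 6, 6, 7],  # multimodal interface, non-visually impaired
    "VI":  [6, 6, 7, 5, 6, 7, 6, 6],  # multimodal interface, visually impaired
    "VIX": [3, 2, 4, 3, 2, 3, 4, 3],  # visual-only interface, visually impaired
}

h, p = kruskal(*trust.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

pairs = list(combinations(trust, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold
for a, b in pairs:
    u, p_pair = mannwhitneyu(trust[a], trust[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u:.1f}, p = {p_pair:.4f} (significant if < {alpha:.4f})")
```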

Interviews & Literature Review

Preference for audio‑visual redundancy; clarity & directionality themes. 27 articles reviewed.

Simulation

Cambridge Disability Simulator at 20/200 acuity blur for standardization.
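For digital mockups, one way to approximate the reduced acuity the simulator imposes is to blur a screenshot of the interface, as in the unimpaired vs. impaired screen views shown later. The sketch below uses Pillow; the blur radius is an illustrative assumption, not a calibrated 20/200 equivalent.

```python
# Sketch of approximating reduced acuity on a digital mockup with a Gaussian blur.
# The study itself used the physical simulator; the blur radius here is an
# illustrative assumption, not a calibrated 20/200 equivalent.
from PIL import Image, ImageFilter

def simulate_low_acuity(path: str, radius: float = 8.0) -> Image.Image:
    """Return a blurred copy of an interface screenshot for side-by-side review."""
    screen = Image.open(path)
    return screen.filter(ImageFilter.GaussianBlur(radius=radius))

blurred = simulate_low_acuity("prototype_screen.png")  # hypothetical file name
blurred.save("prototype_screen_blurred.png")
```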

Key Findings

Trust & Satisfaction

VI with audio ≈ NVI; visual‑only significantly worse (p < .05).

Correct Rideshare

Audio restored confidence in vehicle identification.

Navigation

Proactive narration improved route understanding and comfort.

Prototype & Mockup


Unimpaired vs impaired view of screen


Wizard-of-Oz simulation for usability testing


Mockup of the multimodal system as it would appear in a fully autonomous vehicle.

4. Results & Next Steps

Results

Next Steps

  • Field trials with blind and low-vision riders in real AVs across varied road conditions.

  • Introduce haptics (seat vibration, localized speakers) for tri‑modal redundancy.

  • Personalization preferences: verbosity, pace, tone, and cue frequency.

  • Safety sandbox studies for edge cases: reroutes, obstructions, emergency stops.

