EmojiCaption

We developed a system that transcribes nonverbal emotion cues in speech into emojis, helping DHH individuals better understand the implicit information underlying transcribed text.

Abstract

DHH individuals encounter substantial challenges in accurately perceiving affective information conveyed in spoken language. Previous attempts to use visual cues to represent nonverbal speech information, such as prosody and emotion, have primarily focused on video captions, leaving real-time communication scenarios underexplored. Furthermore, prior research has yet to systematically compare different visual prompts and establish their relative priority in aiding DHH individuals. We undertook a three-stage approach to address these gaps by (1) collecting survey data from 102 DHH participants to discern their preferences for visual prompts, (2) designing EmojiCaption based on the preliminary survey results, and (3) evaluating the EmojiCaption system through a within-subject experiment, a scenario simulation, and interviews. Our findings revealed that DHH individuals prefer emojis as visual cues due to their high clarity, explicit emotional indication, and reduced cognitive load. Our EmojiCaption prototype significantly facilitated DHH participants' comprehension of non-congruent information and proved particularly helpful in voice-only remote communication.

The Team

Xin Tong

Project Director (PI)

Xiangrong (Daniel) Zhu

Project Leader/Developer

Jiaxun (Jessie) Cao

Designer
