Exploring Audiovisual Emotion Perception using Data-Driven Computational Modeling

There is a critical clinical need for quantitative, objective measures that can be used to assess and treat individuals with mood disorders. This research study addresses that need by investigating computational methods to distinguish between the emotion perception patterns of healthy controls (HC) and individuals with Major Depressive Disorder (MDD) or Bipolar Disorder (BP).
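The description above does not specify the modeling pipeline used. As a rough, hypothetical illustration of what a data-driven classifier distinguishing group-level emotion perception patterns might look like, the sketch below fits a regularized logistic regression to placeholder per-participant emotion-rating features. All variable names, feature definitions, and data here are assumptions for illustration, not the study's actual method or data.

```python
# Illustrative sketch only: hypothetical features and labels, not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: each row is one participant's emotion-perception profile,
# e.g. mean valence/arousal ratings given to a set of audiovisual clips.
n_participants, n_features = 60, 12
X = rng.normal(size=(n_participants, n_features))    # placeholder rating features
y = rng.integers(0, 2, size=n_participants)          # 0 = HC, 1 = MDD/BP (placeholder labels)

# A simple, interpretable classifier: standardize features, fit logistic regression,
# and estimate group-discrimination accuracy with 5-fold cross-validation.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

With real per-participant rating data in place of the placeholders, the cross-validated accuracy would give one quantitative index of how separable the groups' perception patterns are.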

Both MDD and BP carry a significant personal and societal burden and are associated with cognitive abnormalities, including cognitive distortions, impulse-control deficits, and, the focus of this study, disturbances in emotion perception. For example, individuals in a depressed state may show stronger negative biases than HCs when interpreting facial and vocal cues, misidentifying neutral faces as angry or sad. However, it is not yet understood how individuals with mood disorders interpret combined audiovisual cues.

The experiments in this study probe the link between mood state, audiovisual cues, and emotion perception using novel stimuli. The results will not only increase our understanding of this relationship, but will also have important implications for developing therapies to correct distortions in emotion perception and for providing an additional measure of severity in mood disorders.

Participants in this study were asked to evaluate emotional content in audio and visual clips once during a depressive episode and again when they were euthymic. The study team collected information through clinician ratings and self-report questionnaires at both time points.

This study has received approval from IRBMED: HUM00065605