
Exeter Internal Fund - Towards Inclusive AI: Investigating Fairness in Emotion Recognition Algorithms Across Cultures

People Involved

Project Description

This project addresses a critical challenge in artificial intelligence (AI): ensuring fairness in emotion recognition systems across culturally diverse populations. As AI technologies become increasingly embedded in sectors such as healthcare, education, recruitment, and law enforcement, the risk of bias in these systems poses a significant societal concern. Emotion recognition systems, often trained on datasets that overrepresent Western populations, may yield inaccurate or inequitable results for underrepresented groups, perpetuating discrimination and eroding trust in AI technologies. This research aims to identify and address such biases, contributing to the development of more equitable AI systems.

The study combines rigorous quantitative and qualitative methods to comprehensively evaluate the fairness of three widely used open-source AI emotion recognition models: VGGFace, Dlib, and OpenCV. In the quantitative phase, we will recruit 100 participants across four demographic groups: East Asian, Middle Eastern, African-European, and a predominantly white Western control group. Participants will be shown carefully standardised stimuli designed to elicit specific emotional responses: happiness, sadness, anger, surprise, fear, and disgust. The outputs of the three models (predicted emotion labels) will be compared with the ground-truth labels associated with the stimuli. By calculating performance metrics such as accuracy, precision, recall, and F1-score, we will evaluate each model's ability to detect emotions across these demographic groups. The control group is critical: by comparing the results for underrepresented groups with those for the control group, we can isolate the effect of ethnicity and determine whether observed disparities arise from ethnicity-related factors or from inherent limitations in the models themselves. This design ensures a robust and unbiased assessment of AI fairness in emotion recognition.

In the qualitative phase, we will conduct focus groups and semi-structured interviews with 24 participants, drawn from the quantitative phase, to explore their perceptions of fairness, trust, and bias in AI systems. These discussions will provide rich insights into how individuals from different cultural backgrounds experience and interpret the outputs of AI systems, adding a crucial social dimension to the technical analysis.

The outcomes of this project will be transformative: it will produce a fairness evaluation framework that combines statistical performance metrics with social and cultural insights, providing a comprehensive tool for developers, policymakers, and researchers. This framework will inform the creation of inclusive and ethical AI technologies and offer recommendations for mitigating bias.

This research is timely. With AI systems increasingly influencing critical decisions, the risks of emotional misinterpretation are substantial: biases in healthcare applications could lead to misdiagnoses, while inaccuracies in recruitment or law enforcement contexts could perpetuate systemic inequalities. Addressing these challenges now is essential to prevent harm and to foster trust in AI as it continues to evolve. Ultimately, this project contributes to a growing body of research aimed at ensuring that AI systems are not only technologically advanced but also socially responsible and equitable, addressing one of the most urgent challenges of our time.
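To make the quantitative evaluation concrete, the sketch below shows one way the per-group comparison could be computed: model predictions are grouped by demographic category and compared against the ground-truth labels of the stimuli to produce accuracy, precision, recall, and F1-score for each group. This is an illustrative sketch only, not the project's actual analysis pipeline; the use of scikit-learn, the record structure, and the group and emotion labels shown are assumptions made for the example.

```python
# Minimal sketch (assumed, not the project's code) of per-group fairness
# metrics: compare predicted emotion labels with ground-truth labels and
# report accuracy, precision, recall, and F1 per demographic group.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical records: (demographic_group, true_emotion, predicted_emotion),
# one row per participant / stimulus / model output.
records = [
    ("East Asian", "happiness", "happiness"),
    ("East Asian", "sadness", "neutral"),
    ("Middle Eastern", "anger", "anger"),
    ("African-European", "surprise", "fear"),
    ("Western control", "fear", "fear"),
]

def per_group_metrics(records):
    """Return {group: metrics dict} comparing predictions to ground truth."""
    groups = {}
    for group, y_true, y_pred in records:
        true_list, pred_list = groups.setdefault(group, ([], []))
        true_list.append(y_true)
        pred_list.append(y_pred)

    report = {}
    for group, (y_true, y_pred) in groups.items():
        precision, recall, f1, _ = precision_recall_fscore_support(
            y_true, y_pred, average="macro", zero_division=0
        )
        report[group] = {
            "accuracy": accuracy_score(y_true, y_pred),
            "precision": precision,
            "recall": recall,
            "f1": f1,
        }
    return report

if __name__ == "__main__":
    for group, metrics in per_group_metrics(records).items():
        print(group, metrics)
```

In practice, disparities would then be assessed by comparing each underrepresented group's metrics against the control group's, as described above; the choice of macro-averaging here is one reasonable option for multi-class emotion labels, not a requirement of the study design.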

Type of Project Research Grant
Status Project Live
Funder(s) University of Exeter
Value £0.00
Project Dates Feb 1, 2025 - Jul 31, 2025
This project contributes to the following UN Sustainable Development Goals
SDG 10 - Reduced Inequalities