
On the use of AI for generation of functional music to improve mental health

Williams, DAH; Hodge, VJ; Wu, CY

Authors

DAH Williams

VJ Hodge

CY Wu



Contributors

RE Dannenberg (Editor)

G Xia (Other)

A Bonnici (Other)

Abstract

Increasingly, music has been shown to have both physical and mental health benefits, including improvements in cardiovascular health, a link to reduced incidence of dementia in elderly populations, and improvements in markers of general mental well-being such as stress reduction. Here, we describe short case studies addressing general mental well-being (anxiety, stress reduction) through AI-driven music generation.
Engaging in active listening and music-making activities (especially for at-risk age groups) can be particularly beneficial, and the practice of music therapy has been shown to be helpful in a range of use cases across a wide age range. However, access to music-making can be prohibitive in terms of expertise, materials, and cost. Furthermore, the use of existing music for functional outcomes (such as the targeted improvements in physical and mental health markers suggested above) can be hindered by issues of repetition and subsequent over-familiarity with the material.

In this paper, we describe machine learning (ML) approaches which create functional music informed by biophysiological measurement across two case studies, with target emotional states at opposing ends of a Cartesian affective space (a dimensional emotion space with points ranging from descriptors of relaxation through to fear). We use galvanic skin response (GSR) as a marker of psychological arousal and as an estimate of emotional state, which serves as a control signal in the training of the ML algorithm. This algorithm creates a non-linear time series of musical features for sound synthesis 'on-the-fly', using a perceptually informed musical feature similarity model. We find an interaction between familiarity (or, more generally, the feature-set model we have implemented) and perceived emotional response, and therefore focus on generating new, emotionally congruent pieces. We also report on subsequent psychometric evaluation of the generated material, and consider how these and similar techniques might be useful for a range of functional music generation tasks, for example in nonlinear soundtracking such as that found in interactive media or video games.
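
The abstract describes a pipeline in which a GSR-derived arousal estimate steers the selection of musical features via a similarity model. As a rough, hypothetical sketch of that kind of pipeline (not the authors' implementation, whose model, features, and training are detailed in the paper itself), the Python example below normalises a synthetic GSR trace into an arousal estimate and greedily picks successive musical feature vectors by weighted distance to a target; every function name, feature dimension, and weight here is an assumption made for illustration only.

# Illustrative sketch, not the published method: GSR -> arousal estimate ->
# distance-based selection of the next musical feature vector.
import numpy as np


def arousal_from_gsr(gsr: np.ndarray, window: int = 50) -> np.ndarray:
    """Smooth a raw GSR trace and rescale it to [0, 1] as a crude arousal estimate."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(gsr, kernel, mode="same")
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo + 1e-9)


def next_features(current: np.ndarray,
                  candidates: np.ndarray,
                  target_arousal: float,
                  weights: np.ndarray) -> np.ndarray:
    """Pick the candidate feature vector closest (weighted Euclidean distance)
    to the current features, biased toward the target arousal level.
    Column 0 of each feature vector is assumed to encode arousal."""
    target = current.copy()
    target[0] = target_arousal  # steer the arousal dimension toward the goal
    dists = np.sqrt(((candidates - target) ** 2 * weights).sum(axis=1))
    return candidates[np.argmin(dists)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Synthetic GSR trace standing in for a real sensor stream.
    gsr = np.cumsum(rng.normal(0, 0.1, 500)) + 5.0
    arousal = arousal_from_gsr(gsr)

    # Hypothetical feature vectors: [arousal, tempo, brightness], normalised to [0, 1].
    candidates = rng.uniform(0, 1, size=(200, 3))
    weights = np.array([2.0, 1.0, 1.0])  # weight the arousal dimension more heavily

    features = candidates[0]
    for target in arousal[::100]:  # re-target every 100 samples
        features = next_features(features, candidates, target, weights)
        print(f"target arousal {target:.2f} -> features {np.round(features, 2)}")

In a real system the candidate pool, feature weighting, and similarity measure would come from the perceptually informed model described in the paper, and the GSR stream would arrive in real time rather than as a pre-recorded array.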

Citation

Williams, D., Hodge, V., & Wu, C. (2020). On the use of AI for generation of functional music to improve mental health. Frontiers in Artificial Intelligence, 3, 497864. https://doi.org/10.3389/frai.2020.497864

Journal Article Type Article
Acceptance Date Oct 19, 2020
Online Publication Date Nov 19, 2020
Publication Date Nov 19, 2020
Deposit Date Oct 20, 2020
Publicly Available Date Dec 1, 2020
Journal Frontiers in Artificial Intelligence
Publisher Frontiers Media
Volume 3
Pages 497864
DOI https://doi.org/10.3389/frai.2020.497864
Publisher URL https://doi.org/10.3389/frai.2020.497864
Related Public URLs https://www.frontiersin.org/journals/artificial-intelligence#
Additional Information Funders: Engineering and Physical Sciences Research Council (EPSRC)
Projects: Digital Creativity Labs
Grant Number: EP/M023265/1
