Mr Duncan Williams (D.A.H.Williams@salford.ac.uk), Senior Lecturer
R. E. Dannenberg, Editor
G. Xia, Other
A. Bonnici, Other
Increasingly, music has been shown to have both physical and mental health benefits, including improvements in cardiovascular health, a link to a reduction in cases of dementia in elderly populations, and improvements in markers of general mental well-being such as stress reduction. Here, we describe short case studies addressing general mental well-being (anxiety and stress reduction) through AI-driven music generation.
Engaging in active listening and music-making activities (especially for at-risk age groups) can be particularly beneficial, and the practice of music therapy has been shown to be helpful in a range of use cases across a wide age range. However, access to music-making can be prohibitive in terms of expertise, materials, and cost. Furthermore, the use of existing music for functional outcomes (such as the targeted improvements in physical and mental health markers suggested above) can be hindered by repetition and subsequent over-familiarity with the material.
In this paper, we describe machine learning (ML) approaches that create functional music informed by biophysiological measurement across two case studies, with target emotional states at opposing ends of a Cartesian affective space (a dimensional emotion space with descriptors ranging from relaxation to fear). We use galvanic skin response (GSR) as a marker of psychological arousal and as an estimate of emotional state, to be used as a control signal in the training of the ML algorithm. This algorithm creates a non-linear time series of musical features for sound synthesis 'on the fly', using a perceptually informed musical feature similarity model. We find an interaction between familiarity (or, more generally, the feature-set model we have implemented) and perceived emotional response, and so focus on generating new, emotionally congruent pieces. We also report on subsequent psychometric evaluation of the generated material, and consider how these and similar techniques might be useful for a range of functional music generation tasks, for example in non-linear soundtracking such as that found in interactive media or video games.
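The abstract's control loop can be sketched in miniature: normalise a window of GSR samples into an arousal estimate, place a target point in a two-dimensional (valence, arousal) affective space, and pick the nearest candidate musical feature set. This is a hypothetical illustration only; the calibration range, preset feature vectors, and function names are assumptions, not the paper's actual model.

```python
import math

def gsr_to_arousal(gsr_samples, gsr_min, gsr_max):
    """Map a window of GSR samples to a normalised arousal estimate
    in [0, 1] via min-max scaling of the window mean.
    gsr_min/gsr_max are an assumed per-listener calibration range."""
    mean = sum(gsr_samples) / len(gsr_samples)
    return max(0.0, min(1.0, (mean - gsr_min) / (gsr_max - gsr_min)))

def nearest_feature_set(target, candidates):
    """Pick the candidate (valence, arousal) preset closest to the
    target point by Euclidean distance -- a stand-in for the paper's
    perceptually informed feature similarity model."""
    return min(candidates, key=lambda c: math.dist(target, c))

# Hypothetical GSR window (microsiemens) and calibration bounds.
arousal = gsr_to_arousal([4.2, 4.5, 4.8], gsr_min=2.0, gsr_max=8.0)

# Aim for a relaxed, positive state: high valence, current arousal.
target = (0.8, arousal)

# Hypothetical (valence, arousal) coordinates of candidate feature sets.
presets = [(0.9, 0.2), (0.1, 0.9), (0.5, 0.5)]
chosen = nearest_feature_set(target, presets)
```

In a generative setting, `chosen` would index a bundle of musical parameters (tempo, mode, timbral brightness) fed to the synthesis engine each window, so the output tracks the listener's measured state rather than replaying familiar material.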
| Journal Article Type | Article |
| --- | --- |
| Acceptance Date | Oct 19, 2020 |
| Online Publication Date | Nov 19, 2020 |
| Publication Date | Nov 19, 2020 |
| Deposit Date | Oct 20, 2020 |
| Publicly Available Date | Dec 1, 2020 |
| Journal | Frontiers in Artificial Intelligence |
| Electronic ISSN | 2624-8212 |
| Publisher | Frontiers Media |
| Volume | 3 |
| Pages | 497864 |
| DOI | https://doi.org/10.3389/frai.2020.497864 |
| Publisher URL | https://doi.org/10.3389/frai.2020.497864 |
| Related Public URLs | https://www.frontiersin.org/journals/artificial-intelligence# |
| Additional Information | Funders: Engineering and Physical Sciences Research Council (EPSRC); Projects: Digital Creativity Labs; Grant Number: EP/M023265/1 |
frai-03-497864.pdf (688 KB, PDF)
Licence: http://creativecommons.org/licenses/by/4.0/
Publisher Licence URL: http://creativecommons.org/licenses/by/4.0/
- Sonic enhancement of virtual exhibits (2022), Journal Article
- What our bodies tell us about noise (2022), Journal Article
- Psychophysiological approaches to sound and music in games (2021), Book Chapter
- Neural and physiological data from participants listening to affective music (2020), Journal Article