Research Repository

All Outputs (42)

Timbral Metrics for Analysis of Metal Production: Then, Now and What Next? (2023)
Book Chapter
Williams, D. (2023). Timbral Metrics for Analysis of Metal Production: Then, Now and What Next? In J. Herbst (Ed.), The Cambridge Companion to Metal Music (70-82). Cambridge University Press (CUP). https://doi.org/10.1017/9781108991162.006

This chapter presents an overview of timbre in metal production from a psychoacoustic and computational musicology perspective, particularly focusing on the use of acoustic feature extraction. Both performance and recording technology have undoubtedly...

Stress Detection And Alleviation Via Electrodermal Activity And Generative Music (2023)
Thesis
Corradine, C. (2023). Stress Detection And Alleviation Via Electrodermal Activity And Generative Music. (Thesis). University of Salford

Accurate psychological stress detection systems have been created using a variety of methodologies and can provide users with real-time stress monitoring. Such systems can aid with providing early intervention and therapies for alleviation in order t...

Sonic enhancement of virtual exhibits (2022)
Journal Article
Al-Taie, I., Di Franco, P., Tymkiw, M., Williams, D., & Daly, I. (2022). Sonic enhancement of virtual exhibits. PLoS ONE, 17(8), e0269370. https://doi.org/10.1371/journal.pone.0269370

Museums have widely embraced virtual exhibits. However, relatively little attention is paid to how sound may create a more engaging experience for audiences. To begin addressing this lacuna, we conducted an online experiment to explore how sound infl...

Generation and analysis of artificial warning sounds for electric scooters (2021)
Report
Torija Martinez, A., Elliott, A., Harris, L., Podwinska, Z., Welham, C., Nicholls, R., …Williams, D. (2021). Generation and analysis of artificial warning sounds for electric scooters.

emTransit B.V. (Dott) is a European mobility operator currently operating over 30,000 electric scooters in Belgium, France, Germany, Italy, Poland and now the UK. The company aims to expand its UK operations and has recently won a tender for the Tra...

Psychophysiological approaches to sound and music in games (2021)
Book Chapter
Williams, D. (2021). Psychophysiological approaches to sound and music in games. In M. Fritsch, & T. Summers (Eds.), The Cambridge Companion to Video Game Music (302-318). Cambridge University Press (CUP). https://doi.org/10.1017/9781108670289.019

Psychological research investigating sound and music has increasingly been adapted to the evaluation of soundtracking for games, and is now being considered in the development and design stages of some titles. This chapter summarizes the main finding...

On the use of AI for generation of functional music to improve mental health (2020)
Journal Article
Williams, D., Hodge, V., & Wu, C. (2020). On the use of AI for generation of functional music to improve mental health. Frontiers in Artificial Intelligence, 3, 497864. https://doi.org/10.3389/frai.2020.497864

Increasingly, music has been shown to have both physical and mental health benefits, including improvements in cardiovascular health, a link to reduction of cases of dementia in elderly populations, and improvements in markers of general mental well-be...

On performance and perceived effort in trail runners using sensor control to generate biosynchronous music (2020)
Journal Article
Williams, D., Fazenda, B., Williamson, V., & Fazekas, G. (2020). On performance and perceived effort in trail runners using sensor control to generate biosynchronous music. Sensors, 20(16), e4528. https://doi.org/10.3390/s20164528

Music has been shown to be capable of improving runners’ performance in treadmill and laboratory-based experiments. This paper evaluates a generative music system, namely HEARTBEATS, designed to create biosignal synchronous music in real-time accordi...

Neural and physiological data from participants listening to affective music (2020)
Journal Article
Daly, I., Nicolaou, N., Williams, D., Hwang, F., Kirke, A., Miranda, E., & Nasuto, S. (2020). Neural and physiological data from participants listening to affective music. Scientific Data, 7(1), 177. https://doi.org/10.1038/s41597-020-0507-6

Music provides a means of communicating affective meaning. However, the neurological mechanisms by which music induces affect are not fully understood. Our project sought to investigate this through a series of experiments into how humans react to af...

Designing vocational training for audio engineers at a distance: challenges, reflections, and recommendations (2020)
Book Chapter
Williams, D. (2020). Designing vocational training for audio engineers at a distance: challenges, reflections, and recommendations. In D. Walzer, & M. Lopez (Eds.), Audio education: theory, culture, and practice. New York: Routledge. https://doi.org/10.4324/9780429020780

This chapter addresses the design of a full credit remote access module as part of an undergraduate degree course in music technology with a particular focus on sound recording technology at a public university in Texas. It focuses on appropriate cur...

“Hello Computer, how am I feeling?”, case studies of neural technology to measure emotions (2020)
Book Chapter
Daly, I., & Williams, D. (2020). “Hello Computer, how am I feeling?”, case studies of neural technology to measure emotions. In C. Nam (Ed.), Neuroergonomics (193-219). Springer. https://doi.org/10.1007/978-3-030-34784-0_11

Emotion is a core part of the human experience. Many artistic and creative applications attempt to produce particular emotional experiences, for example, films, games, music, dance, and other visual arts. However, while emotional states are ubiquitou...

Electroencephalography reflects the activity of sub-cortical brain regions during approach-withdrawal behaviour while listening to music (2019)
Journal Article
Daly, I., Williams, D., Hwang, F., Kirke, A., Miranda, E., & Nasuto, S. (2019). Electroencephalography reflects the activity of sub-cortical brain regions during approach-withdrawal behaviour while listening to music. Scientific Reports, 9(1). https://doi.org/10.1038/s41598-019-45105-2

The ability of music to evoke activity changes in the core brain structures that underlie the experience of emotion suggests that it has the potential to be used in therapies for emotion disorders. A large volume of research has identified a network...

Evaluating BCI for musical expression: historical approaches, challenges and benefits (2019)
Book Chapter
Williams, D. (2019). Evaluating BCI for musical expression: historical approaches, challenges and benefits. In A. Nijholt (Ed.), Brain Art (145-158). Springer. https://doi.org/10.1007/978-3-030-14323-7_5

A recurring challenge in the use of BCI (and more generally HCI) for musical expression is in the design and conduct of appropriate evaluation strategies when considering BCI systems for music composition or performance. Assessing the value of comput...

Emotional congruence in video game audio (2019)
Book Chapter
Williams, D., Cowling, P., & Murphy, D. (2019). Emotional congruence in video game audio. In N. Lee (Ed.), Encyclopedia of Computer Graphics and Games (1-3). Springer. https://doi.org/10.1007/978-3-319-08234-9_199-1

Video game audio is more challenging in many regards than traditional linear soundtracking. Soundtracking can enhance the emotional impact of gameplay, but in order to preserve immersion, it is important to have an understanding of the mechanisms at...

Stagger Lee: how violent nostalgia created an American folk song standard (2018)
Journal Article
Williams, D. (2018). Stagger Lee: how violent nostalgia created an American folk song standard. Journal of Extreme Anthropology, 2(1), 89-97. https://doi.org/10.5617/jea.5546

“Stagger” Lee Shelton (1865-1912) was an African-American carriage driver and sometime-pimp from Missouri. He became immortalized in song as a folklore antihero after murdering a drinking partner following a political argument gone bad in a St Louis...

Unconsciously interactive films in a cinema environment — a demonstrative case study (2018)
Journal Article
Kirke, A., Williams, D., Miranda, E., Bluglass, A., Whyte, C., Pruthi, R., & Eccleston, A. (2018). Unconsciously interactive films in a cinema environment — a demonstrative case study. Digital Creativity, 29(2-3), 165-181. https://doi.org/10.1080/14626268.2017.1407344

‘Many worlds’ is a short narrative live-action film written and directed so as to provide multiple linear routes through the plot to one of four endings, and designed for showing in a cinema environment. At two points during the film, decisions are m...

Personalised, multi-modal, affective state detection for hybrid brain-computer music interfacing (2018)
Journal Article
Daly, I., Williams, D., Malik, A., Weaver, J., Kirke, A., Hwang, F., …Nasuto, S. (2020). Personalised, multi-modal, affective state detection for hybrid brain-computer music interfacing. IEEE Transactions on Affective Computing, 11(1), 111-124. https://doi.org/10.1109/taffc.2018.2801811

Brain-computer music interfaces (BCMIs) may be used to modulate affective states, with applications in music therapy, composition, and entertainment. However, for such systems to work they need to be able to reliably detect their user's current affec...