Gerardo Roa Dabike
The Cadenza Woodwind Dataset: Synthesised quartets for music information retrieval and machine learning.
Roa Dabike, Gerardo; Cox, Trevor J.; Miller, Alex J.; Fazenda, Bruno M.; Graetzer, Simone; Vos, Rebecca R.; Akeroyd, Michael A.; Firth, Jennifer; Whitmer, William M.; Bannister, Scott; Greasley, Alinka; Barker, Jon P.
Authors
Prof Trevor Cox T.J.Cox@salford.ac.uk
Professor
Alex J. Miller
Dr Bruno Fazenda B.M.Fazenda@salford.ac.uk
Associate Professor/Reader
Dr Simone Graetzer S.N.Graetzer@salford.ac.uk
Research Fellow
Dr Rebecca Vos Rebecca.Vos@salford.ac.uk
University Fellow
Michael A. Akeroyd
Jennifer Firth
William M. Whitmer
Scott Bannister
Alinka Greasley
Jon P. Barker
Abstract
This paper presents the Cadenza Woodwind Dataset. This publicly available dataset comprises synthesised audio for woodwind quartets, including renderings of each instrument in isolation. The data was created as training data for Cadenza's second open machine learning challenge (CAD2), for the task of rebalancing classical music ensembles. The dataset is also intended for developing other music information retrieval (MIR) algorithms using machine learning. It was created because of the lack of large-scale datasets of classical woodwind music with separate audio for each instrument and a permissive license for reuse. Music scores were selected from the OpenScore String Quartet corpus. These were rendered for two woodwind ensembles: (i) flute, oboe, clarinet and bassoon; and (ii) flute, oboe, alto saxophone and bassoon. This was done by a professional music producer using industry-standard software. Virtual instruments were used to create the audio for each instrument, using software that interpreted the expression markings in the score. Convolution reverberation was used to simulate a performance space, and the ensembles were mixed. The dataset consists of the audio and associated metadata. [Abstract copyright: © 2024 The Authors.]
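The rendering step described above (reverberating each isolated instrument stem with a room impulse response, then summing the stems into an ensemble mix) can be sketched as below. This is an illustrative sketch only, not the authors' production pipeline: the function names are hypothetical, and plain `np.convolve` stands in for whatever convolution-reverb tool the producer used.

```python
import numpy as np


def apply_reverb(stem: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Convolve a dry mono stem with a room impulse response (full convolution)."""
    return np.convolve(stem, impulse_response, mode="full")


def mix_ensemble(stems: list[np.ndarray], impulse_response: np.ndarray) -> np.ndarray:
    """Reverberate each stem, then sum the wet signals into one ensemble mix."""
    wet = [apply_reverb(s, impulse_response) for s in stems]
    # Zero-pad to the longest wet stem before summing.
    n = max(len(w) for w in wet)
    mix = np.zeros(n)
    for w in wet:
        mix[: len(w)] += w
    return mix
```

In practice the dataset keeps both the per-instrument renderings and the final mix, which is what makes it usable as source-separation or rebalancing training data.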
| Field | Value |
|---|---|
| Journal Article Type | Article |
| Acceptance Date | Nov 28, 2024 |
| Online Publication Date | Dec 4, 2024 |
| Publication Date | 2024-12 |
| Deposit Date | Dec 19, 2024 |
| Publicly Available Date | Dec 19, 2024 |
| Journal | Data in Brief |
| Print ISSN | 2352-3409 |
| Electronic ISSN | 2352-3409 |
| Publisher | Elsevier |
| Peer Reviewed | Peer Reviewed |
| Volume | 57 |
| Article Number | 111199 |
| Pages | 111199 |
| DOI | https://doi.org/10.1016/j.dib.2024.111199 |
| Keywords | Audio, Deep learning, MIR, Ensemble |
Files
Published Version (316 KB, PDF)
Publisher Licence URL: http://creativecommons.org/licenses/by/4.0/