Robust Real-Time Music Transcription with a Compositional Hierarchical Model

Matevž Pešek, Aleš Leonardis, Matija Marolt

Research output: Contribution to journal › Article › peer-review

Abstract

The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of the input data; transparency, which offers insight into the learned representation; and robustness and speed, which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. Parts in lower layers correspond to low-level concepts (e.g. tone partials), while parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from lower layers, with statistical co-occurrence of parts acting as the driving force of the learning process. In the paper, we present the model’s structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model’s performance on multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model to other music information retrieval tasks.
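
To make the co-occurrence-driven learning step concrete, the sketch below illustrates the idea in Python. It is a minimal toy illustration, not the published model: the binary part activations, the pairwise-only compositions, the threshold min_cooccurrence and the function learn_compositions are all assumptions made for this example.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy layer-0 activations: rows are time frames, columns are layer-0
# parts (e.g. detected tone partials); 1 means the part is active.
activations = (rng.random((200, 8)) < 0.2).astype(int)

# Inject a recurring pattern so parts 1, 3 and 5 co-occur frequently,
# mimicking partials that belong to the same tone.
pattern_frames = rng.choice(200, size=60, replace=False)
activations[pattern_frames[:, None], [1, 3, 5]] = 1

def learn_compositions(acts, min_cooccurrence=0.15):
    """Propose layer-1 parts: pairs of layer-0 parts whose relative
    co-occurrence frequency across frames exceeds a threshold."""
    compositions = []
    for i, j in combinations(range(acts.shape[1]), 2):
        freq = np.mean(acts[:, i] & acts[:, j])
        if freq >= min_cooccurrence:
            compositions.append(((i, j), freq))
    return sorted(compositions, key=lambda c: -c[1])

for (i, j), freq in learn_compositions(activations):
    print(f"layer-1 part from parts {i} and {j}: co-occurrence {freq:.2f}")
```

Run as-is, only the pairs among the injected parts 1, 3 and 5 cross the threshold and are proposed as next-layer parts, analogous to partials being grouped into a tone; the published model stacks several such layers, reaching concepts like tones and chords.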
Original language: English
Article number: e0169411
Number of pages: 19
Journal: PLoS ONE
Volume: 12
Issue number: 1
DOI: 10.1371/journal.pone.0169411
Publication status: Published - 3 Jan 2017
