Folk Music Challenge on Tempo and Music Meter Estimation

Organizers

Aggelos Gkiokas (ILSP, GR), agkiokas AT ilsp.gr

Aggelos Pikrakis (Univ. of Piraeus, GR) pikrakis AT unipi.gr

Vassilis Katsouros (ILSP, GR), vsk AT ilsp.gr



FMA-2018 challenge results



We would like to thank all the participants of this year's FMA challenge on music meter and tempo induction from folk music recordings.

Summary:
A) We received 2 submissions for the music meter task:
Submission #1 by Hendrik Schreiber, referred to as <HS> in the sequel.
Submission #2 by Pierre Beauguitte, John Kelleher and Bryan Duggan, referred to as <BKD> in the sequel.
A submission scores a hit if it correctly predicts the numerator of the music meter. The two submissions were tested on 80 tracks and scored as follows:

<HS>: 43/80
<BKD>: 47/80

Therefore, <BKD> is named the winner. Congratulations to both participants.


B) We received 1 submission for the tempo induction task:
Submission #1 by Hendrik Schreiber, referred to as <HS> in the sequel.
The submission was evaluated based on the acc1 metric: acc1 measures whether an algorithm predicts the correct tempo within a 4% tolerance.
The submission was tested on the same 80 tracks as the previous task and scored as follows:

<HS>: 34/80

Congratulations to Hendrik Schreiber for his submission.

Performance per music recording
The following table presents the performance of each method on every individual recording. Note that, after careful consideration and seeking musicological advice, we decided to treat a detected 4/4 meter as correct when the annotated meter was 2/4. A similar decision holds for the detected tempo in this case, i.e., a predicted tempo that is twice the annotated one is also accepted (with the annotated tempo corresponding to the 2/4 music meter).
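For clarity, the scoring rule just described can be summarized in the following Python sketch. This is only an illustration under our own naming, not the official evaluation script.

def meter_hit(predicted_numerator, annotated_meter):
    # Hit if the predicted numerator matches the annotation; a detected 4/4
    # is also accepted when the annotated meter is 2/4.
    annotated_numerator = int(annotated_meter.split("/")[0])
    if predicted_numerator == annotated_numerator:
        return True
    return annotated_numerator == 2 and predicted_numerator == 4

def tempo_hit(predicted_bpm, annotated_bpm, annotated_meter, tolerance=0.04):
    # Hit if the prediction is within 4% of the annotated tempo; for 2/4
    # annotations, twice the annotated tempo is also accepted.
    if abs(predicted_bpm - annotated_bpm) <= tolerance * annotated_bpm:
        return True
    if annotated_meter.startswith("2/"):
        doubled = 2 * annotated_bpm
        return abs(predicted_bpm - doubled) <= tolerance * doubled
    return False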

You can download the FMA-2018 challenge results here, or view them in a simplified form below.



The Organizing Team

Description

This challenge aims at comparing algorithms for tempo and music meter estimation of folk music recordings, and focuses on Greek traditional music. Regarding the tempo induction task, methods will be evaluated against the annotated tempo that corresponds to the denominator of the time signature of each music excerpt. For example, for an excerpt in 7/8 meter, the ground-truth tempo corresponds to the eighth note. Regarding the music meter estimation task, the algorithms should predict one of seven target classes. For this task we consider only the numerator of the meter, thus avoiding confusion between similar meters (e.g., 5/4 and 5/8 are considered the same class). The target meters are 2/4, 3/4, 5/4 (or 5/8), 6/8, 7/8, 9/4 (or 9/8), and 11/8. Participants may choose to participate in one or both tasks.
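As an illustration of the class definition above, the following Python sketch maps an annotated time signature to its target class by keeping only the numerator (the function name and the validation set are ours, not part of any official tool):

TARGET_NUMERATORS = {2, 3, 5, 6, 7, 9, 11}

def meter_class(time_signature):
    # "5/4" and "5/8" both map to class 5; "9/4" and "9/8" both map to class 9.
    numerator = int(time_signature.split("/")[0])
    if numerator not in TARGET_NUMERATORS:
        raise ValueError("Unexpected meter: " + time_signature)
    return numerator

For example, meter_class("5/8") and meter_class("5/4") both return 5.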

Challenge Dataset

The dataset will be the same for both the tempo and the meter estimation tasks. It will include a set of Greek traditional music recordings covering genres and meters from the various geographic areas of the Greek territory. It consists of a total of 100 excerpts, each 30 seconds long, annotated with their tempo and meter.

The set is split into two subsets. A “visible” or “public” subset of 20 excerpts, representative of the tempos and meters of Greek traditional music, will be published at the beginning of the challenge; participants may train or fine-tune their algorithms on it. The remaining 80 excerpts constitute the “hidden” part, which will not be known to the contest participants and will be used to evaluate the algorithms. The “hidden” part of the dataset will be published after the announcement of the final results.

 

Data Format

For each excerpt in the dataset we provide a YouTube video link, the start and end time of the excerpt, the annotated tempo in BPM, and the music meter signature. The dataset is a CSV file with the following format:

 

youtube_url, start_time, end_time, tempo, time signature

 

For running the submissions, all excerpts will be downloaded as 16-bit PCM audio files, single channel (mono), sampled at 22.05 kHz.
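As an illustration, the metadata file can be read with a few lines of Python; the file name dataset.csv below is only a placeholder, and the column names follow the header shown above:

import csv

with open("dataset.csv", newline="") as f:
    reader = csv.DictReader(f, skipinitialspace=True)
    for row in reader:
        url = row["youtube_url"]
        start, end = row["start_time"], row["end_time"]   # excerpt boundaries as given in the file
        tempo_bpm = float(row["tempo"])
        meter = row["time signature"]
        print(url, start, end, tempo_bpm, meter)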

 

Submission Format

The algorithms should be console-based programs accompanied by detailed instructions on how to run them (e.g., OS, software dependencies, etc.). The algorithms should run as:

 

foo input.wav input_tempo.txt input_meter.txt

 

where input.wav is the audio file, input_tempo.txt is a text file containing the BPM output of the algorithm, and input_meter.txt contains the music meter output. In case a method requires computationally intensive initialization, it can be run as:

 

foo inputFolder outputFolder

 

where inputFolder is a folder containing the audio files and outputFolder is a folder where the results are written (one text file per audio file).
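As an illustration of the single-file calling convention above, a submission could be wrapped in a small console script such as the following Python sketch. Here estimate_tempo_and_meter is a hypothetical placeholder for the participant's own method, and the exact content of the output files (a plain BPM value and the meter numerator) reflects our reading of the description above:

import sys

def estimate_tempo_and_meter(wav_path):
    # Placeholder: analyze the 22.05 kHz mono PCM file and return (bpm, meter numerator).
    raise NotImplementedError

if __name__ == "__main__":
    wav_path, tempo_path, meter_path = sys.argv[1], sys.argv[2], sys.argv[3]
    tempo_bpm, meter_numerator = estimate_tempo_and_meter(wav_path)
    with open(tempo_path, "w") as f:
        f.write("%.2f\n" % tempo_bpm)       # e.g. "162.50"
    with open(meter_path, "w") as f:
        f.write("%s\n" % meter_numerator)   # e.g. "7" for a 7/8 excerpt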

Evaluation Metrics

Tempo Estimation

All algorithms will be evaluated based on the acc1 evaluation metric: acc1 measures whether an algorithm predicts the correct tempo within a 4% tolerance. Since the contest dataset mostly contains odd meters (7/8, 9/8, etc.), the common acc2 metric, which also counts integer fractions and multiples of the ground-truth tempo as correct (e.g., 2, 1/2, 3 and 1/3), is not representative of a method's performance and thus will not be used in the evaluation.
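A minimal sketch of the acc1 computation, assuming a list of (predicted, annotated) tempo pairs in BPM (the names are ours, not an official script):

def acc1(pairs, tolerance=0.04):
    # Fraction of excerpts whose predicted tempo lies within 4% of the annotation.
    hits = sum(1 for predicted, annotated in pairs
               if abs(predicted - annotated) <= tolerance * annotated)
    return hits / len(pairs)

For example, acc1([(120.0, 118.0), (90.0, 140.0)]) returns 0.5.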

Meter Estimation

The standard classification accuracy will be used.

Contest Rules

Any student, researcher, professor or individual can participate in the contest under the restriction that they hold the rights to the algorithm they are submitting. Dependencies on third-party code should be mentioned explicitly.

Participants should provide detailed instructions on how to run the program, as well as a technical report (up to two pages long) with the description of their approach.

Organizers cannot participate in the contest, but they may evaluate their own methods as baselines, outside the competition.

Awards

As a symbolic prize, the first place is awarded one free registration to the conference. In case the first place is shared, one free registration is awarded for each winning submission.

 

Special Session

Already published methods are welcome; this should be explicitly mentioned in the respective technical report.

If a submitted algorithm is unpublished or contains sufficient novelty compared to an existing version, the author(s) are encouraged to submit a regular paper to the FMA-2018 workshop using the EasyChair submission platform. The paper will be peer-reviewed and, if accepted, it will be included in a special session dedicated to the challenge. If there are not enough accepted papers to form a special session, then they will be included in the main track of FMA-2018.

 

Important Dates

Submission of methods and technical reports: May 25, 2018

Submission of papers (if desired): May 25, 2018

Challenge results: June 6, 2018

Notification of paper decisions (if submitted): June 8, 2018