Tutorial Schedule
Tutorials will be scheduled during the first day of the conference (October 26, 2015) as follows:
Morning Tutorials
Tutorial 1. Why singing is interesting (S. Dixon, M. Goto, M. Mauch)
Tutorial 2. Addressing the music information needs of musicologists (R. J. Lewis, B. Fields, T. Crawford, K. Page, D. Weigl, M. Mueller, D. Lewis, C. Rhodes, J. Gagen)
Tutorial 3. Markov logic networks for music analysis (H. Papadopoulos)
Afternoon Tutorials
Tutorial 4. Why is flamenco interesting for MIR research? (E. Gómez, N. Kroher, J. M. Díaz-Báñez, S. Oramas, J. Mora, F. Gómez-Martín)
Tutorial 5. Using correlation analysis and big data to identify and predict musical behaviors (J. C. Smith)
Tutorial 6. Automatic music transcription (Z. Duan, E. Benetos)
Tutorial 1. Why singing is interesting
(Simon Dixon, Masataka Goto, Matthias Mauch)
E.T.S.I. Telecomunicación. Sala de Grados A
This tutorial aims to introduce the ISMIR community to the exciting world of singing styles and the mechanisms of the singing voice, and to provide a guide to representations, engineering tools, and methods for analyzing and leveraging it. The singing voice is arguably the most expressive of all musical instruments, and all popular music cultures around the world use singing. Across disciplines, a lot is known about singing culture and the intricate physiological and psychological mechanisms of singing, but this knowledge is under-exploited in much of the music information retrieval literature. The three parts of the tutorial (one hour each) are designed to remedy this: an introduction to singing styles, techniques and forms around the world (including a short introduction to the psychology of singing); a practical guide to the analysis of singing using music informatics tools; and an overview of various systems for singing information processing. Our aim is for music information retrieval specialists to walk away with a newly sparked passion for singing, and with ideas for using our knowledge of singing, and of singing information processing, to create new, exciting research.
Tutorial 2. Addressing the music information needs of musicologists
(Richard J. Lewis, Ben Fields, Tim Crawford, Kevin Page, David Weigl, Meinard Müller, David Lewis, Christophe Rhodes, Justin Gagen)
E.T.S.I. Telecomunicación. Sala de Grados B
The music information needs of musicologists are not being met by the current generation of MIR tools and techniques. While evaluation has always been central to the practice of the music information retrieval community, the tasks tackled most often address the music information needs of recreational users, such as playlist recommendation; or are specified at a level that is not very relevant to the needs of music researchers, such as beat or key finding; or have focused on (and possibly even become over-fitted to) a narrow range of musical repertoire that does not cover musicological interests. In this tutorial we will present those music information needs through topics including at least the following: the metadata requirements of historical musicology; working with symbolic corpora; studying musical networks; passage-level audio search; and musical understandings of audio features. As well as these scheduled presentations and discussions, we will ask attendees to submit suggestions of musicologically motivated research questions suitable for MIR during the course of the tutorial; these will then be reviewed and discussed at the conclusion of the tutorial. Finally, we have invited Meinard Müller to close the tutorial by outlining his view of the current state of MIR for musicology. We aim to enable attendees, as experts in their own areas of MIR, to find new applications of their tools and techniques that can also serve the needs of musicologists. Given the selection of MIR topics we intend to cover, this tutorial will be of particular interest to those working in musical metadata, symbolic MIR, audio search, and graph analytics. We believe contemporary musicology to be a rich source of new and exciting challenges for MIR, and we are confident the community can rise to those challenges. In the long term, we hope this tutorial will give rise to a selection of new MIREX tasks that focus on musicological challenges.
Tutorial 3. Markov logic networks for music analysis
(Helene Papadopoulos)
E.T.S.I. Telecomunicación. Sala de Juntas de Telecomunicación
The automatic extraction of relevant content information from music audio signals is an essential aspect of Music Information Retrieval (MIR). Music audio signals are very rich and complex, both because of the intrinsic physical nature of audio (incomplete and noisy observations, many modes of sound production, etc.) and because they convey multi-faceted and strongly interrelated semantic information (harmony, melody, metre, structure, etc.). Dealing with real audio recordings thus requires the ability to handle both uncertainty and complex relational structure at multiple levels of representation. Until recently, these two aspects were generally treated separately: probability is the standard way to represent uncertainty in knowledge, while logical representations are used to encode complex relational information. Markov Logic Networks (MLNs), in which statistical and relational knowledge are unified within a single representation formalism, have recently received considerable attention in many domains such as natural language processing, link-based Web search, and bioinformatics. The goal of this tutorial is to provide a comprehensive overview of Markov logic networks and to show how they can be used as a highly flexible and expressive yet concise formalism for the analysis of music audio signals. We will show how MLNs encompass the probabilistic and logic-based models that are classically used in MIR. Algorithms for MLN modeling, training and inference will be presented, as well as open-source software packages for MLNs that are suitable for MIR applications. We will discuss concrete case studies in various fields of application. Although a background in machine learning and graphical models is suggested, no advanced knowledge is needed.
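To make the MLN formalism concrete before the tutorial: an MLN assigns each possible world x a probability proportional to exp(Σ w_i n_i(x)), where n_i(x) counts the satisfied groundings of formula i. The sketch below works through the classic toy "smoking friends" example in plain Python; it is illustrative only and is not taken from the tutorial material.

```python
from itertools import product
from math import exp

# Toy MLN: domain {A, B}, evidence Friends(A,B) = Friends(B,A) = True,
# and one weighted rule:
#   weight 1.5 :  Friends(x, y) & Smokes(x) => Smokes(y)
W = 1.5
PEOPLE = ["A", "B"]
friends = {("A", "B"): True, ("B", "A"): True}

def n_satisfied(smokes):
    """Count satisfied groundings of the rule in one possible world."""
    count = 0
    for x, y in product(PEOPLE, repeat=2):
        if x == y:
            continue
        # an implication is satisfied unless antecedent true, consequent false
        if not (friends.get((x, y)) and smokes[x] and not smokes[y]):
            count += 1
    return count

# Unnormalized weight of a world: exp(W * n_i(x)); summing over all
# worlds gives the partition function Z.
worlds = [dict(zip(PEOPLE, vals)) for vals in product([False, True], repeat=2)]
weights = {tuple(w.values()): exp(W * n_satisfied(w)) for w in worlds}
Z = sum(weights.values())

# Query P(Smokes(B) | Smokes(A)) by summing world probabilities.
num = sum(v for k, v in weights.items() if k[0] and k[1])
den = sum(v for k, v in weights.items() if k[0])
print(round(num / den, 3))  # prints 0.818
```

Real MLN software (which the tutorial will cover) avoids this brute-force enumeration, since the number of worlds grows exponentially with the domain; the point here is only the underlying probability model.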
Tutorial 4. COmputation and FLAmenco: Why is flamenco interesting for MIR research?
(Emilia Gómez, Nadine Kroher, José Miguel Díaz-Báñez, Sergio Oramas, Joaquín Mora, Francisco Gómez-Martín)
E.T.S.I. Telecomunicación. Sala de Juntas de Telecomunicación
This tutorial provides an introduction to flamenco music with the support of MIR techniques. At the same time, it analyzes the challenges and opportunities that this repertoire presents to MIR researchers, presents some research contributions, and provides a forum to discuss how to address those challenges in future research. As ISMIR 2015 is in Málaga, this tutorial will give participants a unique chance to discover flamenco music in its original location. The tutorial is structured in two main parts. First, we will provide a general introduction to flamenco music: its origins and evolution, musical characteristics, instrumentation, singing and guitar. We will illustrate this introduction with multimedia material and live performance. Then we will analyze how MIR technologies perform on flamenco music. By discussing several MIR tasks and how they should be addressed in this context, we will discover more about flamenco and about how methods tailored to this repertoire can be exploited in other contexts. We will focus on automatic transcription, singer identification, music similarity, genre classification, rhythmic and melodic pattern detection, and context-based music description methods. Participants will have the chance to interact with MIR-annotated datasets and tools developed for flamenco music in the context of the COFLA project. Audience: MIR researchers with an interest in oral music traditions in general and flamenco music in particular.
Tutorial 5. Using correlation analysis and big data to identify and predict musical behaviors
(Jeff C. Smith)
E.T.S.I. Telecomunicación. Sala de Grados B
New and significant repositories of musical data afford unique opportunities to apply data analysis techniques and gain insights into musical engagement. These repositories include performance, listening, curation, and behavioral data. Often the data also includes demographic and/or location information, allowing studies of musical behavior to be correlated with, for example, culture or geography. Historically, the analysis of musical behaviors was limited: subjects (e.g. performers or listeners) were typically recruited for such studies, a technique that suffered from methodological issues (e.g. the sample of subjects would often exhibit bias) or from too few subjects and/or too little data to make reliable statements of significance. That is to say, the conclusions from these studies were largely anecdotal. In contrast, the availability of new repositories of musical data allows studies of musical engagement to develop conclusions that pass standards of significance, thereby yielding actual insights into musical behaviors. This tutorial will demonstrate several techniques and examples in which correlation and statistical analysis is applied to large repositories of musical data to document various facets of musical engagement. Stanford University has created a new corpus of amateur music performance data, the Stanford Digital Archive of Mobile Performances (DAMP), to facilitate the study of musical engagement through correlation and statistical analysis. [We are adding another corpus of geo-tagged singing performance data to DAMP this month.] Web site: https://ccrma.stanford.edu/damp/
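As a minimal sketch of the kind of correlation analysis the tutorial describes: the example below correlates per-user practice counts with pitch-accuracy scores on synthetic data. The variable names are purely illustrative assumptions, not the actual DAMP schema.

```python
import numpy as np

# Synthetic stand-in for a performance corpus: 500 users, each with a
# practice count and a pitch-accuracy score in [0, 1]. These fields are
# hypothetical, not drawn from DAMP.
rng = np.random.default_rng(0)
practice_counts = rng.integers(1, 200, size=500).astype(float)

# Accuracy loosely increases with practice, plus observation noise.
accuracy = 0.4 + 0.002 * practice_counts + rng.normal(0.0, 0.05, size=500)
accuracy = np.clip(accuracy, 0.0, 1.0)

# Pearson correlation coefficient between the two variables.
r = np.corrcoef(practice_counts, accuracy)[0, 1]
print(round(r, 2))
```

With 500 samples the correlation easily passes standard significance thresholds, which is exactly the advantage large behavioral repositories offer over small recruited-subject studies.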
Tutorial 6. Automatic music transcription
(Zhiyao Duan, Emmanouil Benetos)
E.T.S.I. Telecomunicación. Sala de Grados A
Automatic Music Transcription (AMT) is a fundamental problem in music information retrieval. Roughly speaking, transcription refers to extracting a symbolic representation (a list of notes, with pitches and rhythms) from an audio signal. Music transcription is a fascinating but challenging task, even for humans: in undergraduate music education it is usually called dictation, and achieving a high level of proficiency requires years of practice and training. Endowing machines with this ability is an even more challenging problem, especially for polyphonic music. The AMT problem has therefore drawn great interest from researchers in several areas, including signal processing, machine learning, acoustics, music theory, and music cognition. In terms of applications, a successful AMT system would help solve many MIR research problems, including music source separation, structure analysis, content-based music retrieval, and the musicological study of non-notated music, to name just a few. This tutorial will give an overview of the AMT problem, including current approaches, datasets and evaluation methodologies. It will also explore connections with related problems (e.g. audio-score alignment, source separation) as well as applications to related fields, such as content-based music retrieval and computational musicology. The tutorial is designed for students and researchers who have general knowledge of music information retrieval and/or computational musicology and are interested in getting into the field of AMT. A substantial amount of time will be spent discussing challenges and research directions; we hope that this discussion will help move the field forward and encourage related fields in MIR and computational musicology to exploit AMT technologies.
The tutorial will also include hands-on sessions on using AMT code and plugins; participants are encouraged to bring their laptops to gain access to transcription datasets and to work on AMT examples.
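To give a flavour of the simplest AMT subproblem ahead of the hands-on sessions: the sketch below estimates the fundamental frequency of a single monophonic frame by autocorrelation. This is a deliberately naive baseline on a synthetic signal, not one of the tutorial's methods; real transcription systems use far more robust multi-pitch techniques.

```python
import numpy as np

# Synthetic test frame: 100 ms of a 440 Hz sine (A4) at 44.1 kHz.
SR = 44100
t = np.arange(int(0.1 * SR)) / SR
frame = np.sin(2 * np.pi * 440.0 * t)

def autocorr_pitch(x, sr, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of one frame (naive baseline)."""
    # Autocorrelation; keep only non-negative lags.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    # Search lags corresponding to the plausible pitch range.
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])  # lag of strongest periodicity
    return sr / lag

f0 = autocorr_pitch(frame, SR)
print(round(f0))  # close to 440 (integer-lag quantization adds ~1 Hz error)
```

Going from this single-pitch, single-frame estimate to full transcription (simultaneous notes, onsets, durations, noisy timbres) is exactly where the difficulty discussed in the abstract lies.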