
PhD Thesis

«Motion-Sound Mapping by Demonstration»

My PhD work focused on developing the approach and computational models for Motion-Sound Mapping by Demonstration. The approach combines the design principle of mapping through listening with interactive machine learning, allowing users to craft the mapping between motion and sound from movements performed while listening.

I did my PhD in the {Sound Music Movement} Interaction team at Ircam, supervised by Frédéric Bevilacqua and Thierry Artières. My doctoral studies were funded by a grant from the EDITE doctoral school at Université Pierre et Marie Curie.

Download Dissertation
Download Slides


Supplementary Material

Chapter 4 – Probabilistic Movement Models

4.2 – Designing Sonic Interactions with GMMs

This video presents a system using GMMs to recognize different modes of “scratching” from a contact microphone. We trained three GMMs on recordings of the three scratching modes. In performance, we use the posterior likelihoods of the models as weights to mix the input audio filtered through different resonant filters.
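As a rough illustration of this posterior-based mixing (not the code used in the thesis), the Python sketch below trains one GMM per scratching mode and turns per-frame likelihoods into mixing weights; the audio features, model sizes, and resonant filters are all placeholder assumptions.

```python
# Minimal sketch: class-posterior mixing weights from three GMMs, one per
# scratching mode. Feature extraction (e.g. spectral features from the
# contact-microphone signal) and the resonant filters are stubbed out.
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training data: one (n_frames, n_features) array per scratching mode.
train_features = [np.random.randn(500, 13) + offset for offset in (0.0, 2.0, 4.0)]

# One GMM per mode, trained independently on that mode's recordings.
models = [GaussianMixture(n_components=8).fit(x) for x in train_features]

def mixing_weights(frame_features):
    """Per-frame posterior probability of each mode, used as filter mixing gains."""
    # Log-likelihood of the incoming frame under each model (equal class priors).
    log_lik = np.array([m.score_samples(frame_features[None, :])[0] for m in models])
    # Softmax normalization gives weights that sum to one.
    w = np.exp(log_lik - log_lik.max())
    return w / w.sum()

# In performance, each frame's weights scale the outputs of three resonant filters:
# output = sum_i weights[i] * resonant_filter_i(input_audio_frame)
print(mixing_weights(np.random.randn(13)))
```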

This application builds upon research in the ISMM team, notably by Nicolas Rasamimanana and Julien Bloit at Phonotonic, which Bruno Zamborlin extended with Mogees.

4.6 – Segment-level Mapping with the HHMM

This video presents an application of the Hierarchical HMM to the control of sound synthesis, as described in section 4.6.3. The video accompanies the article presented at SMC 2012.

Chapter 6 – Probabilistic Models for Parameter Generation

6.3 – HMR for Gesture-based Control of Physical Modeling Sound Synthesis

This video presents a system using HMR for learning the relationship between gestures and trajectories of input parameters to a physical model. This video accompanies the demonstration presented at ACM Multimedia 2013.
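To give a flavor of how such a mapping generates parameter trajectories, here is a minimal Python sketch (not the thesis implementation) of the underlying regression idea: a joint Gaussian mixture is fit over concatenated gesture and synthesis-parameter frames, and new parameters are generated by conditioning on an incoming gesture frame. HMR adds a hidden Markov temporal structure on top of this per-state regression; the dimensions and data here are placeholder assumptions.

```python
# Minimal sketch of regression-based parameter generation: fit a joint Gaussian
# mixture over [gesture, synthesis-parameter] frames from demonstrations, then
# condition on a new gesture frame to produce the expected synthesis parameters.
import numpy as np
from sklearn.mixture import GaussianMixture

n_gesture, n_sound = 6, 4                           # assumed dimensions
demo = np.random.randn(1000, n_gesture + n_sound)   # placeholder joint demonstration frames

gmm = GaussianMixture(n_components=5, covariance_type="full").fit(demo)

def generate_parameters(gesture_frame):
    """Conditional expectation of the synthesis parameters given a gesture frame."""
    x = np.asarray(gesture_frame)
    weights, means = [], []
    for pi, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_g, mu_s = mu[:n_gesture], mu[n_gesture:]
        cov_gg = cov[:n_gesture, :n_gesture]
        cov_sg = cov[n_gesture:, :n_gesture]
        diff = x - mu_g
        # Responsibility of this component for the observed gesture frame.
        w = pi * np.exp(-0.5 * diff @ np.linalg.solve(cov_gg, diff)) \
            / np.sqrt(np.linalg.det(2 * np.pi * cov_gg))
        weights.append(w)
        # Conditional mean of the sound parameters for this component.
        means.append(mu_s + cov_sg @ np.linalg.solve(cov_gg, diff))
    weights = np.array(weights) / np.sum(weights)
    return np.sum(weights[:, None] * np.array(means), axis=0)

print(generate_parameters(np.zeros(n_gesture)))
```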

Chapter 8 – Playing Sound Textures

8.3 – SIGGRAPH’14 Installation

Demo Video:


The following recording presents the 8 sound examples used for the SIGGRAPH’14 installation.


Screenshot of the application used in the installation:

8.4 – Gesture Imitation with Sonification

The following video illustrates the 4 demonstration gestures and sounds to reproduce in the experiment.

Chapter 9 – Motion-Sound Interaction through Vocalization

9.2 – Vocalization System Overview

This demonstration video, supporting our proposal for SIGGRAPH’14 Emerging Technologies, illustrates the system for performing vocalizations through continuous gestures.


This video illustrates Wired Gestures, developed by Greg Beller in the “Synekine” project. More information can be found on Greg Beller’s Website.

9.3 – The Imitation game

These sound examples were recorded during the SIGGRAPH’14 installation “The Imitation game”. We first present the vocalization used as the demonstration, performed by Player 1, followed by Player 2’s attempts to reproduce the vocal imitation through gesture interaction.
Demonstration (Player 1):


Attempts to imitate (Player 2):


Figure:

Appendix

A.3 – Towards Continuous Parametric Synthesis

These examples present the developments of Pablo Arias’s Master’s thesis, “Description et synthèse sonore dans le cadre de l’apprentissage mouvement-son par démonstration” (Sound description and synthesis for motion-sound mapping by demonstration), which I supervised with Norbert Schnell and Frédéric Bevilacqua.

Granular with Transient Conservation

This video demonstrates gesture-based synthesis of vocalizations with the granular engine with transient conservation. From Pablo Arias.

Hybrid Synthesis

This video demonstrates gesture-based synthesis of vocalizations with the four proposed approaches. From Pablo Arias.