Motion-Sound Interaction through Vocalization

March 24, 2015 #HMR #Mapping-by-Demonstration #Sonification #Vocalization

In this project, we investigate how vocalizations produced while moving can support the design of sonic interactions. We propose a generic movement sonification system that learns the relationship between gestures and vocal sounds, with applications in gaming, performing arts, and movement learning.

System Overview

The system is based on Hidden Markov Regression (HMR) [francoise2013multimodal], which learns the mapping between sequences of motion features and sequences of sound descriptors (for example, MFCCs) representing the vocal sounds. During the demonstration phase, the user produces a vocalization synchronously with a gesture. The joint recording of motion and sound features is used to train a multimodal HMM encoding their relationship. During performance, HMR continuously generates the sequence of sound descriptors associated with a new movement sequence; these descriptors then drive the synthesis of the vocal sounds through descriptor-driven granular synthesis.
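As a rough illustration (not the authors' implementation), the performance phase of HMR can be sketched as follows: run the forward algorithm on the motion features alone, then output, at each frame, the mixture of per-state conditional Gaussian predictions of the sound descriptors weighted by the forward probabilities. The toy parameters below are hand-set for a one-dimensional motion and sound feature; in the real system they would be estimated by EM from the joint demonstration recording, and the features would be multidimensional.

```python
import numpy as np

# Toy multimodal HMM with 2 hidden states over joint [motion, sound] features.
# Parameters are hand-set for illustration (state 0 = "low", state 1 = "high");
# normally they are learned by EM from a demonstration recording.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])   # state transition matrix
pi = np.array([0.5, 0.5])    # initial state distribution

# Per-state joint Gaussian: mean and full covariance over [motion, sound].
mu = np.array([[0.0, 0.0],
               [1.0, 1.0]])
Sigma = np.array([np.eye(2) * 0.1,
                  np.eye(2) * 0.1])
Sigma[:, 0, 1] = Sigma[:, 1, 0] = 0.05  # correlate motion and sound

def motion_likelihood(m):
    """p(m | state k), from the motion marginal of each joint Gaussian."""
    out = np.empty(len(mu))
    for k in range(len(mu)):
        var = Sigma[k, 0, 0]
        out[k] = np.exp(-0.5 * (m - mu[k, 0]) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return out

def hmr_step(alpha, m):
    """One frame of Hidden Markov Regression.

    alpha: previous forward vector (None on the first frame).
    Returns (updated forward vector, predicted sound descriptor).
    """
    b = motion_likelihood(m)
    alpha = pi * b if alpha is None else (A.T @ alpha) * b
    alpha /= alpha.sum()
    # Per-state conditional expectation E[sound | motion, state k],
    # mixed by the forward probabilities.
    s_hat = 0.0
    for k in range(len(mu)):
        s_k = mu[k, 1] + Sigma[k, 1, 0] / Sigma[k, 0, 0] * (m - mu[k, 0])
        s_hat += alpha[k] * s_k
    return alpha, s_hat

# Stream a motion trajectory, generating one sound descriptor per frame.
alpha, sounds = None, []
for m in [0.0, 0.2, 0.8, 1.0]:
    alpha, s = hmr_step(alpha, m)
    sounds.append(s)
```

The generated descriptor stream tracks the motion through the learned joint model: low motion values yield descriptors near state 0, high values near state 1. In the real system this stream would feed the descriptor-driven granular synthesizer.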

Vocalization Overview

The Imitation Game

For SIGGRAPH '14 Emerging Technologies, we created an imitation game based on the vocalization system [francoise2014mad]. The first step is to record a vocal imitation along with a particular gesture. Once recorded, the system learns the mapping between the gesture and the vocal sound, allowing users to synthesize the vocal sounds from new movements. In the game, each player can record several vocal and gestural imitations; the goal is then to mimic the other player as precisely as possible to win the game!


Vocalizing Dance Movement

Vocalization is an essential component of dance and movement practice, and is often used to support movement expression. Many choreographers use vocalization to communicate movement attributes, such as timing and dynamics, to dancers. In Laban Movement Analysis, vocalization is used to support the performance of particular Efforts relating to movement qualities.

Dancer with Sensors

We conducted a study on the sonification of Laban Effort Factors using the vocalization system [francoise2014vocalizing]. We trained the system on expert performances of vocalized movement qualities, then used it in an exploratory workshop to support the pedagogy of Laban Effort Factors with dancers.

The Synekine Project (Greg Beller)

Synekine is a project by composer Greg Beller that “brings together performance and scientific research to create new ways to express ourselves. […] In the Synekine project, the performers develop a fusional language involving voice, hand gestures and physical movement. This language is augmented by an interactive environment made of sensors and other Human-Computer Interfaces...”

Greg Beller used both of our systems, based respectively on Hidden Markov Regression and Gaussian Mixture Regression, in his prototypes “Wired Gestures” and “Gesture Scapes”:


References

  • Jules Françoise, Norbert Schnell, and Frédéric Bevilacqua, “A Multimodal Probabilistic Model for Gesture-based Control of Sound Synthesis,” in Proceedings of the 21st ACM International Conference on Multimedia (MM '13), Barcelona, Spain, 2013, pp. 705–708. DOI: 10.1145/2502081.2502184. http://dl.acm.org/authorize?6951634.
    Abstract
    In this paper, we propose a multimodal approach to create the mapping between gesture and sound in interactive music systems. Specifically, we propose to use a multimodal HMM to conjointly model the gesture and sound parameters. Our approach is compatible with a learning method that allows users to define the gesture-sound relationships interactively. We describe an implementation of this method for the control of physical modeling sound synthesis. Our model is promising to capture expressive gesture variations while guaranteeing a consistent relationship between gesture and sound.
    Acceptance Rate: 20%
  • Jules Françoise, Norbert Schnell, and Frédéric Bevilacqua, “MaD: Mapping by Demonstration for Continuous Sonification,” in ACM SIGGRAPH 2014 Emerging Technologies (SIGGRAPH '14), Vancouver, BC, Canada, ACM, 2014, pp. 16:1–16:1. DOI: 10.1145/2614066.2614099. http://dl.acm.org/authorize?N88513.
  • Jules Françoise, Sarah Fdili Alaoui, Thecla Schiphorst, and Frédéric Bevilacqua, “Vocalizing Dance Movement for Interactive Sonification of Laban Effort Factors,” in Proceedings of the 2014 Conference on Designing Interactive Systems (DIS '14), Vancouver, BC, Canada, ACM, 2014, pp. 1079–1082. DOI: 10.1145/2598510.2598582. http://dl.acm.org/authorize?N71679.
    Abstract
    We investigate the use of interactive sound feedback for dance pedagogy based on the practice of vocalizing while moving. Our goal is to allow dancers to access a greater range of expressive movement qualities through vocalization. We propose a methodology for the sonification of Effort Factors, as defined in Laban Movement Analysis, based on vocalizations performed by movement experts. Based on the experiential outcomes of an exploratory workshop, we propose a set of design guidelines that can be applied to interactive sonification systems for learning to perform Laban Effort Factors in a dance pedagogy context.
    Acceptance Rate: 26%