CATEGORY: RESEARCH

GaussBox
May 6, 2016

GaussBox is a pedagogical tool for prototyping movement interaction using machine learning. GaussBox proposes novel, interactive visualizations that expose the behavior and internal values of probabilistic models rather than only their final results. Such visualizations have both pedagogical and creative potential, guiding users in the exploration, experience, and craft of machine learning for interaction design.
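For intuition, here is a minimal sketch (not GaussBox's actual code) of the kind of model internals such a visualization can expose, assuming a Gaussian mixture recognizer trained on 2-D movement features with scikit-learn:

```python
# Minimal sketch: fit a Gaussian mixture to 2-D movement features and
# read out the internal parameters that an interactive visualization
# could expose, rather than only the final classification result.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
motion = rng.normal(size=(200, 2))      # stand-in for recorded motion features

gmm = GaussianMixture(n_components=3, covariance_type="full").fit(motion)

print(gmm.means_)                       # component centers (ellipse positions)
print(gmm.covariances_)                 # covariances (ellipse shapes/orientations)
print(gmm.predict_proba(motion[:5]))    # per-frame responsibilities, not just labels
```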

SoundGuides: Adapting Continuous Auditory Feedback to Users
May 5, 2016

SoundGuides is a user-adaptable tool for auditory feedback on movement. The system is based on an interactive machine learning approach in which gestures and sounds are first jointly designed and jointly learned by the system. The system can then automatically adapt the auditory feedback to a new user, taking into account the particular way each user performs a given gesture.
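One standard way to realize such joint gesture-sound learning is Gaussian mixture regression (GMR): fit a single mixture over concatenated gesture and sound features, then map incoming gestures to sound features as a conditional mean. The sketch below is illustrative only; the names, dimensions, and use of scikit-learn and SciPy are assumptions, not the actual SoundGuides implementation:

```python
# Illustrative GMR sketch of joint gesture-sound learning (hypothetical,
# not the SoundGuides codebase).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

D_G, D_S = 4, 3                         # gesture / sound feature dimensions

def fit_joint(gestures, sounds, n_components=8):
    """Fit one GMM over concatenated [gesture, sound] feature frames."""
    joint = np.hstack([gestures, sounds])
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(joint)

def predict_sound(gmm, g):
    """Regress sound features from a gesture frame g (conditional mean)."""
    means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_
    K = len(priors)
    w = np.zeros(K)
    cond = np.zeros((K, D_S))
    for k in range(K):
        mg, ms = means[k][:D_G], means[k][D_G:]
        Sgg = covs[k][:D_G, :D_G]
        Ssg = covs[k][D_G:, :D_G]
        # Responsibility of component k for the observed gesture frame.
        w[k] = priors[k] * multivariate_normal.pdf(g, mean=mg, cov=Sgg)
        # Conditional mean of sound features given the gesture frame.
        cond[k] = ms + Ssg @ np.linalg.solve(Sgg, g - mg)
    w /= w.sum()
    return w @ cond

# Example on stand-in data: train a joint model, then map a gesture frame.
rng = np.random.default_rng(0)
G, S = rng.normal(size=(500, D_G)), rng.normal(size=(500, D_S))
model = fit_joint(G, S)
print(predict_sound(model, np.zeros(D_G)))
```

In this framing, adapting to a new user would amount to re-estimating the mixture from that user's own demonstrations; that step is omitted here.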

Movement Sequence Analysis using Hidden Markov Models
May 4, 2016

Movement sequences are essential to dance and expressive movement practice; yet, they remain underexplored in movement and computing research, where the focus on short gestures prevails. We propose a method for movement sequence analysis based on motion trajectory synthesis with Hidden Markov Models.
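As a rough illustration of the approach, the sketch below fits a Gaussian HMM to a stand-in movement sequence and then synthesizes a new trajectory by sampling from the learned model; the hmmlearn library is an assumption here, not necessarily what the paper uses:

```python
# Sketch: HMM-based trajectory modeling and synthesis with hmmlearn
# (an assumed stand-in for the paper's implementation).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)
# Stand-in for a recorded movement sequence: 300 frames of 3-D positions.
sequence = np.cumsum(rng.normal(size=(300, 3)), axis=0)

# Fit a Gaussian HMM to the observed sequence.
model = hmm.GaussianHMM(n_components=10, covariance_type="diag", n_iter=50)
model.fit(sequence)

# Synthesis: sample a new trajectory from the learned model; comparing
# synthesized and observed trajectories supports sequence analysis.
synth, states = model.sample(300)
print(synth.shape, states[:20])
```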

Motion-Sound Interaction through Vocalization
March 24, 2015

In this project, we investigate how vocalizations produced with movements can support the design of sonic interactions. We propose a generic system for movement sonification able to learn the relationship between gestures and vocal sounds, with applications in gaming, performing arts, and movement learning.

Playing Sound Textures
March 23, 2015

This project focuses on developing a generic system for continuous motion sonification with sound textures. We initially created the system for an interactive installation we presented at SIGGRAPH'14 Studio.

Hierarchical Approach to Mapping
June 1, 2013

This work presents the study and implementation of Hierarchical Hidden Markov Models (HHMMs) for real-time gesture segmentation, recognition, and following. The model provides a 2-level hierarchical (segmental) representation of gestures that allows for hybrid control of sound synthesis.
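A common way to prototype such a two-level model is to flatten the hierarchy into a single HMM whose hidden states are (gesture, phase) pairs; the forward recursion then yields both the recognized gesture (high level) and the position within it (low level). The numpy sketch below is hypothetical and heavily simplified, not the paper's implementation:

```python
# Hypothetical sketch of 2-level (segmental) gesture following: the
# hierarchy is flattened into one HMM over (gesture, phase) states.
import numpy as np

N_GESTURES, N_PHASES = 2, 5             # 2 gesture templates, 5 phases each
N = N_GESTURES * N_PHASES               # flattened state count

# Left-to-right transitions within each gesture; the final phase can
# exit to the first phase of any gesture (the high-level transition).
A = np.zeros((N, N))
for g in range(N_GESTURES):
    base = g * N_PHASES
    for p in range(N_PHASES - 1):
        A[base + p, base + p] = 0.5     # stay in the current phase
        A[base + p, base + p + 1] = 0.5 # advance to the next phase
    for g2 in range(N_GESTURES):
        A[base + N_PHASES - 1, g2 * N_PHASES] = 1.0 / N_GESTURES

# Per-state Gaussian observation model (1-D feature for simplicity).
means = np.linspace(-1, 1, N)
def likelihood(x):
    return np.exp(-0.5 * (x - means) ** 2)

alpha = np.full(N, 1.0 / N)             # uniform prior over states
for x in [-1.0, -0.6, -0.2, 0.2, 0.6]:  # stand-in observation stream
    alpha = likelihood(x) * (alpha @ A) # forward recursion step
    alpha /= alpha.sum()
    state = alpha.reshape(N_GESTURES, N_PHASES)
    gesture = state.sum(axis=1).argmax()  # recognition (high level)
    phase = state[gesture].argmax()       # following (low level)
    print(f"gesture={gesture}  phase={phase}")
```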