Research Projects

GaussBox

GaussBox is a pedagogical tool for prototyping movement interaction using machine learning. It provides interactive visualizations that expose the behavior and internal values of probabilistic models.
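
To make "internal values" concrete, here is a minimal sketch, not taken from GaussBox itself, of one thing such a visualization must compute: turning the mean and 2×2 covariance of a single Gaussian component into the center, radii, and rotation of a confidence ellipse for drawing. The function name and interface are hypothetical.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical helper: convert the mean and 2x2 covariance of one Gaussian
// component into a confidence ellipse (center, semi-axes, rotation angle),
// the kind of "internal value" an interactive visualization can expose.
struct Ellipse {
    double cx, cy;   // center = mean
    double rx, ry;   // semi-axis lengths (scaled standard deviations)
    double angle;    // rotation of the major axis, in radians
};

Ellipse covarianceEllipse(const double mean[2], const double cov[2][2],
                          double nStdDev = 2.0) {
    // Closed-form eigendecomposition of a symmetric 2x2 matrix.
    double a = cov[0][0], b = cov[0][1], c = cov[1][1];
    double delta = std::sqrt((a - c) * (a - c) / 4.0 + b * b);
    double l1 = (a + c) / 2.0 + delta;  // larger eigenvalue
    double l2 = (a + c) / 2.0 - delta;  // smaller eigenvalue
    Ellipse e;
    e.cx = mean[0];
    e.cy = mean[1];
    e.rx = nStdDev * std::sqrt(l1);
    e.ry = nStdDev * std::sqrt(l2 > 0.0 ? l2 : 0.0);
    // Eigenvector of l1 is (b, l1 - a); atan2(0, 0) = 0 handles b == 0.
    e.angle = std::atan2(l1 - a, b);
    return e;
}

int main() {
    double mean[2] = {0.5, -0.2};
    double cov[2][2] = {{0.04, 0.01}, {0.01, 0.09}};
    Ellipse e = covarianceEllipse(mean, cov);
    std::printf("center (%.2f, %.2f), radii (%.3f, %.3f), angle %.3f rad\n",
                e.cx, e.cy, e.rx, e.ry, e.angle);
    return 0;
}
```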

SoundGuides: Adapting Continuous Auditory Feedback to Users

SoundGuides is a user-adaptable tool for auditory feedback on movement. Using interactive machine learning, the system can automatically adapt the auditory feedback to any new user, taking into account the particular way each user performs a given gesture.

Myo for Max

A Max external for communicating with the Myo armband.

The XMM Library

XMM is a portable, cross-platform C++ library that implements Gaussian Mixture Models and Hidden Markov Models for recognition and regression. The XMM library was developed for movement interaction in creative applications and implements an interactive machine learning workflow with fast training and continuous, real-time inference.
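
As an illustration of that workflow rather than of XMM's actual API (whose class names are not reproduced here), the self-contained toy below trains one diagonal Gaussian per gesture class from a few demonstration frames (fast training) and then classifies incoming frames one at a time by log-likelihood (continuous, real-time inference). XMM replaces the single Gaussian with full GMMs and HMMs.

```cpp
#include <cmath>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Toy stand-in for the interactive ML workflow: one diagonal Gaussian
// per gesture class, trained in closed form from demonstration frames.
struct ClassModel {
    std::vector<double> mean, var;  // per-dimension mean and variance
};

class ToyRecognizer {
public:
    // "Fast training": closed-form mean/variance, no iterative optimization.
    void train(const std::string& label,
               const std::vector<std::vector<double>>& frames) {
        size_t d = frames[0].size();
        ClassModel m{std::vector<double>(d, 0.0), std::vector<double>(d, 0.0)};
        for (const auto& f : frames)
            for (size_t i = 0; i < d; ++i) m.mean[i] += f[i] / frames.size();
        for (const auto& f : frames)
            for (size_t i = 0; i < d; ++i) {
                double dv = f[i] - m.mean[i];
                m.var[i] += dv * dv / frames.size();
            }
        for (size_t i = 0; i < d; ++i) m.var[i] += 1e-6;  // regularize
        models_[label] = m;
    }

    // "Continuous inference": classify each incoming frame as it arrives.
    std::string filter(const std::vector<double>& frame) const {
        std::string best;
        double bestLL = -1e300;
        for (const auto& kv : models_) {
            const ClassModel& m = kv.second;
            double ll = 0.0;
            for (size_t i = 0; i < frame.size(); ++i) {
                double dv = frame[i] - m.mean[i];
                ll -= 0.5 * (std::log(6.283185307179586 * m.var[i])
                             + dv * dv / m.var[i]);
            }
            if (ll > bestLL) { bestLL = ll; best = kv.first; }
        }
        return best;
    }

private:
    std::map<std::string, ClassModel> models_;
};

int main() {
    ToyRecognizer rec;
    rec.train("circle", {{0.9, 0.1}, {1.0, 0.2}, {1.1, 0.0}});
    rec.train("swipe",  {{-0.2, 1.0}, {0.0, 1.1}, {-0.1, 0.9}});
    std::printf("frame -> %s\n", rec.filter({1.05, 0.05}).c_str());
    return 0;
}
```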

Motion-Sound Interaction through Vocalization

In this project, we investigate how vocalizations produced with movements can support the design of sonic interactions. We propose a generic system for movement sonification able to learn the relationship between gestures and vocal sounds, with applications in gaming, performing arts, and movement learning.
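
The mapping at the core of such a system can be sketched as conditioning a joint Gaussian over paired gesture and vocal features, which is the single-component case of Gaussian mixture regression. The example below is hypothetical and kept scalar (one gesture feature, one sound feature) so the conditional mean mu_s + sigma_sg / sigma_gg * (x - mu_g) stays readable; a real system would mix several components over higher-dimensional features.

```cpp
#include <cstdio>

// Sketch of mapping-by-demonstration via a conditional Gaussian.
// Scalar case: 1-D gesture feature x, 1-D sound feature y. The general
// form is y = mu_s + Sigma_sg * Sigma_gg^{-1} * (x - mu_g).
struct JointGaussian {
    double muG, muS;  // means of gesture and sound features
    double sGG, sSG;  // gesture variance, sound-gesture covariance
};

// Fit from paired (gesture, sound) demonstration frames, e.g. motion
// captured together with a descriptor of the accompanying vocalization.
JointGaussian fit(const double* x, const double* y, int n) {
    JointGaussian j{0, 0, 0, 0};
    for (int i = 0; i < n; ++i) { j.muG += x[i] / n; j.muS += y[i] / n; }
    for (int i = 0; i < n; ++i) {
        j.sGG += (x[i] - j.muG) * (x[i] - j.muG) / n;
        j.sSG += (y[i] - j.muS) * (x[i] - j.muG) / n;
    }
    return j;
}

// Regression: expected sound feature given a new gesture frame.
double sonify(const JointGaussian& j, double x) {
    return j.muS + j.sSG / j.sGG * (x - j.muG);
}

int main() {
    // Toy demonstration: vocal pitch rises with hand height.
    double hand[]  = {0.0, 0.25, 0.5, 0.75, 1.0};
    double pitch[] = {100, 150, 210, 240, 300};
    JointGaussian j = fit(hand, pitch, 5);
    std::printf("hand at 0.6 -> pitch %.1f Hz\n", sonify(j, 0.6));
    return 0;
}
```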

Playing Sound Textures

This project focuses on developing a generic system for continuous motion sonification with sound textures. We initially created the system for an interactive installation we presented at SIGGRAPH'14 Studio.

mubu.*mm: Probabilistic Models for Designing Motion & Sound Relationships

Machine learning is an efficient design-support tool that lets users easily build, evaluate, and refine gesture recognizers, movement-sound mappings, and control strategies. We propose four probabilistic models with complementary properties in terms of multimodality and temporality.

Hierarchical Approach to Mapping

This work presents the study and implementation of Hierarchical Hidden Markov Models (HHMMs) for real-time gesture segmentation, recognition and following. The model provides a 2-level hierarchical (segmental) representation of gestures that allows for hybrid control of sound synthesis.
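
A minimal sketch of the "following" part, assuming a flat left-to-right HMM with one Gaussian observation model per state (the hierarchical model adds a segmental level on top, switching between several such gesture models): forward filtering updates the state posterior at every frame and reads out a normalized time progression within the gesture.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Real-time gesture following by forward filtering on a left-to-right HMM
// (hypothetical, single-level sketch; not the full two-level HHMM).
class GestureFollower {
public:
    GestureFollower(std::vector<double> means, double var, double pStay)
        : means_(std::move(means)), var_(var), pStay_(pStay),
          alpha_(means_.size(), 0.0) {
        alpha_[0] = 1.0;  // gestures start in the first state
    }

    // One filtering step per incoming observation frame; returns the
    // normalized time progression (0 = start, 1 = end of the gesture).
    double step(double obs) {
        size_t n = alpha_.size();
        std::vector<double> next(n, 0.0);
        for (size_t i = 0; i < n; ++i) {
            // Left-to-right transitions: stay in state i or advance to i+1.
            next[i] += alpha_[i] * pStay_;
            if (i + 1 < n) next[i + 1] += alpha_[i] * (1.0 - pStay_);
        }
        double norm = 0.0;
        for (size_t i = 0; i < n; ++i) {
            double d = obs - means_[i];
            next[i] *= std::exp(-0.5 * d * d / var_);  // Gaussian likelihood
            norm += next[i];
        }
        double progress = 0.0;
        for (size_t i = 0; i < n; ++i) {
            alpha_[i] = next[i] / norm;               // normalized posterior
            progress += alpha_[i] * i / double(n - 1);
        }
        return progress;
    }

private:
    std::vector<double> means_;  // one observation mean per state
    double var_, pStay_;
    std::vector<double> alpha_;  // forward probabilities
};

int main() {
    // A 5-state template of a 1-D feature rising from 0 to 1.
    GestureFollower f({0.0, 0.25, 0.5, 0.75, 1.0}, 0.02, 0.6);
    for (double obs : {0.05, 0.2, 0.45, 0.7, 0.95})
        std::printf("obs %.2f -> progress %.2f\n", obs, f.step(obs));
    return 0;
}
```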