Still, moving is an interactive sound installation focused on designing for kinesthetic awareness. The sound responds to participants’ micro-movements and muscular activity.
GaussBox is a pedagogical tool for prototyping movement interaction using machine learning. It proposes interactive visualizations that expose the behavior and internal values of probabilistic models.
SoundGuides is a user-adaptable tool for auditory feedback on movement. Using interactive machine learning, the system can automatically adapt the auditory feedback to any new user, taking into account the particular way each user performs a given gesture.
Movement sequences are essential to dance and expressive movement practice. We propose a method for movement sequence analysis based on motion trajectory synthesis with Hidden Markov Models.
A Max external for communication with the Myo armband
XMM is a portable, cross-platform C++ library that implements Gaussian Mixture Models and Hidden Markov Models for recognition and regression. The XMM library was developed for movement interaction in creative applications and implements an interactive machine learning workflow with fast training and continuous, real-time inference.
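The core recognition idea behind such a library, likelihood-based classification with one Gaussian Mixture Model per gesture class, can be sketched as follows. This uses scikit-learn rather than XMM's actual C++ API; the class labels and toy data are purely illustrative.

```python
# Sketch of GMM-based gesture recognition: train one small GMM per
# labeled gesture, then classify incoming frames by log-likelihood.
# This illustrates the workflow only; it is not XMM's API.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy training data: one feature stream per gesture class.
training = {
    "circle": rng.normal(loc=0.0, scale=0.5, size=(200, 2)),
    "swipe":  rng.normal(loc=3.0, scale=0.5, size=(200, 2)),
}

# "Fast training": fit one small GMM per labeled gesture.
models = {label: GaussianMixture(n_components=2, random_state=0).fit(X)
          for label, X in training.items()}

def classify(frame):
    """Continuous inference: score a single frame under each class model."""
    scores = {label: m.score(frame.reshape(1, -1))
              for label, m in models.items()}
    return max(scores, key=scores.get)

print(classify(np.array([0.1, -0.2])))  # "circle"
print(classify(np.array([2.9, 3.1])))   # "swipe"
```

Training is a single EM fit per class, which is why this workflow supports the fast, iterative train-test loops of interactive machine learning.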
In this project, we investigate how vocalizations produced with movements can support the design of sonic interactions. We propose a generic system for movement sonification able to learn the relationship between gestures and vocal sounds, with applications in gaming, performing arts, and movement learning.
This project focuses on developing a generic system for continuous motion sonification with sound textures. We initially created the system for an interactive installation we presented at SIGGRAPH'14 Studio.
A Max external for communication with the Leap Motion controller
Machine learning is an efficient design support tool that lets users easily build, evaluate, and refine gesture recognizers, movement-sound mappings, and control strategies. We propose four probabilistic models with complementary properties in terms of multimodality and temporality.
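The regression side of such multimodal models can be illustrated with Gaussian mixture regression: fit a GMM on joint (input, output) data, then condition on the input to predict the output. This is a generic sketch under assumed 1-D toy data, not the models proposed in the project.

```python
# Sketch of Gaussian mixture regression (GMR): a GMM is fit on joint
# (x, y) samples; prediction conditions each component on x and blends
# the per-component conditional means. Toy data: y = sin(x) + noise.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
x = rng.uniform(0, 2 * np.pi, size=2000)
y = np.sin(x) + rng.normal(scale=0.05, size=x.shape)
joint = np.column_stack([x, y])

gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=1).fit(joint)

def gmr_predict(x_new):
    """Conditional mean E[y | x] under the joint GMM (1-D in, 1-D out)."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    mx, my = means[:, 0], means[:, 1]
    sxx, sxy = covs[:, 0, 0], covs[:, 0, 1]
    # Responsibility of each component for this input value.
    lik = weights * np.exp(-0.5 * (x_new - mx) ** 2 / sxx) / np.sqrt(sxx)
    resp = lik / lik.sum()
    # Per-component conditional means, blended by responsibility.
    cond = my + sxy / sxx * (x_new - mx)
    return float(resp @ cond)

print(gmr_predict(np.pi / 2))  # roughly sin(pi/2) = 1
```

The same joint model thus serves both recognition (likelihood of the input) and mapping (conditional output), which is what makes such probabilistic models attractive for movement-sound interaction design.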
I developed a new PiPo module for Max 6 dedicated to EMG envelope extraction. The object is based on a non-linear Bayesian filtering method that combines two desirable properties: stability and reactivity.
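For readers unfamiliar with the task, here is what "envelope extraction" means, shown with a deliberately simple classic follower (full-wave rectification plus one-pole low-pass smoothing). This is only an illustration of the problem; it is not the non-linear Bayesian filter the module uses.

```python
# A simple EMG envelope follower: rectify the signal, then smooth it
# with a one-pole low-pass filter. Illustrative only -- the actual PiPo
# module uses a non-linear Bayesian filter, which this is not.
import numpy as np

def envelope(emg, alpha=0.05):
    """Return the smoothed amplitude envelope of a raw EMG signal."""
    rectified = np.abs(emg)
    env = np.empty_like(rectified)
    acc = 0.0
    for i, sample in enumerate(rectified):
        acc += alpha * (sample - acc)  # one-pole smoothing step
        env[i] = acc
    return env

# Toy EMG: noise whose amplitude steps up mid-signal (a "contraction").
rng = np.random.default_rng(2)
emg = np.concatenate([0.1 * rng.standard_normal(500),
                      1.0 * rng.standard_normal(500)])
env = envelope(emg)
print(env[:500].mean() < env[500:].mean())  # True: envelope tracks amplitude
```

A simple follower like this trades stability against reactivity through the single smoothing coefficient; the appeal of the Bayesian approach is getting both at once.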
This work presents the study and implementation of Hierarchical Hidden Markov Models (HHMMs) for real-time gesture segmentation, recognition and following. The model provides a 2-level hierarchical (segmental) representation of gestures that allows for hybrid control of sound synthesis.
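The "following" part can be sketched with the forward algorithm on a left-to-right HMM: each incoming frame updates a posterior over states, and the most likely state estimates progression within the gesture. This is a flat, single-level simplification with a toy 1-D template, not the hierarchical model itself.

```python
# Sketch of real-time gesture following: online forward updates on a
# left-to-right HMM whose states model successive phases of a gesture.
# Flat (single-level) simplification; the HHMM adds a segment level.
import numpy as np

n_states = 5
# Left-to-right transitions: stay in a state or advance by one.
A = np.zeros((n_states, n_states))
for s in range(n_states):
    A[s, s] = 0.5
    A[s, min(s + 1, n_states - 1)] += 0.5
state_means = np.linspace(0.0, 1.0, n_states)  # template: a rising 1-D signal

def emission(obs):
    """Gaussian likelihood of a 1-D observation under each state."""
    return np.exp(-0.5 * ((obs - state_means) / 0.1) ** 2)

alpha = np.zeros(n_states)
alpha[0] = 1.0  # the gesture starts in the first state
for obs in np.linspace(0.0, 1.0, 20):  # stream the frames of one gesture
    alpha = emission(obs) * (alpha @ A)
    alpha /= alpha.sum()  # normalize each step for numerical stability

print(int(np.argmax(alpha)))  # ends in the last state
```

Because the posterior is updated frame by frame, the estimated position within the gesture is available continuously, which is what enables time-aligned control of sound synthesis.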