Jules Françoise
Researcher in Movement and Computing
I am a postdoctoral fellow at the School of Interactive Arts and Technology (SIAT) at Simon Fraser University (SFU) in Vancouver, where I am currently working on the MovingStories project.

GaussBox
May 6, 2016

GaussBox is a pedagogical tool for prototyping movement interaction using machine learning. It proposes novel, interactive visualizations that expose the behavior and internal values of probabilistic models rather than only their results. Such visualizations have both pedagogical and creative potential, guiding users in exploring, experiencing, and crafting machine learning for interaction design.

SoundGuides: Adapting Continuous Auditory Feedback to Users
May 5, 2016

SoundGuides is a user-adaptable tool for auditory feedback on movement. The system is based on an interactive machine learning approach in which gestures and sounds are first jointly designed and jointly learned by the system. The system can then automatically adapt the auditory feedback to any new user, taking into account the particular way each user performs a given gesture.

Movement Sequence Analysis using Hidden Markov Models
May 4, 2016

Movement sequences are essential to dance and expressive movement practice; yet they remain underexplored in movement and computing research, where the focus has largely been on short gestures. We propose a method for movement sequence analysis based on motion trajectory synthesis with Hidden Markov Models.
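
To give a concrete sense of the idea, here is a minimal, self-contained sketch (not the project's actual implementation) of forward filtering on a left-to-right Gaussian HMM followed by trajectory synthesis as the occupancy-weighted sum of state means; the model structure, the 1-D observations, and the self-transition probability are illustrative assumptions.

```cpp
// Minimal sketch: left-to-right Gaussian HMM (1-D observations for brevity).
// forward() computes per-frame state occupancies; synthesize() resynthesizes a
// smooth trajectory as the occupancy-weighted sum of state means.
#include <vector>
#include <cmath>

struct GaussianHMM {
    std::vector<double> mean, var;   // one Gaussian per state
    double selfTrans = 0.8;          // left-to-right: stay in a state or advance

    double emission(int state, double x) const {
        double d = x - mean[state];
        return std::exp(-0.5 * d * d / var[state]) /
               std::sqrt(2.0 * 3.141592653589793 * var[state]);
    }

    // Forward filtering: normalized state occupancies alpha[t][i].
    std::vector<std::vector<double>> forward(const std::vector<double>& obs) const {
        size_t N = mean.size();
        std::vector<std::vector<double>> alpha(obs.size(), std::vector<double>(N, 0.0));
        alpha[0][0] = 1.0;                               // deterministic start in state 0
        for (size_t t = 1; t < obs.size(); ++t) {
            double norm = 0.0;
            for (size_t i = 0; i < N; ++i) {
                double pred = alpha[t - 1][i] * selfTrans;
                if (i > 0) pred += alpha[t - 1][i - 1] * (1.0 - selfTrans);
                alpha[t][i] = pred * emission((int)i, obs[t]);
                norm += alpha[t][i];
            }
            for (size_t i = 0; i < N; ++i)
                alpha[t][i] /= (norm > 0.0 ? norm : 1.0);
        }
        return alpha;
    }

    // Trajectory synthesis: occupancy-weighted average of state means.
    std::vector<double> synthesize(const std::vector<std::vector<double>>& alpha) const {
        std::vector<double> traj(alpha.size(), 0.0);
        for (size_t t = 0; t < alpha.size(); ++t)
            for (size_t i = 0; i < mean.size(); ++i)
                traj[t] += alpha[t][i] * mean[i];
        return traj;
    }
};
```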

Myo for Max
November 17, 2015

I developed a new external for Cycling'74 Max to connect with the Myo armband.

The XMM Library
April 23, 2015

We released XMM, an open-source, portable, cross-platform C++ library for continuous motion recognition and mapping. XMM implements Gaussian Mixture Models and Hidden Markov Models for recognition and regression. The library was developed for movement interaction in creative applications and supports an interactive machine learning workflow with fast training and continuous, real-time inference.
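
The recognition side of that workflow can be illustrated with a short sketch: each gesture class is modeled by a diagonal-covariance GMM, and every incoming motion frame is assigned to the most likely class. This is an illustration of the principle, not XMM's actual API or code; all names are assumptions.

```cpp
// Illustration of continuous GMM recognition: per-class frame likelihoods and
// an argmax over classes. Not XMM's actual code or API.
#include <vector>
#include <string>
#include <cmath>

struct Gaussian {
    std::vector<double> mean, var;   // diagonal covariance
};

struct GMM {
    std::string label;               // one recorded gesture class
    std::vector<Gaussian> components;
    std::vector<double> weights;     // mixture weights, summing to 1

    // Likelihood of a single motion frame under this class model.
    double likelihood(const std::vector<double>& frame) const {
        double p = 0.0;
        for (size_t k = 0; k < components.size(); ++k) {
            double logp = 0.0;
            for (size_t d = 0; d < frame.size(); ++d) {
                double diff = frame[d] - components[k].mean[d];
                logp += -0.5 * (diff * diff / components[k].var[d]
                        + std::log(2.0 * 3.141592653589793 * components[k].var[d]));
            }
            p += weights[k] * std::exp(logp);
        }
        return p;
    }
};

// Continuous, real-time inference: report the most likely class for each frame.
std::string recognize(const std::vector<GMM>& models, const std::vector<double>& frame) {
    std::string best;
    double bestP = -1.0;
    for (const GMM& m : models) {
        double p = m.likelihood(frame);
        if (p > bestP) { bestP = p; best = m.label; }
    }
    return best;
}
```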

Motion-Sound Interaction through Vocalization
March 24, 2015

In this project, we investigate how vocalizations produced with movements can support the design of sonic interactions. We propose a generic system for movement sonification able to learn the relationship between gestures and vocal sounds, with applications in gaming, performing arts, and movement learning.

Playing Sound Textures
March 23, 2015

This project focuses on developing a generic system for continuous motion sonification with sound textures. We initially created the system for an interactive installation we presented at SIGGRAPH'14 Studio.

Leap Motion skeletal tracking in Max
November 7, 2014

I developed a new object for using the Leap Motion in Max, based on the Leap Motion SDK V2 Skeletal Tracking Beta.

mubu.*mm: Probabilistic Models for Designing Motion & Sound Relationships
June 29, 2014

We just released the beta version of mubu.*mm, a set of objects for probabilistic modeling of motion and sound relationships.

PiPo.Bayesfilter: Bayesian Filtering for EMG Envelope Extraction
February 12, 2014

I developed a new PiPo for Max 6 dedicated to EMG envelope extraction. The object is based on a non-linear Bayesian filtering method that combines two valuable properties: stability and reactivity. It addresses the limitations of low-pass filtering techniques, which smooth out rapid changes in the EMG signal.
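
For readers curious about the filtering principle, here is a rough sketch of a Sanger-style Bayesian envelope filter of the kind the object builds on: the rectified EMG sample is treated as noise whose scale is the latent envelope, and a discretized posterior over that envelope is propagated with a diffusion-plus-jump prior and updated at every sample. This is an illustration under assumed parameter values, not the PiPo's actual code.

```cpp
// Sketch of nonlinear Bayesian EMG envelope extraction (illustrative only).
// The small jump probability lets the estimate react instantly to sudden
// changes, while the diffusion prior keeps it stable between changes.
#include <vector>
#include <cmath>
#include <algorithm>

class BayesEnvelope {
public:
    BayesEnvelope(int bins = 100, double maxAmp = 1.0,
                  double diffusion = 0.1, double jumpRate = 1e-4)
        : posterior_(bins, 1.0 / bins), maxAmp_(maxAmp),
          diffusion_(diffusion), jumpRate_(jumpRate) {}

    // Process one raw EMG sample; returns the current envelope estimate.
    double process(double emgSample) {
        int n = (int)posterior_.size();
        // 1. Prediction: local diffusion of probability mass plus a small
        //    uniform "jump" term allowing abrupt amplitude changes.
        std::vector<double> pred(n);
        for (int i = 0; i < n; ++i) {
            double left  = posterior_[std::max(i - 1, 0)];
            double right = posterior_[std::min(i + 1, n - 1)];
            pred[i] = (1.0 - jumpRate_) * ((1.0 - diffusion_) * posterior_[i]
                      + 0.5 * diffusion_ * (left + right)) + jumpRate_ / n;
        }
        // 2. Update: likelihood of the rectified sample for each candidate
        //    amplitude (Laplacian observation model), then normalize.
        double x = std::fabs(emgSample), norm = 0.0;
        for (int i = 0; i < n; ++i) {
            double amp = maxAmp_ * (i + 1) / n;          // avoid amp == 0
            pred[i] *= std::exp(-x / amp) / amp;
            norm += pred[i];
        }
        if (norm <= 0.0)
            std::fill(posterior_.begin(), posterior_.end(), 1.0 / n);
        else
            for (int i = 0; i < n; ++i) posterior_[i] = pred[i] / norm;
        // 3. Output the MAP (most probable) envelope value.
        int best = (int)(std::max_element(posterior_.begin(), posterior_.end())
                         - posterior_.begin());
        return maxAmp_ * (best + 1) / n;
    }

private:
    std::vector<double> posterior_;
    double maxAmp_, diffusion_, jumpRate_;
};
```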

Hierarchical Approach to Mapping
June 1, 2013

This work presents the study and implementation of Hierarchical Hidden Markov Models (HHMMs) for real-time gesture segmentation, recognition, and following. The model provides a two-level hierarchical (segmental) representation of gestures that allows for hybrid control of sound synthesis.
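
As a toy illustration of the two-level idea (assumed names and parameters, not the paper's implementation): a high level tracks which gesture is being performed while a low level tracks the time progression within that gesture, which is what enables segmentation, recognition, and following at once.

```cpp
// Toy two-level gesture follower. Each gesture is a left-to-right chain of
// reference frames; update() filters one observation, returns the most likely
// gesture (high level) and writes its normalized progression (low level).
#include <vector>
#include <cmath>

struct GestureTemplate { std::vector<double> frames; };   // 1-D reference trajectory

struct HierarchicalFollower {
    std::vector<GestureTemplate> gestures;
    std::vector<std::vector<double>> alpha;   // per-gesture forward variables
    std::vector<double> gestureProb;          // high-level posterior over gestures

    void start() {
        alpha.assign(gestures.size(), {});
        gestureProb.assign(gestures.size(), 1.0 / gestures.size());
        for (size_t g = 0; g < gestures.size(); ++g) {
            alpha[g].assign(gestures[g].frames.size(), 0.0);
            alpha[g][0] = 1.0;                // each gesture starts at its first frame
        }
    }

    int update(double obs, double& progress) {
        double total = 0.0;
        for (size_t g = 0; g < gestures.size(); ++g) {
            std::vector<double>& a = alpha[g];
            std::vector<double> next(a.size(), 0.0);
            double like = 0.0;
            for (size_t i = 0; i < a.size(); ++i) {
                double pred = 0.6 * a[i] + (i > 0 ? 0.4 * a[i - 1] : 0.0);
                double d = obs - gestures[g].frames[i];
                next[i] = pred * std::exp(-0.5 * d * d);  // Gaussian emission, unit variance
                like += next[i];
            }
            for (size_t i = 0; i < a.size(); ++i)
                a[i] = like > 0.0 ? next[i] / like : 0.0;
            gestureProb[g] *= like;
            total += gestureProb[g];
        }
        int best = 0;
        for (size_t g = 0; g < gestures.size(); ++g) {
            gestureProb[g] = total > 0.0 ? gestureProb[g] / total
                                         : 1.0 / gestures.size();
            if (gestureProb[g] > gestureProb[best]) best = (int)g;
        }
        progress = 0.0;                       // expected position within the best gesture
        double denom = double(alpha[best].size() > 1 ? alpha[best].size() - 1 : 1);
        for (size_t i = 0; i < alpha[best].size(); ++i)
            progress += alpha[best][i] * double(i) / denom;
        return best;
    }
};
```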