Publications


2017

  • Jules Françoise, Yves Candau, Sarah Fdili Alaoui, and Thecla Schiphorst, “Designing for Kinesthetic Awareness: Revealing User Experiences Through Second-Person Inquiry,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17), Denver, CO, USA, ACM, 2017, pp. 5171--5183. DOI: 10.1145/3025453.3025714. https://dl.acm.org/authorize?N38827.
    Abstract
    We consider kinesthetic awareness, the perception of our own body position and movement in space, as a critical value for embodied design within third wave HCI. We designed an interactive sound installation that supports kinesthetic awareness of a participant's micro-movements. The installation's interaction design uses continuous auditory feedback and leverages an adaptive mapping strategy, refining its sensitivity to increase sonic resolution at lower levels of movement activity. The installation uses field recordings as rich source materials to generate a sound environment that attunes to a participant's micro-movements. Through a qualitative study using a second-person interview technique, we gained nuanced insights into the participants' subjective experiences of the installation. These reveal consistent temporal patterns, as participants build on a gradual process of integration to increase the complexity and capacity of their kinesthetic awareness during interaction.
    Download
  • Sarah Fdili Alaoui, Jules Françoise, Thecla Schiphorst, Karen Studd, and Frédéric Bevilacqua, “Seeing, Sensing and Recognizing Laban Movement Qualities,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17), Denver, CO, USA, ACM, 2017. DOI: 10.1145/3025453.3025530. https://dl.acm.org/authorize?N38828.
    Abstract
    Human movement has historically been approached as a functional component of interaction within human-computer interaction. Yet movement is not only functional; it is also highly expressive. In our research, we explore how movement expertise as articulated in Laban Movement Analysis (LMA) can contribute to the design of computational models of movement's expressive qualities as defined in the framework of Laban Efforts. We include experts in LMA in our design process in order to select a set of suitable multimodal sensors, as well as to compute features that closely correlate to the definitions of Efforts in LMA. Evaluation of our model shows that multimodal data combining positional, dynamic, and physiological information allows for a better characterization of Laban Efforts. We conclude with implications for design that illustrate how our methodology and our approach to multimodal capture and recognition of Effort qualities can be integrated into the design of interactive applications.
    Download

2016

  • Frédéric Bevilacqua, Eric Boyer, Jules Françoise, Olivier Houix, Patrick Susini, Agnès Roby-Brami, and Sylvain Hanneton, “Sensori-motor Learning With Movement Sonification: A Perspective From Recent Interdisciplinary Studies,” Frontiers in Neuroscience, vol. 10, 2016, Article 385. DOI: 10.3389/fnins.2016.00385. http://journal.frontiersin.org/article/10.3389/fnins.2016.00385.
    Abstract
    This article reports on an interdisciplinary research project on movement sonification for sensori-motor learning. First, we describe the different research fields that have contributed to movement sonification, from music technology, including gesture-controlled sound synthesis and sonic interaction design, to research on sensori-motor learning with auditory feedback. In particular, we propose to distinguish between sound-oriented tasks and movement-oriented tasks in experiments involving interactive sound feedback. We describe several research questions and recently published results on movement control, learning, and perception. We studied the effect of auditory feedback on movement in several cases: from experiments on pointing and visuo-motor tracking to more complex tasks where interactive sound feedback can guide movements, and cases of sensory substitution where auditory feedback can convey information about object shapes. We also developed specific methodologies and technologies for designing sonic feedback and movement sonification. We conclude with a discussion of key future research challenges in sensori-motor learning with movement sonification, and point towards promising applications such as rehabilitation, sports training, and product design.
    Download
  • Frédéric Bevilacqua, Baptiste Caramiaux, and Jules Françoise, “Perspectives on Real-time Computation of Movement Coarticulation,” in Proceedings of the 3rd International Symposium on Movement and Computing (MOCO '16), Thessaloniki, Greece, ACM, 2016, pp. 35--40. DOI: 10.1145/2948910.2948956. http://doi.acm.org/10.1145/2948910.2948956.
    Download
  • Jules Françoise, Frédéric Bevilacqua, and Thecla Schiphorst, “GaussBox: Prototyping Movement Interaction with Interactive Visualizations of Machine Learning,” in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16), San Jose, CA, ACM, 2016, pp. 3667--3670. DOI: 10.1145/2851581.2890257. http://dl.acm.org/authorize?N03765.
    Abstract
    We present GaussBox, a design support tool for prototyping movement interaction using machine learning. In particular, we propose novel interactive visualizations that expose the behavior and internal values of machine learning models rather than only their results. Such visualizations have both pedagogical and creative potential to guide users in the exploration, experience, and craft of machine learning for interaction design.
    Download · Project Page
    Acceptance Rate: 20%
  • Jules Françoise, Olivier Chapuis, Sylvain Hanneton, and Frédéric Bevilacqua, “SoundGuides: Adapting Continuous Auditory Feedback to Users,” in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16), San Jose, CA, ACM, 2016, pp. 2829--2836. DOI: 10.1145/2851581.2892420. http://dl.acm.org/authorize?N03766.
    Abstract
    We introduce SoundGuides, a user-adaptable tool for auditory feedback on movement. The system is based on an interactive machine learning approach, in which gestures and sounds are first jointly designed and jointly learned by the system. The system can then automatically adapt the auditory feedback to any new user, taking into account the particular way each user performs a given gesture. SoundGuides is suitable for the design of continuous auditory feedback aimed at guiding users' movements and helping them perform a specific movement consistently over time. Applications span from movement-based interaction techniques to auditory-guided rehabilitation. We describe our system and report a study that demonstrates a 'stabilizing effect' of our adaptive auditory feedback method.
    Download · Project Page
    Acceptance Rate: 20%
  • Jules Françoise, Thecla Schiphorst, and Frédéric Bevilacqua, “Supporting User Interaction with Machine Learning through Interactive Visualizations,” in CHI'16 Workshop on Human-Centred Machine Learning, San Jose, CA, 2016. http://www.doc.gold.ac.uk/~mas02mg/HCML2016/HCML2016_paper_20.pdf.
    Abstract
    This paper discusses novel visualizations that expose the behavior and internal values of machine learning models rather than only their results. Interactive visualizations have the potential to shift the perception of machine learning models from black-box processes to transparent artifacts that can be experienced and crafted. We discuss how they can reveal the affordances of different techniques, and how they could lead to a deeper understanding of the underlying algorithms. We describe a proof-of-concept application for visualizing and manipulating Hidden Markov Models, which provides a basis for a broader discussion of the potential and challenges of interactive visualizations in human-centered machine learning.
    Download · Project Page

2015

  • Jules Françoise, Norbert Schnell, Riccardo Borghesi, and Frédéric Bevilacqua, “MaD,” interactions, vol. 22, no. 3, 2015, pp. 14--15. DOI: 10.1145/2754894. http://dl.acm.org/authorize?N98776.
    Download
  • Jules Françoise, “Motion-Sound Mapping by Demonstration,” PhD Dissertation, Université Pierre et Marie Curie. 2015. http://julesfrancoise.com/phdthesis.
    Download · Project Page
  • Jules Françoise, Agnès Roby-Brami, Natasha Riboud, and Frédéric Bevilacqua, “Movement sequence analysis using hidden Markov models,” in Proceedings of the 2nd International Workshop on Movement and Computing (MOCO'15), Vancouver, BC, Canada, ACM Press, 2015, pp. 29--36. DOI: 10.1145/2790994.2791006. http://dl.acm.org/authorize?N05635.
    Abstract
    Movement sequences are essential to dance and expressive movement practice; yet, they remain underexplored in movement and computing research, where a focus on short gestures prevails. We propose a method for movement sequence analysis based on motion trajectory synthesis with Hidden Markov Models. The method uses Hidden Markov Regression to jointly synthesize motion feature trajectories and their associated variances, which serve as a basis for investigating performers’ consistency across executions of a movement sequence. We illustrate the method with a use case in Tai Chi performance, and we further extend the approach to cross-modal analysis of vocalized movements.
    Download
  • Hugo Scurto, Guillaume Lemaitre, Jules Françoise, Frédéric Voisin, Frédéric Bevilacqua, and Patrick Susini, “Combining gestures and vocalizations to imitate sounds,” The Journal of the Acoustical Society of America, vol. 138, no. 3, September 2015, p. 1780. DOI: 10.1121/1.4933639. http://scitation.aip.org/content/asa/journal/jasa/138/3/10.1121/1.4933639.
    Download
  • Max Rheiner, Norbert Schnell, Riccardo Borghesi, Frédéric Bevilacqua, Tuncay Cakmak, Holger Hager, Thomas Tobler, Fabian Troxler, Seki Inoue, Keisuke Hasegawa, Yasuaki Monnai, Yasutoshi Makino, Hiroyuki Shinoda, and Jules Françoise, “Demo hour,” interactions, vol. 22, no. 2, February 2015, pp. 6--9. DOI: 10.1145/2730891. http://dl.acm.org/authorize?N95437.
    Download

2014

  • Jules Françoise, Norbert Schnell, and Frédéric Bevilacqua, “MaD: Mapping by Demonstration for Continuous Sonification,” in ACM SIGGRAPH 2014 Emerging Technologies (SIGGRAPH '14), Vancouver, BC, Canada, ACM, 2014, pp. 16:1--16:1. DOI: 10.1145/2614066.2614099. http://dl.acm.org/authorize?N88513.
    Download · Project Page
  • Baptiste Caramiaux, Jules Françoise, Norbert Schnell, and Frédéric Bevilacqua, “Mapping Through Listening,” Computer Music Journal, vol. 38, no. 3, 2014, pp. 34--48. DOI: 10.1162/COMJ_a_00255.
    Abstract
    Gesture-to-sound mapping is generally defined as the association between gestural and sound parameters. This article describes an approach that brings forward the perception-action loop as a fundamental design principle for gesture-sound mapping in digital musical instruments. Our approach considers the processes of listening as the foundation – and the first step – in the design of action-sound relationships. In this design process, the relationship between action and sound is derived from actions that can be perceived in the sound. Building on previous work on listening modes and gestural descriptions, we propose to distinguish between three mapping strategies: instantaneous, temporal, and metaphoric. Our approach makes use of machine learning techniques for building prototypes, from digital musical instruments to interactive installations. Four different examples of scenarios and prototypes are described and discussed.
  • Jules Françoise, Norbert Schnell, Riccardo Borghesi, and Frédéric Bevilacqua, “Probabilistic Models for Designing Motion and Sound Relationships,” in Proceedings of the 2014 International Conference on New Interfaces for Musical Expression (NIME'14), London, UK, 2014. http://julesfrancoise.com/blog/wp-content/uploads/2014/06/Françoise-et-al.-2014-Probabilistic-Models-for-Designing-Motion-and-Sound-Relationships.pdf.
    Abstract
    We present a set of probabilistic models that support the design of movement and sound relationships in interactive sonic systems. We focus on a mapping-by-demonstration approach in which the relationships between motion and sound are defined by a machine learning model that learns from a set of user examples. We describe four probabilistic models with complementary characteristics in terms of multimodality and temporality. We illustrate the practical use of each of the four models with a prototype application for sound control built using our Max implementation.
    Download · Project Page
    Acceptance Rate: 25%
  • Jules Françoise, Sarah Fdili Alaoui, Thecla Schiphorst, and Frédéric Bevilacqua, “Vocalizing Dance Movement for Interactive Sonification of Laban Effort Factors,” in Proceedings of the 2014 Conference on Designing Interactive Systems (DIS '14), Vancouver, Canada, ACM, 2014, pp. 1079--1082. DOI: 10.1145/2598510.2598582. http://dl.acm.org/authorize?N71679.
    Abstract
    We investigate the use of interactive sound feedback for dance pedagogy based on the practice of vocalizing while moving. Our goal is to allow dancers to access a greater range of expressive movement qualities through vocalization. We propose a methodology for the sonification of Effort Factors, as defined in Laban Movement Analysis, based on vocalizations performed by movement experts. Based on the experiential outcomes of an exploratory workshop, we propose a set of design guidelines that can be applied to interactive sonification systems for learning to perform Laban Effort Factors in a dance pedagogy context.
    Download · Project Page
    Acceptance Rate: 26%

2013

  • Frédéric Bevilacqua, Norbert Schnell, Nicolas Rasamimanana, Julien Bloit, Emmanuel Fléty, Baptiste Caramiaux, Jules Françoise, and Eric Boyer, “De-MO: Designing Action-Sound Relationships with the MO Interfaces,” in CHI '13 Extended Abstracts on Human Factors in Computing Systems, Paris, France, 2013. http://dl.acm.org/authorize?6812701.
    Abstract
    The Modular Musical Objects (MO) are an ensemble of tangible interfaces and software modules for creating novel musical instruments or for augmenting objects with sound. In particular, the MOs allow for designing action-sound relationships and behaviors based on the interaction with tangible objects or free body movements. Such interaction scenarios can be inspired by the affordances of particular objects (e.g., a ball, a table), or by interaction metaphors based on the playing techniques of musical instruments or games. We describe specific examples of action-sound relationships that are made possible by the MO software modules and which take advantage of machine learning techniques.
    Download
  • Jules Françoise, “Gesture--Sound Mapping by Demonstration in Interactive Music Systems,” in Proceedings of the 21st ACM international conference on Multimedia (MM'13), Barcelona, Spain, 2013, pp. 1051--1054. DOI: 10.1145/2502081.2502214. http://dl.acm.org/authorize?6951792.
    Abstract
    In this paper we address the issue of mapping between gesture and sound in interactive music systems. Our approach, which we call mapping by demonstration, aims to learn the mapping from examples provided by users while they interact with the system. We propose a general framework for modeling gesture-sound sequences based on a probabilistic, multimodal, and hierarchical model. We detail two orthogonal modeling aspects and describe planned research directions to improve and evaluate the proposed models.
    Download
    Best Doctoral Symposium Award
  • Jules Françoise, Ianis Lallemand, Thierry Artières, Frédéric Bevilacqua, Norbert Schnell, and Diemo Schwarz, “Perspectives pour l'apprentissage interactif du couplage geste-son,” in Actes des Journées d'Informatique Musicale (JIM 2013), Paris, France, 2013.
    Abstract
    Learning gesture-to-sound mappings is now a major research challenge. In previous work, we proposed a hierarchical model for representing temporal structures at different scales. Here, we focus on learning higher-level temporal structures. More specifically, we formulate the problem of articulating different gesture-sound mappings within an interactive learning context. This emerging research field, at the crossroads of machine learning and human-computer interaction, allows, in our view, the question of learning "by demonstration" to be properly framed. We first present the frameworks of interactive learning and of gesture-sound coupling modeling, then the perspectives opened by bringing these topics together, as well as a first extension of our previous work within this framework.
  • Jules Françoise, Norbert Schnell, and Frédéric Bevilacqua, “Gesture-based control of physical modeling sound synthesis,” in Proceedings of the 21st ACM international conference on Multimedia (MM'13), Barcelona, Spain, ACM Press, 2013, pp. 447--448. DOI: 10.1145/2502081.2502262. http://dl.acm.org/authorize?6951662.
    Abstract
    We address the issue of mapping between gesture and sound for gesture-based control of physical modeling sound synthesis. We propose an approach called mapping by demonstration, allowing users to design the mapping by performing gestures while listening to sound examples. The system is based on a multimodal model able to learn the relationships between gestures and sounds.
    Download
  • Jules Françoise, Norbert Schnell, and Frédéric Bevilacqua, “A Multimodal Probabilistic Model for Gesture-based Control of Sound Synthesis,” in Proceedings of the 21st ACM international conference on Multimedia (MM'13), Barcelona, Spain, 2013, pp. 705--708. DOI: 10.1145/2502081.2502184. http://dl.acm.org/authorize?6951634.
    Abstract
    In this paper, we propose a multimodal approach to creating the mapping between gesture and sound in interactive music systems. Specifically, we propose to use a multimodal HMM to jointly model the gesture and sound parameters. Our approach is compatible with a learning method that allows users to define the gesture-sound relationships interactively. We describe an implementation of this method for the control of physical modeling sound synthesis. Our model shows promise for capturing expressive gesture variations while guaranteeing a consistent relationship between gesture and sound.
    Download
    Acceptance Rate: 20%

2012

  • Jules Françoise, Baptiste Caramiaux, and Frédéric Bevilacqua, “A Hierarchical Approach for the Design of Gesture-to-Sound Mappings,” in Proceedings of the 9th Sound and Music Computing Conference, Copenhagen, Denmark, 2012, pp. 233--240.
    Abstract
    We propose a hierarchical approach for the design of gesture-to-sound mappings, with the goal of taking into account multilevel time structures in both gesture and sound processes. This allows for the integration of temporal mapping strategies, complementing mapping systems based on instantaneous relationships between gesture and sound synthesis parameters. As an example, we propose an implementation of Hierarchical Hidden Markov Models to model gesture input, with a flexible structure that can be authored by the user. Moreover, some parameters can be adjusted through a learning phase. We show some examples of gesture segmentation based on this approach, considering several phases such as preparation, attack, sustain, and release. Finally, we describe an application, developed in Max/MSP, illustrating the use of accelerometer-based sensors to control phase vocoder synthesis techniques based on this approach.
    Project Page

2011

  • Jules Françoise, “Realtime Segmentation and Recognition of Gestures using Hierarchical Markov Models,” Master's Thesis, Université Pierre et Marie Curie, Ircam. 2011. http://articles.ircam.fr/textes/Francoise11a/index.pdf.
    Abstract
    In this work, we present a realtime system for continuous gesture segmentation and recognition. The model is an extension of the system called Gesture Follower developed at Ircam, which is a hybrid between Dynamic Time Warping and Hidden Markov Models. This previous model allows for realtime temporal alignment between a template and an input gesture. Our model extends it with a higher-level structure that models the switching between templates. Taking advantage of a representation as a Dynamic Bayesian Network, the time complexity of the inference algorithms is reduced from cubic to linear in the length of the observation sequence. We propose various segmentation methods, both offline and realtime. A quantitative evaluation of the proposed model on accelerometer sensor data provides a comparison with the Segmental Hidden Markov Model, and we discuss several sub-optimal methods for realtime segmentation. Our model proves able to handle signal distortions due to speed variations in the execution of gestures. Finally, a musical application is outlined in a case study on the segmentation of violin bow strokes.
    Download