
The XMM Library

We released XMM, an open-source, portable, cross-platform C++ library for continuous motion recognition and mapping. It implements Gaussian Mixture Models and Hidden Markov Models for recognition and regression. XMM was developed for movement interaction in creative applications and supports an interactive machine learning workflow with fast training and continuous, real-time inference.


I authored the library during my PhD, supervised by Frederic Bevilacqua, in the {Sound Music Movement} Interaction team of the STMS Lab – IRCAM – CNRS – UPMC (2011–2015).

The library includes a number of classes for probabilistic models, from GMMs to Hierarchical HMMs, implemented for continuous real-time recognition and generation, following my PhD work on motion-sound mapping by demonstration. It comes with Python bindings and is also implemented as a set of externals for Cycling’74 Max, released within MuBu (see this article).

Download

View on Github

For the Cycling’74 Max externals, see this article.

Documentation

The full documentation is available on Github Pages: http://ircam-rnd.github.io/xmm/

License

This project is released under the GPLv3 license. For commercial applications, a proprietary license is available upon request to Frederick Rousseau.

XMM is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

XMM is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with XMM. If not, see http://www.gnu.org/licenses/.

Citing this work

If you use this code for research purposes, please cite one of the following publications:

References

  • “Probabilistic Models for Designing Motion and Sound Relationships,” Proceedings of the 2014 International Conference on New Interfaces for Musical Expression (NIME'14), London, UK.
  • “A Multimodal Probabilistic Model for Gesture-based Control of Sound Synthesis,” Proceedings of the 21st ACM International Conference on Multimedia (MM'13), Barcelona, Spain, pp. 705–708. DOI: 10.1145/2502081.2502184.