entrain

entrain is a shared installation that uses active learning to incite social interaction in co-located mobile music-making.


Participants fill circular sequences to generate rhythmic loops. Depending on their individual behaviour, the machine may single out specific participants by adaptively generating audiovisual cues. The resulting expressive workflow leverages musical entrainment, or rhythmic synchronization, to stimulate social interaction between humans as well as with the machine.
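
To illustrate the interaction, here is a minimal TypeScript sketch of a shared circular step sequencer in which each participant toggles steps of a looping pattern. All names, the step count, and the tempo are hypothetical; this is not the installation's actual code.

```ts
// Hypothetical sketch of a shared circular sequencer (names illustrative).
type StepPattern = boolean[];

class CircularSequencer {
  private patterns = new Map<string, StepPattern>(); // participantId -> steps

  constructor(private numSteps: number = 16, private bpm: number = 120) {}

  // Each participant owns one circular pattern and toggles its steps.
  toggleStep(participantId: string, step: number): void {
    const pattern =
      this.patterns.get(participantId) ?? new Array(this.numSteps).fill(false);
    pattern[step % this.numSteps] = !pattern[step % this.numSteps];
    this.patterns.set(participantId, pattern);
  }

  // Participants whose pattern triggers a sound on the given loop step.
  activeAt(step: number): string[] {
    const ids: string[] = [];
    for (const [id, pattern] of this.patterns) {
      if (pattern[step % this.numSteps]) ids.push(id);
    }
    return ids;
  }

  // Duration of one step, assuming sixteenth-note steps in 4/4 time.
  stepDurationMs(): number {
    return 60_000 / this.bpm / 4;
  }
}
```

A driving clock, such as the synchronized one sketched further below, would call activeAt on every step and trigger the sounds of the designated participants.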

entrain was developed using a participatory design method. We started with an observation step to brainstorm interaction scenarios with stakeholders before deciding on the machine learning technique to be studied. We then implemented a prototype model based on Bayesian Information Gain, which enabled us to steer participants toward new musical configurations while remaining sufficiently complex to appear as a black box to them, a feature of interest for such a public installation.
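
The Bayesian Information Gain idea can be sketched as follows: maintain a belief over candidate musical configurations and choose the audiovisual prompt whose expected participant response is the most informative, then update the belief with Bayes' rule once a response is observed. The TypeScript below is a hedged illustration of that principle, assuming discrete targets, prompts, and observations; it is not the model implemented in entrain.

```ts
// Illustrative Bayesian Information Gain loop (not the installation's model).
type Belief = number[]; // P(target = i), sums to 1

function entropy(p: Belief): number {
  return -p.reduce((h, pi) => (pi > 0 ? h + pi * Math.log2(pi) : h), 0);
}

// likelihood[target][obs] = P(observed response | target, given prompt)
function expectedInformationGain(prior: Belief, likelihood: number[][]): number {
  const numObs = likelihood[0].length;
  let expectedPosteriorEntropy = 0;
  for (let o = 0; o < numObs; o++) {
    // Joint P(target, obs = o), marginal P(obs = o), posterior P(target | obs = o).
    const joint = prior.map((pi, t) => pi * likelihood[t][o]);
    const pObs = joint.reduce((a, b) => a + b, 0);
    if (pObs === 0) continue;
    const posterior = joint.map((j) => j / pObs);
    expectedPosteriorEntropy += pObs * entropy(posterior);
  }
  return entropy(prior) - expectedPosteriorEntropy;
}

// Pick the prompt whose expected response reduces uncertainty the most.
// likelihoods[prompt][target][obs] = P(obs | target, prompt)
function bestPrompt(prior: Belief, likelihoods: number[][][]): number {
  let best = 0;
  let bestGain = -Infinity;
  likelihoods.forEach((lik, prompt) => {
    const gain = expectedInformationGain(prior, lik);
    if (gain > bestGain) {
      bestGain = gain;
      best = prompt;
    }
  });
  return best;
}

// After a response is observed, update the belief with Bayes' rule.
// likForObs[target] = P(observed response | target, chosen prompt)
function update(prior: Belief, likForObs: number[]): Belief {
  const joint = prior.map((pi, t) => pi * likForObs[t]);
  const z = joint.reduce((a, b) => a + b, 0);
  return joint.map((j) => j / z);
}
```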

entrain builds on Coloop, an award-winning connected sequencer allowing up to eight mobile users to spontaneously play together by simply accessing a web page. It leverages soundworks, a JavaScript library for collective mobile web interaction, which supports temporal synchronisation of mobile devices. The loudspeaker, designed by Nodesign.net, contains a Raspberry Pi responsible for sending information to the speaker and its embedded LEDs.
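
As a rough illustration of synchronized playback (not soundworks' actual API), the sketch below assumes a clock offset obtained from a synchronization service and schedules loop steps on a shared time grid so that all devices stay in phase.

```ts
// Illustrative scheduling on a shared clock; clockOffsetMs is assumed to come
// from a synchronization service such as the one soundworks provides.
function sharedNowMs(clockOffsetMs: number): number {
  return Date.now() + clockOffsetMs; // local time mapped onto the shared clock
}

function msUntilNextStep(clockOffsetMs: number, stepDurationMs: number): number {
  const now = sharedNowMs(clockOffsetMs);
  return stepDurationMs - (now % stepDurationMs);
}

// Fire a callback on every shared step boundary so all devices stay in phase.
function startStepClock(
  clockOffsetMs: number,
  stepDurationMs: number,
  onStep: (stepIndex: number) => void,
): void {
  const tick = () => {
    const now = sharedNowMs(clockOffsetMs);
    onStep(Math.floor(now / stepDurationMs));
    setTimeout(tick, msUntilNextStep(clockOffsetMs, stepDurationMs));
  };
  setTimeout(tick, msUntilNextStep(clockOffsetMs, stepDurationMs));
}
```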

Year
2019

Credits
The project was developed with Abby Wanyu Liu, Benjamin Matuszewski, and Frédéric Bevilacqua in collaboration with the ISMM group of IRCAM, as well as Jean-Louis Fréchin and Uros Petrevski from Nodesign.net, and Norbert Schnell from Collaborative Mobile Music Lab of Furtwangen University, in the context of the Sorbonne Université Doctorate in Computer Science.

Event/Publication
Installation at ACM SIGGRAPH 2019 Studio
Paper at ACM SIGGRAPH Studio (2019)

Code
Available on GitHub
