entrain

entrain is a shared installation that uses active learning to incite social interaction in co-located mobile music-making.

Participants access a web page on their smartphones to spontaneously play together and generate rhythmic loops. Depending on their individual behaviour, the machine may single out specific participants, adaptively generating audiovisual feedback in response. The resulting expressive workflow leverages rhythmic entrainment to stimulate social interaction between humans, as well as with the machine.

entrain was developed using a participatory design method. We started with an observation step to brainstorm interaction scenarios with stakeholders before deciding on the machine learning technique to be studied. We then implemented a model prototype based on Bayesian Information Gain, which made it possible to steer participants toward new musical configurations while remaining sufficiently complex to appear as a black box to them, a feature of particular interest for such a public installation.
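To make the idea concrete, here is a minimal sketch of a Bayesian Information Gain step, assuming a discrete belief over candidate musical intentions, a small set of machine actions, and a simple response likelihood. The names and the likelihood model are illustrative, not the installation's actual code.

```typescript
// Sketch of one Bayesian Information Gain decision: keep a belief over hidden
// "intentions" (e.g. which musical configuration a participant is heading toward),
// and pick the machine action whose response is expected to be most informative.

type Likelihood = (response: number, intention: number, action: number) => number;

function entropy(p: number[]): number {
  return -p.reduce((h, pi) => (pi > 0 ? h + pi * Math.log2(pi) : h), 0);
}

// Posterior over intentions after observing `response` to `action`, via Bayes' rule.
function posterior(prior: number[], like: Likelihood, action: number, response: number): number[] {
  const unnorm = prior.map((p, i) => p * like(response, i, action));
  const z = unnorm.reduce((a, b) => a + b, 0);
  return unnorm.map((u) => (z > 0 ? u / z : 1 / prior.length));
}

// Choose the action maximising expected information gain, i.e. the expected
// reduction in entropy of the belief once the participant's response is observed.
function selectAction(prior: number[], like: Likelihood, actions: number[], responses: number[]): number {
  const h0 = entropy(prior);
  let best = actions[0];
  let bestGain = -Infinity;
  for (const a of actions) {
    let expectedH = 0;
    for (const r of responses) {
      // Marginal probability of this response under the current belief.
      const pr = prior.reduce((s, p, i) => s + p * like(r, i, a), 0);
      if (pr > 0) expectedH += pr * entropy(posterior(prior, like, a, r));
    }
    const gain = h0 - expectedH;
    if (gain > bestGain) {
      bestGain = gain;
      best = a;
    }
  }
  return best;
}
```

In this reading, the selected action would correspond to which participant the machine highlights with audiovisual feedback, and the belief would be updated with `posterior` once that participant's behaviour is observed.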

entrain builds on Coloop, an award-winning connected sequencer designed in collaboration with Nodesign.net. It leverages soundworks, a JavaScript library for collective mobile web interaction that supports temporal synchronisation of mobile devices. The loudspeaker houses a Raspberry Pi that drives both the audio output and the embedded LEDs.
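Temporal synchronisation of this kind is typically achieved by estimating each phone's clock offset against a server through repeated ping/pong exchanges, which soundworks handles for the installation. The sketch below only illustrates the general technique and does not reproduce the soundworks API; `sendPing` is a hypothetical transport function (e.g. over a WebSocket).

```typescript
// Illustrative ping/pong clock-offset estimation for synchronising mobile clients
// to a shared timeline. Not the soundworks API.

interface Pong {
  serverTime: number; // server clock reading, in seconds
}

async function estimateOffset(
  sendPing: () => Promise<Pong>,
  rounds = 10
): Promise<number> {
  const offsets: number[] = [];
  for (let i = 0; i < rounds; i++) {
    const t0 = performance.now() / 1000;      // client time at send
    const { serverTime } = await sendPing();  // server time at reply
    const t1 = performance.now() / 1000;      // client time at receive
    // Assuming symmetric network delay, the server reading corresponds to the
    // midpoint of the round trip on the client clock.
    offsets.push(serverTime - (t0 + t1) / 2);
  }
  // Take the median to be robust against delay spikes.
  offsets.sort((a, b) => a - b);
  return offsets[Math.floor(offsets.length / 2)];
}

// Converting a scheduled server time into local time lets every phone trigger
// the same beat of a rhythmic loop together:
//   const localTime = serverScheduledTime - offset;
```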

Year
2019

Credits
The project was developed with Abby Wanyu Liu, Benjamin Matuszewski, and Frédéric Bevilacqua in collaboration with the ISMM group of IRCAM, as well as Jean-Louis Fréchin and Uros Petrevski from Nodesign.net, and Norbert Schnell from the Collaborative Mobile Music Lab of Furtwangen University, in the context of the Sorbonne Université Doctorate in Computer Science.

Event/Publication
Exhibition @ SIGGRAPH 2019 Studio, Los Angeles Convention Center, USA (July 2019)
Paper at ACM SIGGRAPH Studio (2019)

Code
GitHub
