entrain is a connected sequencer that uses active learning to incite social interaction in mobile music making.
Participants can open a web page on their smartphones to spontaneously play together and generate rhythmic loops. Depending on their individual behaviour, the machine may single out specific participants by generating audiovisuals in an adaptive manner. The resulting expressive workflow leverages rhythmic entrainment to stimulate social interaction between humans, as well as with the machine.
entrain was developed using a participatory design method. We started by brainstorming interaction scenarios with stakeholders before settling on one particular type of machine learning. We then implemented an active learning prototype based on Bayesian Information Gain, which enabled us to steer participants toward new musical configurations while remaining sufficiently complex to appear as a black box to them—a feature that was of interest for such a public installation.
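At its core, a Bayesian Information Gain criterion maintains a probability distribution over hypotheses (here, candidate musical configurations or participant intentions) and selects the machine action whose expected participant response most reduces uncertainty. The following is a minimal sketch of that criterion on a discrete toy model; all names and the toy likelihood table are illustrative assumptions, not the installation's actual code:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_information_gain(prior, likelihood, action):
    """Expected entropy reduction over hypotheses after observing the
    participant's response to `action`.

    likelihood[action][response][h] = P(response | hypothesis h, action)
    """
    h_prior = entropy(prior)
    eig = 0.0
    for lik in likelihood[action].values():
        # Marginal probability of this response under the prior.
        p_resp = sum(l * p for l, p in zip(lik, prior))
        if p_resp == 0:
            continue
        # Bayesian posterior over hypotheses given this response.
        posterior = [l * p / p_resp for l, p in zip(lik, prior)]
        eig += p_resp * (h_prior - entropy(posterior))
    return eig

# Toy model: two hypotheses, one informative probe and one neutral action.
prior = [0.5, 0.5]
likelihood = {
    "probe": {"yes": [0.9, 0.1], "no": [0.1, 0.9]},
    "noop":  {"yes": [0.5, 0.5], "no": [0.5, 0.5]},
}

# The system would pick the action maximising expected information gain.
best = max(likelihood, key=lambda a: expected_information_gain(prior, likelihood, a))
```

In this toy example the informative probe wins, since a neutral action tells the system nothing about which hypothesis holds.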
entrain builds on Coloop, an award-winning connected sequencer designed in collaboration with Nodesign.net. It leverages soundworks, a JavaScript library for collective mobile web interaction that supports temporal synchronisation of mobile devices. The Coloop loudspeaker contains a Raspberry Pi that drives the audio output and the embedded LEDs.
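Temporal synchronisation of the kind soundworks provides typically rests on NTP-style clock-offset estimation: the client timestamps a ping, the server timestamps its receipt and reply, and the client timestamps the response. A minimal sketch of that principle (this is the generic algorithm, not the soundworks API):

```python
def estimate_offset(t0, t1, t2, t3):
    """NTP-style clock offset and round-trip delay estimation.

    t0: client send time      t1: server receive time
    t2: server reply time     t3: client receive time
    Assumes the network delay is roughly symmetric in both directions.
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2  # server clock minus client clock
    delay = (t3 - t0) - (t2 - t1)         # round-trip time minus server processing
    return offset, delay
```

Averaging such estimates over repeated exchanges lets every smartphone map its local clock onto a shared timeline, which is what keeps the distributed loops rhythmically aligned.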
Year
2019
Credits
The project was developed with Abby Wanyu Liu, Benjamin Matuszewski, and Frédéric Bevilacqua in collaboration with the ISMM group of IRCAM, as well as Jean-Louis Fréchin and Uros Petrevski from Nodesign.net, and Norbert Schnell from Collaborative Mobile Music Lab of Furtwangen University, in the context of a PhD thesis at Sorbonne Université.
Event/Publication
Exhibition @ SIGGRAPH 2019 Studio, Los Angeles Convention Center, USA (July 2019)
Paper at ACM SIGGRAPH Studio (2019)
Code
GitHub