entrain is a connected sequencer that uses active learning to incite social interaction in mobile music making.
Participants open a web page on their smartphones to spontaneously play together and generate rhythmic loops. Depending on their individual behaviour, the machine may single out specific participants by adaptively generating audiovisuals. The resulting expressive workflow leverages rhythmic entrainment to stimulate social interaction between humans, as well as with the machine.
entrain was developed using a participatory design method. We started by brainstorming interaction scenarios with stakeholders before settling on one particular type of machine learning. We then implemented an active learning prototype based on Bayesian Information Gain, which made it possible to steer participants toward new musical configurations while remaining sufficiently complex to appear as a black box to them, a feature of particular interest for a public installation.
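The greedy criterion behind Bayesian Information Gain can be summarized as: maintain a belief over candidate targets (here, musical configurations), and pick the stimulus whose expected participant response most reduces the entropy of that belief. A minimal sketch under assumed, illustrative names and likelihoods (`expected_information_gain`, `toy_model`, and the "follow"/"ignore" responses are hypothetical, not the installation's code):

```python
import math

def entropy(p):
    # Shannon entropy in bits of a discrete distribution
    return -sum(x * math.log2(x) for x in p if x > 0)

def posterior(prior, likelihoods):
    # Bayes update: P(target | response) ∝ P(target) * P(response | target)
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm] if z > 0 else prior

def expected_information_gain(prior, response_model, stimulus):
    """Expected entropy reduction of the belief after observing a response.
    response_model(stimulus, target) -> {response: probability}."""
    h_prior = entropy(prior)
    # marginal P(response) over all hypothesized targets
    marginal = {}
    for t, p_t in enumerate(prior):
        for r, p_r in response_model(stimulus, t).items():
            marginal[r] = marginal.get(r, 0.0) + p_t * p_r
    # expected entropy of the posterior belief
    h_post = 0.0
    for r, p_r in marginal.items():
        if p_r > 0:
            likes = [response_model(stimulus, t).get(r, 0.0)
                     for t in range(len(prior))]
            h_post += p_r * entropy(posterior(prior, likes))
    return h_prior - h_post

# toy example: 3 candidate configurations, 2 candidate stimuli
def toy_model(stimulus, target):
    # hypothetical likelihoods: stimulus 0 discriminates target 0
    if stimulus == 0:
        if target == 0:
            return {"follow": 0.9, "ignore": 0.1}
        return {"follow": 0.2, "ignore": 0.8}
    return {"follow": 0.5, "ignore": 0.5}  # uninformative stimulus

prior = [1 / 3, 1 / 3, 1 / 3]
gains = {s: expected_information_gain(prior, toy_model, s) for s in (0, 1)}
best = max(gains, key=gains.get)  # the system would present stimulus 0
```

The uninformative stimulus yields zero expected gain, so the greedy policy always picks the stimulus that best discriminates among the remaining hypotheses.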
The project was developed with Abby Wanyu Liu, Benjamin Matuszewski, and Frédéric Bevilacqua in collaboration with the ISMM group of IRCAM, as well as Jean-Louis Fréchin and Uros Petrevski from Nodesign.net, and Norbert Schnell from Collaborative Mobile Music Lab of Furtwangen University, in the context of the Sorbonne Université Doctorate in Computer Science.
Exhibition @ SIGGRAPH 2019 Studio, Los Angeles Convention Center, USA (July 2019)
Paper at ACM SIGGRAPH Studio (2019)