grab-and-play

Grab-and-play is a software tool that uses supervised learning for rapid prototyping of motion-sound mappings.

It allows people to sketch gesture design ideas simply by demonstrating how they might move with a given input device. Supervised learning is then used to rapidly generate alternative mappings that satisfy the constraints encoded by the demonstrated motion.
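To make the idea concrete, here is a toy sketch of one way such a workflow could be read: the user only demonstrates input frames, synthesis parameters are sampled automatically and paired with those frames, and the resulting pairs act as a generated mapping (here via a simple nearest-neighbour lookup). This is an illustrative assumption, not the actual grab-and-play implementation; all names and the parameter count are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Toy illustration (not the grab-and-play code itself): demonstrated input
// frames are paired with randomly sampled synthesis parameters, and a
// 1-nearest-neighbour lookup then serves as the generated motion-sound mapping.
public class GrabAndPlaySketch {

    static final int NUM_SYNTH_PARAMS = 3;   // hypothetical number of synth parameters
    static final Random rng = new Random(42);

    // Pair each demonstrated input frame with randomly sampled output parameters.
    static List<double[]> sampleOutputs(List<double[]> demonstratedInputs) {
        List<double[]> outputs = new ArrayList<>();
        for (int i = 0; i < demonstratedInputs.size(); i++) {
            double[] params = new double[NUM_SYNTH_PARAMS];
            for (int j = 0; j < NUM_SYNTH_PARAMS; j++) {
                params[j] = rng.nextDouble();   // uniform sample in [0, 1]
            }
            outputs.add(params);
        }
        return outputs;
    }

    // Return the sampled output whose paired input frame is closest to the live input.
    static double[] map(double[] liveInput, List<double[]> inputs, List<double[]> outputs) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < inputs.size(); i++) {
            double d = 0;
            for (int j = 0; j < liveInput.length; j++) {
                double diff = liveInput[j] - inputs.get(i)[j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return outputs.get(best);
    }

    public static void main(String[] args) {
        // Two demonstrated input frames (e.g. 2-D controller positions).
        List<double[]> demos = new ArrayList<>();
        demos.add(new double[]{0.1, 0.2});
        demos.add(new double[]{0.8, 0.9});

        List<double[]> generated = sampleOutputs(demos);
        double[] synthParams = map(new double[]{0.15, 0.25}, demos, generated);
        System.out.println("Synth params: " + java.util.Arrays.toString(synthParams));
    }
}
```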

Grab-and-play results from a user-centered design process for supervised learning, conducted in parallel with expert musicians and music therapy stakeholders. We were interested in understanding how interactive supervised learning could support similar patterns of creative expression in both expert and non-expert musicians. The final version of the software was applied to the Sound Control action research project, as well as to a performance for the yug music project.

The software implements four methods for steering supervised learning modelling, offering different degrees of control and discovery over the sketching process. It is built as a Java extension to the Wekinator, which uses the OSC protocol to link gestural controllers to sound synthesis engines such as ChucK or Max/MSP.
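For orientation, the sketch below shows one way a Java program can forward mapped synthesis parameters over OSC to a sound engine. It assumes the third-party JavaOSC library (com.illposed.osc); the address "/wek/outputs" and port 12000 follow Wekinator's default conventions, and grab-and-play's own messaging may differ.

```java
import com.illposed.osc.OSCMessage;
import com.illposed.osc.OSCPortOut;

import java.net.InetAddress;
import java.util.Arrays;

// Minimal sketch: send synthesis parameters over OSC, the way a Wekinator-style
// mapping forwards its outputs to a sound engine (ChucK, Max/MSP, ...).
// Address and port are assumed defaults, not necessarily those used by grab-and-play.
public class OscOutputSketch {
    public static void main(String[] args) throws Exception {
        OSCPortOut sender = new OSCPortOut(InetAddress.getLoopbackAddress(), 12000);

        // Three mapped synthesis parameters, e.g. pitch, loudness, brightness.
        OSCMessage msg = new OSCMessage("/wek/outputs",
                Arrays.<Object>asList(440.0f, 0.8f, 0.3f));
        sender.send(msg);
        sender.close();
    }
}
```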

Year
2016

Credits
The project was developed with Rebecca Fiebrink in collaboration with the Department of Computing at Goldsmiths, University of London, in the context of the ENS Paris-Saclay Pre-doctoral Research program.

Publications
Pre-doctoral report (2016)
Paper at ICMC (2016)

Code
Available on GitHub
