Grab-and-play

Grab-and-play is a software tool that uses supervised learning for rapid prototyping of motion-sound mappings.

It allows people to sketch gesture design ideas simply by demonstrating how they might move with a given input device. Supervised learning is then used to rapidly generate alternative mappings that satisfy the constraints encoded in these demonstrated movements.
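The following Java sketch illustrates, in a deliberately crude way, one possible realization of this idea: motion-only demonstrations are paired with automatically generated sound-parameter targets to form a supervised training set, and re-running with a different seed yields an alternative mapping over the same movements. All class and method names are hypothetical, and this is not the project's actual algorithm.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * Illustrative sketch only (not the Grab-and-play code): motion-only
 * demonstrations are paired with automatically generated sound parameters,
 * forming a supervised training set; a different random seed produces an
 * alternative mapping over the same demonstrated movements.
 */
public class GrabAndPlaySketch {

    private final List<double[]> motionExamples = new ArrayList<>();
    private final List<double[]> soundTargets = new ArrayList<>();
    private final int numSoundParams;
    private final Random random;

    public GrabAndPlaySketch(int numSoundParams, long seed) {
        this.numSoundParams = numSoundParams;
        this.random = new Random(seed);
    }

    /** Record one demonstrated motion frame; no sound output is demonstrated. */
    public void addMotionFrame(double[] motionFeatures) {
        motionExamples.add(motionFeatures.clone());
        // Assign a candidate sound-parameter vector automatically, in [0, 1].
        double[] target = new double[numSoundParams];
        for (int i = 0; i < numSoundParams; i++) {
            target[i] = random.nextDouble();
        }
        soundTargets.add(target);
    }

    /** Map new motion features to sound parameters via 1-nearest-neighbour lookup. */
    public double[] map(double[] motionFeatures) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < motionExamples.size(); i++) {
            double[] ex = motionExamples.get(i);
            double dist = 0.0;
            for (int d = 0; d < ex.length; d++) {
                double diff = ex[d] - motionFeatures[d];
                dist += diff * diff;
            }
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        return soundTargets.get(best);
    }
}
```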

Grab-and-play results from a user-centered design process for supervised learning, conducted in parallel with expert musicians and music therapy stakeholders. We were interested in understanding how interactive supervised learning could support similar patterns of creative expression in both expert and non-expert musicians. The final version of the software was used in the Sound Control action research project, as well as in a performance by the yug music project.

The software is implemented as a Java extension to the Wekinator, adding four methods for supervised learning that offer different degrees of control and discovery over the mapping process. It uses the OSC protocol to link gestural controllers to sound synthesis engines such as ChucK or Max/MSP.
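As a rough illustration of this OSC plumbing, the sketch below sends a vector of mapped sound parameters over UDP using the JavaOSC library (com.illposed.osc, 0.4-style API). The address pattern and port are placeholders for whatever the synthesis engine (e.g. a ChucK or Max/MSP patch) listens on; they are not taken from the Grab-and-play source.

```java
import com.illposed.osc.OSCMessage;
import com.illposed.osc.OSCPortOut;

import java.net.InetAddress;
import java.util.Arrays;
import java.util.List;

/**
 * Hypothetical OSC output sketch: sends mapped sound parameters to a
 * synthesis engine listening on UDP port 12000 (placeholder values).
 */
public class OscMappingOutput {

    public static void main(String[] args) throws Exception {
        // Open an outgoing OSC port to the local synthesis engine.
        OSCPortOut sender = new OSCPortOut(InetAddress.getLoopbackAddress(), 12000);

        // Example: three continuous synthesis parameters produced by a mapping.
        List<Object> parameters = Arrays.asList(0.42f, 0.10f, 0.87f);
        OSCMessage message = new OSCMessage("/mapping/outputs", parameters);

        sender.send(message);
        sender.close();
    }
}
```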

Year
2016
Credits
The project was developed with Rebecca Fiebrink, in collaboration with the Department of Computing at Goldsmiths, University of London, as part of the pre-doctoral research program of ENS Paris-Saclay.
Publications
Paper at ICMC (2016)
Pre-doctoral report (2016)
Events
Outreach @ BBC Radio 1 Academy, Exeter Phoenix, Exeter, UK (May.17.2016)
Code
GitHub
