grab-and-play

Grab-and-Play is a software tool that uses machine learning for rapid prototyping of motion-sound mappings.

It allows people to sketch gesture design ideas simply by demonstrating how they might move with a given input device. Machine learning is then used to rapidly generate alternative mappings that satisfy the constraints encoded in the demonstrated motion.

The software implements four methods for steering the supervised learning process, offering different degrees of control and discovery during sketching. It exists as a Java extension to the Wekinator, which uses the OSC protocol to link gestural controllers to sound synthesis engines such as ChucK or Max/MSP. It was evaluated and used extensively in a music therapy project and in a performance for yug.
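To illustrate the OSC-based architecture described above, the sketch below sends a controller feature vector to a Wekinator-style input address from Java, using the illposed JavaOSC library. The port number (6448) and the /wek/inputs address follow Wekinator's documented defaults, but the class, the feature values, and the message layout are illustrative assumptions rather than part of Grab-and-Play's actual code.

```java
import com.illposed.osc.OSCMessage;
import com.illposed.osc.OSCPortOut;

import java.net.InetAddress;
import java.util.Arrays;
import java.util.List;

// Minimal sketch: forward controller features over OSC to a
// Wekinator-style input port. Port 6448 and /wek/inputs are
// Wekinator defaults; everything else here is hypothetical.
public class OscInputSender {

    public static void main(String[] args) throws Exception {
        OSCPortOut sender = new OSCPortOut(
                InetAddress.getByName("localhost"), 6448);

        // A hypothetical three-dimensional feature vector, e.g. x/y/z
        // accelerometer values read from a gestural controller.
        List<Object> features = Arrays.asList(0.12f, 0.85f, 0.40f);

        OSCMessage message = new OSCMessage("/wek/inputs", features);
        sender.send(message);
        sender.close();
    }
}
```

In a full setup, a mapping trained this way would emit its outputs on a separate OSC address that a synthesis engine such as ChucK or Max/MSP listens to.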

Categories
computer programming
interaction design
data design

Year
2016

Credits
The project was developed with Rebecca Fiebrink in collaboration with the Department of Computing at Goldsmiths, University of London, in the context of the ENS Paris-Saclay Pre-doctoral Research program.

Code
Available on GitHub