musicking deep reinforcement learning

This research-creation project investigated music performance with deep reinforcement learning.

It builds on the scientific study of two deep learning models, an audio VAE and the Co-Explorer, respectively developed for data-driven sound synthesis and interactive exploration. It details the design and implementation of an interface for embodied musical interaction that leverages the learning capabilities of the two models. It then describes ægo, an improvisational piece with interactive sound and image for one performer, composed and performed with our interface.
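To make the coupling of the two models concrete, here is a minimal, hypothetical sketch of one way a deep reinforcement learning agent could explore an audio VAE's latent space under performer feedback. It is not the project's actual implementation; all names, dimensions, and the REINFORCE-style update are illustrative assumptions, with the VAE decoder, synthesis engine, and feedback channel left as placeholders.

```python
# Hypothetical sketch: a small policy network proposes moves in a (pretend)
# VAE latent space, the decoder would turn each latent point into sound, and
# scalar feedback from the performer drives a REINFORCE-style update.
import torch
import torch.nn as nn

LATENT_DIM = 16  # assumed size of the audio VAE latent space


class ExplorationPolicy(nn.Module):
    """Maps the current latent point to a Gaussian over displacement vectors."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
        self.log_std = nn.Parameter(torch.zeros(dim))

    def forward(self, z):
        mean = self.net(z)
        return torch.distributions.Normal(mean, self.log_std.exp())


def decode_and_play(z):
    """Placeholder for the VAE decoder plus the synthesis engine."""
    pass


def get_performer_feedback():
    """Placeholder for embodied feedback, e.g. +1 / -1 from a gesture or pedal."""
    return 1.0


policy = ExplorationPolicy(LATENT_DIM)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
z = torch.zeros(LATENT_DIM)  # current position in latent space

for step in range(100):
    dist = policy(z)
    delta = dist.sample()              # proposed move in latent space
    z = (z + 0.1 * delta).detach()     # explore a nearby sound
    decode_and_play(z)
    reward = get_performer_feedback()  # scalar guidance from the performer
    loss = -reward * dist.log_prob(delta).sum()  # reinforce rewarded moves
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch, the performer never manipulates synthesis parameters directly: they only reward or discourage the agent's proposals, which is the kind of indirect, listening-driven relationship to sound discussed below.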

The practice-based approach allowed us to take a critical look at the outputs of our case study. In particular, I reflected on how deep reinforcement learning shifted my expectations as a performer away from instrumental control of sound, toward heightened listening and a learned sense of spiritual unification with music. Altogether, this process led us to challenge current applications of deep learning to music, offering insight into the art, design, and science aspects of machine learning within multidisciplinary computer music research.

Year
2019
Credits
The project was developed with Axel Chemla-Romeu-Santos in collaboration with the ISMM and ACIDS groups of IRCAM, in the context of a PhD thesis at Sorbonne Université.
Publications
Book chapter in Springer LNCS (2021)
Paper at TENOR (2022)
