on performing with deep reinforcement learning

This research and creation project investigated music performance with deep reinforcement learning.

It builds on the scientific study of two machine learning models, developed respectively for data-driven sound synthesis and interactive exploration. It details the design and implementation of an interface for embodied musical interaction that leverages the learning capabilities of the two models. It then describes ægo, an improvisational piece with interactive sound and image for one performer, composed and performed with our interface.

The practice-based approach let us take a critical look at the outputs of our case study. In particular, I reflected on how reinforcement learning shifted my expectations as a performer away from instrumental control of sound, toward heightened listening and a learned sense of spiritual unification with music. Altogether, this process led us to challenge current applications of machine learning to music, giving us insight into the art, design, and science aspects of machine learning in multidisciplinary computer music research.

Year
2019

Credits
The project was developed with Axel Chemla-Romeu-Santos in collaboration with the ISMM and ACIDS groups of IRCAM, in the context of the Sorbonne Université Doctorate in Computer Science.

Publication
Paper at CMMR (2019)
