This research and creation project consisted in practicing with machine learning for multidisciplinary research in computer music.
It builds on the scientific study of two machine learning models developed, respectively, for data-driven sound synthesis and interactive exploration. It details how the learning capabilities of the two models were leveraged to design and implement a musical instrument focused on embodied musical interaction. It then describes how this instrument was applied to the composition and performance of ægo, an improvisational piece with interactive sound and image for one performer.
Our work emerged from a research and creation process in which we closely articulated a research methodology with a creation project. This process enabled us to take a critical look at the outcomes of our case study. We present personal reflections that emerged from our practice with machine learning, and propose conceptual insights for future multidisciplinary inquiries in the realm of computer music.
Paper presented at CMMR 2019.