ægo

ægo is a performance for one human improviser and one learning machine, co-created with Axel Chemla-Romeu-Santos.

ægo results from a music research and creation project with machine learning. Our artistic intention was to emphasize what machine learning could enable humans to learn about sound and music, rather than the opposite, as contemporary AI applications are often framed.

We opted for a performance format in which a performer and a machine improvise together, learning to interact with latent sound spaces: on an embodied level for the performer, and on a computational level for the machine. The slow-paced spectromorphologies, synthesized and projected in real time over the stage and the performer, invite the audience to reflect on this joint human-machine learning.

Crucially, we directed the performance so that the human would progressively relinquish giving accurate feedback to the machine, deliberately leaving the machine's learning indeterminate. Released from the obligation of teaching and controlling its artificial alter ego, the human is free to let her or his embodied mind unify with sound, eventually learning to interact with music.

Year
2019

Credits
The project was developed in collaboration with IRCAM, in the context of the Sorbonne Université Doctorate in Computer Science.

Event/Publication
Performance @ Friche la Belle de Mai, Marseille, France (October 2019)
Music program at CMMR 2019
