ægo is a performance for one learning machine and one human improviser, co-created with Axel Chemla-Romeu-Santos.

ægo results from a research and creation project on machine learning. Our artistic intention was to emphasize what machine learning could enable humans to learn about sound and music, rather than the opposite, as contemporary AI applications are often framed.

We opted for a performance format in which a human and a machine improvise together, learning to interact with latent sound spaces: on an embodied level for the human, and on a computational level for the machine. The slow-paced spectromorphologies, synthesized and projected in real time over the stage and the performer, invite the audience to reflect on this joint human-machine learning.

Crucially, we directed the performance so that the human would progressively stop giving accurate feedback to the machine, deliberately leaving the machine's learning indeterminate. Released from the obligation of teaching and controlling their artificial alter ego, the human is free to let their embodied mind unify with sound, eventually learning to interact with music.


The project was developed in collaboration with IRCAM, in the context of the Sorbonne Université Doctorate in Computer Science.

Performance @ Friche La Belle de Mai, Marseille, Fr (October 2019)
Music program at CMMR 2019
