ægo is a performance for one human improviser and one learning machine, co-created with Axel Chemla-Romeu-Santos.

ægo results from a research-creation project on practice with machine learning. It aims to open a sensitive reflection on what may actually be learned, on a musical level, through interaction with machine learning: by the human, as well as by their artificial alter ego, the machine. To share this reflection with the audience, we opted for a performance format that displays a human and a machine mutually learning to interact with each other through live improvisation: on an embodied level for the human, and on a computational level for the machine.

The piece unfolds in six successive scenes, each corresponding to a different latent space and set of sonic dimensions learned by the machine. The performer expressively negotiates sound control with the machine, communicating positive or negative feedback through motion sensors placed on both hands. The slowly evolving spectromorphologies, synthesized and projected in real time on stage, create a contemplative, minimalist atmosphere intended to let members of the audience freely consider what musical qualities the human and the machine may be learning.


The project was developed in collaboration with IRCAM, in the context of the Sorbonne Université Doctorate in Computer Science.

Performance @ Friche la Belle de Mai, Marseille, France (October 2019)
Music program at CMMR 2019