ægo is a performance for one human improviser and one learning machine, co-created with Axel Chemla-Romeu-Santos.

ægo results from a research-creation project grounded in practice with machine learning. It aims to show and share with the audience an encounter between a learning machine and a human being. The learning machine possesses a latent sound space, as well as a distinctive expressive behaviour, both of which are initially unknown to the human. Through improvisation, the human and the machine simultaneously learn to interact with each other—on an embodied level for the human, and on a computational level for the machine.

This mutual exploration is designed to be heard, seen, and experienced by the audience. The piece is divided into six successive scenes, each corresponding to a different latent space and set of sonic dimensions learned by the machine. The performer expressively negotiates sound control with the machine, communicating positive or negative feedback through motion sensors placed on both hands. The slowly evolving spectromorphologies, synthesized and projected in real time on stage, intend to open a sensitive reflection on what is actually learned on a musical level, both by the human and by their artificial alter ego—the machine.


The project was developed in collaboration with IRCAM, in the context of the Sorbonne Université Doctorate in Computer Science.

Performance @ Friche La Belle de Mai, Marseille, Fr (October 2019)
Music program at CMMR 2019
