ægo

ægo is a performance for one learning machine and one human improviser, co-created with Axel Chemla-Romeu-Santos.

A human and a machine improvise together, in the hope of learning to interact with latent sound spaces. The machine takes uncertain actions in the sound space, while the human communicates subjective feedback to the machine to guide its trajectory. The drone-like, machine-generated sounds, synthesized and projected in real-time over the stage and the performer, invite the audience to reflect on this joint human-machine learning—on an embodied level for the human, and on a computational level for the machine.
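The interaction can be pictured, very loosely, as a feedback-guided walk through a latent sound space. The sketch below is an illustration only, not the system used on stage; `decode`, `get_feedback`, and every parameter are hypothetical placeholders for a synthesis function and a channel of subjective human feedback.

```python
import numpy as np

rng = np.random.default_rng(0)

def explore(decode, get_feedback, dim=16, steps=200, step_size=0.1):
    """Feedback-guided exploration of a latent sound space (illustrative).

    decode(z) synthesizes audio from a latent point z;
    get_feedback(audio) returns +1 (keep going) or -1 (turn back).
    """
    z = rng.standard_normal(dim)          # starting point in latent space
    direction = rng.standard_normal(dim)  # current exploration direction
    for _ in range(steps):
        audio = decode(z)                 # the machine's "uncertain action"
        reward = get_feedback(audio)      # the human's subjective feedback
        if reward < 0:
            # negative feedback: abandon the current direction
            direction = rng.standard_normal(dim)
        # keep injecting noise so the trajectory stays exploratory
        direction += 0.3 * rng.standard_normal(dim)
        direction /= np.linalg.norm(direction)
        z = z + step_size * direction
    return z
```

In the performance itself the feedback channel is progressively abandoned, which is exactly the point of the direction described next.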

Crucially, the performance is directed so that the performer progressively relinquishes communicating accurate feedback to the machine. Released from the obligation of teaching and controlling its artificial alter ego, the human is allowed to let their embodied mind unify with sound, eventually learning to interact with music. The machine's learning is deliberately left indeterminate to emphasize the human learning that machine learning could enable toward sound and music, rather than the opposite, as contemporary AI applications are often framed.

ægo results from a research-creation project built around two machine learning models: an audio variational autoencoder (VAE) and the Co-Explorer.
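As context for the first of these models, here is a minimal, generic sketch of an audio VAE operating on magnitude-spectrum frames. It is not the project's actual architecture; the layer sizes, the frame representation, and all names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class AudioVAE(nn.Module):
    """Generic spectrum-frame VAE (illustrative sketch): encodes a
    magnitude-spectrum frame into a low-dimensional latent z and
    decodes z back to a spectrum frame for resynthesis."""

    def __init__(self, n_bins=512, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_bins), nn.Softplus(),  # non-negative magnitudes
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # reparameterization trick: sample z while keeping gradients
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar
```

In such a setup, the decoder is what turns each latent point visited by an exploring agent into sound, so the latent space doubles as the "sound space" the performer and machine traverse together.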

Year
2019

Credits
The project was developed in collaboration with IRCAM, in the context of the Sorbonne Université Doctorate in Computer Science.

Event/Publication
Performance @ Friche La Belle de Mai, Marseille, France (October 2019)
Music program at CMMR 2019
