ægo

ægo is a performance for one learning machine and one human improviser, co-created with Axel Chemla-Romeu-Santos.

A human and a machine improvise together to learn to interact with music. The machine takes uncertain actions in its latent sound space, while the human communicates subjective feedback to guide the machine's trajectory. The drone-like, machine-generated sounds, synthesized and projected in real time over the stage and the performer, invite the audience to reflect on this joint human-machine learning: embodied for the human, computational for the machine.
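This interaction can be pictured, very roughly, as a feedback-guided walk through the latent space of the audio model. The sketch below only illustrates that idea and is not the actual ægo or Co-Explorer implementation; `decode_to_audio`, `give_feedback`, and all constants are placeholder assumptions.

```python
import numpy as np

LATENT_DIM = 16       # assumed latent dimensionality
STEP_SIZE = 0.1       # exploration noise per step
FEEDBACK_GAIN = 0.5   # how strongly feedback biases the next step


def decode_to_audio(z):
    """Placeholder standing in for the audio VAE decoder: latent point -> audio buffer."""
    t = np.linspace(0.0, 1.0, 2048)
    freqs = 100.0 + 50.0 * np.abs(z[:4])          # toy drone: frequencies set by the latent point
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)


def give_feedback(audio):
    """Stand-in for the performer's subjective feedback: +1 (continue) or -1 (change course)."""
    return 1 if np.abs(audio).mean() < 0.5 else -1


def interaction_loop(n_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(size=LATENT_DIM)               # current position in latent sound space
    drift = np.zeros(LATENT_DIM)                  # feedback-shaped tendency for the next step

    for _ in range(n_steps):
        step = STEP_SIZE * rng.normal(size=LATENT_DIM) + drift   # uncertain action
        z = z + step
        audio = decode_to_audio(z)                # synthesized and played back in real time
        feedback = give_feedback(audio)           # in the performance, given by the human improviser
        drift = FEEDBACK_GAIN * feedback * step   # reinforce or reverse the last move
    return z
```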

Crucially, the performance is directed so that the performer progressively relinquishes giving accurate feedback to the machine. Released from the obligation of teaching and controlling their artificial alter ego, the human is free to let their embodied mind unify with sound, eventually learning to interact with music. By purposely leaving the machine's learning indeterminate, the performance emphasizes what machine learning could enable humans to learn about sound and music, rather than the reverse, as contemporary AI applications are often framed.

ægo results from a research and creation project conducted with two machine learning models: an audio VAE and the Co-Explorer.

Year
2019

Credits
The project was developed in collaboration with IRCAM, in the context of the Sorbonne Université Doctorate in Computer Science.

Event
Performance @ Friche La Belle de Mai, Marseille, France (October 2019)
Music program at CMMR 2019
