ægo

ægo is a performance for one human improviser and one learning machine, co-created with Axel Chemla-Romeu-Santos.

A human and a machine learn to improvise together on stage. The machine takes exploratory actions in its latent sound space, while the human gives positive or negative feedback with their hands to teach the machine where to head. The uncertainty of the machine-generated sounds, synthesized and video-projected in real time over the stage and the performer, invites the audience to reflect on this joint human-machine learning.

Crucially, the performance is directed so that the performer progressively relinquishes giving accurate feedback to the machine. Released from the obligation of teaching and controlling their artificial alter ego, the human can let their embodied mind unify with sound, eventually learning to interact with music. By purposely leaving the machine's learning indeterminate, the performance seeks to emphasize the human learning that machine learning could enable toward sound and music.

ægo results from a research-creation project built around two machine learning models: an audio variational autoencoder (VAE), and the Co-Explorer.
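For readers curious about the interaction loop, here is a minimal, hypothetical sketch of how binary hand feedback might steer exploration in a latent sound space. The class name, step rule, and parameters are illustrative assumptions, not the Co-Explorer's actual algorithm; in the performance, each latent point would be decoded by the audio VAE to synthesize sound.

```python
import numpy as np

class LatentExplorer:
    """Toy agent exploring a latent sound space under binary human feedback."""

    def __init__(self, dim=16, step_size=0.1):
        self.z = np.zeros(dim)                    # current point in latent space
        self.direction = self._random_direction(dim)
        self.step_size = step_size

    def _random_direction(self, dim):
        v = np.random.randn(dim)
        return v / np.linalg.norm(v)              # unit vector, uniform direction

    def step(self):
        """Take one exploratory step and return the new latent point."""
        self.z = self.z + self.step_size * self.direction
        return self.z

    def feedback(self, reward):
        """Positive feedback keeps the current heading; negative re-samples it."""
        if reward < 0:
            self.direction = self._random_direction(len(self.z))

# explorer = LatentExplorer()
# z = explorer.step()        # decode z with the audio VAE to synthesize sound
# explorer.feedback(+1)      # the human signals "keep going this way"
```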

Year
2019

Credits
The project was developed in collaboration with IRCAM, as part of a Sorbonne Université doctorate in Computer Science.

Event
Performance @ Friche la Belle de Mai, Marseille, France (October 2019)
Music program at CMMR 2019
