My PhD thesis introduced the notion of machine expression to describe music interactions created by machine learning.
Machine expression is a model for music interaction rooted in embodied music cognition. It pragmatically addresses the fact that humans may perceive expression in machines, regardless of the machines’ own ability to express themselves or be creative. While the notion of expression is not new to music research, my aim was to create an alternative discourse on machine learning applied to music, deconstructing the mainstream concept of artificial creativity by making it explicitly relational and situated within a network of expressive human and non-human bodies.
In my thesis, machine expression was identified as a prominent feature of several music dispositifs designed with machine learning, spanning practices such as motion-sound mapping, sonic exploration, synthesis exploration, and collective musical interaction.
My current research seeks to extend this musical notion to a broader range of art and design dispositifs.