This project investigated how humans communicate about sounds by means of expressive movements and vocal imitations.
We first conducted a qualitative annotation of an audio-visual database of vocal and gestural imitations, which established our hypotheses. We then ran a controlled experiment in which we asked participants to imitate sounds that we synthesized according to these hypotheses. Finally, we performed a quantitative analysis of the audio and motion data we collected.
Our results suggested different roles for vocalizations and gestures. Whereas vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, gestures focus on one salient feature through metaphors based on auditory-visual correspondences. These results informed the design of several embodied sound design tools developed in the context of the SkAT-VG European research project.
The project was developed with Guillaume Lemaitre, Frédéric Bevilacqua, Patrick Susini, and Jules Françoise, in collaboration with the PDS and ISMM groups at IRCAM, in the context of the Sorbonne Université / IRCAM Master's program in Engineering.