My Master’s thesis investigated the expressive movements and vocal imitations commonly used by humans to communicate about sounds.
Humans often rely on gestures to communicate about sounds, frequently combining them with idiosyncratic vocalizations to give their conversation partner a sense of a sound’s qualities. Inspired by speech communication studies, we set out to study the links between gestures and vocalizations in sound imitation. This embodied music cognition study would in turn inform the design of sound design tools developed in the context of the SkAT-VG European research project.
We first conducted a qualitative annotation of an audio-visual database of vocal and gestural imitations, which established our hypotheses. We then ran a controlled experiment in which we asked participants to imitate sounds that we synthesized according to those hypotheses. Finally, we performed a quantitative analysis of the audio and motion data collected. Our results suggested distinct roles for vocalizations and gestures: whereas vocalizations reproduce all features of the referent sound as faithfully as vocally possible, gestures focus on one salient feature through metaphors based on auditory-visual correspondences.
The project was developed with Guillaume Lemaitre, Frédéric Bevilacqua, Patrick Susini, and Jules Françoise, in collaboration with the PDS and ISMM groups at IRCAM, in the context of the Sorbonne Université Master’s program in Engineering Sciences.