Through a collaboration with researcher Andrea Giomi (Performance Lab, Université Grenoble-Alpes) and professional dancer Loredana Tarnovschi, this project aims to explore means for sonic augmentation of dance gestures (especially hand-based ones) using mass-interaction physical models and a multilayered mapping approach.
The study questions the pertinence of physical models and structured gesture-sound mappings as a means of movement sonification that provides proprioceptive and sensorimotor feedback, and examines the expressive qualities of such a system in the context of interactive dance-music systems.
A first week of residency took place at the Maison de la Création et de l’Innovation in September 2019, allowing us to co-construct the mapping model (following a bottom-up strategy informed by the dancer’s feedback and impressions) and to experiment with several physical models, in order to find physical functions suited to qualitative movement sonification. Below, a short video demonstrates the bottom-up construction of a mapping from gestural features to physical-model “sound actions”, using a Myo armband and a Bitalino device as hand and arm gesture sensors, and a bowed string physical model for sonification. These preliminary results have led to a publication at NIME 2020, and further residencies are currently under discussion to continue this research.
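To give a concrete idea of what a mass-interaction physical model is, the sketch below simulates a toy "string" as a chain of point masses coupled by linear spring-damper interactions, using the position-based update scheme common in this family of synthesis techniques. This is a minimal illustration, not the model actually used in the residency; all names and parameter values are ours and purely illustrative.

```python
# A minimal mass-interaction "string": point masses in a chain,
# coupled by linear spring-damper interactions, fixed at both ends.
# All parameters are illustrative, not taken from the project.

N = 20          # number of moving masses
K = 0.1         # spring stiffness between neighbours
Z = 0.001       # damping of each interaction
M = 1.0         # mass of each point

# State: current and previous displacements (a position-Verlet-style
# scheme, typical of mass-interaction sound synthesis).
x = [0.0] * (N + 2)       # indices 0 and N+1 are fixed boundary points
x_prev = list(x)

x[5] = 1.0                # "pluck": displace one mass
x_prev[5] = 1.0

def step(x, x_prev):
    """Advance the network by one sample; return (new state, old state)."""
    x_new = list(x)
    for i in range(1, N + 1):
        # Spring force from both neighbours, plus interaction damping
        # proportional to relative velocity (finite differences).
        f = (K * (x[i - 1] - 2 * x[i] + x[i + 1])
             + Z * ((x[i - 1] - x_prev[i - 1]) - (x[i] - x_prev[i]))
             + Z * ((x[i + 1] - x_prev[i + 1]) - (x[i] - x_prev[i])))
        # Verlet update: x(n+1) = 2 x(n) - x(n-1) + f/M
        x_new[i] = 2 * x[i] - x_prev[i] + f / M
    return x_new, list(x)

# "Listening" to one mass: its displacement over time is the raw signal.
signal = []
for _ in range(200):
    x, x_prev = step(x, x_prev)
    signal.append(x[10])
```

In a sonification context, gestural input would not just trigger the pluck but continuously drive excitation parameters (e.g. a bowing interaction instead of an initial displacement), which is where the mapping layer comes in.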
Supporting paper (Abstract):
Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance - NIME 2020, Birmingham, UK
This paper presents ongoing research on interactive sonification of hand gestures in dance performance. For this purpose, we propose a conceptual framework and a multilayered mapping model derived from an experimental case study. The goal of this research is twofold. On the one hand, we aim to determine action-based perceptual invariants that allow us to establish pertinent relations between gesture qualities and sound features. On the other hand, we are interested in analysing how an interactive model-based sonification can afford useful and effective feedback for dance practitioners. From this point of view, our research explicitly addresses the convergence between the scientific understanding provided by the field of movement sonification and the traditional know-how developed over the years within the digital instrument and interaction design communities. A key component of our study is the combination of physically-based sound synthesis and motion feature analysis. This approach has proven effective in providing insights for devising novel sonification models for artistic and scientific purposes, and for developing a collaborative platform involving the designer, the musician and the performer.
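As a rough illustration of one layer of such a multilayered mapping, the sketch below reduces a raw EMG-like sensor stream to a smoothed activation envelope and maps it onto a bounded "sound action" parameter (here labelled bow pressure). This is a hypothetical, simplified example: the function names, thresholds, and the choice of bow pressure as target are our own assumptions, not the paper's actual mapping.

```python
# Hypothetical middle layer of a multilayered mapping: a raw sensor
# stream (e.g. one EMG channel) is reduced to a smoothed activation
# envelope, which then drives a bounded synthesis parameter.

def envelope(samples, alpha=0.05):
    """Full-wave rectify, then one-pole low-pass, an EMG-like stream."""
    env, out = 0.0, []
    for s in samples:
        env += alpha * (abs(s) - env)   # leaky integrator
        out.append(env)
    return out

def to_bow_pressure(env_value, threshold=0.1, max_pressure=1.0):
    """Map activation above a noise threshold to a parameter in [0, 1]."""
    scaled = max(0.0, env_value - threshold) / (1.0 - threshold)
    return min(max_pressure, scaled)

# Example: a burst of 'muscle activity' in an otherwise quiet stream.
stream = [0.0] * 50 + [0.8, -0.9, 0.7, -0.8] * 25 + [0.0] * 50
pressures = [to_bow_pressure(e) for e in envelope(stream)]
```

The thresholding step reflects a practical concern mentioned in this line of work: small involuntary muscle activity should not produce sound, so only deliberate effort crosses into the audible range of the parameter.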