A two-part workshop series on gestural sound topics including acquisition, analysis, mapping, and real-time sonification. Here, gestural sound refers to spatialized fields of sound that are shaped in real time by the movement and activities of inhabitants in an environment.
In part one, we will give an overview of the field and present a selection of related literature. We will introduce the Topological Media Lab's approach to sonifying continuous, unanticipated movement that may be improvised freely by participants in a conditioned environment. We will present devices and techniques useful for gesture acquisition and feature extraction, such as piezoelectric microphones, wireless accelerometers, and camera tracking.
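To make the feature-extraction step concrete, here is a minimal sketch, not drawn from the workshop materials, of one common approach to reducing a raw 3-axis accelerometer stream to a single activity feature: take the sample magnitude and smooth it with a one-pole (exponential moving average) filter. All function names and the smoothing constant are illustrative assumptions.

```python
import math

def accel_magnitude(x, y, z):
    """Magnitude of one 3-axis accelerometer sample."""
    return math.sqrt(x * x + y * y + z * z)

def energy_envelope(samples, alpha=0.1):
    """Smooth per-sample magnitudes into an activity envelope.

    samples: iterable of (x, y, z) tuples.
    alpha:   smoothing coefficient in (0, 1]; smaller = smoother.
    Returns a list of envelope values, one per input sample.
    """
    env = 0.0
    out = []
    for (x, y, z) in samples:
        mag = accel_magnitude(x, y, z)
        # One-pole lowpass: blend the new magnitude into the running value.
        env = alpha * mag + (1.0 - alpha) * env
        out.append(env)
    return out
```

A continuous envelope like this is a typical starting point because it tracks the overall intensity of movement without committing to any particular discrete gesture vocabulary.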
In part two, we will work through a series of practical, real-world case studies. Together we will investigate strategies for mapping real-time gestural data to sonic parameters, using components of the Topological Media Lab's custom gesture-sound Max/MSP software library. Participants will learn how to map sensed activity to sonic parameters through an array of synthesis techniques including granular synthesis, concatenative synthesis, physical modeling, filter models, and algorithmic processes.
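As a rough illustration of the mapping step, and not of the lab's actual Max/MSP library, the sketch below maps a normalized gesture-energy feature onto a few hypothetical granular-synthesis parameters via clamped linear scaling, so that more energetic movement yields shorter, denser grains. The parameter names and ranges are invented for the example.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def gesture_to_grain_params(energy):
    """Map a gesture-energy feature in [0, 1] to granular parameters.

    The parameter set here (grain duration, grain density, pitch
    jitter) is a hypothetical example of a gesture-to-sound mapping.
    """
    return {
        # Energetic movement -> shorter grains.
        "grain_dur_ms": scale(energy, 0.0, 1.0, 200.0, 20.0),
        # Energetic movement -> more grains per second.
        "density_hz": scale(energy, 0.0, 1.0, 5.0, 80.0),
        # Energetic movement -> wider random transposition.
        "pitch_jitter": scale(energy, 0.0, 1.0, 0.0, 0.5),
    }
```

In practice such scaled values would be sent on to a synthesis engine (for example, via OSC messages to a Max/MSP patch); keeping the mapping layer separate from synthesis is one common design choice because it lets the same gestural feature drive different synthesis techniques.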