In the scope of the MODI project (H2020 - Creative Europe funding), there will be three talks on May 8, starting at 2pm, in the classroom on floor -2 of the Tecnopolo. These talks are open to the general public.
Stephan Jürgens - "Which corporeal dimensions can be computed in contemporary dance? Novel strategies for the visualisation of choreographic thinking in the past decade"
This talk will start with a discussion of what we mean when we use the terms "dance" and "choreography," and present a classification of contemporary dance forms, styles and creative processes. During the past decade, several international research projects have documented and examined the creative processes of invited well-known choreographers. The dance data obtained in these research contexts has been visualised in a number of innovative ways, eventually leading to the creation of hybrid art-and-science objects. Case studies will be presented, with particular focus on what corporeal data were collected, which techniques were used to capture and compute dance data, and how research results were visualised from a media art perspective.
Stephan Jürgens holds a Ph.D. in Contemporary Choreography and New Media Technologies. His research interests concentrate on designing creative strategies for live performance involving interactive systems. He has taught movement research, interdisciplinary choreography and interactive system design in many different learning environments and institutions. Stephan has collaborated on several international research projects, all of which investigated the use of recent technology in Contemporary Dance and Digital (Live) Performance. As a choreographer, Stephan has presented several works in collaboration with artists from a wide range of fields, and as a result has developed his own system of "transmedia choreography".
Jochen Feitsch - "Motion Capturing Creative Area (MOCCA-Project)"
How can dance, medical technology, film production, computer graphics and human-technology interaction be combined? The MOCCA project addresses this question by researching innovative technological interfaces around the subject of motion capturing. The project benefits from the ever-increasing availability of technologies that allow for whole-body interaction, and from the growing interest in and relevance of the subject to the community. Over the course of the project, application fields in art, culture, technology and science were addressed. In this talk, results from the last three years will be shown.
Jochen Feitsch has been part of the "Mixed Reality and Visualization" team led by Prof. Dr. Christian Geiger at the University of Applied Sciences Düsseldorf since 2014. Since 2016, he has been a researcher serving as laboratory manager and project coordinator in the field of human-technology interaction, with a focus on motion capture and the augmented human. His areas of application lie especially in performative art and interaction design. Jochen completed his Bachelor of Arts in Multimedia and Communication at Ansbach University of Applied Sciences, specializing in audio and computer science. This was followed by the consecutive Master of Science in Media Informatics at the University of Applied Sciences Düsseldorf. His research interests in recent years have been the use of various interaction techniques in audio programming (especially vocal synthesis) in the performative context, the development of interactive performances in the artistic context, and audience participation. He is currently preparing for his doctorate. His PhD project aims to extend the expressive possibilities of performative artists and to increase audience participation in such performances through the use of modern technologies.
William Primett - "Coupling Biomedical Sensors with Choreographic Language"
The expressive scope recognized in human movement offers strong potential for facilitating rich interactive experiences. New sensory devices have made it possible to capture our physical gestures; however, the detection and measurement of particular kinematic qualities cannot be intuitively hard-coded. Years of research within the Movement Computing community have proposed machine learning approaches to attach semantic meaning to digital representations of human movement. In this talk, we discuss the use of embodied sensors to record bodily experiences, and how this data can then be used to develop new computational models for autonomous movement analysis and generative multimedia applications.
William Primett is a Ph.D. researcher at Plux Wireless Biosignals, representing the partnering research group AffecTech. During his four years of study at Goldsmiths, University of London, William was engaged in designing new interfaces for immersive performance environments, often collaborating with musicians and artists. His research aims to promote the use of physiological sensors and biofeedback for designing expressive interactive systems.