Motivated by the goal of understanding gesture expressivity, and complementary to our previous work on shape gestures and free-air gestures, we aim to pursue the idea that expressivity is a visceral capacity of the human body. To understand what makes a gesture expressive, we therefore need to consider not only its spatial placement and orientation, but also its dynamics and the mechanisms enacting them. Our approach in this project is to assess gesture expressivity through muscle sensing. In doing so, we want to propose new ways to consider expressive and visceral human-computer interaction.
The project builds on, first, our previous study on expressive variations of gestures for continuous interaction, and second, our pilot studies on muscle sensing for musical interaction. The former gave insights into the motor capacity of humans to intentionally control variations of simple shapes. The latter went further by inspecting how multimodal muscle sensing can inform on an expressive performance and can be controlled by a performer.
The project has been conducted by Baptiste Caramiaux in collaboration with Marco Donnarumma and Atau Tanaka at Goldsmiths University of London. The paper is published in ACM Transactions on Computer-Human Interaction (TOCHI) and will be presented at CHI 2015.
- B. Caramiaux, M. Donnarumma, and A. Tanaka. Understanding Gesture Expressivity through Muscle Sensing. ACM Transactions on Computer-Human Interaction (TOCHI), 21(6), 2015.
We framed our study by first giving working definitions of a gesture and of gesture expressivity.
A gesture is a dynamic movement of the body (or part of the body) that carries information in the sense of deliberate expression. We propose the term gesture expressivity to denote deliberate and meaningful variation in the execution of a gesture.
Then, building upon the insights gained from our previous studies, we designed an experiment examining the elements underlying variations of gesture power and their characterisation in bimodal muscle sensing (mechanomyography, MMG, and electromyography, EMG). For this experiment, we defined a gesture vocabulary.
We invited participants to perform these gestures while varying some of their qualities, such as the power dimension, and recorded their muscle activity through bimodal muscle sensing.
For the user, power is an ambiguous, subjective dimension that can be understood differently according to the presence or absence of haptic feedback, and can be assimilated to tension or to kinematic energy. According to the participants, 'power' also depends on the gesture performed and on the other dimensions being manipulated (e.g. speed). A quantitative analysis of the muscle sensor data provides signal features, amplitude and zero-crossings, that are useful for objectively measuring these insights from the users. The analysis first shows that participants were able to modulate muscle tension in their gestures and that this modulation can be captured through physiological sensing. Exertion by pressure is better explained via EMG signal amplitude, while dynamic variation of intensity is better captured through MMG in the frequency domain. The ability to control variations in power, then, depends on the gesture performed.
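To illustrate the kind of signal features mentioned above, here is a minimal sketch of computing frame-wise amplitude (mean absolute value) and zero-crossing counts from a 1-D muscle signal. The frame parameters and feature definitions are illustrative assumptions, not the exact analysis pipeline used in the paper:

```python
import numpy as np

def muscle_features(signal, frame_size=256, hop=128):
    """Illustrative frame-wise features for a 1-D muscle signal.

    Returns two arrays: per-frame amplitude (mean absolute value,
    a common EMG envelope feature) and per-frame zero-crossing
    counts (a simple proxy for frequency content).
    """
    amplitudes, zero_crossings = [], []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        # Amplitude: average magnitude of the frame.
        amplitudes.append(float(np.mean(np.abs(frame))))
        # Zero-crossings: number of sign changes between successive samples.
        signs = np.signbit(frame)
        zero_crossings.append(int(np.sum(signs[:-1] != signs[1:])))
    return np.array(amplitudes), np.array(zero_crossings)
```

For example, a stronger contraction would raise the amplitude feature, while a faster oscillation of the signal would raise the zero-crossing count.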