Gestures complement the content of utterances and help listeners understand. In the field of gesture generation, the task of generating gestures from utterances has attracted attention. The dominant approach associates utterances with gestures using deep neural networks, which requires a co-speech gesture dataset for training. However, building such datasets is costly and time-consuming because it requires a reliable pose estimation system (such as motion capture) and manual adjustment. We propose an automatic method for collecting a co-speech gesture dataset from online speech videos. The method extracts diverse utterance-gesture pairs from these videos. In addition, we train a deep neural network on the collected dataset and confirm that our automatically collected dataset can serve as a supervisory signal for speech-driven gesture generation.
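The abstract does not detail the collection pipeline, but a minimal sketch of one plausible realization is shown below. It assumes MediaPipe for 2D pose estimation and pre-segmented utterance timestamps (e.g., from subtitles or a speech recognizer); the function names, the (text, start, end) tuple format, and the discard-on-tracking-failure rule are illustrative assumptions, not the authors' implementation.

```python
# A hedged sketch of extracting utterance-gesture pairs from a speech
# video. Assumes MediaPipe pose estimation and utterance timestamps
# obtained elsewhere; all helper names here are hypothetical.
import cv2
import mediapipe as mp


def extract_pose_sequence(video_path, start_sec, end_sec):
    """Return per-frame pose keypoints for one utterance span, or None."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(start_sec * fps))
    poses = []
    with mp.solutions.pose.Pose(static_image_mode=False) as estimator:
        for _ in range(int((end_sec - start_sec) * fps)):
            ok, frame = cap.read()
            if not ok:
                break
            result = estimator.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks is None:
                cap.release()
                return None  # discard spans with unreliable pose tracking
            poses.append([(lm.x, lm.y, lm.visibility)
                          for lm in result.pose_landmarks.landmark])
    cap.release()
    return poses


def build_dataset(video_path, utterances):
    """Pair each transcribed utterance with its extracted pose sequence.

    `utterances` is assumed to be a list of (text, start_sec, end_sec)
    tuples; spans where pose estimation failed are dropped, yielding
    (text, gesture) pairs usable as supervision for gesture generation.
    """
    dataset = []
    for text, start, end in utterances:
        poses = extract_pose_sequence(video_path, start, end)
        if poses is not None:
            dataset.append((text, poses))
    return dataset
```

Filtering out spans where tracking fails is one simple way such a pipeline could keep only reliable pairs without manual adjustment; the actual quality criteria used by the authors may differ.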