27 June, 2023
Marcel Heisler


Making an Android Robot Head Talk

We are happy to share that our PhD student Marcel Heisler got his recent work on automated lip-sync for android robots accepted at the IEEE RO-MAN conference. The paper will be presented in Busan, Korea, 28th-31st August 2023.


We present two approaches to animate an android robot head according to audio speech input, both adapted from recent machine-learning-based works in computer graphics animation. More concretely, we implemented a viseme-based and a mesh-based approach on our robot. After a subjective comparison, we conduct a speech-reading study to evaluate our preferred approach, the mesh-based one. The results show that, on average, intelligibility is not increased by the visual cues provided through the robot head in comparison to noisy audio alone. This underlines the importance of carefully designing and controlling the facial co-speech movements of talking android heads.
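To give a rough idea of what a viseme-based approach involves (this is an illustrative sketch, not the paper's implementation): each phoneme in time-aligned speech is mapped to a viseme, i.e. a canonical mouth shape, and the robot's facial actuators are driven toward per-viseme target poses. The phoneme set, viseme groupings, and actuator parameters below are all assumptions for illustration.

```python
# Hypothetical phoneme-to-viseme table (names and groupings are assumptions,
# not the paper's; real systems use larger phoneme inventories).
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "aa": "open", "ae": "open",
    "iy": "spread", "uw": "rounded",
    "sil": "neutral",
}

# Hypothetical per-viseme actuator targets: (jaw_open, lip_round), each in [0, 1].
VISEME_TARGETS = {
    "bilabial": (0.0, 0.3),
    "labiodental": (0.1, 0.2),
    "open": (0.9, 0.1),
    "spread": (0.3, 0.0),
    "rounded": (0.4, 0.9),
    "neutral": (0.1, 0.1),
}

def viseme_track(phonemes, frame_rate=25.0):
    """Expand time-aligned phonemes [(start, end, phoneme), ...] into a
    per-frame list of actuator target poses sampled at frame_rate (Hz)."""
    frames = []
    if not phonemes:
        return frames
    end_time = phonemes[-1][1]
    n_frames = int(round(end_time * frame_rate))
    for i in range(n_frames):
        t = i / frame_rate
        # Find the phoneme segment covering time t; default to silence.
        current = "sil"
        for start, end, ph in phonemes:
            if start <= t < end:
                current = ph
                break
        viseme = PHONEME_TO_VISEME.get(current, "neutral")
        frames.append(VISEME_TARGETS[viseme])
    return frames

# Usage: a short /p aa/ utterance sampled at 10 fps.
track = viseme_track([(0.0, 0.1, "p"), (0.1, 0.3, "aa")], frame_rate=10.0)
```

In practice the per-frame targets would also be smoothed (e.g. blended across viseme boundaries) before being sent to the robot's motor controllers, since hard switches between poses look unnatural. The mesh-based alternative evaluated in the paper instead predicts dense facial geometry from audio and maps it to the robot's actuation space.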

Authors: Marcel Heisler, Stefan Kopp, Christian Becker-Asano

Paper: https://ieeexplore.ieee.org/document/10309532