WAV2PIX: SPEECH-CONDITIONED FACE GENERATION USING GENERATIVE ADVERSARIAL NETWORKS

Authors: 

Amanda Duarte, Francisco Roldan, Miquel Tubau, Janna Escur, Santiago Pascual, Amaia Salvador, Eva Mohedano, Kevin McGuinness, Jordi Torres, Xavier Giro-i-Nieto

Publication Type: 
Refereed Original Article
Abstract: 
Speech is a rich biometric signal that contains information about the identity, gender and emotional state of the speaker. In this work, we explore its potential to generate face images of a speaker by conditioning a Generative Adversarial Network (GAN) with raw speech input. We propose a deep neural network that is trained from scratch in an end-to-end fashion, generating a face directly from the raw speech waveform without any additional identity information (e.g., a reference image or one-hot encoding). Our model is trained in a self-supervised manner by exploiting the audio and visual signals that are naturally aligned in videos. To enable training from video data, we present a novel dataset collected for this work, with high-quality videos of YouTubers with notable expressiveness in both the speech and visual signals.
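To make the conditioning idea concrete, below is a minimal, hypothetical sketch of the pipeline the abstract describes: a speech encoder maps a raw waveform to an embedding, which a GAN generator upsamples into a face image. This is not the authors' released code; all module names, layer choices, and sizes (128-dimensional embedding, 64x64 output) are illustrative assumptions, and the adversarial training loop is omitted.

```python
# Hypothetical sketch of speech-conditioned face generation (not the
# authors' implementation). Layer sizes and names are assumptions.
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Strided 1-D convolutions map a raw waveform to a fixed-size embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=31, stride=4, padding=15), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, kernel_size=31, stride=4, padding=15), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 128, kernel_size=31, stride=4, padding=15), nn.LeakyReLU(0.2),
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, wav):              # wav: (batch, samples)
        h = self.conv(wav.unsqueeze(1))  # (batch, 128, frames)
        h = h.mean(dim=2)                # global average pool over time
        return self.proj(h)              # (batch, embed_dim)

class Generator(nn.Module):
    """Transposed convolutions upsample the speech embedding to a 64x64 RGB face."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(embed_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),        # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),          # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),           # 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                # 64x64
        )

    def forward(self, z):                # z: (batch, embed_dim)
        return self.net(z.view(z.size(0), -1, 1, 1))

# Usage: one second of 16 kHz speech -> one generated face image per waveform.
encoder, generator = SpeechEncoder(), Generator()
wav = torch.randn(2, 16000)              # dummy raw waveforms
faces = generator(encoder(wav))          # (2, 3, 64, 64)
```

Because the speech embedding replaces the usual random noise vector as the generator's only input, the face is conditioned entirely on the waveform, with no reference image or identity code, matching the end-to-end setup the abstract describes.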
Digital Object Identifer (DOI): 
10.NA
Publication Status: 
Published
Date Accepted for Publication: 
Monday, 25 March, 2019
Publication Date: 
25/03/2019
Conference: 
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Research Group: 
Institution: 
Dublin City University (DCU)
Open access repository: 
Yes