SalGAN: Visual Saliency Prediction with Adversarial Networks

Authors: 

Junting Pan, Elisa Sayrol, Xavier Giro-i-Nieto, Cristian Canton Ferrer, Jordi Torres, Kevin McGuinness, Noel O'Connor

Publication Type: 
Refereed Conference Meeting Proceeding
Abstract: 
Due to the outstanding nature of this work, this abstract was one of five selected as a Spotlight presentation. We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples. The first stage of the network consists of a generator model whose weights are learned by back-propagation computed from a binary cross entropy (BCE) loss over downsampled versions of the saliency maps. The resulting prediction is processed by a discriminator network trained to solve a binary classification task between the saliency maps generated by the generative stage and the ground truth ones. Our experiments show how adversarial training allows reaching state-of-the-art performance across different metrics when combined with a widely-used loss function like BCE. Our results can be reproduced with the source code and trained models available at https://imatge-upc.github.io/saliency-salgan-2017/.
Conference Name: 
CVPR Scene Understanding Workshop (SUNw)
Proceedings: 
Proceedings of CVPR SUNw
Digital Object Identifier (DOI): 
10.na
Publication Date: 
26/07/2017
Conference Location: 
United States of America
Research Group: 
Institution: 
Dublin City University (DCU)
Open access repository: 
Yes