
Scanpath and Saliency Prediction on 360 Degree Images

Authors: 

Marc Assens, Xavier Giro-i-Nieto, Kevin McGuinness, Noel O'Connor

Publication Type: 
Refereed Original Article
Abstract: 
We introduce deep neural networks for scanpath and saliency prediction trained on 360-degree images. The scanpath prediction model, called SaltiNet, is based on a novel temporal-aware representation of saliency information named the saliency volume. The first part of the network consists of a model trained to generate saliency volumes, whose parameters are fitted by back-propagation using a binary cross entropy (BCE) loss over downsampled versions of the saliency volumes. Sampling strategies over these volumes are used to generate scanpaths over the 360-degree images. Our experiments show the advantages of using saliency volumes and how they can be used for related tasks. We also show how a similar architecture achieves state-of-the-art performance on the related task of saliency map prediction. Our source code and trained models are available at https://github.com/massens/saliency-360salient-2017.
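The abstract describes generating scanpaths by sampling fixations from a saliency volume, a temporal stack of saliency maps. A minimal sketch of that sampling idea, assuming a `(T, H, W)` volume and one fixation drawn per temporal slice (the exact shapes and sampling rule here are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

def sample_scanpath(volume, rng=None):
    """Sample (t, y, x) fixations, one per temporal slice of the volume.

    `volume` is assumed to be a non-negative array of shape (T, H, W);
    each slice is normalized into a probability map before sampling.
    """
    rng = np.random.default_rng(rng)
    T, H, W = volume.shape
    scanpath = []
    for t in range(T):
        probs = volume[t].ravel()
        probs = probs / probs.sum()        # normalize slice to a distribution
        idx = rng.choice(H * W, p=probs)   # draw one fixation location
        scanpath.append((t, idx // W, idx % W))
    return scanpath

# Toy example: a 4-slice volume over an 8x16 grid
volume = np.random.rand(4, 8, 16) + 1e-6
path = sample_scanpath(volume, rng=0)
print(len(path))  # one fixation per slice -> 4
```

More sophisticated strategies (e.g. conditioning each fixation on the previous one) would follow the same pattern of turning each slice into a sampling distribution.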
Digital Object Identifier (DOI): 
10.1016/j.image.2018.06.006
Publication Status: 
Published
Date Accepted for Publication: 
Wednesday, 13 June, 2018
Publication Date: 
Saturday, 23 June, 2018
Journal: 
Signal Processing: Image Communication
Institution: 
Dublin City University (DCU)
Open access repository: 
Yes