Semantic Indexing of Wearable Camera Images: Kids’Cam Concepts

Authors: 

Alan Smeaton, Kevin McGuinness, Cathal Gurrin, Jiang Zhou, Noel O'Connor, Peng Wang, Brian Davis, Lucas Azevedo, Andre Freitas, Louise Signal, Moira Smith, James Stanley, Michelle Barr, Tim Chambers, Cliona Ní Mhurchu

Publication Type: 
Refereed Conference Meeting Proceeding
Abstract: 
In order to provide content-based search over image media, including both images and video, such media are typically accessed through manually or automatically assigned concepts or tags, or in some use cases through image-to-image similarity. While great progress has been made in recent years in automatic concept detection using machine learning, there remains a mismatch between the semantics of the concepts we can automatically detect and the semantics of the words used in, for example, a user’s query. In this paper we report on a large collection of images from wearable cameras gathered as part of the Kids’Cam project, which has been both manually annotated from a vocabulary of 83 concepts and automatically annotated from a vocabulary of 1,000 concepts. This collection allows us to explore how language, in the form of two distinct concept vocabularies or spaces, one manually assigned and thus forming a ground truth, is used to represent images, in our case taken using wearable cameras. It also allows us to discuss, in general terms, the concept mismatches that arise in visual media as a result of language mismatches. We report the data processing we have completed on this collection and some of our initial experiments in mapping between the two concept vocabularies.
Conference Name: 
iV&L-MM’16
Proceedings: 
Proceedings of the 2016 ACM Workshop on Vision and Language Integration Meets Multimedia Fusion (iV&L-MM’16)
Digital Object Identifier (DOI): 
10.1145/2983563.2983566
Publication Date: 
16/10/2016
Conference Location: 
The Netherlands
Institution: 
Dublin City University (DCU)
Open access repository: 
Yes