
Semantic Indexing of Wearable Camera Images: Kids’Cam Concepts


Alan Smeaton, Kevin McGuinness, Cathal Gurrin, Jiang Zhou, Noel O'Connor, Peng Wang, Brian Davis, Lucas Azevedo, Andre Freitas, Louise Signal, Moira Smith, James Stanley, Michelle Barr, Tim Chambers, Cliona Ní Mhurchu

Publication Type: 
Refereed Conference Meeting Proceeding
Abstract:
To provide content-based search over visual media such as images and video, content is typically accessed through manually or automatically assigned concepts or tags, or sometimes through image-to-image similarity, depending on the use case. While great progress has been made in recent years in automatic concept detection using machine learning, there remains a mismatch between the semantics of the concepts we can automatically detect and the semantics of the words used in, for example, a user’s query. In this paper we report on a large collection of images from wearable cameras, gathered as part of the Kids’Cam project, which has been both manually annotated from a vocabulary of 83 concepts and automatically annotated from a vocabulary of 1,000 concepts. This collection allows us to explore how language, in the form of two distinct concept vocabularies or concept spaces, one manually assigned and thus forming a ground truth, is used to represent images, in our case taken with wearable cameras. It also allows us to discuss, in general terms, mismatches of concepts in visual media that derive from mismatches of language. We report the data processing we have completed on this collection and some of our initial experiments in mapping across the two concept vocabularies.
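
The abstract does not specify how the manual and automatic vocabularies are aligned. As an illustration only, the minimal sketch below maps each manual concept to its nearest automatic concept by cosine similarity of word embeddings; the concept names, toy vectors, and similarity threshold are assumptions for demonstration, not the method or data used in the paper.

# Illustrative sketch: aligning a small manual concept vocabulary with a larger
# automatically detected one via cosine similarity of word embeddings.
# The vectors below are toy values; in practice they would come from a
# pre-trained embedding model.
import numpy as np

embeddings = {
    "fast food":    np.array([0.9, 0.1, 0.0]),
    "cheeseburger": np.array([0.8, 0.2, 0.1]),
    "playground":   np.array([0.1, 0.9, 0.2]),
    "swing":        np.array([0.2, 0.8, 0.3]),
}

manual_vocab = ["fast food", "playground"]   # ground-truth-style concepts
automatic_vocab = ["cheeseburger", "swing"]  # detector-output concepts

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.7  # illustrative cut-off for accepting a mapping

# For each manual concept, find the closest automatic concept above the threshold.
for m in manual_vocab:
    scored = [(a, cosine(embeddings[m], embeddings[a])) for a in automatic_vocab]
    best, score = max(scored, key=lambda pair: pair[1])
    if score >= THRESHOLD:
        print(f"{m!r} -> {best!r} (similarity {score:.2f})")
    else:
        print(f"{m!r} has no close match in the automatic vocabulary")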
Conference Name: 
Proceedings of ACM iV&L-MM ’16
Digital Object Identifier (DOI): 
Publication Date: 
Conference Location: 
Research Group: 
Dublin City University (DCU)
Open access repository: