Formulating Queries for Collecting Training Examples in Visual Concept Classification

Authors: 
Publication Type: 
Refereed Conference Meeting Proceeding
Abstract: 
Video content can be automatically analysed and indexed using trained classifiers that map low-level features to semantic concepts. Such classifiers require training data in the form of sets of images containing those concepts, and it has recently been shown that this training data can be gathered by issuing text-based queries to image databases on the internet. Formulating the text queries that locate these training images is the challenge we address here. In this paper we present preliminary results on TRECVid data for concept classification using automatically crawled images as training data, and we compare them with results obtained from manually annotated training sets.
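As a rough illustration of the approach the abstract describes (not the authors' actual implementation), the sketch below formulates text queries for a target concept, computes a simple low-level feature from crawled images, and trains a linear classifier mapping those features to the concept. The query heuristics, directory layout, colour-histogram features and LinearSVC choice are all assumptions made for this example.

```python
"""Illustrative sketch only: per-concept classifier trained on web-crawled
images, in the spirit of the abstract. Query heuristics, directory layout
and features are assumptions for this example, not taken from the paper."""
from pathlib import Path
import numpy as np
from PIL import Image
from sklearn.svm import LinearSVC

def formulate_queries(concept: str) -> list[str]:
    """Build text queries intended to retrieve example images of a concept.
    A simple heuristic: the concept term alone, plus the term combined with
    generic context words (an assumption, not the paper's method)."""
    context_terms = ["photo", "outdoor", "scene"]
    return [concept] + [f"{concept} {ctx}" for ctx in context_terms]

def colour_histogram(path: Path, bins: int = 8) -> np.ndarray:
    """Very simple low-level feature: a normalised per-channel colour histogram."""
    img = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    feat = np.concatenate(hist).astype(float)
    return feat / feat.sum()

def train_concept_classifier(pos_dir: Path, neg_dir: Path) -> LinearSVC:
    """Map low-level features to a semantic concept with a linear SVM.
    pos_dir holds images crawled with the formulated queries; neg_dir holds
    images of other concepts (the directory layout is assumed)."""
    pos = [colour_histogram(p) for p in pos_dir.glob("*.jpg")]
    neg = [colour_histogram(p) for p in neg_dir.glob("*.jpg")]
    X = np.vstack(pos + neg)
    y = np.array([1] * len(pos) + [0] * len(neg))
    return LinearSVC().fit(X, y)

if __name__ == "__main__":
    # Queries that would be sent to an image search engine to crawl training images.
    print(formulate_queries("airplane"))
    # clf = train_concept_classifier(Path("crawled/airplane"), Path("crawled/negative"))
```

The fetching of images for each query is left to whatever image search API is available; the paper's actual features and classifier will differ from this minimal placeholder.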
Conference Name: 
Workshop on Vision and Language 2014 (VL'14)
Proceedings: 
VL'14
Digital Object Identifier (DOI): 
N/A
Publication Date: 
03/08/2014
Conference Location: 
Ireland
Research Group: 
Institution: 
Dublin City University (DCU)
Open access repository: 
Yes