Formulating Queries for Collecting Training Examples in Visual Concept Classification
Refereed Conference Proceeding
Video content can be automatically analysed and indexed using trained classifiers which map low-level features to semantic concepts. Such classifiers need training data consisting of sets of images that contain those concepts, and recent work has shown that this training data can be gathered using text-based search of image databases on the internet. Formulating the text queries that locate these training images is the challenge we address here. In this paper we present preliminary results on TRECVid data for concept classification using automatically crawled images as training data, and we compare the results with those obtained from manually annotated training sets.
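As a minimal sketch of the pipeline the abstract describes, assuming a simple colour-histogram feature and a linear SVM (both illustrative stand-ins, not the paper's actual crawler, features, or classifier): images returned by a text query act as positive training examples, images from unrelated queries act as negatives, low-level features are extracted, and a binary classifier is trained per concept.

```python
# Hypothetical sketch of per-concept classifier training from crawled images.
# The feature extractor, classifier choice, and synthetic "crawled" data are
# assumptions for illustration only.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def colour_histogram(image, bins=8):
    """Toy low-level feature: per-channel intensity histogram, L1-normalised."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

# Stand-ins for images crawled with a text query for the target concept
# (positives) and with unrelated queries (negatives).
positives = rng.integers(100, 255, size=(50, 32, 32, 3))  # brighter images
negatives = rng.integers(0, 155, size=(50, 32, 32, 3))    # darker images

X = np.array([colour_histogram(im)
              for im in np.concatenate([positives, negatives])])
y = np.array([1] * 50 + [0] * 50)

# One binary classifier per semantic concept.
clf = LinearSVC().fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

In practice the quality of such a classifier depends heavily on how the text query is formulated, since the query determines how clean the crawled positive set is, which is the problem the paper addresses.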
Workshop On Vision And Language 2014 (VL'14)
Digital Object Identifier (DOI):
Dublin City University (DCU)
Open access repository: