BEARS: Towards an Evaluation Framework for Bandit-based Interactive Recommender Systems

Authors: 
Andrea Barraza-Urbina, Georgia Koutrika, Mathieu d'Aquin, Conor Hayes
Publication Type: 
Refereed Conference Meeting Proceeding
Abstract: 
Recommender Systems (RS) deployed in fast-paced dynamic scenarios must quickly learn to adapt in response to user evaluative feedback. In these settings, the RS faces an online learning problem where each decision should optimize two competing goals: gather new information about users and optimally serve users according to acquired knowledge. Related works commonly address this exploration-exploitation trade-off by proposing bandit-based RS. However, evaluating bandit-based RS in an offline interactive environment remains an open challenge. This paper presents BEARS, an evaluation framework that allows users to easily test bandit-based RS solutions. BEARS aims to support reproducible offline evaluations by providing simple building blocks for constructing experiments in a shared platform. Moreover, BEARS can be used to share benchmark problem settings (Environments) and reusable implementations of baseline solution approaches (RS Agents).
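
To illustrate the exploration-exploitation trade-off and the Environment/RS Agent split the abstract describes, here is a minimal sketch in Python of an epsilon-greedy bandit agent interacting with a simulated environment. This is not BEARS code and does not reflect its actual API; all names (Environment, EpsilonGreedyAgent, feedback, recommend, update) are hypothetical, chosen only for illustration.

# A minimal sketch (not BEARS's actual API) of the Agent/Environment split:
# an epsilon-greedy bandit agent balancing exploring new items against
# exploiting the best-known one. All names here are hypothetical.
import random


class Environment:
    """Simulated users: each item has a fixed click probability."""

    def __init__(self, click_probs):
        self.click_probs = click_probs

    def feedback(self, item):
        # Return 1 (click) with the item's probability, else 0 (no click).
        return 1 if random.random() < self.click_probs[item] else 0


class EpsilonGreedyAgent:
    """Recommends the best-known item, but explores at rate epsilon."""

    def __init__(self, n_items, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_items     # times each item was recommended
        self.rewards = [0.0] * n_items  # cumulative feedback per item

    def recommend(self):
        if random.random() < self.epsilon:            # explore
            return random.randrange(len(self.counts))
        means = [r / c if c else 0.0
                 for r, c in zip(self.rewards, self.counts)]
        return max(range(len(means)), key=means.__getitem__)  # exploit

    def update(self, item, reward):
        # Incorporate the user's evaluative feedback for the shown item.
        self.counts[item] += 1
        self.rewards[item] += reward


env = Environment(click_probs=[0.05, 0.20, 0.10])
agent = EpsilonGreedyAgent(n_items=3)
total = 0
for _ in range(10_000):
    item = agent.recommend()
    reward = env.feedback(item)
    agent.update(item, reward)
    total += reward
print(f"Cumulative clicks: {total}")

In an offline evaluation of this kind, the Environment stands in for logged or simulated user behavior, so the same Agent implementation can be re-run reproducibly against shared benchmark settings.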
Conference Name: 
Reveal Workshop: Offline evaluation for recommender systems at 2018 ACM Recommender Systems Conference
Digital Object Identifier (DOI): 
10.XXX
Publication Date: 
07/10/2018
Conference Location: 
Canada
Institution: 
National University of Ireland, Galway (NUIG)
Open access repository: 
Yes