
Fundamentals of Multi-armed Bandit Algorithms for Recommender Systems. 01/11/2024 - 31/10/2026

Abstract

Recommender systems powered by multi-armed bandit algorithms are becoming increasingly prevalent. However, their evaluation methods often do not accurately reflect the dynamic nature of real-world recommendation scenarios and neglect aspects beyond predictive accuracy, such as fairness, diversity, and long-term user satisfaction. This hinders the development of algorithms that perform well in practice, affecting sectors such as news, e-commerce, and streaming services. In this project, we address these critical issues by creating a comprehensive framework for evaluating bandit algorithms across diverse recommendation scenarios and by developing novel algorithms to address current shortcomings. We propose to assess state-of-the-art bandit algorithms, identify their limitations, and introduce new methods focusing on non-stationarity, slate recommendations, and ethical concerns. The project will leverage real-world datasets and a novel simulation environment to ensure realistic and reproducible results. Our research will provide foundational tools and insights, enabling the development of more effective and responsible recommender systems that improve user engagement and satisfaction.
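To make the core technique concrete, the following is a minimal sketch of a multi-armed bandit of the kind studied in this project: a standard UCB1 policy choosing among items (arms) in a toy simulated environment. The environment, the `ucb1` function, and the `click_probs` values are illustrative assumptions for exposition, not project code; real recommendation settings would add non-stationarity, slates, and richer feedback.

```python
import math
import random

def ucb1(num_arms, reward_fn, horizon, seed=0):
    """Standard UCB1: pull each arm once, then pick the arm maximizing
    empirical mean reward plus an exploration bonus sqrt(2 ln t / n_a)."""
    rng = random.Random(seed)
    counts = [0] * num_arms      # pulls per arm
    sums = [0.0] * num_arms      # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= num_arms:
            arm = t - 1          # initialisation: try every arm once
        else:
            arm = max(
                range(num_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        counts[arm] += 1
        sums[arm] += reward_fn(arm, rng)
    return counts

# Toy "recommendation" environment (assumed): each arm is an item with a
# fixed click probability, i.e. stationary Bernoulli rewards.
click_probs = [0.1, 0.3, 0.7]
counts = ucb1(
    num_arms=3,
    reward_fn=lambda a, rng: 1.0 if rng.random() < click_probs[a] else 0.0,
    horizon=2000,
)
```

Over enough rounds, the policy concentrates its pulls on the highest-payoff item while still occasionally exploring the others, which is exactly the exploration-exploitation trade-off that evaluation frameworks for bandit-based recommenders must capture.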

Researcher(s)

Research team(s)

Project type(s)

  • Research Project