Abstract
Machine learning algorithms have proven highly effective at analyzing large amounts of data and identifying complex patterns and relationships. One application of machine learning that has received significant attention in recent years is recommender systems: algorithms that analyze user behavior and other data to suggest items or content a user may be interested in. However, these systems may unintentionally retain sensitive, outdated, or faulty information, posing a risk to user privacy, system security, and usability. In this research proposal, we aim to address this challenge by investigating methods for machine "unlearning", which allow information to be efficiently "forgotten" or "unlearned" by machine learning models. The main objective of this proposal is to develop the foundation for future machine unlearning methods. We first evaluate current unlearning methods and explore novel adversarial attacks on their verifiability, efficiency, and accuracy to gain new insights and develop the theory of unlearning. Using these insights, we seek to create novel unlearning methods that are verifiable, efficient, and do not lead to unnecessary accuracy degradation. Through this research, we aim to make significant contributions to the theoretical foundations of machine unlearning while also developing unlearning methods that can be applied to real-world problems.