Monday, 14 October 2024 | 14:00 - 15:00 CET
Emotional AI is quickly becoming a reality: advanced models are being designed to act as friends, caretakers, and even therapists, and are claimed to possess emotional intelligence. But can these AI models genuinely feel emotions, or are we simply witnessing sophisticated simulations?
In this lecture, I will explore emotional AI, blending insights from philosophy and cognitive science. First, I’ll argue that reinforcement learning could lead to AI that experiences real fear. I will then argue that instilling fear in bots is a bad idea, posing real and politically significant risks, including bias and harm to vulnerable groups. This isn’t just a future concern; we’re already seeing the damaging effects of fearful technology today. Finally, I’ll explore whether expanding the emotional spectrum of AI beyond fear could be a safer and more ethical path.
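To make the reinforcement-learning claim concrete, here is a minimal, purely illustrative sketch (not part of the lecture materials; the grid world, reward values, and hyperparameters are all invented assumptions). A tabular Q-learning agent is penalised for entering one cell and rewarded for reaching another; after training, its learned values steer it away from the harmful cell, the kind of learned avoidance behaviour that arguments about "real fear" in RL systems typically build on.

```python
# Illustrative sketch only: a tabular Q-learning agent on a tiny 1-D grid
# world where one cell delivers a strongly negative reward ("pain") and
# another a small positive reward. All states, rewards, and parameters
# are hypothetical choices for this example.
import random

N_STATES = 5            # cells 0..4; cell 0 is harmful, cell 4 is rewarding
ACTIONS = (-1, +1)      # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == 0:
        return nxt, -10.0, True   # aversive outcome
    if nxt == N_STATES - 1:
        return nxt, +1.0, True    # mildly appetitive outcome
    return nxt, 0.0, False

# Q-table: estimated long-run reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(2000):                       # training episodes
    state, done = 2, False                  # start in the middle cell
    while not done:
        if random.random() < EPSILON:       # occasional exploration
            action = random.choice(ACTIONS)
        else:                               # otherwise act greedily
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned values near the harmful cell are strongly negative, so the
# greedy policy avoids it: behaviourally, the agent "shies away".
for s in range(1, N_STATES - 1):
    prefer = max(ACTIONS, key=lambda a: Q[(s, a)])
    print(f"state {s}: Q(left)={Q[(s, -1)]:+.2f}  Q(right)={Q[(s, +1)]:+.2f}"
          f"  -> moves {'right' if prefer == 1 else 'left'}")
```

Whether such learned avoidance amounts to genuinely *feeling* fear, rather than merely simulating it, is precisely the philosophical question the lecture addresses.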
- Language: English
- Method of delivery: Virtual
- Lecturer: Dr. Kris Goffin, whose research focuses on aesthetics, philosophy of mind, and philosophy of cognitive science (including AI). His current work centres on bias in AI applications: he holds a YUFE-funded postdoctoral position at Maastricht University on the philosophy of AI, specifically on bias and stereotyping in AI, supervised by Katleen Gabriels (UMaastricht) and Katrien Schaubroeck (UAntwerpen).