News and Events
Latest news
Policy Brief: Strengthening the roles of African Science Granting Councils as boundary organisations for societal transformation
Information Territory and Data Terrains: an examination of the Anti-Locust Research Centre
New papers on interdisciplinary cyber security
CIM event at Newspeak House: Lessons from everyday encounters with AI innovation
Research talk by Prof Simone Stumpf, University of Glasgow - "Why we can’t have nice things – the important role of Responsible AI"
Virtual CIM PG open day session - 2nd Dec
AI innovation missing the mark for local communities, University of Warwick report warns
2025 FinGeo Doctoral Dissertation Prize Winner: Dr. Andra Sonea
LIVE PODCAST: Media and the Power of Knowledge w/ Prof. Steve Fuller
Our upcoming events
No items to show
Previous events
Newsletter
Research talk by Prof Simone Stumpf, University of Glasgow - "Why we can’t have nice things – the important role of Responsible AI"
Join us for a research talk by Prof Simone Stumpf from the University of Glasgow, titled "Why we can’t have nice things – the important role of Responsible AI", on Tuesday 13 January 2026, 4pm–5:30pm, in the Social Sciences Building, Room S0.13.
Title:
Why we can’t have nice things – the important role of Responsible AI
Abstract:
Many AI technologies are now being integrated into everyday life. But how can we ensure that AI is ‘responsible’? In this talk, I will review current efforts to develop responsible AI, focusing on transparency, fairness and auditing, and offer suggestions on how we can improve approaches in this area.
Bio:
Simone Stumpf is Professor of Responsible and Interactive AI in the School of Computing Science at the University of Glasgow. She has a long-standing research focus on user interactions with AI systems. Her research includes self-management systems for people living with long-term conditions, developing teachable AI systems for people without a technical background, and investigating Responsible AI development, including AI fairness. Her work has helped shape the field of Explainable AI (XAI) through the Explanatory Debugging approach to interactive machine learning, providing design principles for better human–computer interaction and investigating the effects of greater transparency. The prime aim of her work is to empower all users to use AI effectively.
https://www.gla.ac.uk/schools/computing/staff/simonestumpf/