AI Ethics with Bradford Saad
Overview
Open for registration
- Date: 25 March 2025, 13:15–14:15
- Location: Zoom; register to receive the link
- Language: English
- Last sign-up date: 25 March 2025

Abstract:
A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humans and mistreating AI systems that merit moral consideration in their own right.
This talk will argue that these dangers interact: if we create AI systems that both require alignment and merit moral consideration, then avoiding the two dangers simultaneously would be extremely challenging. The talk is based on a joint paper with Adam Bradley.
Bio:
Bradford Saad is a senior research fellow in philosophy at Oxford's Global Priorities Institute. His current research is focused on AI moral patients and catastrophic risks.