Expanding the conversation: Defining AI safety in practice
08:30 – 09:30
Registration. Please have your Eventbrite QR code ready for onsite check-in.
09:30 – 11:00
Defining AI safety
As the UK government convenes a Summit on ‘AI safety’, this session will explore what the term means, what we can learn from existing safety-based governance, and how such systems secure trust in critical technologies, industries, and supply chains through holistic management of a broad range of risks, not just the most extreme.
- Keynote: Francine Bennett
- Panel: Defining AI Safety
Moderated by Michael Birtwistle
11:00 – 11:30
11:30 – 12:10
Responsible AI ecosystems and standards
Experts explore the important role standards play in achieving responsible AI, including safety in frontier AI, and why success requires inclusion and international cooperation.
- Panel: Standards for responsible AI
Moderated by Tim McGarr
12:10 – 12:30
How was that made? Verifying content in the age of AI
A presentation and live demonstrations of how the Coalition for Content Provenance and Authenticity (C2PA) standards can be used to prove the provenance of digital content such as images and videos.
- Presenter: Andy Parsons
12:30 – 13:30
13:30 – 15:00
Showcasing responsible AI
A series of lightning talks and discussions covering approaches to developing norms and evaluations for AI systems, featuring researchers and practitioners from industry and civil society.
- Fireside: Aaron Rosenberg & Dorothy Chou
- Panel: Demonstrating and showcasing approaches to evaluating AI
Moderated by Rumman Chowdhury
15:30 – 16:30
Why we need this conversation
Experts discuss how conversations around AI safety will help realise the opportunities of AI for people and society in the UK and around the world, and why it’s so crucial to have those conversations now.
- Fireside: Nick Clegg and Madhumita Murgia
- Panel: In conversation with Sir Nigel Shadbolt and Chloe Smith MP
Moderated by Resham Kotecha