Tuesday 31 October 2023
Knowledge Centre at The British Library

Expanding the conversation: Defining AI safety in practice

Day two at the AI Fringe Hub expands the conversation around AI safety, exploring how we define it, who gets to define it and what it looks like in practice.

Agenda

08:30 – 09:30

Check in

Please have your Eventbrite QR code ready for onsite check-in.

09:30 – 11:00

Defining AI safety

As the UK government convenes a Summit on ‘AI safety’, this session will explore what the term means, what we can learn from existing safety-based governance, and how such systems secure trust in critical technologies, industries and supply chains through holistic management of a broad range of risks, not just the most extreme.

  • Keynote: Francine Bennett
  • Panel: Defining AI Safety
    Moderated by Michael Birtwistle
Gill Whitehead
Group Director, Online Safety, Ofcom
Prof. Shannon Vallor
Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute
Michael Birtwistle
Associate Director, Ada Lovelace Institute
Yolanda Lannquist
Director, Global AI Governance
Francine Bennett
Interim Director, Ada Lovelace Institute
Deborah Raji
Fellow, Mozilla
Emran Mian
Director General for Digital Technologies and Telecoms, Department for Science, Innovation and Technology
11:00 – 11:30

Break

11:30 – 12:10

Responsible AI ecosystems and standards

Experts explore the important role standards play in achieving responsible AI, including safety in frontier AI, and why success requires inclusion and international cooperation.

  • Panel: Standards for responsible AI
    Moderated by Tim McGarr
Cristina Muresan
University of Cambridge
Tim McGarr
AI Market Development Lead, British Standards Institution
Adam Leon Smith
CTO, Dragonfly
Chanell Daniels
Responsible AI Manager, Digital Catapult
Hollie Hamblett
Policy Specialist, Consumers International
12:10 – 12:30

How was that made? Verifying content in the age of AI

A presentation and live demonstrations of how the Coalition for Content Provenance and Authenticity standards can be used to prove the provenance of digital content like images and videos.

  • Presenter: Andy Parsons
Andy Parsons
Senior Director, Content Authenticity Initiative, Adobe
12:30 – 13:30

Lunch

13:30 – 15:00

Showcasing responsible AI

A series of lightning talks and discussions covering approaches to developing norms and evaluations for AI systems, featuring researchers and practitioners from industry and civil society.

  • Fireside: Aaron Rosenberg & Dorothy Chou
  • Panel: Demonstrating and showcasing approaches to evaluating AI
    Moderated by Rumman Chowdhury
Rumman Chowdhury
CEO and Co-Founder, Humane Intelligence
Deborah Raji
Fellow, Mozilla
15:00 – 15:30

Break

Why we need this conversation

Experts discuss how conversations around AI safety will help realise the opportunities of AI for people and society in the UK and around the world, and why it’s so crucial to have those conversations now.

  • Fireside: Nick Clegg and Madhumita Murgia
  • Panel: In conversation with Sir Nigel Shadbolt and Chloe Smith MP
    Moderated by Resham Kotecha
Nick Clegg
President, Global Affairs, Meta
Madhumita Murgia
AI Editor, Financial Times
Resham Kotecha
Global Head of Policy, The Open Data Institute
Sir Nigel Shadbolt
Principal of Jesus College, Oxford; Professorial Research Fellow in the Department of Computer Science, University of Oxford; Chairman of the Open Data Institute
Chloe Smith
MP for Norwich North