The AI Safety Foundation

Ensuring humanity's future through AI Safety

The Hinton Lectures 2025

The Hinton Lectures™

Register Now for the 2025 Hinton Lectures!
November 10, 11, and 12
From 5:45 to 7:15 p.m. EST

John W. H. Bassett Theatre
255 Front St W,
Toronto, ON
M5V 2W6
Lecturer
Owain Evans, Ph.D.
Founder and Director of Truthful AI
Host
Professor Geoffrey Hinton
Nobel Laureate and "Godfather of AI"
Moderator
Farah Nasser
Award-Winning Canadian Journalist

What Are The Hinton Lectures™?

Watch Nobel Laureate Professor Geoffrey Hinton explain why you should join us for The Hinton Lectures™ this November.

The Hinton Lectures 2025

This year's Hinton Lectures feature Owain Evans, Ph.D., a leading AI safety researcher and the founder of Truthful AI.

AI has made remarkable progress in matching human abilities across many domains. In this three-part series, Dr. Evans will identify the drivers behind this advancement and explore what lies ahead in the breakneck race to build autonomous, advanced AI systems. The lectures will present critical findings about current AI safety approaches, revealing how advanced models like Claude and Gemini can behave deceptively and harmfully even after our best safety techniques have been applied.

Hosted by Nobel Laureate Geoffrey Hinton and moderated by renowned journalist Farah Nasser, these lectures expose critical vulnerabilities in today's AI systems. Dr. Evans will demonstrate "emergent misalignment", in which small, narrow training datasets can transform reliably helpful models into broadly malicious ones, and "subliminal learning", in which AI systems transfer preferences, including malicious attitudes toward humans, through seemingly meaningless data.

By examining the internal mechanisms of these models, his research provides a deeper scientific understanding of how AI corruption can occur. While there has been meaningful progress on AI safety challenges, Dr. Evans' work reveals gaps in our current solutions, making this research essential for understanding the critical work ahead.
Our mission is to increase awareness of AI's catastrophic risks in a scientific and solutions-oriented manner.

Support us

The AI Safety Foundation (AISF) is a registered charity based in Canada. We are supported by generous individuals, corporations, and partners. If you believe, as we do, that education and research initiatives exploring AI risks are vitally important, please consider supporting us in our mission.