Free

AI Safety Evals - Paper Reading Club

by BlueDot Impact

About this event

Mark Keavney will present Training large language models on narrow tasks can lead to broad misalignment, one of the foundational studies of emergent misalignment, a counterintuitive and fascinating finding in alignment research. Every week, someone presents for up to 20 minutes, followed by 40 minutes of discussion. RSVP to join, sign up to present, or contact us at evalsreadinggroup@gmail.com with questions. Everyone is welcome!

Topics & Tags

AI
Date & time
Tuesday, March 17, 2026 · 4:00 PM – 5:00 PM (Europe/London)
Location
TBA
Organised by
BlueDot Impact
Type
independent
Source
Luma
Updated
Mar 14, 2026
