LLMpeople
Public Atlas: people first, reports as evidence, organizations as context.


David Duvenaud

Associate Professor at the University of Toronto whose research spans deep learning, probabilistic modeling, and machine learning methods for science and AI safety.

Researcher · 1 organization · 4 reports

Profile status: updated


Contributions are treated as untrusted leads. Public changes happen only after review against public sources.

Trust signals

Profile completeness: 64%
Public sources: 2
Official sources: 1
Country: Canada
Last reviewed: Mar 13, 2026
Review outcome: Updated

Latest review note

Added verified homepage, GitHub profile, avatar URL, country, and a concise English bio.

Public links

Website: Personal homepage
GitHub: GitHub profile

Organizations

Anthropic (core)

Reports

Alignment and Safety: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Alignment and Safety: Alignment faking in large language models
Interpretability: On the Biology of a Large Language Model
Interpretability: Tracing the thoughts of a large language model

Official and primary sources

https://www.cs.toronto.edu/~duvenaud/ (official source: homepage)

Supporting sources

https://github.com/duvenaud (supporting source: GitHub)

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.