LLMpeople
Public Atlas People first, reports as evidence, organizations as context.


Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Field: Alignment and Safety
Organization: Anthropic
Published: January 2024
Researchers: 39
arXiv: 2401.05566

Canonical link

https://arxiv.org/abs/2401.05566

Connected researchers

Samuel R. Bowman
Anthropic · United States · 5 reports

Member of technical staff at Anthropic and associate professor of computer science, data science, and linguistics at New York University (on leave). His public homepage focuses on natural language processing, machine learning, and AI alignment.
Newton Cheng
Anthropic · Unknown · 1 report

Anthropic researcher on the Frontier Red Team focused on cyber misuse evaluation and threat modeling; previously a physics PhD student at UC Berkeley and now also mentors in the MATS program.
Jack Clark
Anthropic / OpenAI · Unknown · 7 reports

Co-founder and head of policy at Anthropic. He previously served as policy director at OpenAI, worked as a technology journalist, and writes the Import AI newsletter.
David Duvenaud
Anthropic · Canada · 4 reports

Associate Professor at the University of Toronto whose research spans deep learning, probabilistic modeling, and machine learning methods for science and AI safety.
Shauna Kravec
Anthropic · United States · 3 reports

Researcher focused on AI safety, reinforcement learning, and language models, with public work spanning red teaming, adversarial robustness, and model behavior.
Jesse Mu
Anthropic · Unknown · 1 report

Research scientist at Anthropic and visiting researcher at Stanford University. His work spans machine learning, AI safety, reinforcement learning, and deep learning theory.
Roger Grosse
Anthropic · Unknown · 1 report

Associate Professor of Computer Science at the University of Toronto and director of the machine learning group, with research spanning probabilistic models and optimization algorithms.
Amanda Askell
Anthropic / OpenAI · Unknown · 7 reports

Alignment researcher at Anthropic, previously at OpenAI, working on making AI understandable to and aligned with human values.
Jared D. Kaplan
Anthropic · Unknown · 6 reports

Anthropic co-founder and Chief Science Officer. Formerly a physicist at Johns Hopkins, he helped develop scaling laws for neural language models and works on the science and safety of large AI systems.
Yuntao Bai
Anthropic · Unknown · 4 reports

Anthropic researcher whose work includes reinforcement learning from human feedback and Constitutional AI; previously a Sherman Fairchild Postdoctoral Scholar in theoretical high-energy physics at Caltech.
Kamal Ndousse
Anthropic · Unknown · 5 reports

Researcher at Anthropic working on alignment, reasoning, and evaluation for large language models.
Sören Mindermann
Anthropic · Unknown · 3 reports

Research scientist at Anthropic working on machine learning and AI safety.
Kshitij Sachan
Anthropic · Unknown · 1 report

Research scientist at Anthropic whose public homepage and Google Scholar profile highlight work on language models, reasoning, code generation, and machine learning systems.
Michael Sellitto
Anthropic · Unknown · 1 report

Research scientist at Anthropic working on trustworthy AI and deceptive alignment.
Mrinank Sharma
Anthropic · Unknown · 1 report

AI safety researcher who led Anthropic's Safeguards Research Team and worked on jailbreak robustness, automated red teaming, and monitoring for misuse and misalignment.
Zachary Witten
Anthropic · Unknown · 1 report

Member of technical staff at Anthropic.
Ethan Perez
Anthropic · Unknown · 8 reports

Research scientist at Anthropic focused on scalable oversight, AI safety, and language model evaluation; previously worked at New York University and Google.
Nicholas Schiefer
Anthropic · Unknown · 8 reports

Member of Technical Staff at Anthropic and co-founder of Oulipo Labs, working on language model safety, evaluations, and scientific forecasting.
Deep Ganguli
Anthropic · Unknown · 6 reports

Co-founder and head of alignment science at Anthropic.
Nova DasSarma
Anthropic · Unknown · 5 reports

Research scientist at Anthropic interested in understanding neural networks and applying that understanding to alignment.
Buck Shlegeris
Anthropic · Unknown · 3 reports

Member of technical staff at Anthropic whose public homepage focuses on AI safety, model evaluations, and alignment.
Carson Denison
Anthropic · Unknown · 2 reports

Member of Technical Staff at Anthropic and PhD student at Carnegie Mellon University focused on AI safety, evaluations, and oversight of large language models.
Monte MacDiarmid
Anthropic · Unknown · 2 reports

Member of technical staff at Anthropic working on alignment science and the evaluation of hidden objectives in language models.
Adam Jermyn
Anthropic · Unknown · 1 report

Research scientist at Anthropic and former professor of theoretical astrophysics at Stony Brook University.

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.