LLMpeople
Public Atlas: people first, reports as evidence, organizations as context.


Deep Ganguli

Research scientist at Anthropic, where he leads the Societal Impacts team.

Researcher · 1 organization · 6 reports

Profile status: updated


Contributions are treated as untrusted leads. Public changes happen only after review against public sources.

Trust signals

Profile completeness: 41%
Public sources: 1
Official sources: 1
Country: Unknown
Last reviewed: Mar 13, 2026
Review outcome: Updated

Latest review note

Added Anthropic team profile and a short bio from the public team page.

Public links

Website: Anthropic team profile

Organizations

Anthropic (core)

Reports

Alignment and RLHF · Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Alignment and RLHF · Constitutional AI: Harmlessness from AI Feedback
Alignment and Safety · Many-shot Jailbreaking
Alignment and Safety · Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Alignment and Safety · Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming
Interpretability · Tracing the thoughts of a large language model

Official and primary sources

https://www.anthropic.com/team/deep-ganguli · Official source · homepage

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.