LLMpeople
Public Atlas · People first, reports as evidence, organizations as context.


Amanda Askell

Alignment researcher at OpenAI working on making AI systems understandable to humans and aligned with human values.

Researcher · 2 organizations · 7 reports

Profile status: updated


Contributions are treated as untrusted leads. Public changes happen only after review against public sources.

Trust signals

Profile completeness: 56%
Public sources: 1
Official sources: 1
Country: Unknown
Last reviewed: Mar 12, 2026
Review outcome: Updated
Official homepage · updated · Unknown location · 1 public source

Latest review note

Added official OpenAI profile, avatar, and English bio from the public profile page.

Public links

Website: OpenAI profile

Organizations

OpenAI (core)
Anthropic (core)

Reports

Language Models are Few-Shot Learners (Large Language Models)
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback (Alignment and RLHF)
Constitutional AI: Harmlessness from AI Feedback (Alignment and RLHF)
Collective Constitutional AI: Aligning a Language Model with Public Input (Alignment and RLHF)
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training (Alignment and Safety)
Auditing language models for hidden objectives (Alignment and Safety)
Constitutional Classifiers++: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming (Alignment and Safety)

Official and primary sources

https://openai.com/index/amanda-askell/ · Official source · homepage

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.