LLMpeople
A public atlas: people first, reports as evidence, organizations as context.


Catherine Olsson

Catherine Olsson is an AI alignment researcher and writer whose public website and Anthropic author page describe work on AI safety, interpretability, and building helpful, harmless assistants.

Researcher · 1 organization · 2 reports

Profile status: updated


Contributions are treated as untrusted leads. Public changes happen only after review against public sources.

Trust signals

Profile completeness: 45%
Public sources: 2
Official sources: 1
Country: Unknown
Last reviewed: Mar 13, 2026
Review outcome: Updated

Latest review note

Added personal homepage, Anthropic author page, and a concise public bio.

Public links

Website: Personal homepage
News: Anthropic author page

Organizations

Anthropic (core)

Reports

Alignment and RLHF: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Alignment and RLHF: Collective Constitutional AI: Aligning a Language Model with Public Input

Official and primary sources

https://www.catherinesdone.com/ (official source · homepage)

Supporting sources

https://www.anthropic.com/authors/catherine-olsson (supporting source · news)

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.