LLMpeople
Public Atlas People first, reports as evidence, organizations as context.


Anna Chen

Researcher working on AI safety and adversarial evaluation, including Anthropic's many-shot jailbreaking research.

Researcher · 1 organization · 4 reports

Profile status: updated


Contributions are treated as untrusted leads. Public changes happen only after review against public sources.

Trust signals

Profile completeness: 41%
Public sources: 1
Official sources: 1
Country: Unknown
Last reviewed: Mar 13, 2026
Review outcome: Updated
Scholar profile: updated · Unknown location · 1 public source

Latest review note

Cleanup pass A upgraded this record with a public DBLP author page listing Anthropic's many-shot jailbreaking research.

Public links

DBLP author page

Organizations

Anthropic (core)

Reports

Alignment and RLHF: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Alignment and RLHF: Constitutional AI: Harmlessness from AI Feedback
Alignment and RLHF: Collective Constitutional AI: Aligning a Language Model with Public Input
Alignment and Safety: Many-shot Jailbreaking

Official and primary sources

https://dblp.org/search/author?q=author%3AAnna+Chen%3A — Official source · dblp

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.