LLMpeople
A public atlas: people first, reports as evidence, organizations as context.


Constitutional AI: Harmlessness from AI Feedback


Anthropic · 2022-12-15 · 46 researchers

Field: Alignment and RLHF
Organization: Anthropic
arXiv: 2212.08073

Canonical link: https://arxiv.org/abs/2212.08073

Connected researchers

Samuel R. Bowman
Anthropic

Member of technical staff at Anthropic and associate professor of computer science, data science, and linguistics at New York University (on leave). His public homepage focuses on natural language processing, machine learning, and AI alignment.

United States · 5 reports
Noemi Mercado
Anthropic

Researcher at Anthropic whose public homepage and scholarly profile connect cognitive science research with AI.

Location unknown · 1 report
Azalia Mirhoseini
Anthropic

Research scientist at Anthropic working on machine learning systems and AI; previously worked on machine learning systems, compilers, and sustainability at Google.

Location unknown · 1 report
Jack Clark
Anthropic / OpenAI

Co-founder and head of policy at Anthropic. He previously served as policy director at OpenAI, worked as a technology journalist, and writes the Import AI newsletter.

Location unknown · 7 reports
Shauna Kravec
Anthropic

Researcher focused on AI safety, reinforcement learning, and language models, with public work spanning red teaming, adversarial robustness, and model behavior.

United States · 3 reports
Zac Hatfield-Dodds
Anthropic

Staff software engineer at Anthropic building systems for AI safety, reliability, and alignment.

Location unknown · 3 reports
Chris Olah
Anthropic

Research scientist known for mechanistic interpretability and deep learning visualization, previously at Google Brain and OpenAI.

Location unknown · 2 reports
Robert Lasenby
Anthropic

Research scientist at Anthropic working on reasoning and geometry-aware machine learning.

Location unknown · 1 report
Amanda Askell
Anthropic / OpenAI

Alignment researcher at Anthropic, previously at OpenAI, working on making AI understandable to and aligned with human values.

Location unknown · 7 reports
Jared D. Kaplan
Anthropic

Anthropic co-founder and Chief Science Officer. Formerly a physicist at Johns Hopkins, he helped develop scaling laws for neural language models and works on the science and safety of large AI systems.

Location unknown · 6 reports
Yuntao Bai
Anthropic

Anthropic researcher whose work includes reinforcement learning from human feedback and Constitutional AI; previously a Sherman Fairchild Postdoctoral Scholar in theoretical high-energy physics at Caltech.

Location unknown · 4 reports
Sam McCandlish
Anthropic

Independent researcher working on the theoretical foundations of AI, especially inductive biases, scaling laws, and approximate Bayesian updating. His public homepage notes prior research roles at Anthropic and OpenAI.

Location unknown · 3 reports
Jackson Kernion
Anthropic

Member of Anthropic's Interpretability team, where he works on understanding how large language models work.

Location unknown · 3 reports
Christopher Olah
Anthropic

Research scientist at Anthropic known for mechanistic interpretability work, including early research on feature visualization and circuits in neural networks.

Location unknown · 1 report
Kamal Ndousse
Anthropic

Researcher at Anthropic working on alignment, reasoning, and evaluation for large language models.

Location unknown · 5 reports
Kamile Lukosuite
Anthropic

AI governance researcher at the Centre for the Governance of AI and former Anthropic resident researcher, with interests in language models, AI safety, scalable oversight, and evaluations.

Location unknown · 1 report
Ethan Perez
Anthropic

Research scientist at Anthropic focused on scalable oversight, AI safety, and language model evaluation; previously worked at New York University and Google.

Location unknown · 8 reports
Nicholas Schiefer
Anthropic

Member of technical staff at Anthropic and co-founder of Oulipo Labs, working on language model safety, evaluations, and scientific forecasting.

Location unknown · 8 reports
Deep Ganguli
Anthropic

Co-founder and head of alignment science at Anthropic.

Location unknown · 6 reports
Dario Amodei
Anthropic / OpenAI

CEO and co-founder of Anthropic. Before Anthropic, he served as vice president of research at OpenAI.

Location unknown · 5 reports
Nova DasSarma
Anthropic

Research scientist at Anthropic interested in understanding neural networks and applying that understanding to alignment.

Location unknown · 5 reports
Anna Chen
Anthropic

Researcher working on AI safety and adversarial evaluation, including Anthropic's many-shot jailbreaking research.

Location unknown · 4 reports
Saurav Kadavath
Anthropic

Research scientist at Anthropic interested in understanding and steering AI systems.

Location unknown · 4 reports
Tom Conerly
Anthropic

Software engineer at Anthropic, previously at Google, with public writing on language models, agents, and reinforcement learning.

Location unknown · 4 reports

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.