LLMpeople
Public atlas: people first, reports as evidence, organizations as context.


Andrea Vallone

AI safety researcher whose public work includes model policy research at OpenAI and later alignment work at Anthropic, with credited contributions on GPT-4o-era OpenAI projects.

Researcher · 1 organization · 1 report

Profile status: updated


Contributions are treated as untrusted leads. Public changes happen only after review against public sources.

Trust signals

Profile completeness: 43%
Public sources: 2
Official sources: 1
Country: Unknown
Last reviewed: Mar 12, 2026
Review outcome: Updated
Scholar profile: updated · unknown location · 2 public sources

Latest review note

Added safety-focused bio, DBLP entry, and a clear public news reference.

Public links

DBLP profile · News profile

Organizations

OpenAI · core

Reports

GPT-4 Technical Report · Large Language Models

Official and primary sources

https://dblp.org/pid/313/2002.html Official source · dblp

Supporting sources

https://www.theverge.com/ai-artificial-intelligence/862402/openai-safety-lead-model-policy-departs-for-anthropic-alignment-andrea-vallone Supporting source · news
