DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
Mathematical Reasoning Models
Connected researchers
Runxin Xu
DeepSeek
Researcher at DeepSeek whose public homepage describes work on DeepSeek-R1, the V1, V2, and V3 model series, DeepSeekMath, DeepSeek-Coder, and mixture-of-experts systems.
Daya Guo
DeepSeek / Moonshot AI
DeepSeek researcher focused on NLP, code intelligence, and LLM reasoning, with public work spanning DeepSeek-Coder, DeepSeekMath, DeepSeek-V2, DeepSeek-V3, and DeepSeek-R1.
Mingchuan Zhang
DeepSeek
Research scientist at DeepSeek whose interests span large language models, reinforcement learning, robot learning, and machine learning more broadly.
Zhihong Shao
DeepSeek
Research scientist at DeepSeek AI working on multimodal large language models and end-to-end autonomous driving. Earned a PhD in computer science from the Chinese University of Hong Kong.
Qihao Zhu
DeepSeek
Research scientist focused on foundation models and multimodal large language models; his homepage notes earlier work at DeepSeek AI and current research at the University of Southern California.
Y. Wu
DeepSeek
Yu Wu is a researcher at DeepSeek AI and head of its LLM Alignment Team. His public homepage highlights work on reinforcement learning and alignment for the DeepSeek model family, including DeepSeek-V3, DeepSeek-R1, and DeepSeekMath, and notes prior work at Microsoft Research Asia.
Junxiao Song
DeepSeek
Member of Technical Staff at DeepSeek.
Peiyi Wang
DeepSeek
Research scientist at DeepSeek with public GitHub projects on AI systems.