Qwen2.5-Coder Technical Report
Code Language Models
Connected researchers
Junyang Lin
Qwen
Junyang Lin (Justin Lin) is a researcher and open-source maintainer known for the Qwen family of models. His public profiles list interests in LLMs, AI agents, multimodal learning, long-horizon reasoning, world models, and reinforcement learning; multiple news reports in March 2026 said he had stepped down from the Qwen tech lead role.
Shuai Bai
Qwen
Senior algorithm expert at Alibaba Group working on large language models, multimodal large language models, and diffusion models.
Jinze Bai
Qwen
PhD student at The Hong Kong University of Science and Technology (Guangzhou) whose research interests include large language models, vision-language models, AI agents, and multimodal retrieval.
Zeyu Cui
Qwen
Research scientist at Meta in New York City and research advisor to the UCLA NLP group; he completed a PhD in computer science at UCLA.
Kai Dang
Qwen
Researcher on Alibaba's Qwen team focused on large language models and NLP; public research profiles list a Nankai University background.
Xiaodong Deng
Qwen
Research scientist in Tongyi Lab whose official profile highlights post-training and multimodal large language models.
Wenbin Ge
Qwen
Research scientist in Tongyi Lab whose official profile highlights work on efficient reinforcement learning, generalization, inference-time scaling, and reasoning for large language models.
Chang Zhou
Qwen
Qwen researcher and co-lead whose work focuses on pretraining and post-training, multimodal models, agent systems, and large-scale model infrastructure.