Atlas / Organizations / Detail
Cohere
Cohere sits in the extended layer as a company with substantial public work on enterprise LLM systems and open research initiatives.
Roster
Kyle Richardson
Ai2 / Cohere
Senior research scientist at Ai2 working on natural language processing, machine learning, and reasoning for the Aristo project; previously a researcher at the Institute for Natural Language Processing at the University of Stuttgart.
Dan Roth
Cohere
Head of AI at Cohere and professor at the University of Pennsylvania whose public profiles focus on natural language understanding, reasoning, and grounding.
Seungyoun Hong
Cohere
Research scientist and postdoctoral scholar at Stanford University working on machine unlearning, data attribution, hallucination, and generalization.
Trung H. Bui
Cohere
Staff researcher at Cohere working on generative AI, large language models, natural language processing, machine learning, and computer vision.
Sydney Z. Li
Cohere
Research scientist at Cohere Labs and PhD candidate at Stanford University focused on language models, machine learning systems, and AI safety.
Geri Skenderi
Cohere
Research scientist at Cohere focusing on multilingual language technology, evaluation of large language models, and natural language processing for low-resource settings.
Brian Lester
Google / Cohere
Senior research engineer at Google working on natural language processing and efficient ways to adapt large pre-trained language models; his public work includes prompt tuning, SPoT, and FLAN-related research.
Mohammad Norouzi
Cohere
Co-founder and CEO of Ideogram and former senior staff research scientist at Google Brain in Toronto. He is known for work on generative models and representation learning, including contributions to Imagen, WaveGrad, and SimCLR.
Quinten Anthony
Cohere
Research scientist at Cohere focused on scaling machine learning systems and improving training efficiency.
Alejandro Lopez-Lira
Cohere
Assistant professor of finance at the University of Florida whose research interests include investments, machine learning, and empirical asset pricing.
Dibya Ghosh
Cohere
Machine learning researcher at Cohere and PhD student in computer science at UC Berkeley advised by Sergey Levine; his work spans reinforcement learning and large language models, with a focus on how foundation models can improve learning agents.
Diyi Yang
Cohere
Assistant professor of computer science at Stanford University whose research focuses on natural language processing and machine learning, especially human-centered AI, social computing, and computational social science; she earned her PhD in language technologies from Carnegie Mellon University.
Hyung Won Chung
Cohere
Research scientist at Cohere working on large language models and machine reasoning; previously worked on deep learning at Google Brain and Google DeepMind.
Lewis Tunstall
Cohere
Principal scientist at Cohere Labs working on open-source language models, evaluation, and multilingual NLP.
Samia Touileb
Cohere
Associate Professor in Natural Language Processing at the University of Bergen whose work focuses on bias and fairness in NLP, information extraction, summarization, and under-resourced languages.
Xiang Lisa Li
Cohere
Researcher focused on controllable and steerable language models; her public profile highlights work including Diffusion-LM, Prefix-Tuning, Contrastive Decoding, and evaluation methods such as AutoBencher.
Zhen Qin
Cohere
Research scientist at Cohere working on large language models; previously a postdoctoral researcher in machine learning at Carnegie Mellon University.
Supriya Kalluri
Cohere
PhD candidate at the University of Washington and research scientist at Cohere working on natural language processing and machine learning.
Arianna Bisazza
Cohere
Associate professor of natural language processing at the University of Groningen and research scientist at Cohere Labs, with work spanning machine translation, multilingual models, and multimodal language understanding.
Ari Holtzman
Cohere
Assistant professor of computer science at the University of Chicago studying language generation, dialogue systems, and aligning models with human preferences.
Edward J Hu
Cohere
Machine learning researcher known for efficient adaptation methods for large language models, most notably LoRA. His public profile focuses on parameter-efficient fine-tuning, model editing, and practical LLM systems.
Jason Phang
Cohere
Machine learning researcher focused on language model robustness, evaluation, and parameter-efficient adaptation; his public profile highlights work on benchmark design, representation learning, and practical LLM methods.
Jianmo Ni
Cohere
Researcher focused on information retrieval, question answering, and large language model systems; his public profile highlights retrieval-augmented generation, ranking, and efficient NLP methods.
Kelvin Guu
Cohere
Researcher working on language models, retrieval, and efficient adaptation; his public profile highlights work on retrieval-augmented generation, in-context learning, and large-scale NLP systems.