Mengyu Ye
Ph.D. student @ Tohoku University Fundamental AI lab.
ye.mengyu.s1 [at] dc.tohoku.ac.jp
I am a 2nd-year Ph.D. student in NLP at the [Fundamental AI lab](https://www.fai.cds.tohoku.ac.jp/) (a member of the Tohoku NLP Group) at Tohoku University, advised by Prof. Jun Suzuki. I am also a Google PhD Fellow. My research examines the behavior of large language models through precise, controlled experimentation. I design synthetic tasks and datasets, build training, inference, and evaluation pipelines, and use causal methods to characterize how models reason and where they fail.
My work focuses on uncovering overlooked failure modes and validating their underlying mechanisms, with resulting publications at NeurIPS, ACL, and EMNLP. A central aim of my research is to develop tools and methodologies that make model capabilities more predictable, reliable, and controllable.
I am currently extending this approach to diffusion language models, studying how post-training and inference-time algorithms shape coherence and reasoning. I also explore agentic systems with the same experimental discipline; our deep-research agent received the Best Static Evaluation Prize in the MMU-RAG competition at NeurIPS 2025.
news
| Dec 08, 2025 | Our team won the Best Static Evaluation Prize in the MMU-RAG NeurIPS 2025 Competition. |
|---|---|
| Dec 01, 2025 | Released a CLI tool that uses an LLM agent to automatically clean, format, and update BibTeX references. |
| Oct 24, 2025 | I’m honored to receive the 2025 Google PhD Fellowship in Natural Language Processing. |
| Sep 19, 2025 | Our paper demonstrating that the interpretability of key-value memories closely matches that of sparse autoencoders has been accepted to NeurIPS 2025. |
| May 16, 2025 | Our paper on evaluating input attribution methods under the in-context learning (ICL) setting has been accepted to ACL 2025 (Findings). |