My long-term research interest is how intelligence can grow and scale while remaining precisely calibrated to the complexity of the real world.
More specifically, I am currently studying how to use agents to synthesize data, improve LLM performance in scientific domains, and build the benchmarks that science needs.
As human-generated data is rapidly compressed into model intelligence, synthetic data is destined to become a key approach for LLM self-learning and unbounded growth. At the same time, we need to keep designing high-quality benchmarks to calibrate model intelligence. These benchmarks also serve as the ultimate reward in reinforcement learning: we propose increasingly challenging benchmarks, then use reinforcement learning and related techniques to design finer-grained reward signals that drive LLM self-learning.
Incoming PhD Student in Artificial Intelligence · Fall 2026
Advisor: Prof. Linfeng Zhang.
Master of Finance (FinTech) · 2024.09–2026.06
Entrance Scholarship: RMB 55,000.
Dual-degree "Internet+" Elite Program · 2020.09–2024.06
Finance + Data Science. GPA: 91.08/100. National Scholarship.
Academic Collaboration · 2025.11–Present
Agent-based data synthesis and benchmark construction.
Scientific Computing Intern · 2024.06–2024.08
Back-end development.
Research Assistant · 2024.03–Present
Advisor: Prof. Linfeng Zhang.