After 3+ years at Jump Trading, I’m spending my garden leave at the Text Analysis, Understanding and Reasoning (TAUR) lab led by Professor Greg Durrett at UT Austin. In Fall 2025, I will be starting my PhD at the Massachusetts Institute of Technology, supported by the NSF CSGrad4US Fellowship and the School of Engineering Distinguished Graduate Fellowship!
In March 2024, I finished 3.5 exciting years at Jump Trading, working first as a Software Engineer on trading infrastructure and then as a Quantitative Researcher on trading strategy simulation. Receiving the NSF CSGrad4US Fellowship motivated me to return to academic research on large language models and apply to PhD programs.
Previously, I graduated with a B.S. in Computer Science and a B.A. in Plan II Honors from the University of Texas. As part of the Turing Scholars Honors Program, I completed an undergraduate research thesis advised by Greg Durrett as a member of the Text Analysis, Understanding and Reasoning (TAUR) group.
While at UT, I co-founded the UT CS Directed Reading Program, in which upperclassmen and graduate students mentor groups of 3-4 undergraduates through cutting-edge research on various topics. By selecting papers related to UT faculty research and inviting faculty members to attend reading group sessions, we increased undergraduate participation in CS research. The program has since grown to 20+ mentors and 120+ mentees each semester!
I was also part of Freetail Hackers, serving as corporate lead and then as president. We organized HackTX, an annual 800-person hackathon for college students hosted at the AT&T Conference Center. HackTX gives students a supportive space to build cool things, learn new skills through workshops and competitions, and connect with industry opportunities.
ChartMuseum: Testing Visual Reasoning Capabilities of Large Vision-Language Models
Liyan Tang, Grace Kim, Xinyu Zhao, Thom Lake, Wenxuan Ding, Fangcong Yin, Prasann Singhal, Manya Wadhwa, Zeyu Leo Liu, Zayne Sprague, Ramya Namuduri, Bodun Hu, Juan Diego Rodriguez, Puyuan Peng, and Greg Durrett. arXiv 2025.
Understanding Synthetic Context Extension via Retrieval Heads
Xinyu Zhao, Fangcong Yin, and Greg Durrett. Proceedings of ICML 2025.
To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning
Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg Durrett. Proceedings of ICLR 2025.
Learning to Refine with Fine-Grained Natural Language Feedback
Manya Wadhwa, Xinyu Zhao, Junyi Jessy Li, and Greg Durrett. Findings of EMNLP 2024.
Flexible Generation of Natural Language Deductions
Kaj Bostrom, Xinyu Zhao, Swarat Chaudhuri, and Greg Durrett. Proceedings of EMNLP 2021.
Effective Distant Supervision for Temporal Relation Extraction
Xinyu Zhao, Shih-ting Lin, and Greg Durrett. Proceedings of Adapt-NLP: The Second Workshop on Domain Adaptation for NLP (at EACL) 2021.