Mingda Chen (陈明达)
Tenure-Track Associate Professor
Email: chenmda [at] gmail [dot] com
About
Hello! I am currently a tenure-track Associate Professor in the School of Artificial Intelligence at Shanghai Jiao Tong University. Before that, I was a Research Scientist at Meta FAIR in New York City. I earned my PhD in Computer Science from TTI-Chicago in 2022 and my BSc in Applied Mathematics from Zhejiang University in 2015. My PhD studies were supported by a Google PhD Fellowship.
Prospective students: We are recruiting PhD students and interns.
Our group accepts two PhD students annually. Visiting students and research interns are expected to stay for at least six months to ensure high-quality research outcomes.
We generally look for candidates with hands-on experience in finetuning transformer models and a basic understanding of the large language model field. Prior publications in top-tier conferences (e.g., NeurIPS, ICML, ICLR, ACL, EMNLP, CVPR) are a plus.
Interested applicants should email me their CV, a brief description of their research interests, and a summary of their past research experience. In your email, please also explain how your interests or prior work align with our group's research focus.
Note: I receive a large number of emails and cannot always reply to everyone. Please do not take it personally if you do not hear back quickly. It is much more representative of my failings than of yours.
Research
My research broadly focuses on natural language processing as well as multimodal understanding and generation. Current research interests include:
Retrieval- and memory-augmented language models
Multimodal LLMs, including tokenization design
Factuality and attribution in LLM outputs
Evaluation methods for LLMs
Selected Publications
Please see below for a selection of papers that reflects my recent research interests.
ImpRAG: Retrieval-Augmented Generation with Implicit Queries
Wenzheng Zhang, Victoria Lin, Karl Stratos, Scott Wen-tau Yih, Mingda Chen
Findings of EMNLP, 2025
Improving Factuality with Explicit Working Memory
Mingda Chen, Yang Li, Karthik Padthe, Rulin Shao, Alicia Sun, Luke Zettlemoyer, Gargi Ghosh, Scott Wen-tau Yih
Proceedings of ACL, 2025
Chameleon: Mixed-Modal Early-Fusion Foundation Models
Meta FAIR Chameleon Team
arXiv Preprint, 2024
BLASER: A Text-Free Speech-to-Speech Translation Evaluation Metric
Mingda Chen, Paul-Ambroise Duquenne, Pierre Andrews, Justine Kao, Alexandre Mourachko, Holger Schwenk, Marta R. Costa-jussà
Proceedings of ACL, 2023