3856 Bob and Betty Beyster Building
2260 Hayward Street
Ann Arbor, MI 48105
I am currently a Ph.D. student in Computer Science and Engineering at the University of Michigan, co-advised by Dr. Honglak Lee and Dr. Todd Hollon. Previously, I received a B.S. in Computer Science and an M.S. in Machine Learning from Carnegie Mellon University, where I was advised by Paul Liang and Dr. Louis-Philippe Morency.
My research interests lie in making machine learning more applicable to the real world, spanning multimodal machine learning, computer vision, and natural language processing. Specifically, I am interested in improving model performance, few-shot generalization, and model interpretability.
Previously, I spent three summers doing research, supported by the CMU Summer Undergraduate Research Apprenticeship (2018), the Summer Undergraduate Research Fellowship (2020), and a research internship at the CMU MultiComp Lab (2021), and I have worked as an undergraduate/graduate research assistant at CMU SquaresLab and the CMU MultiComp Lab. I was also a software engineering intern at Pinterest in the summer of 2019 and a teaching assistant for CMU 15-210 for four semesters.
|Jul 14, 2023||Our paper Fine-grained Text Style Transfer with Diffusion-Based Language Models won Best Paper Award at Repl4NLP Workshop at ACL 2023!|
|May 24, 2023||Our paper Fine-grained Text Style Transfer with Diffusion-Based Language Models is accepted at Repl4NLP Workshop at ACL 2023!|
|Apr 24, 2023||Our paper HighMMT: Quantifying Modality and Interaction Heterogeneity for High-Modality Representation Learning is accepted by TMLR!|
|Jan 20, 2023||Check out our new accepted paper MultiViz: Towards Visualizing and Understanding Multimodal Models at ICLR 2023!|
|Nov 20, 2022||Check out our new accepted papers Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control and MultiViz: Towards Visualizing and Understanding Multimodal Models at the NeurIPS 2022 HILL Workshop!|
- MultiViz: An Analysis Benchmark for Visualizing and Understanding Multimodal Models, 2022
- DIME: Fine-Grained Interpretations of Multimodal Models via Disentangled Local Explanations. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 2022
- MultiBench: Multiscale Benchmarks for Multimodal Representation Learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021
- StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Jun 2021