Hi there!
I am currently a 4th-year Ph.D. student in the Department of Computer Science and Engineering at Texas A&M University. I am working at the DATA Lab under the supervision of Prof. Xia (Ben) Hu.
My research interests focus on LLMs, particularly long-context modeling and efficiency.
(This page was last updated on Aug 26, 2024.)
News
- 2024.06: LLM Maybe LongLM (SelfExtend) has been selected as a Spotlight (3.5%) at ICML2024!
- 2024.05: Started an internship at Amazon Rufus. Happy to chat if you’re in Seattle!
- 2024.05: KIVI (2-bit training-free KV cache quantization) has been accepted by ICML2024! Congrats to Zirui Liu and Jiayi Yuan!
- 2024.01: Our new paper on extending LLMs' context window is now available on arXiv: LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning. We propose Self-Extend to elicit LLMs' inherent long-context ability without any fine-tuning. It significantly improves the long-context performance of LLMs and can even beat many fine-tuning-based long-context methods!
- 2023.09: One paper accepted by NeurIPS2023.
- 2023.05: One paper accepted by TMLR, Retiring ∆DP!
- 2023.04: New preprint Survey on LLMs!
Publications
- LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning [PDF]
- Hongye Jin*, Xiaotian Han*, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu
- ICML2024 (Spotlight)
- KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache [PDF]
- Zirui Liu*, Jiayi Yuan*, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, Xia Hu
- ICML2024
- Learning Alignment and Compactness in Collaborative Filtering
- Huiyuan Chen, Vivian Lai, Hongye Jin, Zhimeng Jiang, Mahashweta Das, Xia Hu
- WSDM2024
- Chasing Fairness under Distribution Shift: a Model Weight Perturbation Approach [PDF]
- Zhimeng Jiang*, Xiaotian Han*, Hongye Jin, Guanchu Wang, Rui Chen, Na Zou, Xia Hu.
- NeurIPS2023
- Retiring ∆DP: New Distribution-Level Metrics for Demographic Parity. [PDF]
- Xiaotian Han*, Zhimeng Jiang*, Hongye Jin*, Zirui Liu, Na Zou, Qifan Wang, Xia Hu
- TMLR, 2023
- Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond [PDF] [Github]
- Jingfeng Yang*, Hongye Jin*, Ruixiang Tang*, Xiaotian Han*, Qizhang Feng*, Haoming Jiang, Bing Yin, Xia Hu
- Preprint, 2023
- Exposing Model Theft: A Robust and Transferable Watermark for Thwarting Model Extraction Attacks
- Ruixiang Tang, Hongye Jin, Mengnan Du, Curtis Wigington, Rajiv Jain, and Xia Hu
- CIKM (short)
- Disentangled graph collaborative filtering [PDF]
- Xiang Wang, Hongye Jin, An Zhang, Xiangnan He, Tong Xu, Tat-Seng Chua
- SIGIR, 2020
- Transferring Fairness under Distribution Shift without Sensitive Information
- Hongye Jin, Fan Yang, Cecilia Tilli, Saumitra Mishra, Xia Hu
- Under Review
- GrowLength: Accelerating LLMs Pretraining by Progressively Growing Training Length [PDF]
- Hongye Jin*, Xiaotian Han*, Jingfeng Yang, Zhimeng Jiang, Chia-Yuan Chang, Xia Hu
- Preprint, 2023
- Towards Mitigating Dimensional Collapse of Representations in Collaborative Filtering
- Huiyuan Chen, Vivian Lai, Zhimeng Jiang, Hongye Jin, Chin-Chia Michael Yeh, Yan Zheng, Xia Hu and Hao Yang
- Under Review
- Gradient Rewiring for Editable Graph Neural Network Training
- Zhimeng Jiang, Zirui Liu, Xiaotian Han, Qizhang Feng, Hongye Jin, Qiaoyu Tan, Kaixiong Zhou, Na Zou, Xia Hu
- Under Review
Internships
- Visa Research, Palo Alto, CA. Sept 2022 – Dec 2022
- Research Intern
- Out-of-distribution Generalization of Graph Neural Networks
- Worked with Huiyuan Chen and Hao Yang.
- DAMO Academy, Alibaba, Beijing, China. Dec 2020 – Feb 2021
- Research Intern
- Weak/distant-supervised learning for NLP.
Education
- Aug. 2020 - now, Ph.D. Student, Computer Science, Texas A&M University.
- Sept. 2015 - July 2020, Bachelor's Degree, Computer Science, Peking University.
Professional Activities
- Conference Reviewer: WWW’23, KDD’23, ICDM’22, NeurIPS’23, AAAI’24
- Journal Reviewer: ACM Transactions on Intelligent Systems and Technology