Hi there!
I am currently a 4th-year Ph.D. student in the Department of Computer Science and Engineering at Texas A&M University. I am working at the DATA Lab under the supervision of Prof. Xia (Ben) Hu.
I am actively seeking an internship position for Spring, Summer, or Fall 2024. Please feel free to contact me if there is a good fit.
My research interests lie in large language models (LLMs) and the general area of machine learning.
News
- 2024.01: Our new paper on extending LLMs’ context windows is now available on arXiv: LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning. We propose Self-Extend to elicit LLMs’ inherent long-context ability without any fine-tuning. It significantly improves the long-context performance of LLMs and can even beat many fine-tuning-based long-context methods!
- 2023.09: One paper accepted by NeurIPS 2023.
- 2023.05: One paper accepted by TMLR: Retiring ∆DP!
- 2023.04: New preprint: Survey on LLMs!
Publications
- LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning [PDF]
- Hongye Jin*, Xiaotian Han*, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu
- Preprint, 2024
- Learning Alignment and Compactness in Collaborative Filtering
- Huiyuan Chen, Vivian Lai, Hongye Jin, Zhimeng Jiang, Mahashweta Das, Xia Hu
- WSDM, 2024
- Chasing Fairness under Distribution Shift: a Model Weight Perturbation Approach [PDF]
- Zhimeng Jiang*, Xiaotian Han*, Hongye Jin, Guanchu Wang, Rui Chen, Na Zou, Xia Hu
- NeurIPS, 2023
- Retiring ∆DP: New Distribution-Level Metrics for Demographic Parity [PDF]
- Xiaotian Han*, Zhimeng Jiang*, Hongye Jin*, Zirui Liu, Na Zou, Qifan Wang, Xia Hu
- TMLR, 2023
- Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond [PDF] [Github]
- Jingfeng Yang*, Hongye Jin*, Ruixiang Tang*, Xiaotian Han*, Qizhang Feng*, Haoming Jiang, Bing Yin, Xia Hu
- Preprint, 2023
- Exposing Model Theft: A Robust and Transferable Watermark for Thwarting Model Extraction Attacks
- Ruixiang Tang, Hongye Jin, Mengnan Du, Curtis Wigington, Rajiv Jain, Xia Hu
- CIKM (short)
- Disentangled Graph Collaborative Filtering [PDF]
- Xiang Wang, Hongye Jin, An Zhang, Xiangnan He, Tong Xu, Tat-Seng Chua
- SIGIR, 2020
- Transferring Fairness under Distribution Shift without Sensitive Information
- Hongye Jin, Fan Yang, Cecilia Tilli, Saumitra Mishra, Xia Hu
- Under Review
- GrowLength: Accelerating LLMs Pretraining by Progressively Growing Training Length [PDF]
- Hongye Jin*, Xiaotian Han*, Jingfeng Yang, Zhimeng Jiang, Chia-Yuan Chang, Xia Hu
- Preprint, 2023
- Towards Mitigating Dimensional Collapse of Representations in Collaborative Filtering
- Huiyuan Chen, Vivian Lai, Zhimeng Jiang, Hongye Jin, Chin-Chia Michael Yeh, Yan Zheng, Xia Hu, Hao Yang
- Under Review
- Gradient Rewiring for Editable Graph Neural Network Training
- Zhimeng Jiang, Zirui Liu, Xiaotian Han, Qizhang Feng, Hongye Jin, Qiaoyu Tan, Kaixiong Zhou, Na Zou, Xia Hu
- Under Review
Internships
- Visa Research, Palo Alto, CA. Sept. 2022 – Dec. 2022
- Research Intern
- Out-of-distribution Generalization of Graph Neural Networks
- Worked with Huiyuan Chen and Hao Yang.
- Damo Academy, Alibaba, Beijing, China. Dec. 2020 - Feb. 2021
- Research Intern
- Weakly/distantly supervised learning for NLP.
Education
- Aug. 2020 - now, Ph.D. Student, Computer Science, Texas A&M University.
- Sept. 2015 - July 2020, Bachelor's Degree, Computer Science, Peking University.
Professional Activities
- Conference Reviewer: WWW’23, KDD’23, ICDM’22, NeurIPS’23, AAAI’24
- Journal Reviewer: ACM Transactions on Intelligent Systems and Technology
Last updated on Jan 02, 2024.