Lingpeng Kong

In this much larger and permanent world of rational thinking in general, the current problems of theoretical physics appeared as only details of temporary interest.
 
Edwin Thompson Jaynes
In this group of papers, we explore text diffusion models.
  • [2024] Jiacheng Ye*, Shansan Gong*, Liheng Chen*, Lin Zheng, Jiahui Gao, Han Shi, Chuan Wu, Zhenguo Li, Wei Bi, and Lingpeng Kong, Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models.
  • [2023] Lin Zheng, Jianbo Yuan, Lei Yu, and Lingpeng Kong, A Reparameterized Discrete Diffusion Model for Text Generation.
  • [2023] Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong, DiffuSeq-v2: Bridging Discrete and Continuous Text Spaces for Accelerated Seq2Seq Diffusion Models.
  • [2023] Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong, DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models.
In this group of papers, we explore new transformer architectures with the goal of enhancing efficiency and performance in long-sequence modeling.
  • [2023] Lin Zheng, Jianbo Yuan, Chong Wang, and Lingpeng Kong, Efficient Attention via Control Variates.
  • [2022] Lin Zheng, Chong Wang, and Lingpeng Kong, Linear Complexity Randomized Self-attention Mechanism.
  • [2022] Lin Zheng, Huijie Pan, and Lingpeng Kong, Ripple Attention for Visual Perception with Sub-quadratic Complexity.
  • [2021] Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, and Lingpeng Kong, Random Feature Attention.
    Evaluation of These Architectures.
  • [2023] Jun Zhang*, Shuyang Jiang*, Jiangtao Feng, Lin Zheng, and Lingpeng Kong, CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling.
In this group of papers, we explore key issues related to large language models (LLMs).

    In-context Learning.
  • [2023] Zhiyong Wu*, Yaoxiang Wang*, Jiacheng Ye*, and Lingpeng Kong, Self-adaptive In-context Learning.
  • [2023] Jiacheng Ye, Zhiyong Wu, Jiangtao Feng, Tao Yu, and Lingpeng Kong, Compositional Exemplars for In-context Learning.
    Principles of Data Synthesis.
  • [2023] Jiahui Gao, Renjie Pi, Yong Lin, Hang Xu, Jiacheng Ye, Zhiyong Wu, Weizhong Zhang, Xiaodan Liang, Zhenguo Li, and Lingpeng Kong, Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning.
  • [2022] Jiacheng Ye, Jiahui Gao, Zhiyong Wu, Jiangtao Feng, Tao Yu, and Lingpeng Kong, ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback.
  • [2023] Jiacheng Ye, Chengzu Li, Lingpeng Kong, and Tao Yu, Generating Data for Symbolic Language with Large Language Models.
  • [2022] Jiacheng Ye*, Jiahui Gao*, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong, ZeroGen: Efficient Zero-shot Learning via Dataset Generation.
    Ultra-long Sequences.
  • [2024] Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, and Lingpeng Kong, Training-Free Long-Context Scaling of Large Language Models.
  • [2023] Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu, L-Eval: Instituting Standardized Evaluation for Long Context Language Models.
In this group of papers, we explore computational science enabled by AI.

    Mathematics.
  • [2024] Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, and Wei Bi, GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers.
  • [2023] Xueliang Zhao, Wenda Li, and Lingpeng Kong, Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal Theorem Proving.
  • [2023] Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, and Lingpeng Kong, G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language Model.
  • [2023] Xueliang Zhao, Xinting Huang, Wei Bi, and Lingpeng Kong, SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving.
    Biology and Chemistry.
  • [2023] Chang Ma, Haiteng Zhao, Lin Zheng, Jiayi Xin, Qintong Li, Lijun Wu, Zhihong Deng, Yang Lu, Qi Liu, and Lingpeng Kong, Retrieved Sequence Augmentation for Protein Representation Learning.
  • [2023] Haiteng Zhao, Shengchao Liu, Chang Ma, Hannan Xu, Jie Fu, Zhi-Hong Deng, Lingpeng Kong, and Qi Liu, GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning.
Starting in 2024, I have discontinued the traditional list-of-publications format, as I believe it fails to provide an intuitive view of our research. Instead, I maintain this active, self-curated selection of papers. I have intentionally omitted publication venue information, so the entries above may be arXiv preprints or peer-reviewed conference or journal papers; for venue details, kindly refer to my resume. The selection reflects only my personal, subjective preferences.