LoRA: Low-rank adaptation of large language models EJ Hu, Y Shen, P Wallis, Z Allen-Zhu, Y Li, S Wang, L Wang, W Chen arXiv preprint arXiv:2106.09685, 2021 | 3728 | 2021 |
On the variance of the adaptive learning rate and beyond L Liu, H Jiang, P He, W Chen, X Liu, J Gao, J Han arXiv preprint arXiv:1908.03265, 2019 | 2043 | 2019 |
DeBERTa: Decoding-enhanced BERT with disentangled attention P He, X Liu, J Gao, W Chen arXiv preprint arXiv:2006.03654, 2020 | 1955 | 2020 |
Multi-task deep neural networks for natural language understanding X Liu, P He, W Chen, J Gao arXiv preprint arXiv:1901.11504, 2019 | 1340 | 2019 |
What Makes Good In-Context Examples for GPT-3? J Liu, D Shen, Y Zhang, B Dolan, L Carin, W Chen arXiv preprint arXiv:2101.06804, 2021 | 800 | 2021 |
DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing P He, J Gao, W Chen arXiv preprint arXiv:2111.09543, 2021 | 587 | 2021 |
SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization H Jiang, P He, W Chen, X Liu, J Gao, T Zhao arXiv preprint arXiv:1911.03437, 2019 | 416 | 2019 |
ReasoNet: Learning to stop reading in machine comprehension Y Shen, PS Huang, J Gao, W Chen Proceedings of the 23rd ACM SIGKDD international conference on knowledge …, 2017 | 333 | 2017 |
Short text conceptualization using a probabilistic knowledgebase Y Song, H Wang, Z Wang, H Li, W Chen Proceedings of the twenty-second international joint conference on …, 2011 | 291 | 2011 |
Understanding the difficulty of training transformers L Liu, X Liu, J Gao, W Chen, J Han arXiv preprint arXiv:2004.08249, 2020 | 241 | 2020 |
Check your facts and try again: Improving large language models with external knowledge and automated feedback B Peng, M Galley, P He, H Cheng, Y Xie, Y Hu, Q Huang, L Liden, Z Yu, ... arXiv preprint arXiv:2302.12813, 2023 | 238 | 2023 |
FusionNet: Fusing via fully-aware attention with application to machine comprehension HY Huang, C Zhu, Y Shen, W Chen arXiv preprint arXiv:1711.07341, 2017 | 204 | 2017 |
Improving multi-task deep neural networks via knowledge distillation for natural language understanding X Liu, P He, W Chen, J Gao arXiv preprint arXiv:1904.09482, 2019 | 198 | 2019 |
Document transformation for multi-label feature selection in text categorization W Chen, J Yan, B Zhang, Z Chen, Q Yang Seventh IEEE International Conference on Data Mining (ICDM 2007), 451-456, 2007 | 181 | 2007 |
On the advance of making language models better reasoners Y Li, Z Lin, S Zhang, Q Fu, B Chen, JG Lou, W Chen arXiv preprint arXiv:2206.02336, 2022 | 176* | 2022 |
AGIEval: A human-centric benchmark for evaluating foundation models W Zhong, R Cui, Y Guo, Y Liang, S Lu, Y Wang, A Saied, W Chen, ... arXiv preprint arXiv:2304.06364, 2023 | 162 | 2023 |
Generation-augmented retrieval for open-domain question answering Y Mao, P He, X Liu, Y Shen, J Gao, J Han, W Chen arXiv preprint arXiv:2009.08553, 2020 | 162 | 2020 |
Adversarial training for large neural language models X Liu, H Cheng, P He, W Chen, Y Wang, H Poon, J Gao arXiv preprint arXiv:2004.08994, 2020 | 161 | 2020 |
TAPEX: Table pre-training via learning a neural SQL executor Q Liu, B Chen, J Guo, M Ziyadi, Z Lin, W Chen, JG Lou arXiv preprint arXiv:2107.07653, 2021 | 160 | 2021 |
CodeT: Code generation with generated tests B Chen, F Zhang, A Nguyen, D Zan, Z Lin, JG Lou, W Chen arXiv preprint arXiv:2207.10397, 2022 | 155 | 2022 |