Tian Lan
Title
Cited by
Year
PandaGPT: One model to instruction-follow them all
Y Su, T Lan, H Li, J Xu, Y Wang, D Cai
arXiv preprint arXiv:2305.16355, 2023
Cited by 216 · 2023
A contrastive framework for neural text generation
Y Su, T Lan, Y Wang, D Yogatama, L Kong, N Collier
arXiv preprint arXiv:2202.06417, 2022
Cited by 196* · 2022
Language models can see: Plugging visual controls in text generation
Y Su, T Lan, Y Liu, F Liu, D Yogatama, Y Wang, L Kong, N Collier
arXiv preprint arXiv:2205.02655 4 (5), 2022
Cited by 106* · 2022
Food recommendation with graph convolutional network
X Gao, F Feng, H Huang, XL Mao, T Lan, Z Chi
Information Sciences 584, 170-183, 2022
Cited by 55 · 2022
TaCL: Improving BERT pre-training with token-aware contrastive learning
Y Su, F Liu, Z Meng, T Lan, L Shu, E Shareghi, N Collier
arXiv preprint arXiv:2111.04198, 2021
Cited by 54 · 2021
PONE: A novel automatic evaluation metric for open-domain generative dialogue systems
T Lan, XL Mao, W Wei, X Gao, H Huang
ACM Transactions on Information Systems (TOIS) 39 (1), 1-37, 2020
Cited by 44 · 2020
Exploring dense retrieval for dialogue response selection
T Lan, D Cai, Y Wang, Y Su, H Huang, XL Mao
ACM Transactions on Information Systems 42 (3), 1-29, 2024
Cited by 14* · 2024
LASH: Large-scale academic deep semantic hashing
JN Guo, XL Mao, T Lan, RX Tu, W Wei, H Huang
IEEE Transactions on Knowledge and Data Engineering 35 (2), 1734-1746, 2021
Cited by 8 · 2021
CriticBench: Evaluating large language models as critic
T Lan, W Zhang, C Xu, H Huang, D Lin, K Chen, X Mao
arXiv preprint arXiv:2402.13764, 2024
Cited by 6 · 2024
Repetition in repetition out: Towards understanding neural text degeneration from the data perspective
H Li, T Lan, Z Fu, D Cai, L Liu, N Collier, T Watanabe, Y Su
Advances in Neural Information Processing Systems 36, 72888-72903, 2023
Cited by 6 · 2023
Cross-lingual phrase retrieval
H Zheng, X Zhang, Z Chi, H Huang, T Yan, T Lan, W Wei, XL Mao
arXiv preprint arXiv:2204.08887, 2022
Cited by 5 · 2022
When to talk: Chatbot controls the timing of talking during multi-turn open-domain dialogue generation
T Lan, X Mao, H Huang, W Wei
arXiv preprint arXiv:1912.09879, 2019
Cited by 4 · 2019
Towards Efficient Coarse-grained Dialogue Response Selection
T Lan, XL Mao, W Wei, X Gao, H Huang
ACM Transactions on Information Systems 42 (2), 1-32, 2023
Cited by 3* · 2023
Which kind is better in open-domain multi-turn dialog, hierarchical or non-hierarchical models? An empirical study
T Lan, XL Mao, W Wei, H Huang
arXiv preprint arXiv:2008.02964, 2020
Cited by 3 · 2020
Multi-task learning for low-resource second language acquisition modeling
Y Hu, H Huang, T Lan, X Wei, Y Nie, J Qi, L Yang, XL Mao
Web and Big Data: 4th International Joint Conference, APWeb-WAIM 2020 …, 2020
Cited by 2 · 2020
Training Language Models to Critique With Multi-agent Feedback
T Lan, W Zhang, C Lyu, S Li, C Xu, H Huang, D Lin, XL Mao, K Chen
arXiv preprint arXiv:2410.15287, 2024
2024
Beyond Exact Match: Semantically Reassessing Event Extraction by Large Language Models
YF Lu, XL Mao, T Lan, C Xu, H Huang
arXiv preprint arXiv:2410.09418, 2024
2024
Block-Attention for Low-Latency RAG
E Sun, Y Wang, L Tian
arXiv preprint arXiv:2409.15355, 2024
2024
A Hierarchical Context Augmentation Method to Improve Retrieval-Augmented LLMs on Scientific Papers
TY Che, XL Mao, T Lan, H Huang
Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and …, 2024
2024
Articles 1–20