Tian Lan
Verified email at bit.edu.cn - Homepage
PandaGPT: One model to instruction-follow them all
Y Su, T Lan, H Li, J Xu, Y Wang, D Cai
arXiv preprint arXiv:2305.16355, 2023
A contrastive framework for neural text generation
Y Su, T Lan, Y Wang, D Yogatama, L Kong, N Collier
arXiv preprint arXiv:2202.06417, 2022
Language models can see: Plugging visual controls in text generation
Y Su, T Lan, Y Liu, F Liu, D Yogatama, Y Wang, L Kong, N Collier
arXiv preprint arXiv:2205.02655 4 (5), 2022
Food recommendation with graph convolutional network
X Gao, F Feng, H Huang, XL Mao, T Lan, Z Chi
Information Sciences 584, 170-183, 2022
TaCL: Improving BERT pre-training with token-aware contrastive learning
Y Su, F Liu, Z Meng, T Lan, L Shu, E Shareghi, N Collier
arXiv preprint arXiv:2111.04198, 2021
Pone: A novel automatic evaluation metric for open-domain generative dialogue systems
T Lan, XL Mao, W Wei, X Gao, H Huang
ACM Transactions on Information Systems (TOIS) 39 (1), 1-37, 2020
Exploring dense retrieval for dialogue response selection
T Lan, D Cai, Y Wang, Y Su, H Huang, XL Mao
ACM Transactions on Information Systems 42 (3), 1-29, 2024
LASH: Large-scale academic deep semantic hashing
JN Guo, XL Mao, T Lan, RX Tu, W Wei, H Huang
IEEE Transactions on Knowledge and Data Engineering 35 (2), 1734-1746, 2021
Repetition in repetition out: Towards understanding neural text degeneration from the data perspective
H Li, T Lan, Z Fu, D Cai, L Liu, N Collier, T Watanabe, Y Su
Advances in Neural Information Processing Systems 36, 72888-72903, 2023
Cross-lingual phrase retrieval
H Zheng, X Zhang, Z Chi, H Huang, T Yan, T Lan, W Wei, XL Mao
arXiv preprint arXiv:2204.08887, 2022
When to talk: Chatbot controls the timing of talking during multi-turn open-domain dialogue generation
T Lan, X Mao, H Huang, W Wei
arXiv preprint arXiv:1912.09879, 2019
Towards Efficient Coarse-grained Dialogue Response Selection
T Lan, XL Mao, W Wei, X Gao, H Huang
ACM Transactions on Information Systems 42 (2), 1-32, 2023
Which kind is better in open-domain multi-turn dialog, hierarchical or non-hierarchical models? An empirical study
T Lan, XL Mao, W Wei, H Huang
arXiv preprint arXiv:2008.02964, 2020
CriticBench: Evaluating large language models as critic
T Lan, W Zhang, C Xu, H Huang, D Lin, K Chen, X Mao
arXiv preprint arXiv:2402.13764, 2024
Multi-task learning for low-resource second language acquisition modeling
Y Hu, H Huang, T Lan, X Wei, Y Nie, J Qi, L Yang, XL Mao
Web and Big Data: 4th International Joint Conference, APWeb-WAIM 2020 …, 2020
Copy Is All You Need
T Lan, D Cai, Y Wang, H Huang, XL Mao
https://openreview.net/forum?id=CROlOA9Nd8C, 2023
Momentum Decoding: Open-ended Text Generation As Graph Exploration
T Lan, Y Su, S Liu, H Huang, XL Mao
arXiv preprint arXiv:2212.02175, 2022
Generative Dialog Policy for Task-oriented Dialog Systems
T Lan, X Mao, H Huang
arXiv preprint arXiv:1909.09484, 2019