Longyue Wang
Tencent AI Lab
Verified email at tencent.com · Homepage
Title
Cited by
Year
Siren's song in the AI ocean: a survey on hallucination in large language models
Y Zhang, Y Li, L Cui, D Cai, L Liu, T Fu, X Huang, E Zhao, Y Zhang, ...
arXiv preprint arXiv:2309.01219, 2023
486 · 2023
Exploiting cross-sentence context for neural machine translation
L Wang, Z Tu, A Way, Q Liu
arXiv preprint arXiv:1704.04347, 2017
229 · 2017
Convolutional self-attention networks
B Yang, L Wang, D Wong, LS Chao, Z Tu
arXiv preprint arXiv:1904.03107, 2019
141 · 2019
UM-Corpus: A Large English-Chinese Parallel Corpus for Statistical Machine Translation.
L Tian, DF Wong, LS Chao, P Quaresma, F Oliveira, L Yi, S Li, Y Wang, ...
LREC, 1837-1842, 2014
132 · 2014
Understanding and improving lexical choice in non-autoregressive translation
L Ding, L Wang, X Liu, DF Wong, D Tao, Z Tu
arXiv preprint arXiv:2012.14583, 2020
110 · 2020
Document-level machine translation with large language models
L Wang, C Lyu, T Ji, Z Zhang, D Yu, S Shi, Z Tu
arXiv preprint arXiv:2304.02210, 2023
100 · 2023
Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration
C Lyu, M Wu, L Wang, X Huang, B Liu, Z Du, S Shi, Z Tu
arXiv preprint arXiv:2306.09093, 2023
98 · 2023
Modeling recurrence for transformer
J Hao, X Wang, B Yang, L Wang, J Zhang, Z Tu
arXiv preprint arXiv:1904.03092, 2019
91 · 2019
Self-attention with structural position representations
X Wang, Z Tu, L Wang, S Shi
arXiv preprint arXiv:1909.00383, 2019
76 · 2019
Self-attention with cross-lingual position representation
L Ding, L Wang, D Tao
arXiv preprint arXiv:2004.13310, 2020
74 · 2020
Context-aware cross-attention for non-autoregressive translation
L Ding, L Wang, D Wu, D Tao, Z Tu
arXiv preprint arXiv:2011.00770, 2020
69 · 2020
Rejuvenating low-frequency words: Making the most of parallel data in non-autoregressive translation
L Ding, L Wang, X Liu, DF Wong, D Tao, Z Tu
arXiv preprint arXiv:2106.00903, 2021
60 · 2021
Redistributing low-frequency words: Making the most of monolingual data in non-autoregressive translation
L Ding, L Wang, S Shi, D Tao, Z Tu
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
56 · 2022
Dynamic layer aggregation for neural machine translation with routing-by-agreement
ZY Dou, Z Tu, X Wang, L Wang, S Shi, T Zhang
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 86-93, 2019
53 · 2019
Translating pro-drop languages with reconstruction models
L Wang, Z Tu, S Shi, T Zhang, Y Graham, Q Liu
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
48 · 2018
A novel approach to dropped pronoun translation
L Wang, Z Tu, X Zhang, H Li, A Way, Q Liu
arXiv preprint arXiv:1604.06285, 2016
48 · 2016
Progressive multi-granularity training for non-autoregressive translation
L Ding, L Wang, X Liu, DF Wong, D Tao, Z Tu
arXiv preprint arXiv:2106.05546, 2021
43 · 2021
Towards understanding neural machine translation with word importance
S He, Z Tu, X Wang, L Wang, MR Lyu, S Shi
arXiv preprint arXiv:1909.00326, 2019
43 · 2019
New trends in machine translation using large language models: Case examples with chatgpt
C Lyu, J Xu, L Wang
arXiv preprint arXiv:2305.01181, 2023
42* · 2023
Assessing the ability of self-attention networks to learn word order
B Yang, L Wang, DF Wong, LS Chao, Z Tu
arXiv preprint arXiv:1906.00592, 2019
42 · 2019
Articles 1–20