Yixuan Su
Research Scientist at Cohere
Verified email at cohere.com - Homepage
Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System
Y Su, L Shu, E Mansimov, A Gupta, D Cai, YA Lai, Y Zhang
ACL'22, 2022
A Contrastive Framework for Neural Text Generation
Y Su, T Lan, Y Wang, D Yogatama, L Kong, N Collier
NeurIPS'22 (Spotlight), 2022
Language models can see: plugging visual controls in text generation
Y Su, T Lan, Y Liu, F Liu, D Yogatama, Y Wang, L Kong, N Collier
arXiv preprint arXiv:2205.02655, 2022
A survey on retrieval-augmented text generation
H Li, Y Su, D Cai, Y Wang, L Liu
arXiv preprint arXiv:2202.01110, 2022
PandaGPT: One model to instruction-follow them all
Y Su, T Lan, H Li, J Xu, Y Wang, D Cai
TLLM'23, 2023
Plan-then-Generate: Controlled Data-to-Text Generation via Planning
Y Su, D Vandyke, S Wang, Y Fang, N Collier
EMNLP'21-Findings, 2021
TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning
Y Su, F Liu, Z Meng, L Shu, E Shareghi, N Collier
NAACL'22-Findings, 2022
Dialogue Response Selection with Hierarchical Curriculum Learning
Y Su, D Cai, Q Zhou, Z Lin, S Baker, Y Cao, S Shi, N Collier, Y Wang
ACL'21, 2021
Non-autoregressive text generation with pre-trained language models
Y Su, D Cai, Y Wang, D Vandyke, S Baker, P Li, N Collier
EACL'21, 2021
Prototype-to-style: Dialogue generation with style-aware editing on retrieval memory
Y Su, Y Wang, D Cai, S Baker, A Korhonen, N Collier
TASLP'21, 2021
Contrastive search is what you need for neural text generation
Y Su, N Collier
TMLR'23, 2023
Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models
Z Meng, F Liu, E Shareghi, Y Su, C Collins, N Collier
ACL'22, 2022
Few-Shot Table-to-Text Generation with Prototype Memory
Y Su, Z Meng, S Baker, N Collier
EMNLP'21-Findings, 2021
Keep the Primary, Rewrite the Secondary: A Two-Stage Approach for Paraphrase Generation
Y Su, D Vandyke, S Baker, Y Wang, N Collier
ACL'21-Findings, 2021
Exploring dense retrieval for dialogue response selection
T Lan, D Cai, Y Wang, Y Su, H Huang, XL Mao
arXiv preprint arXiv:2110.06612, 2021
Stylistic dialogue generation via information-guided reinforcement learning strategy
Y Su, D Cai, Y Wang, S Baker, A Korhonen, N Collier, X Liu
arXiv preprint arXiv:2004.02202, 2020
Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models
Y Huang, Z Meng, F Liu, Y Su, N Collier, Y Lu
arXiv preprint arXiv:2308.16463, 2023
An empirical study on contrastive search and contrastive decoding for open-ended text generation
Y Su, J Xu
arXiv preprint arXiv:2211.10797, 2022
From Easy to Hard: A Dual Curriculum Learning Framework for Context-Aware Document Ranking
Y Zhu, JY Nie, Y Su, H Chen, X Zhang, Z Dou
CIKM'22, 2022
Measuring and Reducing Model Update Regression in Structured Prediction for NLP
D Cai, E Mansimov, YA Lai, Y Su, L Shu, Y Zhang
NeurIPS'22, 2022