Zhiyong Wu
Shanghai AI Lab
Verified email at cs.hku.hk - Homepage
Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT
Z Wu, Y Chen, B Kao, Q Liu
arXiv preprint arXiv:2004.14786, 2020
A Survey for In-context Learning
Q Dong, L Li, D Dai, C Zheng, Z Wu, B Chang, X Sun, J Xu, Z Sui
arXiv preprint arXiv:2301.00234, 2022
Next: a neural network framework for next POI recommendation
Z Zhang, C Li, Z Wu, A Sun, D Ye, X Luo
Frontiers of Computer Science 14, 314-333, 2020
DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
S Gong, M Li, J Feng, Z Wu, LP Kong
arXiv preprint arXiv:2210.08933, 2022
ZeroGen: Efficient Zero-shot Learning via Dataset Generation
J Ye, J Gao, Q Li, H Xu, J Feng, Z Wu, T Yu, L Kong
arXiv preprint arXiv:2202.07922, 2022
Good for misconceived reasons: An empirical revisiting on the need for visual context in multimodal machine translation
Z Wu, L Kong, W Bi, X Li, B Kao
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
Towards practical open knowledge base canonicalization
TH Wu, Z Wu, B Kao, P Yin
Proceedings of the 27th ACM International Conference on Information and …, 2018
PERQ: Predicting, Explaining, and Rectifying Failed Questions in KB-QA Systems
Z Wu, B Kao, TH Wu, P Yin, Q Liu
ACM International Conference on Web Search and Data Mining (WSDM), 2020
Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering
Z Wu, Y Wang, J Ye, L Kong
Compositional Exemplars for In-context Learning
J Ye, Z Wu, J Feng, T Yu, L Kong
arXiv preprint arXiv:2302.05698, 2023
ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback
J Ye, J Gao, J Feng, Z Wu, T Yu, L Kong
arXiv preprint arXiv:2210.12329, 2022
Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning
J Gao, R Pi, Y Lin, H Xu, J Ye, Z Wu, W Zhang, X Liang, Z Li, L Kong
The Eleventh International Conference on Learning Representations, 2022
COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization
C An, M Zhong, Z Wu, Q Zhu, X Huang, X Qiu
arXiv preprint arXiv:2209.14569, 2022
OpenICL: An Open-Source Framework for In-context Learning
Z Wu, YX Wang, J Ye, J Feng, J Xu, Y Qiao, Z Wu
arXiv preprint arXiv:2303.02913, 2023
In-Context Learning with Many Demonstration Examples
M Li, S Gong, J Feng, Y Xu, J Zhang, Z Wu, L Kong
arXiv preprint arXiv:2302.04931, 2023
Lexical Knowledge Internalization for Neural Dialog Generation
Z Wu, W Bi, X Li, L Kong, B Kao
arXiv preprint arXiv:2205.01941, 2022
Cascaded Head-colliding Attention
L Zheng, Z Wu, L Kong
arXiv preprint arXiv:2105.14850, 2021
Unsupervised Explanation Generation via Correct Instantiations
S Cheng, Z Wu, J Chen, Z Li, Y Liu, L Kong
Proceedings of the AAAI Conference on Artificial Intelligence 37 (11), 12700 …, 2023
Can We Edit Factual Knowledge by In-Context Learning?
C Zheng, L Li, Q Dong, Y Fan, Z Wu, J Xu, B Chang
arXiv preprint arXiv:2305.12740, 2023
Explanation Regeneration via Information Bottleneck
Q Li, Z Wu, L Kong, W Bi
arXiv preprint arXiv:2212.09603, 2022