Ping (Iris) Yu
FAIR researcher at Meta AI
Verified email at buffalo.edu - Homepage
Title
Cited by
Year
LIMA: Less is more for alignment
C Zhou, P Liu, P Xu, S Iyer, J Sun, Y Mao, X Ma, A Efrat, P Yu, L Yu, ...
Advances in Neural Information Processing Systems 36, 2024
400 · 2024
Self-alignment with instruction backtranslation
X Li, P Yu, C Zhou, T Schick, L Zettlemoyer, O Levy, J Weston, M Lewis
arXiv preprint arXiv:2308.06259, 2023
73 · 2023
OPT-IML: Scaling language model instruction meta learning through the lens of generalization
S Iyer, XV Lin, R Pasunuru, T Mihaylov, D Simig, P Yu, K Shuster, T Wang, ...
arXiv preprint arXiv:2212.12017, 2022
62 · 2022
Learning diverse stochastic human-action generators by learning smooth latent transitions
Z Wang, P Yu, Y Zhao, R Zhang, Y Zhou, J Yuan, C Chen
Proceedings of the AAAI conference on artificial intelligence 34 (07), 12281 …, 2020
42 · 2020
Feature quantization improves GAN training
Y Zhao, C Li, P Yu, J Gao, C Chen
arXiv preprint arXiv:2004.02088, 2020
38 · 2020
Structure-aware human-action generation
P Yu, Y Zhao, C Li, J Yuan, C Chen
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
37 · 2020
Shepherd: A critic for language model generation
T Wang, P Yu, XE Tan, S O'Brien, R Pasunuru, J Dwivedi-Yu, ...
arXiv preprint arXiv:2308.04592, 2023
26 · 2023
Bayesian meta sampling for fast uncertainty adaptation
Z Wang, Y Zhao, P Yu, R Zhang, C Chen
International Conference on Learning Representations, 2019
20 · 2019
ALERT: Adapting language models to reasoning tasks
P Yu, T Wang, O Golovneva, B AlKhamissi, S Verma, Z Jin, G Ghosh, ...
arXiv preprint arXiv:2212.08286, 2022
12 · 2022
Efficient language modeling with sparse all-MLP
P Yu, M Artetxe, M Ott, S Shleifer, H Gong, V Stoyanov, X Li
arXiv preprint arXiv:2203.06850, 2022
11 · 2022
The art of LLM refinement: Ask, refine, and trust
K Shridhar, K Sinha, A Cohen, T Wang, P Yu, R Pasunuru, M Sachan, ...
arXiv preprint arXiv:2311.07961, 2023
9 · 2023
ReMP: Rectified metric propagation for few-shot learning
Y Zhao, C Li, P Yu, C Chen
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
8 · 2021
OPT-R: Exploring the Role of Explanations in Finetuning and Prompting for Reasoning Skills of Large Language Models
B AlKhamissi, S Verma, P Yu, Z Jin, A Celikyilmaz, M Diab
arXiv preprint arXiv:2305.12001, 2023
5 · 2023
Rethinking sentiment style transfer
P Yu, Y Zhao, C Li, C Chen
Findings of the Association for Computational Linguistics: EMNLP 2021, 1569-1582, 2021
5 · 2021
Improve variational autoencoder for text generation with discrete latent bottleneck
Y Zhao, P Yu, S Mahapatra, Q Su, C Chen
arXiv preprint arXiv:2004.10603, 2020
5 · 2020
Low-power wireless sensor network protocol of mobile health based on IPv6
L Wang, S Hao, P Yu, Z Huang
2016 35th Chinese Control Conference (CCC), 8479-8484, 2016
5 · 2016
LIMA: less is more for alignment (2023)
C Zhou, P Liu, P Xu, S Iyer, J Sun, Y Mao, X Ma, A Efrat, P Yu, L Yu, ...
WARNING: APPENDIX C CONTAINS EXAMPLES OF TOXIC USER INPUTS, WHICH MAY INCLUDE …
5
SDA: Improving text generation with self data augmentation
P Yu, R Zhang, Y Zhao, Y Zhang, C Li, C Chen
arXiv preprint arXiv:2101.03236, 2021
4 · 2021
Discretized Bottleneck: Posterior-Collapse-Free Sequence-to-Sequence Learning
Y Zhao, P Yu, S Mahapatra, Q Su, C Chen
4 · 2020