Adams Wei Yu
Research Scientist, Google DeepMind
Verified email at cs.cmu.edu - Homepage
Title
Cited by
Year
Finetuned language models are zero-shot learners
J Wei, M Bosma, VY Zhao, K Guu, AW Yu, B Lester, N Du, AM Dai, QV Le
ICLR 2022, 2022
1914
2022
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
1711
2022
QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
AW Yu, D Dohan, MT Luong, R Zhao, K Chen, M Norouzi, QV Le
ICLR 2018, 2018
1273*
2018
SimVLM: Simple visual language model pretraining with weak supervision
Z Wang, J Yu, AW Yu, Z Dai, Y Tsvetkov, Y Cao
ICLR 2022, 2022
634
2022
GLaM: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ...
ICML 2022, 2022
445*
2022
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
357
2023
DeepFusion: Lidar-camera deep fusion for multi-modal 3D object detection
Y Li, AW Yu, T Meng, B Caine, J Ngiam, D Peng, J Shen, Y Lu, D Zhou, ...
CVPR 2022, 2022
226
2022
Orthogonal weight normalization: Solution to optimization over multiple dependent Stiefel manifolds in deep neural networks
L Huang, X Liu, B Lang, AW Yu, B Li
AAAI 2018, 2017
213
2017
Combined scaling for zero-shot transfer learning
H Pham, Z Dai, G Ghiasi, H Liu, AW Yu, MT Luong, M Tan, QV Le
arXiv preprint arXiv:2111.10050, 2021
177*
2021
Learning to skim text
AW Yu, H Lee, QV Le
ACL 2017, 2017
156
2017
Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension
X Chen, C Liang, AW Yu, D Zhou, D Song, QV Le
ICLR 2020, 2019
109
2019
Compositional generalization via neural-symbolic stack machines
X Chen, C Liang, AW Yu, D Song, D Zhou
NeurIPS 2020, 2020
88
2020
Large language models cannot self-correct reasoning yet
J Huang, X Chen, S Mishra, HS Zheng, AW Yu, X Song, D Zhou
arXiv preprint arXiv:2310.01798, 2023
84
2023
AdaDelay: Delay adaptive distributed stochastic convex optimization
S Sra, AW Yu, M Li, AJ Smola
AISTATS 2016, 2016
83*
2016
Towards zero-label language learning
Z Wang, AW Yu, O Firat, Y Cao
arXiv preprint arXiv:2109.09193, 2021
71
2021
AutoHAS: Efficient hyperparameter and architecture search
X Dong, M Tan, AW Yu, D Peng, B Gabrys, QV Le
arXiv preprint arXiv:2006.03656, 2020
67*
2020
On computationally tractable selection of experiments in measurement-constrained regression models
Y Wang, AW Yu, A Singh
The Journal of Machine Learning Research 18 (1), 5238-5278, 2017
65*
2017
An improved gap-dependency analysis of the noisy power method
MF Balcan, SS Du, Y Wang, AW Yu
COLT 2016, 2016
65
2016
DSCOVR: Randomized primal-dual block coordinate algorithms for asynchronous distributed optimization
L Xiao, AW Yu, Q Lin, W Chen
Journal of Machine Learning Research 20 (43), 1-58, 2019
54
2019
Block-normalized gradient method: An empirical study for training deep neural network
AW Yu, L Huang, Q Lin, R Salakhutdinov, J Carbonell
44*
2018
Articles 1–20