Sang Michael Xie
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 4276 · 2021
Combining satellite imagery and machine learning to predict poverty
N Jean, M Burke, M Xie, WM Davis, DB Lobell, S Ermon
Science 353 (6301), 790-794, 2016
Cited by 1875 · 2016
WILDS: A benchmark of in-the-wild distribution shifts
PW Koh, S Sagawa, H Marklund, SM Xie, M Zhang, A Balsubramani, ...
International Conference on Machine Learning (ICML), 5637-5664, 2021
Cited by 1483 · 2021
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
Cited by 1221* · 2022
An Explanation of In-context Learning as Implicit Bayesian Inference
SM Xie, A Raghunathan, P Liang, T Ma
International Conference on Learning Representations (ICLR), 2022
Cited by 642 · 2022
Transfer learning from deep features for remote sensing and poverty mapping
M Xie, N Jean, M Burke, D Lobell, S Ermon
AAAI, 2016
Cited by 545 · 2016
Adversarial training can hurt generalization
A Raghunathan*, SM Xie*, F Yang, JC Duchi, P Liang
arXiv preprint arXiv:1906.06032, 2019
Cited by 280 · 2019
Understanding and mitigating the tradeoff between robustness and accuracy
A Raghunathan*, SM Xie*, F Yang, J Duchi, P Liang
International Conference on Machine Learning (ICML), 2020
Cited by 261 · 2020
Weakly supervised deep learning for segmentation of remote sensing imagery
S Wang, W Chen, SM Xie, G Azzari, DB Lobell
Remote Sensing 12 (2), 207, 2020
Cited by 246 · 2020
Reward design with language models
M Kwon, SM Xie, K Bullard, D Sadigh
arXiv preprint arXiv:2303.00001, 2023
Cited by 195 · 2023
Extending the WILDS benchmark for unsupervised adaptation
S Sagawa, PW Koh, T Lee, I Gao, SM Xie, K Shen, A Kumar, W Hu, ...
arXiv preprint arXiv:2112.05090, 2021
Cited by 129 · 2021
Data selection for language models via importance resampling
SM Xie, S Santurkar, T Ma, PS Liang
Advances in Neural Information Processing Systems 36, 34201-34227, 2023
Cited by 126 · 2023
DoReMi: Optimizing data mixtures speeds up language model pretraining
SM Xie, H Pham, X Dong, N Du, H Liu, Y Lu, PS Liang, QV Le, T Ma, ...
Advances in Neural Information Processing Systems 36, 2023
Cited by 107 · 2023
Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation
K Shen*, R Jones*, A Kumar*, SM Xie*, JZ HaoChen, T Ma, P Liang
arXiv preprint arXiv:2204.00570, 2022
Cited by 105 · 2022
Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance
N Jean*, SM Xie*, S Ermon
Advances in Neural Information Processing Systems (NeurIPS), 2018
Cited by 101 · 2018
Reparameterizable Subset Sampling via Continuous Relaxations
SM Xie, S Ermon
IJCAI, 2019
Cited by 92 · 2019
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
C Wei, SM Xie, T Ma
Neural Information Processing Systems (NeurIPS), 2021
Cited by 91 · 2021
A survey on data selection for language models
A Albalak, Y Elazar, SM Xie, S Longpre, N Lambert, X Wang, ...
arXiv preprint arXiv:2402.16827, 2024
Cited by 62 · 2024
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
SM Xie*, A Kumar*, R Jones*, F Khani, T Ma, P Liang
International Conference on Learning Representations (ICLR), 2022
Cited by 61 · 2022
Same pre-training loss, better downstream: Implicit bias matters for language models
H Liu, SM Xie, Z Li, T Ma
International Conference on Machine Learning, 22188-22214, 2023
Cited by 37 · 2023
Articles 1–20