Weijia Shi
Verified email at uw.edu - Homepage
Title
Cited by
Year
REPLUG: Retrieval-Augmented Black-Box Language Models
W Shi, S Min, M Yasunaga, M Seo, R James, M Lewis, L Zettlemoyer, ...
arXiv preprint arXiv:2301.12652, 2023
216* · 2023
Examining gender bias in languages with grammatical gender
P Zhou, W Shi, J Zhao, KH Huang, M Chen, R Cotterell, KW Chang
arXiv preprint arXiv:1909.02224, 2019
131* · 2019
Embedding uncertain knowledge graphs
X Chen, M Chen, W Shi, Y Sun, C Zaniolo
Proceedings of the AAAI conference on artificial intelligence 33 (01), 3363-3370, 2019
124 · 2019
Selective annotation makes language models better few-shot learners
H Su, J Kasai, CH Wu, W Shi, T Wang, J Xin, R Zhang, M Ostendorf, ...
arXiv preprint arXiv:2209.01975, 2022
117* · 2022
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
H Su, W Shi, J Kasai, Y Wang, Y Hu, M Ostendorf, W Yih, NA Smith, ...
arXiv preprint arXiv:2212.09741, 2022
106 · 2022
Fine-grained human feedback gives better rewards for language model training
Z Wu, Y Hu, W Shi, N Dziri, A Suhr, P Ammanabrolu, NA Smith, ...
Advances in Neural Information Processing Systems 36, 2024
92* · 2024
On tractable representations of binary neural networks
W Shi, A Shih, A Darwiche, A Choi
arXiv preprint arXiv:2004.02082, 2020
92* · 2020
Promptcap: Prompt-guided task-aware image captioning
Y Hu, H Hua, Z Yang, W Shi, NA Smith, J Luo
arXiv preprint arXiv:2211.09699, 2022
68* · 2022
Retrieval-augmented multimodal language modeling
M Yasunaga, A Aghajanyan, W Shi, R James, J Leskovec, P Liang, ...
arXiv preprint arXiv:2211.12561, 2022
46 · 2022
Detecting pretraining data from large language models
W Shi, A Ajith, M Xia, Y Huang, D Liu, T Blevins, D Chen, L Zettlemoyer
arXiv preprint arXiv:2310.16789, 2023
44 · 2023
Trusting your evidence: Hallucinate less with context-aware decoding
W Shi, X Han, M Lewis, Y Tsvetkov, L Zettlemoyer, SW Yih
arXiv preprint arXiv:2305.14739, 2023
43 · 2023
Nonparametric masked language modeling
S Min, W Shi, M Lewis, X Chen, W Yih, H Hajishirzi, L Zettlemoyer
arXiv preprint arXiv:2212.01349, 2022
39 · 2022
Nearest neighbor zero-shot inference
W Shi, J Michael, S Gururangan, L Zettlemoyer
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
37* · 2022
Cross-lingual entity alignment with incidental supervision
M Chen, W Shi, B Zhou, D Roth
arXiv preprint arXiv:2005.00171, 2020
36 · 2020
Retrofitting contextualized word embeddings with paraphrases
W Shi, M Chen, P Zhou, KW Chang
arXiv preprint arXiv:1909.09700, 2019
30 · 2019
Lemur: Harmonizing natural language and code for language agents
Y Xu, H Su, C Xing, B Mi, Q Liu, W Shi, B Hui, F Zhou, Y Liu, T Xie, ...
arXiv preprint arXiv:2310.06830, 2023
27 · 2023
Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?
W Shi, X Han, H Gonen, A Holtzman, Y Tsvetkov, L Zettlemoyer
arXiv preprint arXiv:2212.10539, 2022
23 · 2022
Design challenges in low-resource cross-lingual entity linking
X Fu, W Shi, X Yu, Z Zhao, D Roth
arXiv preprint arXiv:2005.00692, 2020
21* · 2020
Ra-dit: Retrieval-augmented dual instruction tuning
XV Lin, X Chen, M Chen, W Shi, M Lomeli, R James, P Rodriguez, J Kahn, ...
arXiv preprint arXiv:2310.01352, 2023
17 · 2023
Recomp: Improving retrieval-augmented lms with compression and selective augmentation
F Xu, W Shi, E Choi
arXiv preprint arXiv:2310.04408, 2023
16 · 2023
Articles 1–20