Weijia Shi
Verified email at uw.edu - Homepage
Title
Cited by
Year
Examining gender bias in languages with grammatical gender
P Zhou, W Shi, J Zhao, KH Huang, M Chen, R Cotterell, KW Chang
arXiv preprint arXiv:1909.02224, 2019
Cited by 99* · 2019
Embedding uncertain knowledge graphs
X Chen, M Chen, W Shi, Y Sun, C Zaniolo
Proceedings of the AAAI conference on artificial intelligence 33 (01), 3363-3370, 2019
Cited by 75 · 2019
On tractable representations of binary neural networks
W Shi, A Shih, A Darwiche, A Choi
arXiv preprint arXiv:2004.02082, 2020
Cited by 37 · 2020
Retrofitting contextualized word embeddings with paraphrases
W Shi, M Chen, P Zhou, KW Chang
arXiv preprint arXiv:1909.09700, 2019
Cited by 26 · 2019
Compiling neural networks into tractable Boolean circuits
A Choi, W Shi, A Shih, A Darwiche
intelligence, 2017
Cited by 24 · 2017
Cross-lingual entity alignment with incidental supervision
M Chen, W Shi, B Zhou, D Roth
arXiv preprint arXiv:2005.00171, 2020
Cited by 23 · 2020
Selective annotation makes language models better few-shot learners
H Su, J Kasai, CH Wu, W Shi, T Wang, J Xin, R Zhang, M Ostendorf, ...
arXiv preprint arXiv:2209.01975, 2022
Cited by 10 · 2022
Nearest neighbor zero-shot inference
W Shi, J Michael, S Gururangan, L Zettlemoyer
arXiv preprint arXiv:2205.13792, 2022
Cited by 8 · 2022
Learning bilingual word embeddings using lexical definitions
W Shi, M Chen, Y Tian, KW Chang
arXiv preprint arXiv:1906.08939, 2019
Cited by 8 · 2019
Design challenges in low-resource cross-lingual entity linking
X Fu, W Shi, X Yu, Z Zhao, D Roth
arXiv preprint arXiv:2005.00692, 2020
Cited by 7 · 2020
University of Pennsylvania LoReHLT 2019 Submission
S Mayhew, T Tsygankova, F Marini, Z Wang, J Lee, X Yu, X Fu, W Shi, K Karthikeyan, J Hay, M Shur, J Sheffield, D Roth
Technical report, 2019
Cited by 6 · 2019
DESCGEN: A Distantly Supervised Dataset for Generating Abstractive Entity Descriptions
W Shi, M Joshi, L Zettlemoyer
arXiv preprint arXiv:2106.05365, 2021
Cited by 4* · 2021
Analyzing and Mitigating Gender Bias in Languages with Grammatical Gender and Bilingual Word Embeddings
P Zhou, W Shi, J Zhao, KH Huang, M Chen, KW Chang
ACL: Montréal, QC, Canada, 2019
Cited by 4 · 2019
Nonparametric Masked Language Modeling
S Min, W Shi, M Lewis, X Chen, W Yih, H Hajishirzi, L Zettlemoyer
arXiv preprint arXiv:2212.01349, 2022
Cited by 1 · 2022
Retrieval-Augmented Multimodal Language Modeling
M Yasunaga, A Aghajanyan, W Shi, R James, J Leskovec, P Liang, ...
arXiv preprint arXiv:2211.12561, 2022
Cited by 1 · 2022
RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering
V Zhong, W Shi, W Yih, L Zettlemoyer
arXiv preprint arXiv:2210.14353, 2022
Cited by 1 · 2022
Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?
W Shi, X Han, H Gonen, A Holtzman, Y Tsvetkov, L Zettlemoyer
arXiv preprint arXiv:2212.10539, 2022
2022
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
H Su, J Kasai, Y Wang, Y Hu, M Ostendorf, W Yih, NA Smith, ...
arXiv preprint arXiv:2212.09741, 2022
2022
PromptCap: Prompt-Guided Task-Aware Image Captioning
Y Hu, H Hua, Z Yang, W Shi, NA Smith, J Luo
arXiv preprint arXiv:2211.09699, 2022
2022
Articles 1–19