Peng Shi
Verified email at uwaterloo.ca
Simple BERT models for relation extraction and semantic role labeling
P Shi, J Lin
arXiv preprint arXiv:1904.05255, 2019
Strong baselines for simple question answering over knowledge graphs with and without neural networks
S Mohammed, P Shi, J Lin
arXiv preprint arXiv:1712.01969, 2017
Aligning cross-lingual entities with multi-aspect information
HW Yang, Y Zou, P Shi, W Lu, J Lin, X Sun
arXiv preprint arXiv:1910.06575, 2019
Bridging the gap between relevance matching and semantic matching for short text similarity modeling
J Rao, L Liu, Y Tay, W Yang, P Shi, J Lin
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Learning contextual representations for semantic parsing with generation-augmented pre-training
P Shi, P Ng, Z Wang, H Zhu, AH Li, J Wang, CN dos Santos, B Xiang
Proceedings of the AAAI Conference on Artificial Intelligence 35 (15), 13806 …, 2021
Farewell Freebase: Migrating the SimpleQuestions dataset to DBpedia
M Azmy, P Shi, J Lin, I Ilyas
Proceedings of the 27th international conference on computational …, 2018
UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models
T Xie, CH Wu, P Shi, R Zhong, T Scholak, M Yasunaga, CS Wu, M Zhong, ...
arXiv preprint arXiv:2201.05966, 2022
Matching entities across different knowledge graphs with graph embeddings
M Azmy, P Shi, J Lin, IF Ilyas
arXiv preprint arXiv:1903.06607, 2019
Mr. TyDi: A multi-lingual benchmark for dense retrieval
X Zhang, X Ma, P Shi, J Lin
arXiv preprint arXiv:2108.08787, 2021
Cross-lingual training of neural models for document ranking
P Shi, H Bai, J Lin
Findings of the Association for Computational Linguistics: EMNLP 2020, 2768-2773, 2020
Segatron: Segment-aware transformer for language modeling and understanding
H Bai, P Shi, J Lin, Y Xie, L Tan, K Xiong, W Gao, M Li
Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12526 …, 2021
Cross-lingual training with dense retrieval for document retrieval
P Shi, R Zhang, H Bai, J Lin
arXiv preprint arXiv:2109.01628, 2021
Logic-Consistency Text Generation from Semantic Parses
C Shu, Y Zhang, X Dong, P Shi, T Yu, R Zhang
Findings of ACL 2021, 2021
Simple attention-based representation learning for ranking short social media posts
P Shi, J Rao, J Lin
arXiv preprint arXiv:1811.01013, 2018
Cross-lingual relevance transfer for document retrieval
P Shi, J Lin
arXiv preprint arXiv:1911.02989, 2019
Power-saving transportation mode identification for large-scale applications
Y Zhou, J Wang, P Shi, D Dahlmeier, N Tippenhauer, E Wilhelm
arXiv preprint arXiv:1701.05768, 2017
Exploiting Mutual Benefits between Syntax and Semantic Roles using Neural Network.
P Shi, Z Teng, Y Zhang
EMNLP, 968-974, 2016
Joint bi-affine parsing and semantic role labeling
P Shi, Y Zhang
2017 International Conference on Asian Language Processing (IALP), 338-341, 2017
Did you ask a good question? A cross-domain question intention classification benchmark for text-to-SQL
Y Zhang, X Dong, S Chang, T Yu, P Shi, R Zhang
arXiv preprint arXiv:2010.12634, 2020
Semantics of the Unwritten: The Effect of End of Paragraph and Sequence Tokens on Text Generation with GPT2
H Bai, P Shi, J Lin, L Tan, K Xiong, W Gao, J Liu, M Li
arXiv preprint arXiv:2004.02251, 2020