Dara Bahri
Research Scientist, Google Research
Verified email at google.com
Title · Cited by · Year
Efficient transformers: A survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys 55 (6), 1-28, 2022
Cited by 1044 · 2022
Long range arena: A benchmark for efficient transformers
Y Tay, M Dehghani, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
arXiv preprint arXiv:2011.04006, 2020
Cited by 496 · 2020
Berkeley advanced reconstruction toolbox
M Uecker, F Ong, JI Tamir, D Bahri, P Virtue, JY Cheng, T Zhang, M Lustig
Proc. Intl. Soc. Mag. Reson. Med 23 (2486), 2015
Cited by 412 · 2015
Synthesizer: Rethinking self-attention for transformer models
Y Tay, D Bahri, D Metzler, DC Juan, Z Zhao, C Zheng
International Conference on Machine Learning, 10183-10192, 2021
Cited by 311 · 2021
Sparse Sinkhorn attention
Y Tay, D Bahri, L Yang, D Metzler, DC Juan
International Conference on Machine Learning, 9438-9447, 2020
Cited by 283 · 2020
UL2: Unifying language learning paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
arXiv preprint arXiv:2205.05131, 2022
Cited by 267 · 2022
ExT5: Towards extreme multi-task scaling for transfer learning
V Aribandi, Y Tay, T Schuster, J Rao, HS Zheng, SV Mehta, H Zhuang, ...
arXiv preprint arXiv:2111.10952, 2021
Cited by 170 · 2021
Transformer memory as a differentiable search index
Y Tay, V Tran, M Dehghani, J Ni, D Bahri, H Mehta, Z Qin, K Hui, Z Zhao, ...
Advances in Neural Information Processing Systems 35, 21831-21843, 2022
Cited by 146 · 2022
SCARF: Self-supervised contrastive learning using random feature corruption
D Bahri, H Jiang, Y Tay, D Metzler
arXiv preprint arXiv:2106.15147, 2021
Cited by 111 · 2021
Charformer: Fast character transformers via gradient-based subword tokenization
Y Tay, VQ Tran, S Ruder, J Gupta, HW Chung, D Bahri, Z Qin, ...
arXiv preprint arXiv:2106.12672, 2021
Cited by 110 · 2021
Rethinking search: Making domain experts out of dilettantes
D Metzler, Y Tay, D Bahri, M Najork
ACM SIGIR Forum 55 (1), 1-27, 2021
Cited by 101 · 2021
Confident adaptive language modeling
T Schuster, A Fisch, J Gupta, M Dehghani, D Bahri, V Tran, Y Tay, ...
Advances in Neural Information Processing Systems 35, 17456-17472, 2022
Cited by 85 · 2022
Are pre-trained convolutions better than pre-trained transformers?
Y Tay, M Dehghani, J Gupta, D Bahri, V Aribandi, Z Qin, D Metzler
arXiv preprint arXiv:2105.03322, 2021
Cited by 77 · 2021
Deep k-NN for noisy labels
D Bahri, H Jiang, M Gupta
International Conference on Machine Learning, 540-550, 2020
Cited by 72 · 2020
Sharpness-aware minimization improves language model generalization
D Bahri, H Mobahi, Y Tay
arXiv preprint arXiv:2110.08529, 2021
Cited by 69 · 2021
StructFormer: Joint unsupervised induction of dependency and constituency structure from masked language modeling
Y Shen, Y Tay, C Zheng, D Bahri, D Metzler, A Courville
arXiv preprint arXiv:2012.00857, 2020
Cited by 39 · 2020
HyperGrid transformers: Towards a single model for multiple tasks
Y Tay, Z Zhao, D Bahri, D Metzler, DC Juan
International Conference on Learning Representations, 2020
Cited by 32 · 2020
OmniNet: Omnidirectional representations from transformers
Y Tay, M Dehghani, V Aribandi, J Gupta, PM Pham, Z Qin, D Bahri, ...
International Conference on Machine Learning, 10193-10202, 2021
Cited by 30 · 2021
Diminishing returns shape constraints for interpretability and regularization
M Gupta, D Bahri, A Cotter, K Canini
Advances in Neural Information Processing Systems 31, 2018
Cited by 26 · 2018
Reverse engineering configurations of neural text generation models
Y Tay, D Bahri, C Zheng, C Brunk, D Metzler, A Tomkins
arXiv preprint arXiv:2004.06201, 2020
Cited by 24 · 2020
Articles 1–20