Brian O'Horo
Meta AI (FAIR)
Verified email at meta.com
Title
Cited by
Year
Efficient large scale language modeling with mixtures of experts
M Artetxe, S Bhosale, N Goyal, T Mihaylov, M Ott, S Shleifer, XV Lin, J Du, ...
arXiv preprint arXiv:2112.10684, 2021
73
2021
OPT-IML: Scaling language model instruction meta learning through the lens of generalization
S Iyer, XV Lin, R Pasunuru, T Mihaylov, D Simig, P Yu, K Shuster, T Wang, ...
arXiv preprint arXiv:2212.12017, 2022
62
2022
Few-shot learning with multilingual language models
XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ...
arXiv preprint arXiv:2112.10668, 2021
37
2021
Few-shot learning with multilingual generative language models
XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ...
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
30
2022
Articles 1–4