Jack Parker-Holder
Research Scientist at DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
Effective Diversity in Population Based Reinforcement Learning
J Parker-Holder*, A Pacchiano*, K Choromanski, S Roberts
NeurIPS 2020 (Spotlight), 2020
85 · 2020
Provably Efficient Online Hyperparameter Optimization with Population-Based Bandits
J Parker-Holder, V Nguyen, S Roberts
NeurIPS 2020, 2020
43 · 2020
From Complexity to Simplicity: Adaptive ES-Active Subspaces for Blackbox Optimization
KM Choromanski*, A Pacchiano*, J Parker-Holder*, Y Tang*, ...
Advances in Neural Information Processing Systems, 10299-10309, 2019
39 · 2019
MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research
M Samvelyan, R Kirk, V Kurin, J Parker-Holder, M Jiang, E Hambro, ...
NeurIPS 2021 (Datasets and Benchmarks), 2021
36 · 2021
Ready Policy One: World Building Through Active Learning
P Ball*, J Parker-Holder*, A Pacchiano, K Choromanski, S Roberts
ICML 2020, 2020
35 · 2020
Provably Robust Blackbox Optimization for Reinforcement Learning
K Choromanski*, A Pacchiano*, J Parker-Holder*, Y Tang, D Jain, Y Yang, ...
Conference on Robot Learning, 683-696, 2019
31* · 2019
Towards Tractable Optimism in Model-Based Reinforcement Learning
A Pacchiano*, P Ball*, J Parker-Holder*, K Choromanski, S Roberts
UAI 2021, 2020
27* · 2020
Evolving Curricula with Regret-Based Environment Design
J Parker-Holder*, M Jiang*, M Dennis, M Samvelyan, J Foerster, ...
ICML 2022, 2022
26 · 2022
Replay-Guided Adversarial Environment Design
M Jiang*, M Dennis*, J Parker-Holder, J Foerster, E Grefenstette, ...
NeurIPS 2021, 2021
26 · 2021
Learning to Score Behaviors for Guided Policy Optimization
A Pacchiano*, J Parker-Holder*, Y Tang*, K Choromanski, ...
International Conference on Machine Learning, 7445-7454, 2020
22 · 2020
Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment
PJ Ball*, C Lu*, J Parker-Holder, S Roberts
ICML 2021, 2021
21 · 2021
Automated Reinforcement Learning (AutoRL): A Survey and Open Problems
J Parker-Holder*, R Rajan*, X Song*, A Biedenkapp, Y Miao, T Eimer, ...
JAIR, 2022
20 · 2022
Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian
J Parker-Holder*, L Metz, C Resnick, H Hu, A Lerer, A Letcher, ...
Advances in Neural Information Processing Systems 33, 2020
18 · 2020
Tactical Optimism and Pessimism for Deep Reinforcement Learning
T Moskovitz, J Parker-Holder, A Pacchiano, M Arbel, MI Jordan
NeurIPS 2021, 2021
17* · 2021
Same State, Different Task: Continual Reinforcement Learning without Interference
S Kessler, J Parker-Holder, P Ball, S Zohren, SJ Roberts
AAAI 2022 (Oral), 2022
9 · 2022
ES-ENAS: Blackbox Optimization over Hybrid Spaces via Combinatorial and Continuous Evolution
X Song, KM Choromanski, J Parker-Holder, Y Tang, D Peng, D Jain, ...
8* · 2021
Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL
J Parker-Holder, V Nguyen, S Desai, S Roberts
NeurIPS 2021, 2021
8 · 2021
Revisiting Design Choices in Model-Based Offline Reinforcement Learning
C Lu*, PJ Ball*, J Parker-Holder, MA Osborne, SJ Roberts
ICLR 2022 (Spotlight), 2022
7* · 2022
Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations
C Lu, PJ Ball, TGJ Rudner, J Parker-Holder, MA Osborne, YW Teh
RSS L-DOD Workshop (Best Paper Award), 2022
6 · 2022
Towards an Understanding of Default Policies in Multitask Policy Optimization
T Moskovitz, M Arbel, J Parker-Holder, A Pacchiano
AISTATS 2022 (Nominated for Best Paper Award), 2022
6 · 2022