Yao Liu
Amazon
Verified email at stanford.edu
Title · Cited by · Year
Provably good batch reinforcement learning without great exploration
Y Liu, A Swaminathan, A Agarwal, E Brunskill
Advances in Neural Information Processing Systems 33, 1264–1274, 2020
168 · 2020
Off-Policy Policy Gradient with Stationary Distribution Correction
Y Liu, A Swaminathan, A Agarwal, E Brunskill
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference …, 2019
151* · 2019
Representation balancing MDPs for off-policy policy evaluation
Y Liu, O Gottesman, A Raghu, M Komorowski, A Faisal, F Doshi-Velez, ...
Advances in Neural Information Processing Systems 31, 2644-2653, 2018
72 · 2018
Interpretable off-policy evaluation in reinforcement learning by highlighting influential transitions
O Gottesman, J Futoma, Y Liu, S Parbhoo, L Celi, E Brunskill, ...
International Conference on Machine Learning, 3658-3667, 2020
41 · 2020
Behaviour policy estimation in off-policy policy evaluation: Calibration matters
A Raghu, O Gottesman, Y Liu, M Komorowski, A Faisal, F Doshi-Velez, ...
arXiv preprint arXiv:1807.01066, 2018
34 · 2018
Understanding the curse of horizon in off-policy evaluation via conditional importance sampling
Y Liu, PL Bacon, E Brunskill
International Conference on Machine Learning, 6184-6193, 2020
31 · 2020
Combining parametric and nonparametric models for off-policy evaluation
O Gottesman, Y Liu, S Sussex, E Brunskill, F Doshi-Velez
International Conference on Machine Learning, 2366-2375, 2019
26 · 2019
PAC continuous state online multitask reinforcement learning with identification
Y Liu, Z Guo, E Brunskill
Proceedings of the 2016 International Conference on Autonomous Agents …, 2016
18 · 2016
When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms
Y Liu, E Brunskill
The 14th European Workshop on Reinforcement Learning, 2018
16 · 2018
All-action policy gradient methods: A numerical integration approach
B Petit, L Amdahl-Culleton, Y Liu, J Smith, PL Bacon
arXiv preprint arXiv:1910.09093, 2019
4 · 2019
Nonlinear Dimensionality Reduction by Local Orthogonality Preserving Alignment
T Lin, Y Liu, B Wang, LW Wang, HB Zha
Journal of Computer Science and Technology 31 (3), 512-524, 2016
3* · 2016
Offline policy optimization with eligible actions
Y Liu, Y Flet-Berliac, E Brunskill
Uncertainty in Artificial Intelligence, 1253-1263, 2022
2 · 2022
Provably sample-efficient RL with side information about latent dynamics
Y Liu, D Misra, M Dudík, RE Schapire
Advances in Neural Information Processing Systems 35, 33482-33493, 2022
1 · 2022
Stitched Trajectories for Off-Policy Learning
S Sussex, O Gottesman, Y Liu, S Murphy, E Brunskill, F Doshi-Velez
ICML Workshop, 2018
1 · 2018
Budgeting Counterfactual for Offline RL
Y Liu, P Chaudhari, R Fakoor
arXiv preprint arXiv:2307.06328, 2023
2023
TD Convergence: An Optimization Perspective
K Asadi, S Sabach, Y Liu, O Gottesman, R Fakoor
arXiv preprint arXiv:2306.17750, 2023
2023
Model Selection for Off-Policy Policy Evaluation
Y Liu, PS Thomas, E Brunskill
The Multi-disciplinary Conference on Reinforcement Learning and Decision Making, 2017
2017