Qinghua Liu
Verified email at princeton.edu - Homepage
Title · Cited by · Year
Tackling the objective inconsistency problem in heterogeneous federated optimization
J Wang, Q Liu, H Liang, G Joshi, HV Poor
Advances in Neural Information Processing Systems, 2020
960 · 2020
Bellman Eluder dimension: New rich classes of RL problems, and sample-efficient algorithms
C Jin, Q Liu, S Miryoosefi
Advances in Neural Information Processing Systems, 2021
210 · 2021
A Sharp Analysis of Model-based Reinforcement Learning with Self-play
Q Liu, T Yu, Y Bai, C Jin
International Conference on Machine Learning, 7001-7010, 2021
133 · 2021
Linearized ADMM for nonconvex nonsmooth optimization with convergence analysis
Q Liu, X Shen, Y Gu
arXiv preprint arXiv:1705.02502, 2017
127 · 2017
V-learning—a simple, efficient, decentralized algorithm for multiagent reinforcement learning
C Jin, Q Liu, Y Wang, T Yu
Mathematics of Operations Research, 2023
85* · 2023
When is partially observable reinforcement learning not scary?
Q Liu, A Chung, C Szepesvári, C Jin
Conference on Learning Theory, 5175-5220, 2022
70 · 2022
A novel framework for the analysis and design of heterogeneous federated learning
J Wang, Q Liu, H Liang, G Joshi, HV Poor
IEEE Transactions on Signal Processing 69, 5234-5249, 2021
60 · 2021
Sample-Efficient Reinforcement Learning of Undercomplete POMDPs
C Jin, SM Kakade, A Krishnamurthy, Q Liu
Advances in Neural Information Processing Systems, 2020
59 · 2020
The power of exploiter: Provable multi-agent RL in large state spaces
C Jin, Q Liu, T Yu
International Conference on Machine Learning, 10251-10279, 2022
58 · 2022
Optimistic MLE: A generic model-based algorithm for partially observable sequential decision making
Q Liu, P Netrapalli, C Szepesvari, C Jin
Proceedings of the 55th Annual ACM Symposium on Theory of Computing, 363-376, 2023
25 · 2023
Breaking the curse of multiagency: Provably efficient decentralized multi-agent RL with function approximation
Y Wang, Q Liu, Y Bai, C Jin
Conference on Learning Theory, 2023
24 · 2023
Policy optimization for Markov games: Unified framework and faster convergence
R Zhang, Q Liu, H Wang, C Xiong, N Li, Y Bai
Advances in Neural Information Processing Systems 35, 21886-21899, 2022
24 · 2022
Sample-efficient reinforcement learning of partially observable Markov games
Q Liu, C Szepesvári, C Jin
Advances in Neural Information Processing Systems 35, 18296-18308, 2022
22 · 2022
Learning Markov games with adversarial opponents: Efficient algorithms and fundamental limits
Q Liu, Y Wang, C Jin
International Conference on Machine Learning, 14036-14053, 2022
18 · 2022
Is RLHF More Difficult than Standard RL? A Theoretical Perspective
Y Wang, Q Liu, C Jin
Advances in Neural Information Processing Systems 36, 2024
12* · 2024
Rigorous restricted isometry property of low-dimensional subspaces
G Li, Q Liu, Y Gu
Applied and Computational Harmonic Analysis 49 (2), 608-635, 2018
9 · 2018
Provable rich observation reinforcement learning with combinatorial latent states
D Misra, Q Liu, C Jin, J Langford
International Conference on Learning Representations, 2020
7 · 2020
Optimistic Natural Policy Gradient: a Simple Efficient Policy Optimization Framework for Online RL
Q Liu, G Weisz, A György, C Jin, C Szepesvári
Thirty-seventh Conference on Neural Information Processing Systems, 2023
4 · 2023
Context-lumpable stochastic bandits
CW Lee, Q Liu, Y Abbasi-Yadkori, C Jin, T Lattimore, C Szepesvári
Thirty-seventh Conference on Neural Information Processing Systems, 2023
1 · 2023
A Deep Reinforcement Learning Approach for Finding Non-Exploitable Strategies in Two-Player Atari Games
Z Ding, D Su, Q Liu, C Jin
arXiv preprint arXiv:2207.08894, 2022
1 · 2022
Articles 1–20