Sharan Vaswani
Fast and faster convergence of SGD for over-parameterized models and an accelerated perceptron
S Vaswani, F Bach, M Schmidt
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Painless stochastic gradient: Interpolation, line-search, and convergence rates
S Vaswani, A Mishkin, I Laradji, M Schmidt, G Gidel, S Lacoste-Julien
Advances in Neural Information Processing Systems 32, 2019
Stochastic Polyak step-size for SGD: An adaptive learning rate for fast convergence
N Loizou, S Vaswani, IH Laradji, S Lacoste-Julien
International Conference on Artificial Intelligence and Statistics, 1306-1314, 2021
Online Influence Maximization under Independent Cascade Model with Semi-Bandit Feedback
Z Wen, B Kveton, M Valko, S Vaswani
arXiv preprint arXiv:1605.06593, 2017
Model-independent online learning for influence maximization
S Vaswani, B Kveton, Z Wen, M Ghavamzadeh, LVS Lakshmanan, ...
International Conference on Machine Learning, 3530-3539, 2017
Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits
B Kveton, C Szepesvari, S Vaswani, Z Wen, M Ghavamzadeh, T Lattimore
Proceedings of the 36th International Conference on Machine Learning 97 …, 2019
Influence Maximization with Bandits
S Vaswani, L Lakshmanan, M Schmidt
arXiv preprint arXiv:1503.00024, 2015
Fast and furious convergence: Stochastic second order methods under interpolation
SY Meng, S Vaswani, IH Laradji, M Schmidt, S Lacoste-Julien
International Conference on Artificial Intelligence and Statistics, 2020
Adaptive gradient methods converge faster with over-parameterization (but you should do a line-search)
S Vaswani, I Laradji, F Kunstner, SY Meng, M Schmidt, S Lacoste-Julien
arXiv preprint arXiv:2006.06835, 2020
Old Dog Learns New Tricks: Randomized UCB for Bandit Problems
S Vaswani, A Mehrabian, A Durand, B Kveton
International Conference on Artificial Intelligence and Statistics, 2020
Combining Bayesian optimization and Lipschitz optimization
MO Ahmed, S Vaswani, M Schmidt
Machine Learning 109, 79-102, 2020
Near-optimal sample complexity bounds for constrained MDPs
S Vaswani, L Yang, C Szepesvári
Advances in Neural Information Processing Systems 35, 3110-3122, 2022
Adaptive influence maximization in social networks: Why commit when you can adapt?
S Vaswani, LVS Lakshmanan
arXiv preprint arXiv:1604.08171, 2016
New insights into bootstrapping for bandits
S Vaswani, B Kveton, Z Wen, A Rao, M Schmidt, Y Abbasi-Yadkori
arXiv preprint arXiv:1805.09793, 2018
SVRG meets AdaGrad: Painless variance reduction
B Dubois-Taine, S Vaswani, R Babanezhad, M Schmidt, S Lacoste-Julien
Machine Learning 111 (12), 4359-4409, 2022
A general class of surrogate functions for stable and efficient reinforcement learning
S Vaswani, O Bachem, S Totaro, R Müller, S Garg, M Geist, MC Machado, ...
arXiv preprint arXiv:2108.05828, 2021
Horde of bandits using Gaussian Markov random fields
S Vaswani, M Schmidt, L Lakshmanan
Artificial Intelligence and Statistics, 690-699, 2017
Modeling non-progressive phenomena for influence propagation
VY Lou, S Bhagat, LVS Lakshmanan, S Vaswani
Proceedings of the Second ACM Conference on Online Social Networks, 131-138, 2014
Towards noise-adaptive, problem-adaptive (accelerated) stochastic gradient descent
S Vaswani, B Dubois-Taine, R Babanezhad
International Conference on Machine Learning, 22015-22059, 2022
Towards painless policy optimization for constrained MDPs
A Jain, S Vaswani, R Babanezhad, C Szepesvari, D Precup
Uncertainty in Artificial Intelligence, 895-905, 2022