Aaron Sidford
Verified email at stanford.edu - Homepage
Title · Cited by · Year
Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow
YT Lee, A Sidford
2014 IEEE 55th Annual Symposium on Foundations of Computer Science, 424-433, 2014
339* · 2014
Accelerated methods for nonconvex optimization
Y Carmon, JC Duchi, O Hinder, A Sidford
SIAM Journal on Optimization 28 (2), 1751-1772, 2018
281 · 2018
An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
JA Kelner, YT Lee, L Orecchia, A Sidford
Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete …, 2014
272 · 2014
Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems
YT Lee, A Sidford
2013 IEEE 54th Annual Symposium on Foundations of Computer Science, 147-156, 2013
266 · 2013
A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
JA Kelner, L Orecchia, A Sidford, ZA Zhu
Proceedings of the forty-fifth annual ACM symposium on Theory of computing …, 2013
262 · 2013
A faster cutting plane method and its implications for combinatorial and convex optimization
YT Lee, A Sidford, SC Wong
2015 IEEE 56th Annual Symposium on Foundations of Computer Science, 1049-1065, 2015
255 · 2015
Uniform sampling for matrix approximation
MB Cohen, YT Lee, C Musco, C Musco, R Peng, A Sidford
Proceedings of the 2015 Conference on Innovations in Theoretical Computer …, 2015
203 · 2015
Lower bounds for finding stationary points I
Y Carmon, JC Duchi, O Hinder, A Sidford
Mathematical Programming 184 (1), 71-120, 2020
186 · 2020
Near-optimal time and sample complexities for solving Markov decision processes with a generative model
A Sidford, M Wang, X Wu, L Yang, Y Ye
Advances in Neural Information Processing Systems 31, 2018
180* · 2018
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
P Jain, S Kakade, R Kidambi, P Netrapalli, A Sidford
Journal of Machine Learning Research 18, 2018
152* · 2018
Single pass spectral sparsification in dynamic streams
M Kapralov, YT Lee, CN Musco, CP Musco, A Sidford
SIAM Journal on Computing 46 (1), 456-477, 2017
151 · 2017
Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
R Frostig, R Ge, S Kakade, A Sidford
International Conference on Machine Learning, 2540-2548, 2015
149 · 2015
Geometric median in nearly linear time
MB Cohen, YT Lee, G Miller, J Pachocki, A Sidford
Proceedings of the forty-eighth annual ACM symposium on Theory of Computing …, 2016
142 · 2016
Accelerating stochastic gradient descent for least squares regression
P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford
Conference on Learning Theory, 545-604, 2018
133* · 2018
Efficient inverse maintenance and faster algorithms for linear programming
YT Lee, A Sidford
2015 IEEE 56th Annual Symposium on Foundations of Computer Science, 230-249, 2015
132 · 2015
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja’s algorithm
P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford
Conference on Learning Theory, 1147-1164, 2016
125 · 2016
Competing with the empirical risk minimizer in a single pass
R Frostig, R Ge, SM Kakade, A Sidford
Conference on Learning Theory, 728-763, 2015
118 · 2015
“Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
Y Carmon, JC Duchi, O Hinder, A Sidford
International Conference on Machine Learning, 654-663, 2017
115 · 2017
Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation
D Garber, E Hazan, C Jin, SM Kakade, C Musco, P Netrapalli, A Sidford
ICML, 2016
111* · 2016
Variance reduced value iteration and faster algorithms for solving Markov decision processes
A Sidford, M Wang, X Wu, Y Ye
Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete …, 2018
110 · 2018
Articles 1–20