Yifei Wang 汪祎非
Verified email at stanford.edu - Homepage
Title
Cited by
Year
Accelerated Information Gradient flow
Y Wang, W Li
arXiv preprint arXiv:1909.02102, 2019
60 · 2019
Information Newton's flow: second-order optimization method in probability space
Y Wang, W Li
arXiv preprint arXiv:2001.04341, 2020
35 · 2020
Projected Wasserstein gradient descent for high-dimensional Bayesian inference
Y Wang, P Chen, W Li
SIAM/ASA Journal on Uncertainty Quantification 10 (4), 1513-1532, 2022
26 · 2022
Adaptive Newton sketch: linear-time optimization with quadratic convergence and effective Hessian dimensionality
J Lacotte, Y Wang, M Pilanci
International Conference on Machine Learning, 5926-5936, 2021
21 · 2021
Parallel deep neural networks have zero duality gap
Y Wang, T Ergen, M Pilanci
arXiv preprint arXiv:2110.06482, 2021
16 · 2021
A decomposition augmented Lagrangian method for low-rank semidefinite programming
Y Wang, K Deng, H Liu, Z Wen
SIAM Journal on Optimization 33 (3), 1361-1390, 2023
13 · 2023
A stochastic Stein variational Newton method
A Leviyev, J Chen, Y Wang, O Ghattas, A Zimmerman
arXiv preprint arXiv:2204.09039, 2022
12 · 2022
The convex geometry of backpropagation: Neural network gradient flows converge to extreme points of the dual convex program
Y Wang, M Pilanci
arXiv preprint arXiv:2110.06488, 2021
12 · 2021
Optimal neural network approximation of Wasserstein gradient direction via convex optimization
Y Wang, P Chen, M Pilanci, W Li
SIAM Journal on Mathematics of Data Science 6 (4), 978-999, 2024
9 · 2024
The hidden convex optimization landscape of two-layer ReLU neural networks: an exact characterization of the optimal solutions
Y Wang, J Lacotte, M Pilanci
arXiv preprint arXiv:2006.05900, 2020
8 · 2020
Beyond the best: estimating distribution functionals in infinite-armed bandits
Y Wang, TZ Baharav, Y Han, J Jiao, D Tse
arXiv preprint arXiv:2211.01743, 2022
4 · 2022
Overparameterized ReLU neural networks learn the simplest models: neural isometry and exact recovery
Y Wang, Y Hua, E Candès, M Pilanci
arXiv preprint arXiv:2209.15265, 2022
3 · 2022
Search direction correction with normalized gradient makes first-order methods faster
Y Wang, Z Jia, Z Wen
SIAM Journal on Scientific Computing 43 (5), A3184-A3211, 2021
3 · 2021
The search direction correction makes first-order methods faster
Y Wang, Z Jia, Z Wen
arXiv preprint arXiv:1905.06507, 2019
3 · 2019
Randomized Geometric Algebra Methods for Convex Neural Networks
Y Wang, S Kim, P Chu, I Subramaniam, M Pilanci
arXiv preprint arXiv:2406.02806, 2024
2 · 2024
A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features
E Zeger, Y Wang, A Mishkin, T Ergen, E Candès, M Pilanci
arXiv preprint arXiv:2403.01046, 2024
2 · 2024
Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes
Y Wang, M Pilanci
arXiv preprint arXiv:2311.10972, 2023
2 · 2023
Overparameterized ReLU Neural Networks Learn the Simplest Model: Neural Isometry and Phase Transitions
Y Wang, Y Hua, EJ Candes, M Pilanci
IEEE Transactions on Information Theory, 2025
1 · 2025
A Circuit Approach to Constructing Blockchains on Blockchains
EN Tas, D Tse, Y Wang
arXiv preprint arXiv:2402.00220, 2024
1 · 2024
Sketching the Krylov subspace: faster computation of the entire ridge regularization path
Y Wang, M Pilanci
The Journal of Supercomputing 79 (16), 18748-18776, 2023
2023
Articles 1–20