Amit Daniely
Verified email at mail.huji.ac.il - Homepage
Title
Cited by
Year
Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity
A Daniely, R Frostig, Y Singer
Advances in neural information processing systems 29, 2016
345 · 2016
SGD learns the conjugate kernel class of the network
A Daniely
Advances in Neural Information Processing Systems 30, 2017
183 · 2017
Strongly adaptive online learning
A Daniely, A Gonen, S Shalev-Shwartz
International Conference on Machine Learning, 1405-1411, 2015
172 · 2015
Complexity theoretic limitations on learning halfspaces
A Daniely
Proceedings of the forty-eighth annual ACM symposium on Theory of Computing …, 2016
147 · 2016
Complexity theoretic limitations on learning DNF's
A Daniely, S Shalev-Shwartz
Conference on Learning Theory, 815-830, 2016
123 · 2016
From average case complexity to improper learning complexity
A Daniely, N Linial, S Shalev-Shwartz
Proceedings of the forty-sixth annual ACM symposium on Theory of computing …, 2014
109 · 2014
Depth separation for neural networks
A Daniely
Conference on Learning Theory, 690-696, 2017
91 · 2017
Optimal learners for multiclass problems
A Daniely, S Shalev-Shwartz
Conference on Learning Theory, 287-316, 2014
78 · 2014
Multiclass learnability and the ERM principle
A Daniely, S Sabato, S Ben-David, S Shalev-Shwartz
Proceedings of the 24th Annual Conference on Learning Theory, 207-232, 2011
78 · 2011
Multiclass learnability and the ERM principle.
A Daniely, S Sabato, S Ben-David, S Shalev-Shwartz
J. Mach. Learn. Res. 16 (1), 2377-2404, 2015
75 · 2015
Learning parities with neural networks
A Daniely, E Malach
Advances in Neural Information Processing Systems 33, 20356-20365, 2020
73 · 2020
The implicit bias of depth: How incremental learning drives generalization
D Gissin, S Shalev-Shwartz, A Daniely
arXiv preprint arXiv:1909.12051, 2019
53 · 2019
Multiclass learning approaches: A theoretical comparison with implications
A Daniely, S Sabato, S Shalev-Shwartz
Advances in Neural Information Processing Systems 25, 2012
51 · 2012
A PTAS for agnostically learning halfspaces
A Daniely
Conference on Learning Theory, 484-502, 2015
49 · 2015
Learning economic parameters from revealed preferences
MF Balcan, A Daniely, R Mehta, R Urner, VV Vazirani
Web and Internet Economics: 10th International Conference, WINE 2014 …, 2014
49 · 2014
Clustering is difficult only when it does not matter
A Daniely, N Linial, M Saks
arXiv preprint arXiv:1205.4891, 2012
47 · 2012
More data speeds up training time in learning halfspaces over sparse vectors
A Daniely, N Linial, S Shalev-Shwartz
Advances in Neural Information Processing Systems 26, 2013
44 · 2013
On the practically interesting instances of MAXCUT
Y Bilu, A Daniely, N Linial, M Saks
arXiv preprint arXiv:1205.4893, 2012
44 · 2012
Neural networks learning and memorization with (almost) no over-parameterization
A Daniely
Advances in Neural Information Processing Systems 33, 9007-9016, 2020
29 · 2020
Most ReLU Networks Suffer from Adversarial Perturbations
A Daniely, H Shacham
Advances in Neural Information Processing Systems 33, 6629-6636, 2020
27 · 2020
Articles 1–20