Nishanth Dikkala
Verified email address at google.com
Title · Cited by · Year
Testing Ising models
C Daskalakis, N Dikkala, G Kamath
IEEE Transactions on Information Theory 65 (11), 6829-6852, 2019
Cited by 112 · 2019
Minimax estimation of conditional moment models
N Dikkala, G Lewis, L Mackey, V Syrgkanis
Advances in Neural Information Processing Systems 33, 12248-12262, 2020
Cited by 100 · 2020
From soft classifiers to hard decisions: How fair can we be?
R Canetti, A Cohen, N Dikkala, G Ramnarayan, S Scheffler, A Smith
Proceedings of the conference on fairness, accountability, and transparency …, 2019
Cited by 58 · 2019
Tight hardness results for maximum weight rectangles
A Backurs, N Dikkala, C Tzamos
arXiv preprint arXiv:1602.05837, 2016
Cited by 57 · 2016
Regression from dependent observations
C Daskalakis, N Dikkala, I Panageas
Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing …, 2019
Cited by 38 · 2019
Testing symmetric Markov chains from a single trajectory
C Daskalakis, N Dikkala, N Gravin
Conference On Learning Theory, 385-409, 2018
Cited by 35 · 2018
Learning from weakly dependent data under Dobrushin's condition
Y Dagan, C Daskalakis, N Dikkala, S Jayanti
Conference on Learning Theory, 914-928, 2019
Cited by 33 · 2019
Do more negative samples necessarily hurt in contrastive learning?
P Awasthi, N Dikkala, P Kamath
International conference on machine learning, 1101-1116, 2022
Cited by 30 · 2022
Concentration of multilinear functions of the Ising model with applications to network data
C Daskalakis, N Dikkala, G Kamath
Advances in Neural Information Processing Systems 30, 2017
Cited by 29 · 2017
Hogwild!-Gibbs can be PanAccurate
C Daskalakis, N Dikkala, S Jayanti
Advances in Neural Information Processing Systems 31, 2018
Cited by 19 · 2018
Learning Ising models from one or multiple samples
Y Dagan, C Daskalakis, N Dikkala, AV Kandiros
Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing …, 2021
Cited by 18 · 2021
Logistic regression with peer-group effects via inference in higher-order Ising models
C Daskalakis, N Dikkala, I Panageas
International Conference on Artificial Intelligence and Statistics, 3653-3663, 2020
Cited by 14 · 2020
Estimating Ising models from one sample
Y Dagan, C Daskalakis, N Dikkala, AV Kandiros
arXiv preprint arXiv:2004.09370, 2020
Cited by 13 · 2020
A theoretical view on sparsely activated networks
C Baykal, N Dikkala, R Panigrahy, C Rashtchian, X Wang
Advances in Neural Information Processing Systems 35, 30071-30084, 2022
Cited by 9 · 2022
Statistical estimation from dependent data
V Kandiros, Y Dagan, N Dikkala, S Goel, C Daskalakis
International Conference on Machine Learning, 5269-5278, 2021
Cited by 9 · 2021
Can Credit Increase Revenue?
N Dikkala, É Tardos
Web and Internet Economics: 9th International Conference, WINE 2013 …, 2013
Cited by 8 · 2013
For manifold learning, deep neural networks can be locality sensitive hash functions
N Dikkala, G Kaplun, R Panigrahy
arXiv preprint arXiv:2103.06875, 2021
Cited by 7 · 2021
On the benefits of learning to route in mixture-of-experts models
N Dikkala, N Ghosh, R Meka, R Panigrahy, N Vyas, X Wang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
Cited by 6 · 2023
ReMI: A Dataset for Reasoning with Multiple Images
M Kazemi, N Dikkala, A Anand, P Devic, I Dasgupta, F Liu, B Fatemi, ...
arXiv preprint arXiv:2406.09175, 2024
Cited by 1 · 2024
Alternating updates for efficient transformers
C Baykal, D Cutler, N Dikkala, N Ghosh, R Panigrahy, X Wang
Advances in Neural Information Processing Systems 36, 2024
Cited by 1 · 2024
Articles 1–20