Stanislav Fort
Anthropic / Stanford University / Google Brain / DeepMind
Verified email at stanford.edu - Homepage
Title
Cited by
Year
Deep Ensembles: A Loss Landscape Perspective
S Fort, H Hu, B Lakshminarayanan
arXiv preprint arXiv:1912.02757, 2019
254 · 2019
Training independent subnetworks for robust prediction
M Havasi, R Jenatton, S Fort, JZ Liu, J Snoek, B Lakshminarayanan, ...
arXiv preprint arXiv:2010.06610, 2020
68 · 2020
The Break-Even Point on Optimization Trajectories of Deep Neural Networks
S Jastrzebski, M Szymczak, S Fort, D Arpit, J Tabor, K Cho, K Geras
arXiv preprint arXiv:2002.09572, 2020
60 · 2020
Gaussian Prototypical Networks for Few-Shot Learning on Omniglot
S Fort
arXiv preprint arXiv:1708.02735, 2017
55 · 2017
Discovery of gamma-ray pulsations from the transitional redback PSR J1227-4853
TJ Johnson, PS Ray, J Roy, CC Cheung, AK Harding, HJ Pletsch, S Fort, ...
The Astrophysical Journal 806 (1), 91, 2015
54 · 2015
Stiffness: A new perspective on generalization in neural networks
S Fort, PK Nowak, S Jastrzebski, S Narayanan
arXiv preprint arXiv:1901.09491, 2019
50 · 2019
Exploring the limits of out-of-distribution detection
S Fort, J Ren, B Lakshminarayanan
Advances in Neural Information Processing Systems 34, 7068-7081, 2021
44 · 2021
Large Scale Structure of Neural Network Loss Landscapes
S Fort, S Jastrzebski
arXiv preprint arXiv:1906.04724, 2019
44 · 2019
Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel
S Fort, GK Dziugaite, M Paul, S Kharaghani, DM Roy, S Ganguli
Advances in Neural Information Processing Systems 33, 5850-5861, 2020
41 · 2020
Adaptive quantum state tomography with neural networks
Y Quek, S Fort, HK Ng
arXiv preprint arXiv:1812.06693, 2018
25 · 2018
The goldilocks zone: Towards better understanding of neural network loss landscapes
S Fort, A Scherlis
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 3574-3581, 2019
22 · 2019
Emergent properties of the local geometry of neural loss landscapes
S Fort, S Ganguli
arXiv preprint arXiv:1910.05929, 2019
18 · 2019
A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection
J Ren, S Fort, J Liu, AG Roy, S Padhy, B Lakshminarayanan
arXiv preprint arXiv:2106.09022, 2021
17 · 2021
Analyzing monotonic linear interpolation in neural network loss landscapes
J Lucas, J Bae, MR Zhang, S Fort, R Zemel, R Grosse
arXiv preprint arXiv:2104.11044, 2021
8* · 2021
Drawing Multiple Augmentation Samples Per Image During Training Efficiently Decreases Test Error
S Fort, A Brock, R Pascanu, S De, SL Smith
arXiv preprint arXiv:2105.13343, 2021
7 · 2021
Predictability and surprise in large generative models
D Ganguli, D Hernandez, L Lovitt, A Askell, Y Bai, A Chen, T Conerly, ...
2022 ACM Conference on Fairness, Accountability, and Transparency, 1747-1764, 2022
6 · 2022
The ATHENA WFI science products module
DN Burrows, S Allen, M Bautz, E Bulbul, J Erdley, AD Falcone, S Fort, ...
Space Telescopes and Instrumentation 2018: Ultraviolet to Gamma Ray 10699 …, 2018
4 · 2018
Towards understanding feedback from supermassive black holes using convolutional neural networks
S Fort
arXiv preprint arXiv:1712.00523, 2017
4 · 2017
Identifying charged particle background events in x-ray imaging detectors with novel machine learning algorithms
DR Wilkins, SW Allen, ED Miller, M Bautz, T Chattopadhyay, S Fort, ...
Space Telescopes and Instrumentation 2020: Ultraviolet to Gamma Ray 11444 …, 2020
3 · 2020
How many degrees of freedom do we need to train deep networks: a loss landscape perspective
BW Larsen, S Fort, N Becker, S Ganguli
arXiv preprint arXiv:2107.05802, 2021
2 · 2021
Articles 1–20