Mathurin Massias
Verified email at inria.fr - Homepage
Title
Cited by
Year
Celer: a Fast Solver for the Lasso with Dual Extrapolation
M Massias, A Gramfort, J Salmon
Proceedings of the 35th International Conference on Machine Learning, 2018
90 · 2018
Learning step sizes for unfolded sparse coding
P Ablin, T Moreau, M Massias, A Gramfort
Advances in Neural Information Processing Systems, 13100-13110, 2019
62 · 2019
Implicit Differentiation for Fast Hyperparameter Selection in Non-Smooth Convex Learning
Q Bertrand, Q Klopfenstein, M Massias, M Blondel, S Vaiter, A Gramfort, ...
Journal of Machine Learning Research 23 (149), 1-43, 2022
31 · 2022
Benchopt: Reproducible, efficient and collaborative optimization benchmarks
T Moreau, M Massias, A Gramfort, P Ablin, PA Bannier, B Charlier, ...
Advances in Neural Information Processing Systems 35, 25404-25421, 2022
30 · 2022
Generalized Concomitant Multi-Task Lasso for Sparse Multimodal Regression
M Massias, O Fercoq, A Gramfort, J Salmon
International Conference on Artificial Intelligence and Statistics, 998-1007, 2018
30 · 2018
Dual Extrapolation for Sparse Generalized Linear Models
M Massias, S Vaiter, A Gramfort, J Salmon
Journal of Machine Learning Research 21, 1-33, 2020
27 · 2020
From safe screening rules to working sets for faster lasso-type solvers
M Massias, A Gramfort, J Salmon
10th NIPS Workshop on Optimization for Machine Learning, 2017
26 · 2017
Handling correlated and repeated measurements with the smoothed multivariate square-root Lasso
Q Bertrand, M Massias, A Gramfort, J Salmon
Advances in Neural Information Processing Systems, 3959-3970, 2019
22 · 2019
Iterative regularization for convex regularizers
C Molinari, M Massias, L Rosasco, S Villa
International Conference on Artificial Intelligence and Statistics, 1684-1692, 2021
17 · 2021
Anderson acceleration of coordinate descent
Q Bertrand, M Massias
International Conference on Artificial Intelligence and Statistics, 1288-1296, 2021
17 · 2021
Beyond l1: Faster and better sparse models with skglm
Q Bertrand, Q Klopfenstein, PA Bannier, G Gidel, M Massias
Advances in Neural Information Processing Systems 35, 38950-38965, 2022
14 · 2022
Dimension-free convergence rates for gradient Langevin dynamics in RKHS
B Muzellec, K Sato, M Massias, T Suzuki
Conference on Learning Theory, 1356-1420, 2022
9 · 2022
Support recovery and sup-norm convergence rates for sparse pivotal estimation
M Massias, Q Bertrand, A Gramfort, J Salmon
International Conference on Artificial Intelligence and Statistics, 2655-2665, 2020
8 · 2020
Coordinate Descent for SLOPE
J Larsson, Q Klopfenstein, M Massias, J Wallin
International Conference on Artificial Intelligence and Statistics, 4802-4821, 2023
7 · 2023
Iterative regularization for low complexity regularizers
C Molinari, M Massias, L Rosasco, S Villa
Numerische Mathematik 156 (2), 641-689, 2024
4 · 2024
Implicit Differentiation for Hyperparameter Tuning the Weighted Graphical Lasso
C Pouliquen, P Gonçalves, M Massias, T Vayer
arXiv preprint arXiv:2307.02130, 2023
1 · 2023
Gap Safe screening rules for faster complex-valued multi-task group Lasso
M Massias, J Salmon, A Gramfort
SPARS, 2017
1 · 2017
Schur's Positive-Definite Network: Deep Learning in the SPD cone with structure
C Pouliquen, M Massias, T Vayer
arXiv preprint arXiv:2406.09023, 2024
2024
Sparse high dimensional regression in the presence of colored heteroscedastic noise: application to M/EEG source imaging
M Massias
Telecom Paristech, 2019
2019
Articles 1–19