Quentin BERTRAND - Research
Ph.D. Thesis
Papers
2024
Q. Bertrand, J. Duque, E. Calvano, G. Gidel, Q-learners Can Provably Collude in the Iterated Prisoner's Dilemma
Q. Bertrand, A. J. Bose, A. Duplessis, M. Jiralerspong, G. Gidel, On the Stability of Iterative Retraining of Generative Models on their own Data, ICLR 2024 (spotlight), code, slides, video
2023
J. Ramirez, R. Sukumaran, Q. Bertrand, G. Gidel, Omega: Optimistic EMA Gradients, ICML 2023 LatinX in AI Workshop, code
S. Lachapelle, T. Deleu, D. Mahajan, I. Mitliagkas, Y. Bengio, S. Lacoste-Julien, Q. Bertrand, Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning, ICML 2023
Q. Bertrand, W. M. Czarnecki, G. Gidel, On the Limitations of Elo: Real-World Games are Transitive, not Additive, AISTATS 2023
Q. Klopfenstein*, Q. Bertrand*, A. Gramfort, J. Salmon, S. Vaiter, Model identification and local linear convergence of coordinate descent, Optimization Letters
2022
D. Scieur, Q. Bertrand, G. Gidel, F. Pedregosa, The Curse of Unrolling: Rate of Differentiating Through Optimization, NeurIPS 2022
Q. Bertrand, Q. Klopfenstein, P.-A. Bannier, G. Gidel, M. Massias, Beyond L1: Faster and Better Sparse Models with skglm, NeurIPS 2022, code, doc
Q. Bertrand*, Q. Klopfenstein*, M. Massias, M. Blondel, S. Vaiter, A. Gramfort, J. Salmon, Implicit differentiation for fast hyperparameter selection in non-smooth convex learning, JMLR, code, doc
2021
P.-A. Bannier, Q. Bertrand, J. Salmon, A. Gramfort, Electromagnetic neural source imaging under sparsity constraints with SURE-based hyperparameter tuning, Medical Imaging Meets NeurIPS workshop, NeurIPS 2021, code
Q. Bertrand, M. Massias, Anderson acceleration of coordinate descent, AISTATS 2021, code, doc
2020
Q. Bertrand*, Q. Klopfenstein*, M. Blondel, S. Vaiter, A. Gramfort, J. Salmon, Implicit differentiation of Lasso-type models for hyperparameter optimization, ICML 2020, proc., code, doc
M. Massias*, Q. Bertrand*, A. Gramfort, J. Salmon, Support recovery and sup-norm convergence rates for sparse pivotal estimation, AISTATS 2020, proc.
Slides
On the Stability of Iterative Retraining of Generative Models on their own Data, video
Hyperparameter selection for high dimensional sparse learning: application to neuroimaging, 28/09/2021, Ph.D. defense, Paris-Saclay, France.
Anderson acceleration of coordinate descent, 07/06/2021, Journées des statistiques, Nice, France.
Optimization for machine learning, “Hands on”, 04/01/2021, Data Science Summer School of École polytechnique, France.
Implicit differentiation of Lasso-type models for hyperparameter optimization, 09/09/2020, SMAI MODE 2020, France.
Handling correlated and repeated measurements with the smoothed multivariate square-root Lasso, 18/10/2019, GDR MOA 2019, France.
Reviewing service