Quentin BERTRAND - Home
Contact
Email: quentin [dot] bertrand AT inria [dot] fr
News
09-29-2024 I was delighted to be a keynote speaker at the ECCV workshop "The Dark Side of Generative AIs and Beyond"; the slides are available here
09-26-2024 Our paper showing that Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences was just accepted to NeurIPS! See you in Vancouver!
08-26-2024 Our recent work on self-consuming generative models and their biases was featured in the New York Times!
07-01-2024 Just started as an Inria researcher in the Malice team
04-22-2024 The recording of the talk On the Stability of Iterative Retraining of Generative Models on their own Data at the Montreal Machine Learning Seminar can be found here
01-16-2024 Our paper On the Stability of Iterative Retraining of Generative Models on their own Data was accepted to ICLR 2024 as a spotlight. See you in Vienna!
12-18-2023 We just released our paper proving that Q-learners can learn to collude in the iterated prisoner's dilemma!
01-07-2023 On July 1st, 2024, I will join Inria as a Research Scientist!
Previous News
05-12-2023 I will present our paper On the Limitations of Elo: Real-World Games are Transitive, not Additive at the Berkeley Multi-Agent Reinforcement Learning Seminar
Our paper Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning has been accepted at ICML 2023. See you in Hawaii!
Our paper On the Limitations of Elo: Real-World Games are Transitive, not Additive has been accepted to AISTATS 2023. See you in Spain!
I just presented our paper Synergies between Disentanglement and Sparsity: a Multi-task Learning Perspective at the Canadian Mathematical Society Winter Workshop
I just presented our two papers Beyond L1: Faster and Better Sparse Models with skglm and The Curse of Unrolling: Rate of Differentiating Through Optimization at NeurIPS 2022
I was awarded the top reviewer award at NeurIPS 2022!
Our papers Beyond L1: Faster and Better Sparse Models with skglm and The Curse of Unrolling: Rate of Differentiating Through Optimization have been accepted to NeurIPS 2022!