— 2025: 4 papers accepted at ICLR, including:
— 2024: NeurIPS papers including Implicit Multimodal Alignment, IDEFICS2, Explainability for Large Multimodal Models, Zero-Shot Image Segmentation, and ManiPose; ECCV UniTraj; CVPR PointBeV: A Sparse Approach to BeV Predictions; ICLR Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning
— 2023: NeurIPS: OBELICS and Rewarded Soups; ICCV: eP-ALM: Efficient Perceptual Augmentation of Language Models, and ZestGuide; ICML: Model Ratatouille: Recycling Diverse Models for Out-of-distribution Generalization; CVPR:
Counterfactual Explanations,
Semantic and Panoptic Segmentation,
Visual Recognition,
Improving Selective VQA; ICLR: Image editing with diffusion models
Andrew Ng's newsletter highlighted our recent strategy for training Transformers in vision ("Cookbook for Vision Transformers"): good insights about our DeiT III!
— 2022 main publications/news: 2 papers at NeurIPS; 1 paper at CoRL; 3 papers at ECCV, including STEEX and 2 papers about DeiT; 1 paper at ICML, Fishr: Invariant Gradient Variances for Out-of-distribution Generalization; 2 papers at CVPR: 1) Flexible Semantic Image Translation, 2) Transformers for continual learning
January 2022: MLIA is joining the ISIR lab of Sorbonne University; it is a fantastic opportunity with many new challenges for us!
— 2021: Transformers for Vision: ICML DeiT (Data-efficient Image Transformers),
with a follow-up at ICCV: CaiT (Going Deeper with Image Transformers)
1 paper, BEEF, on XAI and autonomous driving at the NeurIPS workshop on Machine Learning for Autonomous Driving
My GANified Face (thanks to Asya Grechka)