👤 About me
I am a computer vision and machine learning researcher interested in representation learning, multimodal learning, Earth Observation and video games. I am working as a junior research director (directeur de recherche junior) at IGN in the LASTIG laboratory, STRUDEL team. I am currently on leave from my associate professor position at Cnam in the Vertigo team.
Contact:
- Email: nicolas.audebert@ign.fr
IGN - Laboratoire LASTIG, équipe STRUDEL
73 avenue de Paris
94160 Saint-Mandé, France
📌 Job offers
No open positions currently.
📰 News
November 21st, 2024: I was an examiner at Basile Rousse's Ph.D. defense. Congrats to the new doctor!
September 26th, 2024: I've been named Outstanding Reviewer for ECCV 2024.
July 1st, 2024: Our work on global-local prompt tuning for few-shot learning has been accepted at ECCV 2024. This work was led by Marc Lafon and my former Ph.D. student Elias Ramzi. Congrats to them!
June 28th, 2024: Our paper on grammar-based representations of symbolic music for genre classification has been accepted at ISMIR 2024. Congratulations to Léo Géré for his first paper!
June 17th, 2024: In Seattle for the EarthVision workshop and the CVPR 2024 conference.
May 14th, 2024: Aimi Okabayashi is starting her PhD thesis on super-resolution of satellite image time series, advised by Charlotte Pelletier, Nicolas Courty, Thomas Corpetti and myself. Welcome back Aimi!
April 8th, 2024: It is with great pleasure that I can announce being a co-author on two accepted papers at the EarthVision 2024 CVPR workshop. Congratulations to Georges and Aimi for their respective first publications:
- Detecting Out-Of-Distribution Earth Observation Images with Diffusion Models, a work led by Georges Le Bellier,
- Cross-sensor super-resolution of irregularly sampled Sentinel-2 time series, a joint work led by Aimi Okabayashi, in collaboration with Simon Donike and Charlotte Pelletier.
March 25th, 2024: Perla Doubinsky has masterfully defended her PhD thesis on controllable generative models for image editing and data augmentation. Bravo to the new doctor! 🎉
March 20th, 2024: Elias Ramzi has brilliantly defended his PhD thesis on robust image retrieval with deep learning. Congrats doctor! 🎉
December 1st, 2023: I am starting a new position as junior research director at IGN in Paris. I will be leading a new project on coupling heterogeneous ML-based and physics-based generative models for geodata analysis with support from the ANR.
October 24th, 2023: Our paper Semantic Generative Augmentations for Few-Shot Counting has been accepted to WACV 2024. Congratulations Perla!
October 2nd, 2023: Léo Géré (MSc. INSA Toulouse) has joined the Vertigo team as a PhD student on generative models for music sheet to performance translation and vice-versa. He will be advised by Florent Jacquemard (INRIA/Cnam), Philippe Rigaux (Cnam) and myself. Welcome Léo!
September 15th, 2023: New preprint on the optimization of rank losses for image retrieval, led by Elias Ramzi. This is an extension of our previous work on (hierarchical) image retrieval. It introduces a new hierarchical version of the Google Landmarks v2 dataset. Check it out on GitHub!
July 16th, 2023: I am at IGARSS 2023 in Pasadena until the end of the week. Ping me if you want to chat!
June 9th, 2023: Our paper on semantic editing of images in the latent space of GANs using optimal transport has been accepted for an oral presentation at CBMI'23! Congrats Perla!
April 18th, 2023: We are looking for a PhD candidate on deep generative models (large image and language models) for video game level generation, with Guillaume Levieux and myself. Apply before May 19th!
March 22nd, 2023: New preprint from Perla Doubinsky et al. on image editing by manipulating the GAN latent space using optimal transport!
March 1st, 2023: Valerio Marsocci joins the Vertigo team as a postdoctoral fellow. He will work on self-supervised learning for Earth Observation imagery. Welcome Valerio!
February 13, 2023: Armand Verstraete joins the Vertigo team as a research engineer on the MAGE project, working on procedural generation of virtual cities. Welcome Armand!
January 16, 2023: Georges Le Bellier (MSc. from Centrale Lille) has joined the Vertigo team as a PhD student on domain adaptation and self-supervised learning for Earth Observation, as a part of the ANR MAGE project. He will be advised by Nicolas Thome (Sorbonne), Marin Ferecatu (Cnam) and myself. Welcome Georges!
November 30, 2022: We are hiring two interns on hierarchical learning for better generalization and super-resolution for sequences of satellite images.
October 23, 2022: Elias Ramzi will be at ECCV'2022 to present HAPPIER, our differentiable criterion for hierarchical ranking.
August 22, 2022: Our PhD student Perla Doubinsky's paper on disentangled controls for GAN has been accepted for publication into Pattern Recognition Letters. Congrats Perla!
July 11, 2022: I am principal investigator on the newly accepted MAGE project (2022-2026): Mapping Aerial imagery with Game Engine data. Job offers coming soon.
July 6, 2022: I'll be at CAp-RFIAP 2022 in Vannes from July 6 to July 8 for the conference. I'll present some of our work on classifying ape vocalizations (in French) and on learning diversified behaviours by reinforcement for video games.
July 5, 2022: Our paper HAPPIER, an extension of average precision for the hierarchical image retrieval setting, has been accepted at ECCV 2022! Congrats Elias!
June 1, 2022: Maxime Merizette (MSc. from ESGT) has joined the team as CIFRE PhD student working on semantic segmentation of indoor 3D point clouds. He will be advised by Jérôme Verdun (Cnam/ESGT) and myself, in a collaboration with Quarta. Welcome Maxime!
May 31, 2022: I am co-organizing with Hervé Le Borgne (CEA LIST) and Alexandre Benoît (UGA) a GdR ISIS meeting on the control of generative models. See the website for the full program.
April 4, 2022: I have received an unrestricted gift from the Google Research Scholar Program.
February 2, 2022: Our work on neural networks for efficient autoprecoding in MU-MIMO systems will be published at WCNC 2022. Congrats Xinying Cheng!
January 27, 2022: New preprint on finding disentangled directions in GAN latent spaces with simple attribute balancing, from our PhD student Perla Doubinsky.
December 2021: Code for ROADMAP, our robust and decomposable average precision surrogate for image retrieval published at NeurIPS, is now available on GitHub (thanks to Elias Ramzi!).
November 2021: I was distinguished as an outstanding reviewer for BMVC 2021.
October 2021: Our paper on robust and decomposable differentiable ranking for image retrieval (ROADMAP) has been accepted into NeurIPS 2021.
September 2021: I was distinguished as one of the outstanding reviewers of ICCV 2021.
July 2021: Charlotte Pelletier and I received a 7k€ grant for a project on super-resolution of Sentinel-2 time series from GDR ISIS.
July 2021: 2 papers accepted just before the holidays:
- 1 paper to ISMIR21: PKSpell, our deep recurrent network for automated pitch spelling and key signature estimation from MIDI musical performances.
- 1 paper to ECML/PKDD Graph Embedding and Mining Workshop: Web Image Context Extraction using graph neural networks on the HTML DOM tree.
January 2021: Elias Ramzi (MSc. from CentraleSupélec) has joined the team for his PhD thesis on deep learning-based image retrieval for large logo databases. He will be advised by Nicolas Thome (Cnam), Clément Rambour (Cnam) and myself. Welcome Elias!
December 2020: The RL & Games project at the CEDRIC laboratory is looking for an MSc.-level intern for 5/6 months starting spring 2021 on producing diverse behaviours for AI using reinforcement learning in video games. See the internship offer (in French) for more details. (this position has been filled)
November 2020: Perla Doubinsky (MSc. from UTC) has joined the Vertigo team for her PhD thesis on the control of generative models. She will be advised by Michel Crucianu (Cnam), Hervé Le Borgne (CEA LIST) and myself. Welcome Perla!
November 2020: The Vertigo team is looking for an MSc.-level intern for 5/6 months starting spring 2021 on equivariant neural networks for image classification and semantic segmentation. See the internship offer (in French) for more information. (this position has been filled)
October 2020: I was invited to organize a hands-on tutorial on machine learning for astrophysics at the SFtools-bigdata conference. Code is available on my Github page.
October 2020: A new specialization certificate on Artificial Intelligence has opened at Cnam. This program is tailored for professionals who want to deepen their understanding of statistical learning, artificial intelligence and deep learning. Classes are taught either remotely or in the evening. Check out the curriculum here (in French).
June 2020: I am looking for a PhD candidate for a fully-funded thesis on controlling generative networks for image synthesis in collaboration with CEA. Check out the full offer for more details. This position has been filled.
March 2020: We released SEN12-FLOOD, a flood detection dataset in Sentinel 1 and 2 time series.
January 2020: Our laboratory is hiring a junior assistant professor in Computer Science with a strong focus on machine learning and artificial intelligence. Application is done through the GALAXIE portal. It is a teaching and research permanent position in Paris. Position has been filled.
November 2019: We are looking to hire M2 interns for 5/6 months starting in spring 2020 on topics related to deep learning for image understanding. I personally offer one internship on weakly-supervised semantic segmentation. Check out the internship details. I am also involved in another M2 internship offer on deep learning for MIMO radiocommunication (see the subject in French) (these positions have been filled). Finally, I will co-advise with Qwant Research an M2 internship on deep learning for fast webpage information extraction. If this interests you, feel free to contact me for more information.
September 2019: I am joining the Conservatoire national des arts & métiers (Cnam) as an assistant professor in the Vertigo team.
August 2019: Our journal article on using signed distance transform regression to regularize semantic segmentation deep networks has been accepted for publication in CVIU.
July 2019: I will be at APIA 2019 in Toulouse from July 1st to July 5th to present our work on multi-modal text/image classification with deep nets. Feel free to come for a chat!
May 2019: Our paper on multi-modal text/image deep networks for document image classification has been accepted to APIA 2019 in Toulouse.
April 2019: I will be presenting at the GdR ISIS meeting on weakly and semi-supervised learning for image and video classification. My talk will detail some of the work I did at Quicksign on image/text clustering for document recognition.
April 2019: Our review on deep convolutional and recurrent neural networks for hyperspectral image classification has been accepted for the IEEE Geoscience and Remote Sensing special issue on hyperspectral data. Preprint here.
January 2019: I joined the Quicksign R&D team as a research scientist.
October 2018: I successfully defended my PhD thesis! The manuscript (in French) is available here with slides.
July 2018: I was at IGARSS'18 in Valencia, where I presented our work on generative adversarial networks for hyperspectral sample synthesis. You can find the code here!
March 2018: We have one paper accepted for IGARSS 2018 on generative adversarial networks for hyperspectral data synthesis. We'll also appear on the Inria Aerial Image Labeling benchmark write-up on building extraction.
January 2018: I ported the code of our deep network for aerial/satellite semantic segmentation to PyTorch for an easier use: fork it on GitHub!
November 2017: Our latest journal paper on data fusion for remote sensing data using deep fully convolutional networks is out!
July 2017: I was at CVPR 2017 for the Earthvision workshop, where I presented our work on semantic mapping using deep nets and OpenStreetMap data.
June 2017: I collaborated with the LISTIC team on using deep nets to perform semantic segmentation on Sentinel-2 images. This work will be presented at IGARSS'17 in Fort Worth, Texas.
June 2017: I presented at ORASIS 2017 our work on data fusion with deep networks for remote sensing (slides).
May 2017: Our submission on joint deep learning using optical and OSM data for semantic mapping of aerial/satellite images has been accepted to the EarthVision 2017 CVPR Workshop!
April 2017: Our Remote Sensing journal paper on vehicle segmentation for detection and classification is out in open access on the MDPI website.
March 2017: My colleague Alexandre Boulch will present the SnapNet architecture for semantic segmentation of unstructured point clouds at Eurographics 3DOR workshop. It is the current state-of-the-art on the Semantic3D dataset (code).
March 2017: Our paper on data fusion for remote sensing using deep nets won the 2nd best student paper award at JURSE 2017! Slides and poster are available.
February 2017: The code of the deep network we used for the ISPRS Vaihingen 2D Semantic Labeling Challenge is out on GitHub!
January 2017: We will present two invited papers at JURSE 2017!
November 2016: I will be at ACCV'16 in Taipei to present our poster on semantic segmentation of Earth Observation using multi-scale and multimodal deep networks.
October 2016: I will be at PyCon-fr (the French Python conference) to speak about deep learning using Python (slides (in French) and video (in French, too)).
September 2016: Our paper on the use of deep networks for object-based image analysis of vehicles in the ISPRS dataset has been distinguished by the "Best Benchmarking Contribution Award" at GEOBIA 2016!
September 2016: I will be at GEOBIA 2016 in Enschede to talk about our work on object-based analysis of cars in remote sensing images using deep learning.
September 2016: Our paper on semantic segmentation for Earth Observation was accepted at ACCV'16 for a poster presentation. Check out the state-of-the-art results on the ISPRS Vaihingen 2D Semantic Labeling Challenge!
July 2016: I will be at IGARSS'16 in Beijing to present our work on superpixel-based semantic segmentation of aerial images.
April 2016: Our paper on region-based classification of remote sensing images using deep features has been accepted at IGARSS'16 for an oral presentation.
October 2015: I started as a PhD student at ONERA and IRISA.
🥼 Research projects
🧙 MAGE (2022-2026)
Participants: Nicolas Audebert (PI), Georges Le Bellier, Armand Verstraete, Valerio Marsocci, Guillaume Levieux, Charlotte Pelletier, Nicolas Thome, Devis Tuia.
Funding: Agence Nationale de la Recherche
Mapping Aerial imagery with Game Engine data (MAGE) aims to leverage procedural generation and modern rendering engines to generate labeled synthetic data for deep Earth Observation models. It investigates the following questions: how can we generate synthetic data of cities, before and after a natural disaster? How can we make the simulated images look more realistic? How can we train deep models on a mix of unlabeled real data and labeled simulated images? It is a four-year project funded as a Young Researcher Grant from the ANR.
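The last question — learning from labeled simulated images alongside unlabeled real ones — can be illustrated with a minimal, hypothetical training step. This is only a sketch of one possible approach (a supervised loss on simulated batches plus an entropy term on real batches), not the MAGE pipeline; the model, tensor shapes and loss weighting are placeholders for the example.

```python
# Hypothetical sketch: supervised loss on labeled simulated images plus an
# entropy-minimisation term on unlabeled real images. The backbone and data
# are toy placeholders, not the MAGE code.
import torch
import torch.nn.functional as F
from torch import nn

def training_step(model: nn.Module,
                  sim_images: torch.Tensor, sim_labels: torch.Tensor,
                  real_images: torch.Tensor,
                  unsup_weight: float = 0.1) -> torch.Tensor:
    """One loss computation on a simulated (labeled) and a real (unlabeled) batch."""
    # Supervised loss on the simulated, labeled images.
    sim_logits = model(sim_images)
    sup_loss = F.cross_entropy(sim_logits, sim_labels)

    # Unsupervised loss on real images: encourage confident predictions
    # (entropy minimisation), one of many possible adaptation terms.
    real_probs = F.softmax(model(real_images), dim=1)
    entropy = -(real_probs * torch.log(real_probs + 1e-8)).sum(dim=1).mean()

    return sup_loss + unsup_weight * entropy

# Toy usage with random tensors standing in for aerial image batches.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5))
loss = training_step(model,
                     torch.randn(4, 3, 64, 64), torch.randint(0, 5, (4,)),
                     torch.randn(4, 3, 64, 64))
loss.backward()
```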
🛰 SESURE (2021-2023)
Participants: Nicolas Audebert, Charlotte Pelletier, Simon Donike, Aimi Okabayashi.
Funding: GdR ISIS.
SEntinel-2 SUper REsolution (SESURE) aims to combine the high revisit frequency of the Sentinel-2 constellation with the very high spatial resolution of SPOT. To do so, we aim to train deep generative models to perform super-resolution by leveraging the temporal information contained in Sentinel-2 time series.
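As a rough illustration of the idea, and not of the SESURE models themselves, the sketch below fuses a low-resolution Sentinel-2 time series into a single feature map and upsamples it with a pixel-shuffle layer. The band count, scale factor and fusion-by-averaging are assumptions made for the example.

```python
# Minimal sketch of multi-image super-resolution over a Sentinel-2 time
# series, assuming 4 spectral bands and a x4 scale factor; an illustrative
# baseline only, not the SESURE architecture.
import torch
from torch import nn

class TemporalSR(nn.Module):
    def __init__(self, bands: int = 4, scale: int = 4, width: int = 32):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(bands, width, 3, padding=1), nn.ReLU())
        # Pixel-shuffle upsampling from the temporally fused features.
        self.upsample = nn.Sequential(
            nn.Conv2d(width, bands * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, series: torch.Tensor) -> torch.Tensor:
        # series: (batch, time, bands, height, width)
        b, t, c, h, w = series.shape
        feats = self.encode(series.view(b * t, c, h, w)).view(b, t, -1, h, w)
        fused = feats.mean(dim=1)        # naive temporal fusion by averaging
        return self.upsample(fused)      # (batch, bands, height*scale, width*scale)

# Toy usage: 2 series of 8 dates, 4 bands, 32x32 pixels -> 128x128 output.
sr = TemporalSR()
print(sr(torch.randn(2, 8, 4, 32, 32)).shape)  # torch.Size([2, 4, 128, 128])
```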
🎮 RL-Games (2020-2022)
Participants: Nicolas Audebert (PI), Guillaume Levieux, Clément Rambour, Raphaël Boige, Zineb Lahrichi.
Funding: Cédric laboratory.
RL-Games investigates innovative applications of machine learning and reinforcement learning for video games and entertainment. In the first year, we investigated the limitations of training agents by reinforcement in game settings. We observed that, contrary to players' expectations, RL agents exhibit behaviours with low diversity, and therefore low "fun". We studied the limitations of skill discovery algorithms as a way to learn diversity and introduced the concept of observer-based diversity in an RL setting.
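For readers unfamiliar with skill discovery, the sketch below shows a generic DIAYN-style diversity bonus: a discriminator tries to guess which skill produced an observation, and the agent is rewarded when the guess is easy. The "observer features" input is a placeholder for whatever an external observer would see; this is an assumption-laden illustration, not the observer-based diversity criterion introduced in the project.

```python
# Generic DIAYN-style diversity bonus (illustrative sketch, not RL-Games code).
import torch
import torch.nn.functional as F
from torch import nn

n_skills, obs_dim = 8, 16
discriminator = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                              nn.Linear(64, n_skills))

def diversity_bonus(observer_feats: torch.Tensor, skills: torch.Tensor) -> torch.Tensor:
    """Intrinsic reward log q(skill | observation) - log p(skill)."""
    log_q = F.log_softmax(discriminator(observer_feats), dim=1)
    log_p = -torch.log(torch.tensor(float(n_skills)))  # uniform skill prior
    return log_q.gather(1, skills.unsqueeze(1)).squeeze(1) - log_p

# Toy usage: 32 observations collected under randomly sampled skills.
obs = torch.randn(32, obs_dim)
skills = torch.randint(0, n_skills, (32,))
bonus = diversity_bonus(obs, skills)                      # added to the task reward
disc_loss = F.cross_entropy(discriminator(obs), skills)   # trains the discriminator
```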
👨🔬 Research areas
My research interests shift with time, but my work falls broadly into one of these three areas, which overlap more often than not.
Representation learning
Machine learning consists of training a computer to perform a task without explicitly detailing how to do so, instead creating models that leverage patterns in the data and perform inference from them. Especially interesting is deep learning, i.e. producing abstract representations of raw data that allow the computer to manipulate high-level concepts in numerical terms. I am mostly interested in how machine learning can improve machine perception (image, sound and text processing and understanding) through useful representations.
Earth Observation
Remote sensing of Earth Observation data is a very broad field that aims to gather information about our planet and the objects that cover its surface. Earth Observation leverages satellite images (in the broad sense that radar, multispectral and infrared data are also images) to understand ecological phenomena, map urban expansion or monitor natural disasters. My work focuses on processing and making sense of the large volumes of data acquired by these sensors for land cover mapping, change detection and image interpretation through automated means.
Machine learning and games
I have recently picked up machine learning for games as an interest. I have led a small project at Cnam on improving reinforcement learning to generate diverse and challenging AI in video games (collab. with Clément Rambour and Guillaume Levieux). My goal is to better understand what makes an interesting AI or environment from a gameplay point of view. My focus is on how to use machine learning in clever ways to improve the gaming experience.
👥 Collaborators
I am sometimes looking for collaborators (at either the master's or PhD level). You can cold email me, but first check that my current research interests you and is something you would feel comfortable working on.
PhD students
I currently advise four PhD students:
- Maxime Merizette (Cnam/ESGT/Quarta): 🏠 semantic segmentation of extremely detailed 3D point clouds in indoor settings, with Jérôme Verdun (Cnam/ESGT) and Pierre Kervella (Quarta), since June 2022.
- Georges Le Bellier (Cnam): 🌐 domain adaptation and self-supervised learning for Earth Observation, with Marin Ferecatu (Cnam), since January 2023.
- Léo Géré (Cnam): 🎼 Grammar-based generative models for music transcription and generation, with Florent Jacquemard (Cnam/Inria) and Philippe Rigaux (Cnam), since October 2023.
- Aimi Okabayashi (IRISA/Univ. Bretagne-Sud): 🔎 super-resolution of satellite image time series, with Charlotte Pelletier, Nicolas Courty and Thomas Corpetti (IRISA), since May 2024.
Past
- Perla Doubinsky (Cnam/CEA): 🎬 controlling generative models to steer image generation, with Michel Crucianu (Cnam) and Hervé Le Borgne (CEA). November 2020 - March 2024.
- Elias Ramzi (Cnam/Coexya): 🔎 robust image retrieval with deep learning, with Nicolas Thome (Cnam), Clément Rambour (Cnam) and Xavier Bitot (Coexya). January 2021 - March 2024.
Postdocs
Past
- March 2023 to March 2024: I worked with Valerio Marsocci at Cnam on self-supervised learning approaches to Earth Observation. We built large pretrained models that can be applied to multiple downstream tasks across sensors.
- September 2020 to September 2022: I worked with Jean-François Robitaille from IPAG on deep learning for multispectral image processing to detect multiscale interstellar structures. Our goal was to use data mining to detect interstellar objects in large clouds.
- April 2020 to April 2024: I worked with Marion Laporte from the Origins of Speech team at ICSD on building a vocal space of apes. We aimed to leverage statistical models to characterize the graded nature of chimpanzee and bonobo vocalizations.
- September 2019 to September 2020: I worked with Clément Rambour (Cnam/ONERA) during his post-doc on multi-temporal analysis of SAR and optical data. We focused our work on flood detection in SAR/multispectral time series.
MSc. students
Current
I am not currently supervising any master's students.
Past
I have supervised or co-supervised several students for their research internships:
2018
- Javiera Navarro-Castillo (ONERA): Towards the "ImageNet" of remote sensing, MSc. from École Polytechnique. Internship on large-scale semi-supervised semantic segmentation (with A. Boulch, B. Le Saux and S. Lefèvre). Now a post-doc at EPFL.
2019
- Adel Redjimi (Quicksign): Semi-supervised learning for document image classification, MSc. from INP Grenoble. Internship on document classification with scarce supervision and unlabeled data (with K. Slimani).
2020
- Chen Dang (Qwant): fast webpage information extraction. MSc. from Université Paris Sciences & Lettres. Internship on webpage image context extraction for indexing (with R. Fournier-S'niehotta and H. Randrianarivo). Now a PhD student at Orange Labs.
- Samar Chebbi (Cnam): deep learning for linear precoding in massive MIMO systems with power amplifiers, Sup'com engineering diploma (with R. Zayani and M. Ferecatu). Now a PhD student at XLIM.
- Yannis Karmim (Cnam): semi-supervised learning for semantic segmentation, MSc. from Sorbonne Université. Internship on equivariance constraints for segmentation models (with N. Thome). Now a PhD student at Cnam.
2021
- Raphaël Boige (Cnam): Internship on learning agents with diversified behaviours for video games by reinforcement (with G. Levieux and C. Rambour). MSc. Data Science from Télécom Paris/École Polytechnique. Now a research scientist at InstaDeep.
- João Pedro Araújo Ferreira Campos (Cnam): Internship on Transformers and equivariance for semantic segmentation of remote sensing images (with C. Rambour and N. Thome). MSc. ENSTA Paris. Now a PhD candidate at Universidade Federal de Minas Gerais.
2022
- Simon Donike (IRISA): super-resolution of Sentinel-2 time series, with Charlotte Pelletier and Dirk Tiede. MSc. in geodata science from U. Salzburg/U. South Brittany. Now a PhD student at the University of Valencia.
- Ibrahim Tamela (Cnam): digit detection in historical cadastre rasters, with Jean-Michel Follin, Élisabeth Simonetto and Frédéric Durand (Cnam/ESGT). Specialized master in photogrammetry and geomatics from ENSG. Now a Geographic Information Systems Engineer at BNETD.
- Zineb Lahrichi (Ubisoft): Controlled procedural generation of video game levels, with Guillaume Levieux (Cnam) and Ludovic Denoyer (Ubisoft). MSc. from CentraleSupélec. Now a PhD student at SonyAI.
2023
- Aimi Okabayashi (Cnam/IRISA): diffusion models for super-resolution of satellite image time series, with Charlotte Pelletier. MSc. from ENSTA. Now a PhD student at IRISA.
2024
- Charles Vin (Cnam): self-supervised learning on satellite image time series with ranking losses, MSc. from Sorbonne Université. Internship on representation learning for Earth Observation data, by learning how to order sequences of Sentinel-2 images (with C. Rambour).
💻 Code & datasets
Most of my research code is released under permissive open source licenses on GitHub.
MiniFrance
During my PhD thesis, I collected and built the MiniFrance dataset which combines open source land cover geodata from Copernicus Urban Atlas and aerial images from IGN. It was then improved upon by Javiera Castillo and published in open access.
QS-OCR
During my time at Quicksign, we released QS-OCR, a text/image classification dataset using OCR'd document images.
SEN12-FLOOD
Improving on the 2019 MediaEval Multimedia Satellite Task, we designed and released in 2020 SEN12-FLOOD, a multimodal SAR/multispectral dataset (Sentinel 1 and 2) for flood event detection in remote sensing time series.
BreizhSR
Multi-image super-resolution from Sentinel-2 to SPOT-6 with acquisitions from summer 2018. Code is available on GitHub and the dataset has been released on Zenodo.
📚 Publications
Preprints:
- 2024, Cross-sensor self-supervised training and alignment for remote sensing, Valerio Marsocci, Nicolas Audebert, preprint.
- 2023, Optimization of Rank Losses for Image Retrieval, Elias Ramzi, Nicolas Audebert, Clément Rambour, André Araujo, Xavier Bitot, Nicolas Thome, preprint.
Journals:
- 2022, Multi-attribute balanced sampling for disentangled GAN controls, Perla Doubinsky, Nicolas Audebert, Michel Crucianu, Hervé Le Borgne, Pattern Recognition Letters.
- 2021, Semi-Supervised Semantic Segmentation in Earth Observation: The MiniFrance Suite, Dataset Analysis and Multi-task Network Study, Javiera Castillo-Navarro, Bertrand Le Saux, Alexandre Boulch, Nicolas Audebert, Sébastien Lefèvre, Machine Learning Journal.
- 2019, Deep learning for classification of hyperspectral data: a comparative review, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, IEEE Geoscience and Remote Sensing Magazine.
- 2019, Distance transform regression for spatially-aware deep semantic segmentation, Nicolas Audebert, Alexandre Boulch, Bertrand Le Saux, Sébastien Lefèvre, Computer Vision and Image Understanding.
- 2018, Beyond RGB: Very High Resolution Urban Remote Sensing With Multimodal Deep Networks, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, ISPRS Journal of Photogrammetry and Remote Sensing, Elsevier, 2018.
- 2017, Segment-before-Detect: Vehicle Detection and Classification through Semantic Segmentation of Aerial Images, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, Remote Sensing, MDPI, 2017.
Conferences:
- 2024, 🐎 GalLop: Learning Global and Local Prompts for Vision-Language Models, Marc Lafon, Elias Ramzi, Clément Rambour, Nicolas Audebert, Nicolas Thome, ECCV, Milan.
- 2024, Improved symbolic music style classification with grammar-based hierarchical representations, Léo Géré, Nicolas Audebert, Florent Jacquemard, Philippe Rigaux, ISMIR, San Francisco.
- 2024, Detecting Out-Of-Distribution Earth Observation Images with Diffusion Models, Georges Le Bellier, Nicolas Audebert, EarthVision - CVPR workshop, Seattle.
- 2024, Cross-sensor super-resolution of irregularly sampled Sentinel-2 time series, Aimi Okabayashi, Nicolas Audebert, Simon Donike, Charlotte Pelletier, EarthVision - CVPR workshop, Seattle.
- 2024, Semantic Generative Augmentations for Few-Shot Counting, Perla Doubinsky, Nicolas Audebert, Michel Crucianu, Hervé Le Borgne, WACV 2024, Waikoloa.
- 2023, Wasserstein loss for Semantic Editing in the Latent Space of GANs, Perla Doubinsky, Nicolas Audebert, Michel Crucianu, Hervé Le Borgne, CBMI 2023, Orléans.
- 2022, Hierarchical Average Precision Training for Pertinent Image Retrieval, Elias Ramzi, Nicolas Audebert, Nicolas Thome, Clément Rambour, Xavier Bitot, ECCV 2022, Tel-Aviv.
- 2022, Efficient Autoprecoder-based deep learning for massive MU-MIMO Downlink under PA Non-Linearities, Xinying Cheng, Rafik Zayani, Marin Ferecatu, Nicolas Audebert, WCNC 2022, Austin.
- 2021, Robust and Decomposable Average Precision for Image Retrieval, Elias Ramzi, Nicolas Thome, Clément Rambour, Nicolas Audebert, Xavier Bitot, NeurIPS 2021, virtual.
- 2021, PKSpell: Data-Driven Pitch Spelling and Key Signature Estimation, Francesco Foscarin, Nicolas Audebert, Raphaël Fournier-S'Niehotta, ISMIR, virtual.
- 2021, Web Image Context Extraction with Graph Neural Networks and Sentence Embeddings on the DOM tree, Chen Dang, Hicham Randrianarivo, Raphaël Fournier-S'Niehotta, Nicolas Audebert, ECML/PKDD: GEM workshop, virtual.
- 2020, A Real-World Hyperspectral Image Processing Workflow for Vegetation Stress and Hydrocarbon Indirect Detection, Dominique Dubucq, Nicolas Audebert, Véronique Achard, Alexandre Alakian, Sophie Fabre, Anthony Credoz, Philippe Deliot, Bertrand Le Saux, XXIV ISPRS Congress, Nice.
- 2020, Flood Detection in Time Series of Optical and SAR images, Clément Rambour, Nicolas Audebert, Elise Koeniguer, Bertrand Le Saux, Michel Crucianu, Mihai Datcu, XXIV ISPRS Congress, Nice.
- 2019, Multimodal deep networks for text and image-based document classification, Nicolas Audebert, Catherine Herold, Kuider Slimani, Cédric Vidal, ECML/PKDD Workshop on Multi-view Learning, Würzburg.
- 2018, Generative Adversarial Networks for Realistic Synthesis of Hyperspectral Samples, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, IGARSS, Valencia, 2018.
- 2017, Couplage de données géographiques participatives et d’images aériennes par apprentissage profond, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, GRETSI, Juan-les-Pins, 2017.
- 2017, Joint Learning from Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, EarthVision - CVPR Workshop, Honolulu, 2017 (poster).
- 2017, Unstructured point cloud semantic labeling using deep segmentation networks, Alexandre Boulch, Bertrand Le Saux, Nicolas Audebert, Eurographics 3DOR, Lyon, 2017.
- 2017, Fusion of Heterogeneous Data in Convolutional Networks for Urban Semantic Labeling, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, JURSE, Dubai, 2017 (slides, poster).
- 2016, Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, ACCV, Taipei, 2016 (poster).
- 2016, On the usability of deep networks for object-based image analysis, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, GEOBIA, Enschede, 2016 (slides).
- 2016, How useful is region-based classification of remote sensing images in a deep learning framework?, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, IGARSS, Beijing, 2016 (slides).
- 2016, Structural classifiers for contextual semantic labeling of aerial images. Hicham Randrianarivo, Bertrand Le Saux, Nicolas Audebert, Michel Crucianu, Marin Ferecatu, Big Data from Space (BiDS), Tenerife, 2016.
Communications:
- 2023, Fusion d’informations pour la segmentation sémantique de nuages de points d’intérieurs de bâtiments, Maxime Mérizette, Nicolas Audebert, Jérôme Verdun, Pierre Kervella, ORASIS 2023 (Journées des jeunes chercheurs en vision par ordinateur), Carqueiranne, 2023.
- 2022, Now you see me: finding the right observation space to learn diverse behaviours by reinforcement in games, Raphaël Boige, Nicolas Audebert, Clément Rambour, Guillaume Levieux, Conférence sur l'Apprentissage automatique (CAp), Vannes, 2022.
- 2022, Caractérisation du répertoire vocal des chimpanzés par apprentissage profond, Nicolas Audebert, Marion Laporte, Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP), Vannes, 2022.
- 2022, Contrôle de la cardinalité par navigation dans l'espace latent des GANs, Perla Doubinsky, Nicolas Audebert, Michel Crucianu, Hervé Le Borgne, Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP), Vannes, 2022.
- 2019, Multimodal deep networks for text and image-based document classification, Nicolas Audebert, Catherine Herold, Kuider Slimani, Cédric Vidal, APIA (Applications Pratiques de l’Intelligence Artificielle), Toulouse, 2019 (slides).
- 2018, Segmentation sémantique profonde par régression sur cartes de distances signées, Nicolas Audebert, Alexandre Boulch, Bertrand Le Saux, Sébastien Lefèvre, Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP), Marne-la-Vallée, 2018.
- 2017, Réseaux de neurones profonds et fusion de données pour la segmentation sémantique d’images aériennes, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, ORASIS 2017 (Journées des jeunes chercheurs en vision par ordinateur), Colleville-sur-Mer, 2017 (slides).
- 2016, Deep Learning for Remote Sensing. Nicolas Audebert, Alexandre Boulch, Adrien Lagrange, Bertrand Le Saux, Sébastien Lefèvre, 16th ONERA-DLR Aerospace Symposium (ODAS), Oberpfaffenhofen, 2016.
- 2016, Deep learning for aerial cartography (poster). Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, Statlearn Workshop, Vannes, 2016.
👨🏫 Teaching
Current classes
I teach some classes related to image processing and machine learning at ENSG.
Past courses
At Cnam, I mostly taught pattern recognition, image processing and machine learning for text and image understanding:
- RCP208: Pattern Recognition (practical classes).
- RCP209: Supervised Learning and Neural Networks (deep learning).
- RCP211: Advanced Artificial Intelligence (generative models).
🗣 Resume
Short vitae
- Since Dec. 2023: Junior Research Director at Institut National de l'Information Géographique et Forestière (IGN).
- Since Sept. 2019: Associate Professor in Computer Science at Conservatoire national des arts & métiers (Cnam).
- Jan. 2019 - Aug. 2019: Research scientist in deep learning and computer vision at Quicksign.
- Oct. 2015 - Oct. 2018: PhD in Computer Science at ONERA and IRISA.
- 2015: MEng. in Computer Science from Supélec.
- 2015: MSc. in Human-Computer Interaction from Université Paris-Sud.
Community service
- Outstanding reviewer for ICCV 2021, BMVC 2021, ECCV 2024.
- Reviewer for ECCV, CVPR, ICCV, CVIU, NeurIPS, AAAI, BMVC, IEEE JSTARS, IEEE TGRS, IEEE TIP, ISPRS Journal of Photogrammetry and Remote Sensing, MDPI Remote Sensing… See my reviewer record on Publons.
- Program committee member for EarthVision in 2019, 2020, 2021, 2022, 2023, and 2024 (CVPR Workshop), MACLEAN 2019, 2020, 2021, 2022, 2023 and 2024 (ECML/PKDD Workshop).
🤷 Misc
I am an occasional translator for the French Python documentation project.
I have an Erdős number of 4. I don't have an Erdős–Bacon number yet but you never know…
My Kardashian index is 1.29.