About me

I am a computer vision and machine learning researcher interested in representation learning, multimodal learning & Earth Observation. I am currently an assistant professor at CNAM in the Vertigo team.


December 2020: The RL & Games project at the CEDRIC laboratory is looking for an MSc. level intern for 5/6 months starting spring 2021 on producing diverse behaviours for AI using reinforcement learning in video games. See the internship offer (in French) for more details. (this position has been filled)

November 2020: The Vertigo team is looking for an MSc. level intern for 5/6 months starting spring 2021 on equivariant neural networks for image classification and semantic segmentation. See the internship offer (in French) for more information. (this position has been filled)

October 2020: I was invited to organize a hands-on tutorial on machine learning for astrophysics at the SFtools-bigdata conference. Code is available on my Github page.

October 2020: A new specialization certificate on Artificial Intelligence has opened at Cnam. This program is tailored for professionals who want to deepen their understanding of statistical learning, artificial intelligence and deep learning. Classes are taught either remotely or in the evening. Check out the curriculum here (in French).

June 2020: I am looking for a PhD candidate for a fully-funded thesis on controlling generative networks for image synthesis in collaboration with CEA. Check out the full offer for more details. This position has been filled.

January 2020: Our laboratory is hiring a junior assistant professor in Computer Science with a strong focus on machine learning and artificial intelligence. Application is done through the GALAXIE portal. It is a teaching and research permanent position in Paris. Position has been filled.

November 2019: We are looking to hire M2 interns for 5/6 months starting in spring 2020 on topics related to deep learning for image understanding. I personally offer one internship on weakly-supervised semantic segmentation. Check out the internship details. I am also involved in another M2 internship offer on deep learning for MIMO radiocommunication (see the subject in French) (these positions have been filled). Finally, I will co-advise with Qwant Research an M2 internship on deep learning for fast webpage information extraction. If this interests you, feel free to contact me for more information.

September 2019: I am joining the Conservatoire National des Arts & Métiers (CNAM) as an assistant professor in the Vertigo team.

August 2019: Our journal article on using signed distance transform regression to regularize semantic segmentation deep networks has been accepted for publication in CVIU.

July 2019: I will be at APIA 2019 in Toulouse from July 1st to July 5th to present our work on multi-modal text/image classification with deep nets. Feel free to come for a chat!

May 2019: Our paper on multi-modal text/image deep networks for document image classification has been accepted to APIA 2019 in Toulouse.

April 2019: I will be presenting at the GdR ISIS meeting on weakly and semi-supervised learning for image and video classification. My talk will detail some of the work I did at Quicksign on image/text clustering for document recognition.

April 2019: Our review on deep convolutional and recurrent neural networks for hyperspectral image classification has been accepted for the IEEE Geoscience and Remote Sensing Magazine special issue on hyperspectral data. Preprint here.

January 2019: I joined Quicksign R&D team as a research scientist.

October 2018: I successfully defended my PhD thesis! The manuscript (in French) is available here with slides.

July 2018: I was at IGARSS'18 in Valencia, where I presented our work on generative adversarial networks for hyperspectral sample synthesis. You can find the code here!

March 2018: We have one paper accepted for IGARSS 2018 on generative adversarial networks for hyperspectral data synthesis. We'll also appear on the Inria Aerial Image Labeling benchmark write-up on building extraction.

January 2018: I ported the code of our deep network for aerial/satellite semantic segmentation to PyTorch for an easier use: fork it on GitHub!

November 2017: Our latest journal paper on data fusion for remote sensing data using deep fully convolutional networks is out!

July 2017: I was at CVPR 2017 for the Earthvision workshop, where I presented our work on semantic mapping using deep nets and OpenStreetMap data.

June 2017: I collaborated with the LISTIC team on using deep nets to perform semantic segmentation on Sentinel-2 images. This work will be presented at IGARSS'17 in Fort Worth, Texas.

June 2017: I presented at ORASIS 2017 our work on data fusion with deep networks for remote sensing (slides).

May 2017: Our submission on joint deep learning using optical and OSM data for semantic mapping of aerial/satellite images has been accepted to the EarthVision 2017 CVPR Workshop!

April 2017: Our Remote Sensing journal paper on vehicle segmentation for detection and classification is out in open access on the MDPI website.

March 2017: My colleague Alexandre Boulch will present the SnapNet architecture for semantic segmentation of unstructured point clouds at Eurographics 3DOR workshop. It is the current state-of-the-art on the Semantic3D dataset (code).

March 2017: Our paper on data fusion for remote sensing using deep nets won the 2nd best student paper award at JURSE 2017! Slides and poster are available.

February 2017: The code of the deep network we used for the ISPRS Vaihingen 2D Semantic Labeling Challenge is out on GitHub!

January 2017: We will present two invited papers at JURSE 2017!

November 2016: I will be at ACCV'16 in Taipei to present our poster on semantic segmentation of Earth Observation data using multi-scale and multimodal deep networks.

October 2016: I will be at PyCon-fr (the French Python conference) to speak about deep learning using Python (slides (in French) and video (in French, too)).

September 2016: Our paper on the use of deep networks for object-based image analysis of vehicles in the ISPRS dataset has been distinguished by the "Best Benchmarking Contribution Award" at GEOBIA 2016!

September 2016: I will be at GEOBIA 2016 in Enschede to talk about our work on object-based analysis of cars in remote sensing images using deep learning.

September 2016: Our paper on semantic segmentation for Earth Observation was accepted at ACCV'16 for a poster presentation. Check out the state-of-the-art results on the ISPRS Vaihingen 2D Semantic Labeling Challenge!

July 2016: I will be at IGARSS'16 in Beijing to present our work on superpixel-based semantic segmentation of aerial images.

April 2016: Our paper on region-based classification of remote sensing images using deep features has been accepted at IGARSS'16 for an oral presentation.

October 2015: I started as a PhD student at ONERA and IRISA.

Research areas

My research interests shift over time, but my work falls broadly into one of three areas, which overlap more often than not.

Machine learning

Machine learning consists of training a computer to perform a task without explicitly detailing how to do so, instead building models that leverage patterns in the data and perform inference from them. I find deep learning especially interesting, i.e. producing abstract representations of raw data that allow the computer to manipulate high-level concepts in numerical terms. I am mostly interested in how machine learning can improve machine perception (image, sound and text processing and understanding) through useful representations.
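To make the idea concrete, here is a toy sketch of learning from data rather than from hand-written rules: a 1-nearest-neighbour classifier labels a new point by copying the label of its closest training example. All the points and labels below are made up for the demo.

```python
import math

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of ((x, y), label) pairs.
    """
    def dist(pair):
        return math.dist(pair[0], query)
    return min(train, key=dist)[1]

# Two made-up clusters: "urban" points near (0, 0), "forest" near (5, 5).
train = [((0, 0), "urban"), ((1, 0), "urban"),
         ((5, 5), "forest"), ((4, 5), "forest")]

print(nearest_neighbor(train, (0.5, 0.2)))  # urban
print(nearest_neighbor(train, (4.5, 4.8)))  # forest
```

No rule for "urban" or "forest" is ever written down; the decision emerges entirely from the stored examples, which is the core idea that deep networks scale up with learned representations.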

Earth Observation

Remote sensing of Earth Observation data is a very broad field that aims to gather information about our planet and the objects that cover its surface. Earth Observation leverages satellite images (in the broad sense that radar, multispectral and infrared data are also images) to understand ecological phenomena, map urban expansion or monitor natural disasters. My work focuses on processing and making sense of the large volumes of data acquired by these sensors for land cover mapping, change detection and image interpretation through automated means.
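As a small illustration of what multispectral data makes possible, the classic NDVI (Normalized Difference Vegetation Index) exploits the fact that vegetation reflects strongly in the near-infrared and absorbs red light, giving a simple per-pixel vegetation score used in land cover mapping. The reflectance values below are made up for the demo.

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 suggest dense vegetation; values near 0 or below,
    bare soil or water. `eps` avoids division by zero on dark pixels.
    """
    return (nir - red) / (nir + red + eps)

# Toy reflectances: a vegetated pixel vs. a water pixel.
print(round(ndvi(0.50, 0.08), 2))  # high NDVI: healthy vegetation
print(round(ndvi(0.02, 0.05), 2))  # negative NDVI: water
```

Indices like this are hand-crafted features; much of my work is about letting deep networks learn such spectral-spatial features directly from the data.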

Data fusion

Information processing in the modern age constrains computers to work with a wide range of data types: images, videos, texts, sounds, Lidar, radar, sonar and various other kinds of sensors. In the real world, humans sense their environment through multiple modalities by analog means. Data fusion aims to reproduce these abilities in the digital world for cross-modal data mining and pattern recognition, e.g. making sense of complex manuscripts or understanding movie clips.
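One of the simplest fusion strategies is late fusion: each modality produces its own class scores, and the fused prediction averages them. The sketch below is a toy example with made-up scores for a two-class document classification task; real systems learn the fusion (or fuse features earlier in the network) rather than averaging fixed scores.

```python
def late_fusion(*modality_scores):
    """Average per-class scores from several modalities; return the best class."""
    n = len(modality_scores)
    classes = modality_scores[0].keys()
    fused = {c: sum(scores[c] for scores in modality_scores) / n for c in classes}
    return max(fused, key=fused.get), fused

# Made-up per-modality scores for one document.
image_scores = {"invoice": 0.7, "contract": 0.3}
text_scores = {"invoice": 0.4, "contract": 0.6}

label, fused = late_fusion(image_scores, text_scores)
print(label)  # fused prediction: invoice
```

Here the image branch is confident while the text branch leans the other way; averaging lets the stronger signal win, which is exactly the kind of cross-modal arbitration data fusion aims to automate.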

Machine learning and games

I have recently taken an interest in machine learning for games. I lead a small project at Cnam on improving reinforcement learning to generate diverse and challenging AI in video games (collab. with Clément Rambour and Guillaume Levieux). Our goal is to better understand what makes an AI or an environment interesting from a gameplay point of view. My focus is on using machine learning in clever ways to improve the gaming experience.
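Exploration is one simple lever for behavioural diversity in reinforcement learning: an epsilon-greedy policy mostly exploits the best-known action but occasionally acts at random, so the agent does not always play the same way. The action values below are made up; this is only a minimal sketch of the exploration/exploitation trade-off, not the project's actual method.

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a random action, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

q = [0.1, 0.9, 0.3]  # estimated value of three actions (made up)
rng = random.Random(0)  # seeded for reproducibility
actions = [epsilon_greedy(q, 0.2, rng) for _ in range(10)]
print(actions)  # mostly action 1, with occasional random picks
```

Tuning epsilon (or replacing it with richer diversity objectives) changes how predictable the agent feels, which ties directly into what makes an AI interesting to play against.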


I am sometimes looking for collaborators (either at masters or PhD level). You can cold email me, but please check that my current research interests you and is something you would feel comfortable working on.

PhD students

I currently advise two PhD students:
  • Perla Doubinsky (Cnam/CEA): controlling generative models to steer image generation, with Michel Crucianu (Cnam) and Hervé Le Borgne (CEA) since November 2020.
  • Elias Ramzi (Cnam/SWORD): deep learning for visual search in large logo databases, with Nicolas Thome (Cnam), Clément Rambour (Cnam) and Xavier Bitot (SWORD Group) since January 2021.


From September 2020 to September 2021, I am working with Jean-François Robitaille from IPAG on deep learning for multispectral image processing to detect multiscale interstellar structures. Our goal is to use data mining to detect interstellar objects in large clouds.


From September 2019 to September 2020, I worked with Clément Rambour (CNAM/ONERA) during his post-doc on multi-temporal analysis of SAR and optical data. We focused our work on flood detection in SAR/multispectral time series.

MSc. students


I am currently advising two students for their master's internships:
  • Raphaël Boige (master Data Science, Télécom Paris/École Polytechnique). Internship on learning agents with diversified behaviours for video games through reinforcement learning.
  • João Pedro Araújo Ferreira Campos (ENSTA Paris). Internship on spatially-equivariant representation learning for remote sensing and medical images.


I have supervised or co-supervised several students for their research internships:
  • Javiera Navarro-Castillo (ONERA): Towards the "ImageNet" of remote sensing, MSc. from École Polytechnique, 2018. Internship on large-scale semi-supervised semantic segmentation (with A. Boulch, B. Le Saux and S. Lefèvre). Now a PhD student at ONERA.
  • Adel Redjimi (Quicksign): Semi-supervised learning for document image classification, MSc. from INP Grenoble, 2019. Internship on document classification with scarce supervision and unlabeled data (with K. Slimani). Now a PhD student at Cnam/CEA.
  • Chen Dang (Qwant): fast webpage information extraction, MSc. from Université Paris Sciences & Lettres, 2020. Internship on webpage image context extraction for indexing (with R. Fournier-S'niehotta and H. Randrianarivo). Now a PhD student at Orange Labs.
  • Samar Chebbi (Cnam): deep learning for linear precoding in massive MIMO systems with power amplifiers, Sup'com engineering diploma, 2020 (with R. Zayani and M. Ferecatu).
  • Yannis Karmim (Cnam): semi-supervised learning for semantic segmentation, MSc. from Sorbonne Universités, 2020. Internship on equivariance constraints for segmentation models (with N. Thome). Now a research engineer at Cnam.

Code & datasets

Most of my research code is released under permissive open source licenses on GitHub. During my time at Quicksign, we released QS-OCR, a text/image classification dataset using OCR'd document images.



  • 2021, Semi-Supervised Semantic Segmentation in Earth Observation: The MiniFrance Suite, Dataset Analysis and Multi-task Network Study, Javiera Castillo-Navarro, Bertrand Le Saux, Alexandre Boulch, Nicolas Audebert, Sébastien Lefèvre, Machine Learning Journal (in press).
  • 2019, Deep learning for classification of hyperspectral data: a comparative review, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, IEEE Geoscience and Remote Sensing Magazine.
  • 2019, Distance transform regression for spatially-aware deep semantic segmentation, Nicolas Audebert, Alexandre Boulch, Bertrand Le Saux, Sébastien Lefèvre, Computer Vision and Image Understanding.
  • 2018, Beyond RGB: Very High Resolution Urban Remote Sensing With Multimodal Deep Networks, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, ISPRS Journal of Photogrammetry and Remote Sensing, Elsevier, 2018.
  • 2017, Segment-before-Detect: Vehicle Detection and Classification through Semantic Segmentation of Aerial Images, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, Remote Sensing, MDPI, 2017.


  • 2020, A Real-World Hyperspectral Image Processing Workflow for Vegetation Stress and Hydrocarbon Indirect Detection, Dominique Dubucq, Nicolas Audebert, Véronique Achard, Alexandre Alakian, Sophie Fabre, Anthony Credoz, Philippe Deliot, Bertrand Le Saux, XXIV ISPRS Congress, Nice.
  • 2020, Flood Detection in Time Series of Optical and SAR images, Clément Rambour, Nicolas Audebert, Elise Koeniguer, Bertrand Le Saux, Michel Crucianu, Mihai Datcu, XXIV ISPRS Congress, Nice.
  • 2019, Multimodal deep networks for text and image-based document classification, Nicolas Audebert, Catherine Herold, Kuider Slimani, Cédric Vidal, ECML/PKDD Workshop on Multi-view Learning, Würzburg.
  • 2018, Generative Adversarial Networks for Realistic Synthesis of Hyperspectral Samples, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, IGARSS, Valencia, 2018.
  • 2017, Couplage de données géographiques participatives et d’images aériennes par apprentissage profond, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, GRETSI, Juan-les-Pins, 2017.
  • 2017, Joint Learning from Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, EarthVision - CVPR Workshop, Honolulu, 2017 (poster).
  • 2017, Unstructured point cloud semantic labeling using deep segmentation networks, Alexandre Boulch, Bertrand Le Saux, Nicolas Audebert, Eurographics 3DOR, Lyon, 2017.
  • 2017, Fusion of Heterogeneous Data in Convolutional Networks for Urban Semantic Labeling, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, JURSE, Dubai, 2017 (slides, poster).
  • 2016, Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, ACCV, Taipei, 2016 (poster).
  • 2016, On the usability of deep networks for object-based image analysis, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, GEOBIA, Enschede, 2016 (slides).
  • 2016, How useful is region-based classification of remote sensing images in a deep learning framework?, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, IGARSS, Beijing, 2016 (slides).
  • 2016, Structural classifiers for contextual semantic labeling of aerial images. Hicham Randrianarivo, Bertrand Le Saux, Nicolas Audebert, Michel Crucianu, Marin Ferecatu, Big Data from Space (BiDS), Tenerife, 2016.

Communications:

  • 2019, Multimodal deep networks for text and image-based document classification, Nicolas Audebert, Catherine Herold, Kuider Slimani, Cédric Vidal, APIA (Applications Pratiques de l’Intelligence Artificielle), Toulouse, 2019 (slides).
  • 2017, Réseaux de neurones profonds et fusion de données pour la segmentation sémantique d’images aériennes, Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, ORASIS 2017 (Journées des jeunes chercheurs en vision par ordinateur), Colleville-sur-Mer, 2017 (slides).
  • 2016, Deep Learning for Remote Sensing. Nicolas Audebert, Alexandre Boulch, Adrien Lagrange, Bertrand Le Saux, Sébastien Lefèvre, 16th ONERA-DLR Aerospace Symposium (ODAS), Oberpfaffenhofen, 2016.
  • 2016, Deep learning for aerial cartography (poster). Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre, Statlearn Workshop, Vannes, 2016.


Current classes

I mainly teach pattern recognition, image processing and machine learning for text and image understanding at CNAM:
  • RCP208: Pattern Recognition (practical classes).
  • RCP209: Supervised Learning and Neural Networks (intro and practical classes).
  • RCP211: Advanced Artificial Intelligence (generative models).
  • RCP217: Artificial Intelligence for Multimedia Applications (deep learning for audio processing).
  • RCP216: Large Scale Machine Learning with Spark.

Past courses

In the past I have taught "Introduction to C++ programming" and "Algorithms" classes at École Nationale des Ponts et Chaussées. Some resources are still available in the archive.


Short vitae

  • Since Sept. 2019: Assistant Professor in Computer Science at Conservatoire National des Arts & Métiers (CNAM).
  • Jan. 2019 - Aug. 2019: Research scientist in deep learning and computer vision at Quicksign.
  • Oct. 2015 - Oct. 2018: PhD in Computer Science at ONERA and IRISA.
  • 2015: MEng. in Computer Science from Supélec.
  • 2015: MSc. in Human-Computer Interaction from Université Paris-Sud.

Community service


I am an occasional translator for the French Python documentation project.

I have an Erdős number of 4. I don't have an Erdős–Bacon number yet but you never know…

My Kardashian index is 0.61.