Internship: Generative methods for online adaptive deep learning training
Contract type : Internship agreement
Level of qualifications required : Master's or equivalent
Function : Research internship
About the research centre or Inria department
The Centre Inria de l’Université de Grenoble groups together almost 600 people in 23 research teams and 9 research support departments.
Staff are present on three campuses in Grenoble, working in close collaboration with other research and higher education institutions (Université Grenoble Alpes, CNRS, CEA, INRAE, …), as well as with key economic players in the area.
The Centre Inria de l’Université Grenoble Alpes is active in the fields of high-performance computing, verification and embedded systems, modeling of the environment at multiple levels, and data science and artificial intelligence. The center is a top-level scientific institute with an extensive network of international collaborations in Europe and the rest of the world.
Context
Advisers: Bruno Raffin (Bruno.Raffin@inria.fr), Sofya Dymchenko (sofya.dymchenko@inria.fr)
The internship will take place in the DataMove team, located in the IMAG building on the Saint-Martin-d'Hères campus (Univ. Grenoble Alpes) near Grenoble. The internship lasts 4 months minimum and the start date is flexible, but a 2-month delay is needed before the internship can start due to administrative constraints. The DataMove team is a friendly and stimulating environment that gathers professors, researchers, PhD and Master students, all leading research on High-Performance Computing. Grenoble is a student-friendly city surrounded by the Alps, offering a high quality of life and all kinds of mountain-related outdoor activities.
Assignment
Subject context
In supervised learning, successfully training advanced neural networks requires annotated data of sufficient quantity and quality. In the natural sciences (physics, chemistry, weather modeling), observational data remains a limiting factor. One alternative is to create synthetic training data numerically. This offers several advantages: synthetic data can be generated at will, in potentially unlimited amounts; its quality can be degraded in a controlled manner for more robust training; and the coverage of the parameter space can be adapted to focus training where relevant. Today, a large variety of simulation codes to create such data is available, from computer graphics and engineering to computational physics, biology, and chemistry. When training data is produced by simulation codes, it can be generated alongside the training itself.
This approach has multiple benefits. First, there is no need to store and move a huge pre-created data set: float matrices of data can take terabytes of memory, and reading them from disk at every training iteration might take more time than the iteration itself. Instead, data is kept in working memory and created "on the fly": when a new data point is created, it replaces an old one. This allows the model to see terabytes of data over its lifetime while storing only a small portion at any given time. Second, training does not repeat the same data as in an epoch-based approach: a continuously updated training set potentially improves the generalization quality of the model. More importantly, the update of the training set and the creation of new data can be adaptive, driven by the observed behavior of the neural network during training. However, such adaptive data generation is a challenging question.
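The "on-the-fly" substitution described above can be pictured as a fixed-capacity buffer over a stream of simulator outputs. This is only an illustrative sketch, not the MelissaDL implementation:

```python
from collections import deque

# Fixed-capacity buffer: when a new sample arrives, the oldest one is
# evicted, so only a small window of the generated data stream is held
# in memory at any time, while the model sees the whole stream.
buffer = deque(maxlen=4)

for sample in range(10):  # stand-in for simulator outputs streamed in
    buffer.append(sample)

print(list(buffer))  # only the 4 most recent samples remain: [6, 7, 8, 9]
```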
Active learning addresses this challenge by adaptively sampling the input parameters of simulators based on training progress, aiming to generate more relevant data, and thus faster, higher-quality training. Current approaches to active learning for simulation-based training often follow a phased algorithm: 1) generate an initial training set by uniformly sampling input points; 2) (re)train the model on this training set; 3) use feedback from the model's performance to generate or augment the training set, then return to (2). Fundamentally, methods differ in their choice of "feedback" metric (the acquisition function) and in how the next training set is created (the acquisition algorithm).
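The phased loop above can be sketched on a toy problem. Everything here is invented for illustration: the "simulator" is a cheap analytic function, the "model" a polynomial fit, and the acquisition function simply keeps the pool points with the largest residual:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):
    # Toy stand-in for an expensive simulation code.
    return np.sin(3 * x)

def train(x, y, degree=5):
    # Stand-in for "(re)training the model": a least-squares polynomial fit.
    return np.polynomial.Polynomial.fit(x, y, degree)

def acquire(model, x_pool, y_pool, k):
    # Acquisition function: keep the pool points with the largest residual,
    # a crude proxy for "where the model performs worst".
    residual = np.abs(model(x_pool) - y_pool)
    return x_pool[np.argsort(residual)[-k:]]

# 1) initial training set from uniform sampling
x_train = rng.uniform(-1, 1, 32)

for _ in range(3):
    # 2) (re)train the model on the current training set
    model = train(x_train, simulator(x_train))
    # 3) use model feedback to augment the training set, then return to 2)
    x_pool = rng.uniform(-1, 1, 256)
    x_train = np.concatenate([x_train, acquire(model, x_pool, simulator(x_pool), k=16)])

print(len(x_train))  # 32 initial points + 3 rounds of 16 = 80
```

Real acquisition functions rely on model uncertainty or loss statistics rather than ground-truth residuals, since evaluating the simulator on a large pool is exactly what one wants to avoid.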
Our research
Our team's research focuses on exploring and developing new online active learning methods for the efficient training of surrogates -- neural networks meant to substitute for simulation codes. We have developed Breed for the online adaptive training of surrogates such as Physics-Informed Neural Networks (PINNs), Neural Operators, and plain dense neural networks, within our MelissaDL framework, which allows the training to be highly distributed and the training data to be created on the fly.
Our related publications
- MelissaDL x Breed: Towards Data-Efficient On-line Supervised Training of Multi-parametric Surrogates with Active Learning, SC AI4S 2024: https://hal.science/hal-04712480v1
- Training Deep Surrogate Models with Large Scale Online Learning, ICML 2023: https://hal.science/hal-04102400v1
- Loss-driven sampling within hard-to-learn areas for simulation-based neural network training, NeurIPS ML4Phys 2023: https://hal.science/hal-04305233v1
- Melissa: Simulation-Based Parallel Training, NeurIPS AI4S 2022: https://hal.science/hal-03842106v1
Main activities
This internship focuses on investigating the use of generative methods for active learning, e.g., diffusion posterior sampling to generate input points based on the model's uncertainty. Currently, the Breed method uses an importance sampling technique together with loss statistics.
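Breed's actual algorithm is described in the publications listed above; as a rough, invented illustration of the general idea of loss-driven importance sampling, one can bias the sampling of new input parameters toward regions of input space where the observed loss is highest:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented per-bin loss statistics, as if collected during training:
# the input space [0, 1) is split into 4 bins with these observed mean losses.
bin_edges = np.array([0.0, 0.25, 0.5, 0.75])
mean_loss = np.array([0.1, 0.9, 0.4, 0.2])

# Importance sampling: draw new input points preferentially from
# the bins where the observed loss is highest.
probs = mean_loss / mean_loss.sum()
chosen = rng.choice(len(bin_edges), size=1000, p=probs)
new_points = bin_edges[chosen] + rng.uniform(0.0, 0.25, size=1000)

counts = np.bincount(chosen, minlength=4)
print(counts.argmax())  # bin 1, which has the highest loss, gets the most samples
```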
The first objective is to get familiar with the domain and read up on existing work: surrogates, neural operators, active learning, online training, Bayesian methods. Then, you will start working on possible generative methods for active learning (normalizing flows, diffusion models, generative adversarial networks, energy-based models, etc.), developing and evaluating their performance through experiments on use cases such as the heat equation and fluid dynamics equations. You will join a team consisting of a PhD student, a research engineer, and a research director (Bruno Raffin); we have regular meetings and daily communication - you will not be alone!
The perfect candidate has basic knowledge of generative deep learning, confident programming skills to develop ML/DL algorithms in Python, the motivation to quickly learn new things, and, most importantly, an interest in applying AI to the physical sciences!
Related papers
- Population Monte Carlo with Normalizing Flow. https://arxiv.org/abs/2312.03857
- All-in-one simulation-based inference. https://arxiv.org/abs/2404.09636
- Adaptive Generation of Training Data for ML Reduced Model Creation. https://www.osti.gov/biblio/1923172
- A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. https://arxiv.org/abs/2207.10289
- Mitigating Propagation Failures in Physics-informed Neural Networks using Retain-Resample-Release (R3) Sampling. https://arxiv.org/abs/2207.02338
- Deep Active Learning by Leveraging Training Dynamics. https://arxiv.org/abs/2110.08611
Skills
Technical skills: Python (numpy, pytorch), Git, Jupyter notebooks.
The main communication language is English.
Benefits package
- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
- Possibility of teleworking (90 days / year) and flexible organization of working hours
- Social, cultural and sports events and activities
- Access to vocational training
- Social security coverage under conditions
Remuneration
€4.35 per hour of actual presence, as of 1 January 2024.
About 590€ gross per month (internship allowance)
General Information
- Theme/Domain : Optimization, machine learning and statistical methods / Scientific computing (BAP E)
- Town/city : Saint-Martin-d'Hères
- Inria Center : Centre Inria de l'Université Grenoble Alpes
- Starting date : 2025-02-01
- Duration of contract : 6 months
- Deadline to apply : 2024-11-21
Warning : you must enter your e-mail address in order to save your application to Inria. Applications must be submitted online on the Inria website. Processing of applications sent from other channels is not guaranteed.
Instruction to apply
CV + cover letter
Applications must be submitted online via the Inria website. Processing of applications submitted via other channels is not guaranteed.
Defence Security :
This position is likely to be situated in a restricted area (ZRR), as defined in Decree No. 2011-1425 relating to the protection of national scientific and technical potential (PPST). Authorisation to enter an area is granted by the director of the unit, following a favourable Ministerial decision, as defined in the decree of 3 July 2012 relating to the PPST. An unfavourable Ministerial decision in respect of a position situated in a ZRR would result in the cancellation of the appointment.
Recruitment Policy :
As part of its diversity policy, all Inria positions are accessible to people with disabilities.
Contacts
- Inria Team : DATAMOVE
- Recruiter :
Dymchenko Sofya / sofya.dymchenko@inria.fr
The keys to success
The perfect candidate has basic knowledge of probabilities, deep learning, and physics, confident programming skills to develop ML/DL algorithms in Python, the motivation to quickly learn new things, and, most importantly, an interest in applying AI to the physical sciences!
About Inria
Inria is the French national research institute dedicated to digital science and technology. It employs 2,600 people. Its 200 agile project teams, generally run jointly with academic partners, include more than 3,500 scientists and engineers working to meet the challenges of digital technology, often at the interface with other disciplines. The Institute also employs numerous talents in over forty different professions. 900 research support staff contribute to the preparation and development of scientific and entrepreneurial projects that have a worldwide impact.