PhD Position F/M: Building physics-based multilevel surrogate models from neural networks. Application to electromagnetic wave propagation

Contract type : Fixed-term contract

Level of qualifications required : Graduate degree or equivalent

Other valued qualifications : Master in applied mathematics or scientific computing

Function : PhD Position

About the research centre or Inria department

Inria is a national research institute dedicated to digital sciences that promotes scientific excellence and transfer. Inria employs 2,400 collaborators organised in research project teams, usually in collaboration with its academic partners.
This agility allows its scientists, from the best universities in the world, to meet the challenges of computer science and mathematics, either through multidisciplinarity or with industrial partners.
A precursor to the creation of Deep Tech companies, Inria has also supported the creation of more than 150 start-ups from its research teams. Inria effectively faces the challenges of the digital transformation of science, society and the economy.

Context

Numerical simulations of electromagnetic wave propagation problems primarily rely on a space discretization of the system of Maxwell’s equations using methods such as finite differences or finite elements. For complex and realistic three-dimensional situations, such a process can be computationally prohibitive, especially when the end goal is a many-query analysis (e.g., design optimization and uncertainty quantification). Developing cost-effective surrogate models is therefore of great practical significance.

There are different ways of building surrogate models for a given system of partial differential equations (PDEs) in a non-intrusive manner (i.e., with minimal modifications to an existing discretization-based simulation methodology). In recent years, approaches based on neural networks (NNs) and Deep Learning (DL) have shown much promise, thanks to their capability of handling nonlinear and/or high-dimensional problems. Model-based neural networks, as opposed to purely data-driven neural networks, are currently the subject of intense research for devising high-performance surrogate models of parametric PDEs.

The concept of Physics-Informed Neural Networks (PINNs), introduced in [1] and later revisited in [2], is one typical example. PINNs are neural networks trained to solve supervised learning tasks while respecting given physical laws, described by a (possibly nonlinear) PDE system. A PINN can be seen as a continuous approximation of the solution to the PDE. PINNs seamlessly integrate information from both data and PDEs by embedding the PDE residual into the loss function of a neural network. Automatic differentiation is then used to differentiate the network and evaluate this loss function.
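As a purely illustrative sketch (not part of the official project description), the PyTorch training loop below shows how a PDE residual can be embedded in the loss function of a small network; the network size, the wavenumber k and the sampling strategy are arbitrary assumptions chosen for a one-dimensional Helmholtz toy problem.

import torch
import torch.nn as nn

k = 4.0  # assumed wavenumber for the toy Helmholtz problem u'' + k^2 u = 0 on (0, 1)

net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def pde_residual(x):
    # Helmholtz residual u_xx + k^2 u, computed with automatic differentiation.
    x = x.requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_xx + k**2 * u

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x_col = torch.rand(128, 1)           # interior collocation points
    x_bc = torch.tensor([[0.0], [1.0]])  # boundary points
    u_bc = torch.sin(k * x_bc)           # assumed Dirichlet data from the exact solution sin(k x)
    loss = pde_residual(x_col).pow(2).mean() + (net(x_bc) - u_bc).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Note that the physics enters only through the residual term, so no labeled solution data are needed away from the boundary.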

Following similar ideas, and relying on the widely known result that NNs are universal approximators of continuous functions, DeepONets [3] are deep neural networks (DNNs) whose goal is to learn continuous operators or complex systems from streams of scattered data. A DeepONet consists of a DNN encoding the discrete input function space (branch net) and another DNN encoding the domain of the output functions (trunk net). PINNs and DeepONets are merely two examples of the many DNN architectures that have contributed to making the field of Scientific Machine Learning (SciML) so popular in recent years.
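The following schematic PyTorch sketch illustrates this branch/trunk structure; the sensor count m, the layer widths and the latent dimension p are illustrative assumptions, not values prescribed by the project.

import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, m=100, p=64):
        super().__init__()
        # Branch net: encodes the input function sampled at m fixed sensor locations.
        self.branch = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, p))
        # Trunk net: encodes a query coordinate y in the domain of the output function.
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.Tanh(), nn.Linear(128, p))

    def forward(self, u_sensors, y):
        # The operator value G(u)(y) is approximated by the inner product of the two latent codes.
        return (self.branch(u_sensors) * self.trunk(y)).sum(dim=-1, keepdim=True)

model = DeepONet()
u = torch.randn(8, 100)  # batch of 8 input functions, each sampled at 100 sensors
y = torch.rand(8, 1)     # one query coordinate per function
out = model(u, y)        # predicted operator values, shape (8, 1)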

This PhD project is expected to be a follow-up to the Master internship referred to as 2023-06804.

Assignment

The main challenge when devising scalable physics-based DNNs for realistic applications is the computational cost of network training, especially when they are only used for forward modeling. Another important issue lies in their capacity to deal accurately with high-frequency and/or multiscale problems. In particular, it has been observed that, when higher frequencies and multiscale features are present in the PDE solution, the accuracy of PINNs usually decreases rapidly, while the cost of training and evaluation drastically increases. There are multiple reasons for this behavior. One is the spectral bias of NNs, i.e., the well-studied property that NNs have difficulty learning high frequencies. Another is that, as high frequencies and multiscale features are added, more collocation points, as well as a larger NN with significantly more free parameters, are typically required to accurately approximate the solution. This increases the complexity of the optimization problem to be solved when training the NN.

In the present PhD project, we propose to study multilevel distributed strategies for fast training of physics-based DNNs for modeling electromagnetic wave propagation in the frequency domain. We will in particular investigate strategies that can accurately and efficiently deal with the simulation of electromagnetic wave interaction with heterogeneous media, and geometrically complex scattering structures. In this context, the ultimate goal of this project is to develop high-performance parametric NN surrogates that will be used as the forward model in inverse design studies. The first step will be to develop novel methodologies, and assess their performance in a simplified two-dimensional case, on a Helmholtz-type PDE. The extension to the more general three-dimensional Maxwell’s equations will be considered in a second step, informed by the results obtained in the Helmholtz case in two space dimensions.
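For reference, a standard form of the two-dimensional Helmholtz problem targeted in this first step can be written as follows; the precise coefficients, source terms and boundary conditions will depend on the configurations actually studied:

\Delta u(\mathbf{x}) + k(\mathbf{x})^2 \, u(\mathbf{x}) = f(\mathbf{x}), \quad \mathbf{x} \in \Omega \subset \mathbb{R}^2,

where k is the (possibly spatially varying) wavenumber and f a source term, supplemented with suitable boundary or radiation conditions on \partial\Omega.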

[1] I.E. Lagaris, A. Likas and D.I. Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neur. Netw., Vol. 9, No. 5, pp. 987-1000 (1998)

[2] M. Raissi, P. Perdikaris and G.E. Karniadakis. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comp. Phys., Vol. 378, pp. 686-707 (2019)

[3] L. Lu, P. Jin, G. Pang, Z. Zhang and G.E. Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat. Mach. Intell., Vol. 3, pp. 218-229 (2021)

[4] V. Dolean, A. Heinlein, S. Mishra and B. Moseley. Multilevel domain decomposition-based architectures for physics-informed neural networks. arXiv preprint arXiv:2306.05486 (2023) https://doi.org/10.48550/arXiv.2306.05486

Main activities

  1. Bibliographical study for a review of (1) physics-based DNNs for wave propagation models and (2) strategies for designing multilevel and distributed physics-based DNNs.
  2. Study of the two-dimensional case, considering wave propagation modeled by a Helmholtz-type PDE.
  3. Study of the three-dimensional case, dealing with the system of frequency-domain Maxwell’s equations.
  4. Software development activities.
  5. Numerical assessment of the proposed physics-based multilevel NN surrogate models.
  6. Publications.

Skills

Technical skills and level required

  • Sound knowledge of numerical analysis for PDEs
  • Sound knowledge of Machine Learning / Deep Learning with Artificial Neural Networks
  • Basic knowledge of the physics of electromagnetic wave propagation

Software development skills : Python programming, TensorFlow, PyTorch

Relational skills : team worker (verbal communication, active listening, motivation and commitment)

Other appreciated skills : good level of spoken and written English

Remuneration

Gross salary per month: 2082€ (years 1 & 2) and 2190€ (year 3)