Internship: Python Data Processing on Supercomputers for Large Parallel Numerical Simulations.

Contract type: Internship agreement

Level of qualifications required: Master's or equivalent

Function: Research Internship

About the research centre or Inria department

The Centre Inria de l’Université de Grenoble groups together almost 600 people in 23 research teams and 9 research support departments.

Staff are present on three campuses in Grenoble, working in close collaboration with other research and higher education institutions (Université Grenoble Alpes, CNRS, CEA, INRAE, etc.), as well as with key economic players in the area.

The Centre Inria de l’Université Grenoble Alpes is active in the fields of high-performance computing, verification and embedded systems, modeling of the environment at multiple levels, and data science and artificial intelligence. The center is a top-level scientific institute with an extensive network of international collaborations in Europe and the rest of the world.

Context

The internship will take place in the DataMove team, located in the IMAG building on the Saint-Martin-d’Hères campus (Univ. Grenoble Alpes) near Grenoble, under the supervision of Bruno Raffin (bruno.raffin@inria.fr), Andres Bermeo (andres.bermeo-marinelli@inria.fr) and Yushan Wang (yushan.wang@cea.fr).

The length of the internship is 4 months minimum and the start date is flexible, but a 2-month lead time is required before the internship can start due to administrative constraints. The DataMove team is a friendly and stimulating environment that gathers professors, researchers, PhD and Master students, all leading research on high-performance computing. The city of Grenoble is a student-friendly city surrounded by the Alps, offering a high quality of life and all kinds of mountain-related outdoor activities.

Assignment

The field of high-performance computing has reached a new milestone, with the world's most powerful supercomputers exceeding the exaflop threshold. These machines will make it possible to process unprecedented quantities of data, which can be used to simulate complex phenomena with superior precision in a wide range of application fields: astrophysics, particle physics, healthcare, genomics, etc. 

Without a significant change in practices, the increased computing capacity of the next generation of computers will lead to an explosion in the volume of data produced by numerical simulations. Managing this data, from production to analysis, is a major challenge.

The use of simulation results traditionally follows a well-established compute-store-compute protocol: the simulation writes its results to the file system, and analysis codes read them back later. The growing gap between compute capacity and file-system bandwidth makes it inevitable that the file system becomes a bottleneck. For instance, the Gysela code in production mode can produce up to 5 TB of data per iteration. Storing 5 TB of data at high frequency is clearly not feasible, and loading this quantity of data for later analysis and visualization is also a difficult task. To bypass this difficulty, we rely on the in situ data analysis approach.

In situ analysis consists of coupling the parallel simulation code, Gysela for instance, with a data analytics code that processes the data online, as soon as it is produced. In situ processing reduces the amount of data written to disk, limiting the pressure on the file system. This is a mandatory approach for running massive simulations like Gysela on the latest exascale supercomputers.

We developed an in situ data processing approach, called Deisa, relying on Dask, a Python environment for distributed tasks. Dask defines tasks that are executed asynchronously on workers once their input data are available. The user defines a graph of tasks to be executed; this graph is then forwarded to the Dask scheduler, which is in charge of (1) optimizing the task graph and (2) distributing the tasks for execution on the different workers, according to a scheduling algorithm aiming at minimizing the graph execution time.
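
To illustrate this model with standard Dask (independently of Deisa), the snippet below lazily builds a task graph over a chunked array and lets the scheduler distribute its execution across workers:

    import dask.array as da
    from dask.distributed import Client

    client = Client()  # start a local scheduler and workers for the example

    # Lazily build a task graph over a chunked array: nothing runs yet.
    x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
    y = (x - x.mean(axis=0)).std(axis=0)

    # The graph is sent to the scheduler, optimized, and executed on the workers.
    result = y.compute()
    print(result.shape)  # (10000,)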

Deisa extends Dask so that an MPI-based parallel simulation code can be coupled with Dask. Deisa enables the simulation code to send newly produced data directly into the workers' memories, and to notify the Dask scheduler that these data are available for analysis so that the associated tasks can be scheduled for execution.
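
Deisa's actual API is defined in the repository (reference 4 below); as a rough sketch of the underlying idea, using only plain dask.distributed primitives (the simulation_step function here is a hypothetical stand-in for the MPI simulation, not part of Deisa):

    import numpy as np
    from dask.distributed import Client

    client = Client()  # in a real run, connect to the cluster's Dask scheduler

    def simulation_step(step):
        # Hypothetical stand-in for data produced by one simulation iteration.
        return np.random.random((1_000, 1_000))

    for step in range(10):
        chunk = simulation_step(step)
        future = client.scatter(chunk)             # place the data in a worker's memory
        analysis = client.submit(np.mean, future)  # schedule an analysis task on it
        print(step, analysis.result())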

Compared to previous in situ approaches, which are mainly MPI-based, our approach based on Python tasks offers a good tradeoff between programming ease and runtime performance.

The goal of this internship is to investigate solutions to improve task placement, and thus performance, by enabling tasks to be scheduled in process (inside the simulation processes), in situ (on external processes running on the same compute nodes as the simulation code), or in transit (on dedicated nodes distinct from the simulation nodes). Running closer to the simulation reduces the need for data movements, but can potentially steal resources (CPU, GPU, network, memory, cache) from the simulation and slow it down. Dask task graph optimization is a good starting point for developing such approaches.
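
In plain Dask, one available lever for such placement is the ability to pin tasks to specific workers. The sketch below (all worker addresses and functions are illustrative, not part of Deisa) shows how placement constraints could emulate in situ versus in transit execution:

    import numpy as np
    from dask.distributed import Client

    client = Client(scheduler_file="scheduler.json")  # assumed deployment detail

    # Placeholder addresses: some workers share nodes with the simulation,
    # others run on dedicated analysis nodes.
    insitu_workers = ["tcp://node01:40001"]
    intransit_workers = ["tcp://node42:40001"]

    def reduce_chunk(chunk):
        # Cheap data reduction, worth running close to the data.
        return np.histogram(chunk, bins=64)

    def heavy_analysis(hist):
        # Expensive post-processing, better kept off the simulation nodes.
        counts, _edges = hist
        return counts.cumsum() / counts.sum()

    data = client.submit(np.random.random, (1_000, 1_000), workers=insitu_workers)
    reduced = client.submit(reduce_chunk, data, workers=insitu_workers)
    stats = client.submit(heavy_analysis, reduced, workers=intransit_workers)

Making such placement decisions automatically, based on the task graph rather than manual pinning, is one possible direction for the internship.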

References

  1. Dask - https://www.dask.org/
  2. Deisa Paper: Dask-Enabled In Situ Analytics. Amal Gueroudji, Julien Bigot, Bruno Raffin. HiPC 2021. https://hal.inria.fr/hal-03509198v1
  3. Deisa Paper: Dask-Extended External Tasks for HPC/ML In Transit Workflows. Amal Gueroudji, Julien Bigot, Bruno Raffin, Robert Ross. WORKS workshop at Supercomputing 23. https://hal.science/hal-04409157v1
  4. Deisa Code: https://github.com/pdidev/deisa
  5. Ray - https://github.com/ray-project/ray
  6. Damaris: How to Efficiently Leverage Multicore Parallelism to Achieve Scalable, Jitter-free I/O. Matthieu Dorier, Gabriel Antoniu, Franck Cappello, Marc Snir, Leigh Orf. IEEE Cluster 2012. https://inria.hal.science/hal-00715252

Main activities

After getting familiar with the base concepts and existing work, and after deploying and running some simple in situ data processing with Deisa on a supercomputer, the candidate will investigate solutions enabling a better task placement (in process, in situ, in transit), develop a prototype, and run experiments to measure the performance gains.



Skills

Expected skills include:

  • Knowledge of distributed and parallel computing, and of numerical simulations
  • Python, NumPy, parallel programming (MPI)
  • English (working language)

Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking (90 days / year) and flexible organization of working hours
  • Social, cultural and sports events and activities
  • Access to vocational training
  • Social security coverage under conditions

Remuneration

€4.35 per hour of actual presence, as of 1 January 2024.

About €590 gross per month (internship allowance).