PhD Position F/M Optimizing serverless computing in the edge-cloud continuum
Contract type : Fixed-term contract
Level of qualifications required : Graduate degree or equivalent
Function : PhD Position
About the research centre or Inria department
The Inria Rennes - Bretagne Atlantique Centre is one of Inria's eight centres and has more than thirty research teams. The Inria Centre is a major and recognized player in the field of digital sciences. It is at the heart of a rich R&D and innovation ecosystem: highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher education players, laboratories of excellence, technological research institutes, etc.
Context
Financial and working environment
This PhD position is part of the PEPR Cloud - Taranis project funded by the French government (France 2030). The PhD student will be recruited and hosted at the Inria Centre at Rennes University, and the work will be carried out within the MAGELLAN team in close collaboration with the DiverSE team and other partners in the Taranis project.
The PhD student will be supervised by:
- Shadi Ibrahim, MAGELLAN team in Rennes
- Olivier Barais, DiverSE team in Rennes
- Jalil Boukhobza, ENSTA, Brest
Assignment
Context
Serverless computing, also known as function-as-a-service, improves upon cloud computing by enabling programmers to develop and scale their applications without worrying about infrastructure management [1, 2]. It involves breaking an application into small functions that can be executed and scaled automatically, offering applications high elasticity, cost efficiency, and easy deployment [3, 4].
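To make the programming model concrete, here is a minimal sketch of what a serverless function looks like from the developer's point of view (the handler name, event shape, and classification logic are illustrative assumptions, not part of any specific platform's API):

```python
import json

# A serverless function is a small, stateless handler: it receives an
# event, computes a result, and returns it. The platform, not the
# programmer, decides how many instances to run and scales them with load.
def handler(event):
    """Illustrative stub: label a payload by its declared size."""
    payload = json.loads(event)
    label = "large" if payload.get("size_kb", 0) > 512 else "small"
    return json.dumps({"label": label})

# Because the handler keeps no local state, the platform can run any
# number of copies in parallel and route each request to any instance.
print(handler('{"size_kb": 1024}'))
```

Statelessness is what makes the elasticity described above possible: any instance can serve any request, so instances can be created and destroyed freely.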
Serverless computing is a key platform for building next-generation web services, which are typically realized by running distributed machine learning (ML) and deep learning (DL) applications. Indeed, 50% of AWS customers are now using serverless computing [5]. Significant efforts have focused on deploying and optimizing ML applications on homogeneous clouds by enabling fast storage services to share data between stages [6], by solving the cold-start problem (launching an appropriate container to perform a given function) when scaling resources [7], and by proposing lightweight runtimes to efficiently execute serverless workflows on GPUs [8]; and on building simulations to evaluate resource allocation and task scheduling policies [9]. However, few efforts have focused on deploying serverless computing in the Edge-Cloud Continuum, where resources are heterogeneous and have limited compute and storage capacity [10], or have addressed the simultaneous deployment of multiple applications.
References:
[1] Shadi Ibrahim, Omer Rana, Olivier Beaumont, and Xiaowen Chu. Serverless Computing. IEEE Internet Computing, vol. 28, no. 6, pp. 5-7, Nov.-Dec. 2024. doi: 10.1109/MIC.2024.3524507.
[2] Vincent Lannurien, Laurent d'Orazio, Olivier Barais, Stephane Paquelet, and Jalil Boukhobza. Serverless Cloud Computing: State of the Art and Challenges. In Serverless Computing: Principles and Paradigms, Lecture Notes on Data Engineering and Communications Technologies, vol. 162. Springer, 2023.
[3] Zijun Li, Linsong Guo, Jiagan Cheng, Quan Chen, Bingsheng He, and Minyi Guo. The Serverless Computing Survey: A Technical Primer for Design Architecture. ACM Comput. Surv. 54, 10s, Article 220 (January 2022), 34 pages.
[4] Mohammad Shahrad, Rodrigo Fonseca, Inigo Goiri, Gohar Chaudhry, Paul Batum, Jason Cooke, Eduardo Laureano, Colby Tresness, Mark Russinovich, and Ricardo Bianchini. Serverless in the wild: characterizing and optimizing the serverless workload at a large cloud provider. In Proceedings of the USENIX Annual Technical Conference, pages 205–218, 2020.
[5] AWS Insider. Report: AWS Lambda Popular Among Enterprises, Container Users. 2020. https://awsinsider.net/articles/2020/02/04/aws-lambda-usage-profile.aspx
[6] Hao Wu, Junxiao Deng, Hao Fan, Shadi Ibrahim, Song Wu, and Hai Jin. QoS-Aware and Cost-Efficient Dynamic Resource Allocation for Serverless ML Workflows. In 2023 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2023.
[7] Anup Mohan, Harshad Sane, Kshitij Doshi, Saikrishna Edupuganti, Naren Nayak, and Vadim Sukhomlinov. Agile cold starts for scalable serverless. In 11th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 19), 2019.
[8] Hao Wu, Yue Yu, Junxiao Deng, Shadi Ibrahim, Song Wu, Hao Fan, Ziyue Cheng, and Hai Jin. StreamBox: A Lightweight GPU SandBox for Serverless Inference Workflow. In 2024 USENIX Annual Technical Conference (USENIX ATC 24), pages 59-73, 2024.
[9] Vincent Lannurien, Laurent d'Orazio, Olivier Barais, Stephane Paquelet, and Jalil Boukhobza. HeROsim: An Allocation and Scheduling Simulator for Evaluating Serverless Orchestration Policies. IEEE Internet Computing, 2024.
[10] S. Moreschini, F. Pecorelli, X. Li, S. Naz, D. Hästbacka and D. Taibi, "Cloud Continuum: The Definition," in IEEE Access.
Main activities
The goal is to introduce a new framework that enables serverless computing in the Edge-Cloud Continuum. The framework will optimize the performance of stateless and ML applications when their deployments, and thus their functions, are co-located, and will allow these applications to scale up and down to meet workload dynamicity and maximize resource utilization, specifically by scaling the number and size of containers and by selecting and configuring storage services. In addition, we want to explore how to integrate cloud resources in a cost-effective manner.
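The container-scaling decision mentioned above can be sketched as a simple threshold policy; the function name, the queue-based trigger, and the target of ten pending requests per container are purely illustrative assumptions, not a description of the framework to be built:

```python
import math

def desired_instances(queued_requests, target_per_instance=10):
    """Illustrative autoscaling rule: provision one container per
    target_per_instance pending requests, keeping at least one warm
    instance to avoid a cold start on the next request."""
    return max(1, math.ceil(queued_requests / target_per_instance))

# With an empty queue we keep one instance; with 95 pending requests
# and a target of 10 per instance we scale out to 10 containers.
print(desired_instances(0))
print(desired_instances(95))
```

In the Edge-Cloud Continuum such a policy must additionally account for heterogeneous node capacities and for which storage service backs each function, which is precisely what makes the problem studied in this PhD harder than the homogeneous-cloud case.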
Skills
- An excellent Master's degree in computer science or equivalent
- Strong knowledge of distributed systems
- Ability to conduct experimental systems research
- Strong programming skills (C/C++, Python)
- Working experience in the areas of Big Data management, Cloud Computing, serverless computing, and Data Analytics is advantageous
- Very good communication skills in oral and written English
Benefits package
- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
- Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
- Professional equipment available (videoconferencing, loan of computer equipment, etc.)
- Social, cultural and sports events and activities
- Access to vocational training
- Social security coverage
Remuneration
Monthly gross salary : 2,200 euros
General Information
- Theme/Domain : Distributed Systems and middleware / System & Networks (BAP E)
- Town/city : Rennes
- Inria Center : Centre Inria de l'Université de Rennes
- Starting date : 2025-11-01
- Duration of contract : 3 years
- Deadline to apply : 2026-02-28
Warning : you must enter your e-mail address in order to save your application to Inria. Applications must be submitted online on the Inria website. Processing of applications sent from other channels is not guaranteed.
Instruction to apply
Please submit online : your resume, cover letter and, if applicable, letters of recommendation
Defence Security :
This position is likely to be situated in a restricted area (ZRR), as defined in Decree No. 2011-1425 relating to the protection of national scientific and technical potential (PPST). Authorisation to enter such an area is granted by the director of the unit, following a favourable Ministerial decision, as defined in the decree of 3 July 2012 relating to the PPST. An unfavourable Ministerial decision in respect of a position situated in a ZRR would result in the cancellation of the appointment.
Recruitment Policy :
As part of its diversity policy, all Inria positions are accessible to people with disabilities.
Contacts
- Inria Team : MAGELLAN
- PhD Supervisor : Shadi Ibrahim / Shadi.Ibrahim@inria.fr
About Inria
Inria is the French national research institute dedicated to digital science and technology. It employs 2,600 people. Its 200 agile project teams, generally run jointly with academic partners, include more than 3,500 scientists and engineers working to meet the challenges of digital technology, often at the interface with other disciplines. The Institute also employs numerous talents in over forty different professions. 900 research support staff contribute to the preparation and development of scientific and entrepreneurial projects that have a worldwide impact.