PhD Position F/M Latency-driven resource placement in Fog-based IoT systems

Contract type : Fixed-term contract

Level of qualifications required : Graduate degree or equivalent

Function : PhD Position

About the research centre or Inria department

The Inria Rennes - Bretagne Atlantique Centre is one of Inria's eight centres and has more than thirty research teams. The Inria Centre is a major and recognized player in the field of digital sciences. It is at the heart of a rich R&D and innovation ecosystem: highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher-education players, laboratories of excellence, technological research institutes, etc.


Cloud computing and its three facets (IaaS, PaaS, and SaaS) have become essential in today's Internet applications, offering many advantages such as scalability, elasticity, and flexibility. Despite its different service models, the cloud still faces many issues that can impact the end-user (QoS), the provider (cost), and the environment (sustainability).

Fog computing is a recent paradigm that addresses such issues by provisioning resources outside the cloud, closer to the end-devices, at the edge of the network. This reduces latency and minimizes the traffic between the end-user and the cloud platform [3]. Several studies have shown that fog systems can indeed reduce latency compared to cloud systems, but this reduction is not guaranteed and highly depends on the component placement, sometimes leading to worse performance [8]. It has also been demonstrated that less traffic is sent to the cloud when using fog systems. However, the fog lacks proper monitoring and reconfiguration mechanisms, especially for IoT applications [7], for which the cloud infrastructure is known not to be a viable solution.

However, evaluating realistic large-scale fog infrastructures is a complex task, given the cost of deployment and the absence of a realistic view of real-world deployments. In an IoT context, geo-distributed fog infrastructures mostly rely on SDN approaches [5], which tend to conceal networking aspects such as the topology or the routing decisions [8]. As a consequence, the impact of the elasticity of a fog solution is mainly evaluated on the data plane side [4].


In the context of IoT applications (i.e., environments with critical response times, such as Smart City sensing or vehicular networks), latency is at the center of a tremendous number of studies aiming to optimize the placement of resources in distributed architectures. To guarantee quality of service, several solutions exist to reconfigure the component placement (migration) and can reduce the overall latency by changing components and routes. However, precisely identifying which component is the source of the problematic latency remains scarcely addressed. When a reconfiguration or a migration is triggered by a latency issue, it can be beneficial to check whether the source of the latency can be resolved before initiating a migration or a full reconfiguration. Some studies compare response times between the major cloud actors depending on the load [1, 6]. Proper measurement protocols exist, but they always refer to specific case studies [2] and cannot readily be integrated into fog systems.
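To make the "identify the source before migrating" idea concrete, here is a minimal, purely illustrative Python sketch (the segment names, sample values, and the 30 ms budget are hypothetical, not part of the position description): it localizes the dominant latency segment from per-segment measurements before any reconfiguration decision is taken.

```python
from statistics import mean

def localize_latency(samples: dict[str, list[float]],
                     budget_ms: float) -> tuple[str, bool]:
    """Return the segment with the highest mean latency and whether
    the end-to-end mean latency exceeds the budget."""
    means = {seg: mean(vals) for seg, vals in samples.items()}
    culprit = max(means, key=means.get)
    return culprit, sum(means.values()) > budget_ms

# Hypothetical per-segment round-trip samples (ms) for one request path.
samples = {
    "device->fog link": [4.8, 5.1, 5.0],
    "fog processing":   [22.0, 25.0, 24.0],
    "fog->cloud link":  [9.9, 10.2, 10.1],
}
culprit, violated = localize_latency(samples, budget_ms=30.0)
# Here the fog node's processing time dominates, so migrating to a
# less-loaded fog node may help, whereas a routing change would not.
```

The point of such a classification is that the appropriate corrective action (route change, migration, scale-out) depends on which segment is responsible, not only on the end-to-end figure.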


The objective of this thesis is to study the optimization of resource placement in Fog-based IoT systems based on latency measurements, by evaluating the control-plane cost of a change in the architecture. It will particularly address the problem of identifying the origin of a latency issue and, based on this finding, will propose an optimization that takes into account the cost and elasticity of the control plane.
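As a toy sketch of the kind of trade-off the thesis will formalize (the node names, latency values, and migration-cost parameter below are all hypothetical), a greedy rule might migrate a component only when the expected latency gain outweighs the control-plane cost of the reconfiguration:

```python
def choose_placement(current: str,
                     latency_ms: dict[str, float],
                     migration_cost_ms: float) -> str:
    """Greedy sketch: migrate only if the best candidate's latency gain
    outweighs the (amortized) control-plane cost of the migration."""
    best = min(latency_ms, key=latency_ms.get)
    gain = latency_ms[current] - latency_ms[best]
    return best if gain > migration_cost_ms else current

# Hypothetical measured end-to-end latencies per candidate node (ms).
latencies = {"cloud": 80.0, "fog-A": 35.0, "fog-B": 28.0}
print(choose_placement("fog-A", latencies, migration_cost_ms=10.0))
# prints "fog-A": the 7 ms gain from fog-B does not cover the 10 ms cost
```

A real solution would of course replace the single scalar cost with a proper model of the control plane's cost and elasticity, which is precisely what the thesis aims to develop.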



[1]  Dániel Géhberger, Dávid Balla, Markosz Maliosz, and Csaba Simon. Performance evaluation of low latency communication alternatives in a containerized cloud environment. In 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), pages 9–16, 2018.

[2]  Devasena Inupakutika, Gerson Rodriguez, David Akopian, Palden Lama, Patricia Chalela, and Amelie G. Ramirez. On the performance of cloud-based mhealth applications: A methodology on measuring service response time and a case study. IEEE Access, 10:53208–53224, 2022.

[3]  Zheng Li and Francisco Millar-Bilbao. Characterizing the cloud’s outbound network latency: An experimental and modeling study. In 2020 IEEE Cloud Summit, pages 172–173, 2020.

[4]  Carla Mouradian, Diala Naboulsi, Sami Yangui, Roch H. Glitho, Monique J. Morrow, and Paul A. Polakos. A comprehensive survey on fog computing: State-of-the-art and research challenges. IEEE Communications Surveys and Tutorials, 20(1):416–464, 2018.

[5]  Feyza Yildirim Okay and Suat Ozdemir. Routing in fog-enabled iot platforms: A survey and an sdn-based solution. IEEE Internet of Things Journal, 5(6):4871–4889, 2018.

[6]  István Pelle, János Czentye, János Dóka, and Balázs Sonkoly. Towards latency sensitive cloud native applications: A performance study on aws. In 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), pages 272–280, 2019.

[7]  U. Tomer and P. Gandhi. An enhanced software framework for improving qos in iot. Engineering, Technology and Applied Science Research, 12(5):9172–9177, Oct. 2022.

[8]  Benjamin Warnke, Yuri Cotrado Sehgelmeble, Johann Mantler, Sven Groppe, and Stefan Fischer. Simora: Simulating open routing protocols for application interoperability on edge devices. In 2022 IEEE 6th International Conference on Fog and Edge Computing (ICFEC), pages 42–49, 2022.

[9]  Sami Yangui, Pradeep Ravindran, Ons Bibani, Roch H. Glitho, Nejib Ben Hadj-Alouane, Monique J. Morrow, and Paul A. Polakos. A platform as-a-service for hybrid cloud/fog environments. In 2016 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), pages 1–7, 2016.

Main activities

  • Explore the state of the art of IoT/Fog emulation/simulation platforms
  • Integrate an IoT solution into a Fog architecture platform
  • Propose a profile and a classification of latency issues
  • Propose an innovative way to optimize resource placement, taking into account latency metrics and control-plane capabilities


Skills

  • A master's degree in distributed systems, cloud computing, and/or networking
  • Good knowledge of distributed systems
  • Good programming skills (e.g., C++ and Python)
  • Basic knowledge of simulation
  • Excellent communication and writing skills in English (Note that knowledge of French is appreciated but not required for this position)
  • Knowledge of the following technologies is not mandatory but will be considered as a plus:
    • Cloud resource scheduling
    • Routing, Software Defined networks
    • Revision control systems: git, svn
    • Linux distribution: Debian, Ubuntu

Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Possibility of teleworking (90 days per year) and flexible organization of working hours
  • Partial payment of insurance costs


Monthly gross salary amounting to 2051 euros for the first and second years and 2158 euros for the third year.