Post-Doctoral Research Visit F/M Advancing Hybrid AI: Integration of Data-Driven and Model-Based Approaches for Enhanced Autonomous Driving

Contract type : Fixed-term contract

Level of qualifications required : PhD or equivalent

Function : Post-Doctoral Research Visit

Level of experience : Recently graduated

About the research centre or Inria department

The Inria centre at Université Côte d'Azur includes 37 research teams and 8 support services. The centre's staff (about 500 people) is made up of scientists of different nationalities, engineers, technicians and administrative staff. The teams are mainly located on the university campuses of Sophia Antipolis and Nice, as well as Montpellier, in close collaboration with research and higher education laboratories and establishments (Université Côte d'Azur, CNRS, INRAE, INSERM ...), but also with regional economic players.

With a presence in the fields of computational neuroscience and biology, data science and modeling, software engineering and certification, as well as collaborative robotics, the Inria Centre at Université Côte d'Azur is a major player in terms of scientific excellence through its results and collaborations at both European and international levels.

Context

Every year, the Inria International Relations Department offers a few postdoctoral positions in order to support Inria's international collaborations.

The postdoctoral contract will have a duration of 12 to 24 months. The default start date is November 1st, 2024, and no later than January 1st, 2025. The postdoctoral fellow will be recruited by one of the Inria Centres in France, but it is recommended that the time be shared between France and the partner's country (please note that the postdoctoral fellow has to start the contract in France and that the visits have to respect Inria's rules for missions).

Partnership between Inria and KAIST:

This post-doc subject is proposed in the context of the associated team AISENSE, involving the AVELab at KAIST and the ACENTAURI team at Inria. Both research groups have a high level of expertise in the field of autonomous vehicles. The AVELab has so far focused on end-to-end machine learning approaches for advanced sensing and situation awareness, in order to handle unexpected novel situations. The research areas of the ACENTAURI team at Inria Sophia Antipolis are complementary: ACENTAURI focuses on hybrid AI, proposing to combine model-based approaches with data-based approaches (machine learning). The fusion of sensor data for situation awareness is of greatest scientific interest here.

The main scientific objective of the collaboration is to study how to build a long-term perception system that acquires situation awareness for the safe navigation of autonomous vehicles. The perception system will fuse different sensor data (lidar and vision) in order to localize a vehicle in a dynamic peri-urban environment, to identify and estimate the state (position, orientation, velocity, …) of all possible moving agents (cars, pedestrians, …), and to extract high-level semantic information. To achieve this objective, we will compare different methodologies. On the one hand, we will study model-based techniques, whose rules are pre-defined according to a given model and which need little data to be set up. On the other hand, we will study end-to-end data-based techniques, in which a single neural network is trained with data to perform the aforementioned tasks (e.g., detection, localization, and tracking). We believe that a deep analysis and comparison of these techniques will help us study how to combine them in a hybrid AI system, where model-based knowledge is injected into neural networks and where neural networks provide better results when the model is too complex to be handled explicitly. This problem is hard to solve since it is not clear what the best way is to combine these two radically different approaches. Finally, the perception information will be used to acquire situation awareness for safe decision making.
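
To make the comparison concrete, the short Python sketch below illustrates the model-based side with a constant-velocity Kalman filter that estimates the planar state (position and velocity) of a single moving agent from fused position detections. It is only a minimal, hypothetical example: the state layout, noise levels and measurement model are assumptions made for illustration, not specifications of the project.

    # Minimal sketch (illustrative only): a constant-velocity Kalman filter for
    # estimating the planar state [x, y, vx, vy] of one moving agent from noisy
    # position detections (e.g. the output of a lidar/vision fusion front-end).
    # All matrices and noise levels below are assumed values.
    import numpy as np

    class ConstantVelocityKF:
        def __init__(self, dt=0.1, q=0.5, r=0.2):
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1,  0],
                               [0, 0, 0,  1]], dtype=float)  # motion model
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)   # only position is observed
            self.Q = q * np.eye(4)                            # process noise (assumed)
            self.R = r * np.eye(2)                            # measurement noise (assumed)
            self.x = np.zeros(4)                              # state [x, y, vx, vy]
            self.P = np.eye(4)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, z):
            y = z - self.H @ self.x                           # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    # Usage: feed one fused (x, y) detection per time step.
    kf = ConstantVelocityKF()
    for z in [np.array([0.00, 0.00]), np.array([0.11, 0.02]), np.array([0.19, 0.05])]:
        kf.predict()
        kf.update(z)
    print(kf.x)  # estimated position and velocity of the tracked agent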

Application modalities:

Interested candidates must send the following application files:

  • Detailed CV with a description of the PhD and a complete list of publications with the two most significant ones highlighted
  • Motivation letter from the candidate
  • Two letters of recommendation
  • Passport copy

Deadline for application: June 2, 2024.

Assignment

Eligibility criteria:

Candidates for postdoctoral positions are recruited after the end of their Ph.D. or after a first post-doctoral period: for the candidates who obtained their PhD in the Northern hemisphere, the date of the Ph.D. defense shall be later than September 1, 2022; in the Southern hemisphere, later than April 1, 2022.

In order to encourage mobility, the postdoctoral position must take place in a scientific environment that is truly different from that of the Ph.D. (and, if applicable, from the position held since the Ph.D.); particular attention is thus paid to French or international candidates who obtained their doctorate abroad.

Proposed research subject :

The most widely adopted methodology for building intelligent autonomous mobile robots interacting in real time with their environment has been to follow a "model-driven" approach [1]: first design a global model of the system (robot, sensors, environment), then define a set of rules to perform a task, and finally design a stable and robust sensor-based control law to execute the task. If the robot fails to handle a task, we need to re-design the model, the rules or the control law. Model-driven approaches attempt to capture knowledge and derive decisions, actuated by control laws, through explicit representations and rules. They enable us to understand complex processes and predict future events. The strength of this approach is that it works very well when the model fits the real world correctly, for example in controlled environments, and it allows us to prove analytically the stability and robustness of control laws [2]. However, this is a very difficult way of building a system to control a robot in a dynamic environment, since there are so many different rules, and exceptions to those rules, that we cannot capture all of them. The approach is therefore limited when the model complexity increases: it is difficult to find a complete model of the system and, even if we find one, it is difficult to estimate all the parameters of such a complex model in real time.
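
As a minimal illustration of such a model-based, sensor-based control law, the sketch below implements a classical image-based visual servoing rule of the form v = -lambda * L^+ (s - s*), in the spirit of [2], where the interaction matrix L is derived analytically from the camera model. The feature coordinates, depths and gain are assumptions chosen only for the example.

    # Minimal sketch of the model-driven paradigm: a proportional image-based
    # visual servoing law. The interaction matrix comes from the analytical
    # camera model; feature values, depths and the gain are assumed.
    import numpy as np

    def interaction_matrix(points, depths):
        """Stack the analytical 2x6 interaction matrices of normalized image points."""
        rows = []
        for (x, y), Z in zip(points, depths):
            rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
            rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
        return np.array(rows)

    def ibvs_control(s, s_star, depths, lam=0.5):
        """Proportional velocity command driving current features s toward s*."""
        L = interaction_matrix(s, depths)
        error = (np.array(s) - np.array(s_star)).reshape(-1)
        return -lam * np.linalg.pinv(L) @ error   # 6-DOF camera velocity (v, w)

    # Usage with two tracked image points (normalized coordinates) at an assumed depth of 1 m.
    s      = [(0.10, 0.05), (-0.08, 0.02)]
    s_star = [(0.00, 0.00), (-0.10, 0.00)]
    v = ibvs_control(s, s_star, depths=[1.0, 1.0])
    print(v)  # [vx, vy, vz, wx, wy, wz] sent to the low-level controller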

To avoid these limitations, two solutions have been investigated in the literature. On the one hand, one can go deeper into model-driven approaches by increasing the modeling fidelity (geometric models, deformable models, photometric models, kinematic models, dynamic models, rules, exceptions, ...) [3] [4]. A more accurate model generally implies increasing the number of its parameters and the amount of data needed to estimate those parameters. This solution is therefore generally associated with an increase in computation time. On the other hand, we can use data-driven approaches (machine learning), which focus on building a system that can identify the right answer after having "seen" a large number of example question/answer pairs and having been "trained" to produce the right answer [5]. There are many different ways of doing this, perhaps the most popular being artificial neural network algorithms. The right answers can be given by a human supervisor (supervised learning) or selected from the data with a minimum of human supervision (weakly supervised or unsupervised learning). The necessary ingredient for this approach is an appropriately large dataset (typically from 10K to 1M instances) or extensive trial-and-error experiments. After many (often millions of) training cycles, the system "learns" to get the answer increasingly right. Even though theoretical and technological progress has made it possible to use artificial neural networks in complex robotics tasks, the learning step is generally performed off-line, once and for all, since training with large datasets is still computationally expensive (e.g., the Tesla Autopilot involves 48 neural networks that take 70,000 GPU hours to train). The strength of this approach is that it does not depend on a human accurately describing the system through a set of rules: with data-driven AI, the system learns "on its own" from the training data we give it. The larger and more varied the training data, the better the system can be. The weakness of this approach is that it requires a large amount of data (which may not always be available) and that it becomes extremely difficult to find theoretical proofs of stability and robustness for sensor-based control laws.
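
For contrast, the sketch below shows the data-driven paradigm in its simplest supervised form: a small neural network fitted to observation/command pairs with PyTorch. The data is synthetic and the architecture is an arbitrary assumption; a real system would be trained on logged sensor data.

    # Minimal sketch of the data-driven paradigm: a small network trained on
    # question/answer (observation/command) pairs instead of an explicit model.
    # The dataset is synthetic and purely illustrative.
    import torch
    import torch.nn as nn

    # Toy dataset: 64-dimensional "sensor features" -> 2-dimensional command (v, w).
    X = torch.randn(4096, 64)
    true_map = torch.randn(64, 2)
    Y = X @ true_map + 0.01 * torch.randn(4096, 2)    # unknown ground-truth mapping

    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(20):                           # "training cycles"
        for i in range(0, len(X), 256):               # mini-batches
            xb, yb = X[i:i + 256], Y[i:i + 256]
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)             # fit the question/answer pairs
            loss.backward()
            optimizer.step()
    print(float(loss))  # training error after the last batch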

The methodology studied in this postdoc will be to wisely combine data-driven and model-driven approaches, increasing both the fidelity of the models and the size of the processed data, and to design sensor-based control laws whose stability and robustness can be proven theoretically, in order to guarantee the safety of people and property. Hybrid AI combining data-driven and model-driven approaches is a promising research direction [6]. The main scientific objective of the postdoc will be to investigate further how to build a bridge between model-driven and data-driven approaches to artificial intelligence, with the ambition to feed the models with knowledge provided by data-driven approaches and, conversely, to constrain data-driven approaches with highly accurate prior knowledge coming from the robot task. The first problem is hard because data-driven approaches are able to capture knowledge that is not in the model; we therefore need to be able to interpret their results (e.g., by conceiving explainable ANNs) in order to build physically meaningful models (probably increasing their complexity). The second problem is hard because injecting prior model-driven knowledge into data-driven approaches implies designing and developing new architectures.
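
One possible, deliberately simplified way to inject model-driven knowledge into a data-driven component, in the spirit of the second direction above, is to add a model-consistency term to the training loss. The sketch below penalizes a learned state predictor when it deviates from a known constant-velocity kinematic model; the network size, data and loss weighting are illustrative assumptions, not the method prescribed by this subject.

    # Minimal sketch of a hybrid training loss: a data term plus the residual of
    # a known kinematic model (here constant velocity). Sizes, weights and data
    # are assumed for illustration only.
    import torch
    import torch.nn as nn

    dt = 0.1
    net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 4))  # predicts next state
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

    def model_residual(state, pred):
        """Penalty for violating the constant-velocity model x' = x + v * dt."""
        expected_pos = state[:, :2] + dt * state[:, 2:]
        return ((pred[:, :2] - expected_pos) ** 2).mean()

    for step in range(200):
        state = torch.randn(256, 4)                    # current [x, y, vx, vy] samples
        target = state + dt * torch.cat([state[:, 2:], torch.zeros(256, 2)], dim=1)
        pred = net(state)
        data_loss = ((pred - target) ** 2).mean()      # fit the observed transitions
        prior_loss = model_residual(state, pred)       # stay close to the physical model
        loss = data_loss + 0.1 * prior_loss            # weighting is an assumption
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()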

The proposed hybrid AI system will be evaluated on Inria autonomous vehicles. For the experimental evaluation, the candidate will work in close collaboration with a PhD student and an R&D engineer who will be in charge of the experimental platform.

References:

[1] L. Kunze, N. Hawes, T. Duckett, M. Hanheide, and T. Krajník. Artificial intelligence for long-term robot autonomy: A survey. IEEE Robotics and Automation Letters, 3(4):4023–4030, 2018.

[2] E. Malis and F. Chaumette. Theoretical improvements in the stability analysis of a new class of model-free visual servoing methods. IEEE Transactions on Robotics and Automation, 18(2):176–186, April 2002.

[3] B. Frank, C. Stachniss, R. Schmedding, M. Teschner, and W. Burgard. Learning object deformation models for robot motion planning. Robotics and Autonomous Systems, 62(8):1153–1174, 2014.

[4] L. Zaidi, J. A. Corrales Ramón, B. C. Bouzgarrou, Y. Mezouar, and L. Sabourin. Model-based strategy for grasping 3D deformable objects using a multi-fingered robotic hand. Robotics and Autonomous Systems, 95:196–206, 2017.

[5] H. Pierson and M. Gashler. Deep learning in robotics: a review of recent research. Advanced Robotics, 31(16):821–835, 2017.

[6] Z. Liu, E. Malis, and P. Martinet. A new dense hybrid stereo visual odometry approach. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6998–7003, 2022.

Main activities

The work will be broken down into stages as follows:

  • Conduct an extensive review of existing literature on hybrid AI (i.e. combination of data-driven techniques with model-based reasoning).
  • Identify key theoretical concepts and frameworks relevant to the integration of data-driven and model-based approaches.
  • Formulate a conceptual framework for hybrid AI, outlining the principles and methodologies underlying its implementation.
  • Develop novel methodologies for integrating data-driven learning and model-based reasoning in hybrid AI systems.
  • Conduct preliminary evaluations to assess the feasibility and effectiveness of the proposed methodologies.
  • Develop a prototype hybrid AI system tailored to the specific application domain considered in the AISENSE associated team, incorporating the methodologies developed in earlier phases.
  • Conduct comprehensive testing and validation of the prototype systems using simulated and real-world datasets.
  • Evaluate the performance of the developed hybrid AI systems against benchmark datasets and existing AI approaches.
  • Refine the hybrid AI models and improve their performance.
  • Write a report on the design, development, and deployment of the proposed hybrid AI systems.
  • Disseminate research findings through international conference presentations and international journal publications.

Skills

The candidate should preferably have a PhD in Robotics or a related field, together with solid foundations in software development (C/C++, Python, Linux, ROS, Git). A good level of English (reading, writing, speaking) is also important.

Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports events and activities
  • Access to vocational training
  • Social security coverage

Remuneration

Gross Salary: 2788 € per month