PhD Position F/M: Anticipating Human Behavior for Human-Robot Interaction

Contract type: Fixed-term contract

Level of qualifications required: Graduate degree or equivalent

Function: PhD Position

About the research centre or Inria department

The Inria Grenoble research centre brings together almost 600 people in 23 research teams and 7 research support departments.

Staff are present on three campuses in Grenoble, working in close collaboration with other research and higher education institutions (University Grenoble Alpes, CNRS, CEA, INRAE, …), as well as with key economic players in the area.

Inria Grenoble is active in the fields of high-performance computing, verification and embedded systems, modeling of the environment at multiple levels, and data science and artificial intelligence. The centre is a top-level scientific institute with an extensive network of international collaborations in Europe and the rest of the world.

Context

Collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble

Research activities in MIAI aim to cover all aspects of AI and its applications, with a current focus on embedded and hardware architectures for AI, learning and reasoning, perception and interaction, AI & society, AI for health, AI for environment & energy, and AI for Industry 4.0.

The close connection to the University of Grenoble Alpes additionally offers many opportunities for collaboration, further training, and networking within and across research fields.

Assignment

Keywords
human-robot interaction, machine learning, computer vision, representation learning

Theme/Domain
Human-robot systems are challenging because the actions of one agent can significantly influence the actions of others. Anticipating a partner's actions is therefore crucial. By inferring beliefs, intentions, and desires, we can develop cooperative robots that learn to assist humans or other robots effectively. In this project, we are particularly interested in estimating human intentions to enable collaborative tasks between humans and robots, such as human-to-robot and robot-to-human handovers.

Context and Motivation
Even after three decades of research in human-robot interaction, current systems still struggle to comprehend users’ intentions, goals, and needs, and thus fail to anticipate their actions. A shared "non-verbal" language is missing, one that would enable communication beyond classical user commands for robot control. This absence of a common representation of behavioral skills, which would allow agents to anticipate each other's behavior, limits an agent's ability to interact and collaborate effectively with humans. We aim to build a new generation of human-robot interaction systems that proactively adapt to users’ future input actions by monitoring their movements and predicting their interaction intentions.

Contact
pia.bideau@inria.fr
The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH), and Xavier Alameda-Pineda (RobotLearn).

Please apply with the usual application documents: CV, certificates (MSc degree), a research statement, and two references. We will screen applications on a rolling basis; the position will be filled as soon as a suitable candidate is found.

Main activities

Summary

This project aims to estimate human intentions within a collaborative human-robot setup using visual data. Traditional approaches to action classification focus on identifying and categorizing immediate actions. Collaborative tasks such as human-to-robot handovers, however, demand a more nuanced understanding: they require not only accurately classifying, for example, the type of human grasp, but also effectively anticipating future actions.

To achieve this, the project will:

  1. Identify Human Hand-Object Configurations: Develop algorithms to precisely identify and classify the positioning and interaction of human hands with objects. This involves understanding the object's shape and its impact on the human grasp pose.
  2. Anticipate Future Actions: Implement predictive models that can estimate what actions a human might take next. This anticipation is crucial for seamless collaboration, allowing robots to prepare and respond proactively to human movements.

By combining these elements, the project seeks to create a more intuitive and effective human-robot interaction framework. This will enable robots to better understand and predict human behavior, leading to smoother collaboration across a variety of handover tasks, as sketched below.
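
To make the two subtasks concrete, here is a minimal sketch in Python/PyTorch of one possible formulation, not the project's prescribed method: a shared temporal encoder over hand-keypoint sequences, with one head classifying the current grasp type and one head anticipating the next action. All names, dimensions, and class counts are illustrative assumptions.

    # Minimal sketch: joint grasp classification and action anticipation.
    # All dimensions, class counts, and names are illustrative assumptions.
    import torch
    import torch.nn as nn

    N_KEYPOINTS = 21     # e.g., one 3D keypoint per hand joint (assumption)
    N_GRASP_TYPES = 7    # hypothetical grasp taxonomy size
    N_ACTIONS = 10       # hypothetical set of future actions (reach, release, ...)

    class GraspAndIntentNet(nn.Module):
        """Shared temporal encoder with two heads: current grasp, next action."""
        def __init__(self, hidden=128):
            super().__init__()
            self.encoder = nn.GRU(input_size=N_KEYPOINTS * 3,
                                  hidden_size=hidden, batch_first=True)
            self.grasp_head = nn.Linear(hidden, N_GRASP_TYPES)   # what is happening
            self.intent_head = nn.Linear(hidden, N_ACTIONS)      # what comes next

        def forward(self, keypoints):
            # keypoints: (batch, time, N_KEYPOINTS * 3) hand-pose sequences
            _, h = self.encoder(keypoints)
            h = h.squeeze(0)                                     # (batch, hidden)
            return self.grasp_head(h), self.intent_head(h)

    model = GraspAndIntentNet()
    clips = torch.randn(4, 30, N_KEYPOINTS * 3)   # 4 clips of 30 frames each
    grasp_logits, intent_logits = model(clips)    # shapes (4, 7) and (4, 10)

Sharing the encoder between both heads reflects the intuition above: the same hand-object configuration features that identify a grasp also carry the cues needed to anticipate what the human will do next.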

Goals

  • Developing Latent Action Representations: Utilize generative models to create latent action representations that support both classification and anticipation tasks. These representations will capture the underlying structure of human actions, allowing the system to recognize and predict behaviors more effectively (see the sketch at the end of this section).
  • Addressing Challenges of Low Data Availability: Implement strategies to work effectively in environments with limited data. This may involve techniques such as transfer learning, data augmentation, and synthetic data generation to ensure robust performance despite scarce training examples.
  • Managing Initially Unknown Classes: Develop methods to identify and incorporate new, previously unseen classes of actions. This capability will enable the system to adapt and learn in real-time, enhancing its ability to function in dynamic and unpredictable environments.

By achieving these objectives, the project seeks to contribute to the development of systems capable of adapting to their environment and to human interaction partners, allowing more intuitive and effective collaboration.
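
As a minimal sketch of the first goal, assuming a VAE-style generative model in the spirit of [5], the snippet below encodes an observed motion segment into a low-dimensional latent code z that could serve both recognition and anticipation. Architecture, dimensions, and loss weighting are illustrative assumptions, not the project's prescribed design.

    # Minimal VAE sketch for latent action representations (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LatentActionVAE(nn.Module):
        # obs_dim=63 could be, e.g., a flattened 21-joint 3D hand pose (assumption).
        def __init__(self, obs_dim=63, latent_dim=16, hidden=128):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, latent_dim)
            self.logvar = nn.Linear(hidden, latent_dim)
            self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, obs_dim))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: sample z while keeping gradients.
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return self.dec(z), mu, logvar

    def vae_loss(x, x_hat, mu, logvar, beta=1.0):
        recon = F.mse_loss(x_hat, x, reduction="mean")   # reconstruction term
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + beta * kl

A latent code learned this way is one candidate for the shared representation discussed above: a classifier head can operate on z for recognition, rolling z forward with a dynamical model (as in [5]) supports anticipation, and sampling from the generative model can complement data augmentation when training data is scarce.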

References

[1] Cui, Y., Karamcheti, S., Palleti, R., Shivakumar, N., Liang, P., & Sadigh, D. (2023, March). No, to the right: Online language corrections for robotic manipulation via shared autonomy. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (pp. 93-101).

[2] Karamcheti, S., Srivastava, M., Liang, P., & Sadigh, D. (2022, January). Lila: Language-informed latent actions. In Conference on Robot Learning (pp. 1379-1390). PMLR.

[3] Kedia, K., Bhardwaj, A., Dan, P., & Choudhury, S. (2024). InteRACT: Transformer models for human intent prediction conditioned on robot actions. In IEEE International Conference on Robotics and Automation (ICRA).

[4] Yang, W., Paxton, C., Cakmak, M., & Fox, D. (2020, October). Human grasp classification for reactive human-to-robot handovers. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 11123-11130). IEEE.

[5] Sadok, S., Leglaive, S., Girin, L., Alameda-Pineda, X., & Séguier, R. (2024). A multimodal dynamical variational autoencoder for audiovisual speech representation learning. Neural Networks, 172, 106120.

Skills

  • MSc degree in computer science or a similar field
  • Excellent English writing and communication skills
  • Excellent software engineering and programming skills in Python (C++ is beneficial)
  • Strong technical background in robotics, machine learning, and computer vision
  • Industrial experience is a plus
  • Interest in interdisciplinary research in the context of the MIAI Grenoble Alpes Institute

Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking (90 days / year) and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports events and activities
  • Access to vocational training
  • Complementary health insurance under conditions

Remuneration

1st and 2nd year: 2,100 euros gross salary / month

3rd year: 2,190 euros gross salary / month