PhD Position F/M Anticipating Human Behavior for Human-Robot Interaction

Contract type : Fixed-term contract

Level of qualifications required : Graduate degree or equivalent

Function : PhD Position

About the research centre or Inria department

The Inria research centre at Université Grenoble Alpes brings together just under 600 people in 23 research teams and 7 research support services.

Its staff are spread across 3 campuses in Grenoble, working closely with research laboratories and higher-education institutions (Université Grenoble Alpes, CNRS, CEA, INRAE, …) as well as with the region's economic actors.

Active in the fields of computing and large-scale distributed systems, safe software and embedded systems, environmental modelling at different scales, and data science and artificial intelligence, Inria Grenoble - Rhône-Alpes contributes at the highest level to international scientific life through its results and its collaborations, both in Europe and across the rest of the world.

Context

Collaboration between the Inria team THOTH and the Multidisciplinary Institute in Artificial Intelligence (MIAI) in Grenoble

Research activities in MIAI aim to cover all aspects of AI and applications of AI with a current focus on embedded and hardware architectures for AI, learning and reasoning, perception and interaction, AI & society, AI for health, AI for environment & energy, and AI for industry 4.0.

The close connection to the Université Grenoble Alpes additionally offers many opportunities for collaboration, further training, and networking within and across research fields.

Assignment

Theme/Domain
The PhD research will focus on developing approaches for context-dependent motion generation. Our goal is to integrate concepts from generative modeling and representation learning, specifically using conditional variational autoencoders (VAEs). VAEs are known for learning compact representations of input data. Previous work has explored methods to modulate these learned representations [1,2], enabling the generation of context-aware outputs. The potential applications of this research are diverse. We will develop these methods with a particular emphasis on synthesizing robot manipulation strategies and facilitating human-robot handover tasks.
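
As a rough illustration of the modeling ingredients mentioned above (not a prescribed implementation), a conditional VAE can be sketched in a few lines of PyTorch; the motion dimension, context dimension, and mean-squared reconstruction loss below are placeholder assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Minimal conditional VAE: a motion x is encoded and decoded conditioned on a context c."""
    def __init__(self, x_dim=64, c_dim=16, z_dim=8, h_dim=128):
        super().__init__()
        self.z_dim = z_dim
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        x_hat = self.dec(torch.cat([z, c], dim=-1))
        # Negative ELBO: reconstruction error plus KL divergence to the standard normal prior
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return x_hat, F.mse_loss(x_hat, x) + kl
```

The context vector c is where the modulation of the learned representation enters: the same latent code decodes to different motions depending on the context it is paired with.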

Context and Motivation
Even after many years of research in human-robot interaction, current systems still struggle to understand users’ intentions, goals, and needs, making it difficult for them to anticipate human actions. The lack of a common representation of behavioral skills that would enable mutual anticipation of actions limits the ability of robots to interact and collaborate effectively with humans. Our goal is to develop a system capable of context-aware behavior generation that proactively adapts to users’ future actions by monitoring their movements and predicting their intentions during interactions.

Contact
pia.bideau@inria.fr
The thesis will be jointly supervised by Pia Bideau (THOTH), Karteek Alahari (THOTH) and Xavier Alameda-Pineda (RobotLearn).

Please apply with the usual application documents: CV, certificates (MSc degree), research statement, and two references. Applications will be screened on a rolling basis; the position will be filled as soon as a suitable candidate is found.

Main activities

Summary

This project aims to synthesize adaptive behavior, e.g. adapting the “take” to the “give” in human-robot handover tasks. Collaborative tasks such as human-to-robot handovers demand a nuanced understanding and anticipation of human actions: they require not only accurately classifying, for example, the type of human grasp, but also effectively anticipating future actions.

To achieve this, the project will:

  1. Identify human hand-object configurations: Develop algorithms to identify and classify the positioning and interaction of human hands with objects. This involves understanding the object's shape and its impact on the human grasp pose.
  2. Synthesize the corresponding robot grasp: Create robot grasp strategies that are adapted to align with the actions of the human collaborator.

By combining these elements, the project seeks to create a more intuitive and practical human-robot interaction framework. This will enable robots to better understand and predict human behavior, leading to smoother collaboration across a variety of handover tasks.
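
A minimal sketch of how these two steps could fit together at generation time, reusing the conditional VAE outlined above (the one-hot grasp-class encoding and all dimensions are assumptions for illustration only):

```python
import torch
import torch.nn.functional as F

# Step 1 (assumed): a perception module has classified the human grasp into one of 16 classes.
grasp_class = 3

# Encode the observed context as a one-hot vector (a deliberately simple placeholder encoding).
c = F.one_hot(torch.tensor([grasp_class]), num_classes=16).float()

# Step 2: sample a latent code from the prior and decode a robot grasp conditioned on the context.
model = ConditionalVAE(x_dim=64, c_dim=16, z_dim=8)
z = torch.randn(1, model.z_dim)
robot_grasp = model.dec(torch.cat([z, c], dim=-1))
```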

Goals

  • Developing adaptable latent action representations: Utilize generative models to create latent action representations that can handle both classification and anticipation tasks. These representations will capture the underlying structure of human actions, allowing the system to recognize and predict behaviors more effectively.
  • Addressing challenges of low data availability: Implement strategies to work effectively in environments with limited data. This may involve techniques such as transfer learning, data augmentation, and integrating physical knowledge to ensure robust performance despite scarce training examples.
  • Managing initially unknown observations (e.g. unknown action classes): Develop methods to identify and incorporate new, previously unseen classes of actions. This capability will enhance an agent’s ability to function in dynamic and unpredictable environments.
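
One simple heuristic for the third goal, given the conditional VAE sketched earlier, is to flag observations whose reconstruction error exceeds a threshold calibrated on the known action classes; this is an illustrative assumption, not the method the thesis is expected to adopt:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def looks_unknown(model, x, c, threshold=0.1):
    """Return True if (x, c) is reconstructed poorly, hinting at an unseen action class."""
    x_hat, _ = model(x, c)
    return F.mse_loss(x_hat, x).item() > threshold
```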

References

[1] Halawa, M., Hellwich, O., & Bideau, P. (2022, October). Action-based contrastive learning for trajectory prediction. In European Conference on Computer Vision (ECCV) (pp. 143-159). Cham: Springer Nature Switzerland.

[2] Blume, F., Qu, R., Bideau, P., Maier, M., Rahman, R. A., & Hellwich, O. (2024). How Do You Perceive My Face? Recognizing Facial Expressions in Multi-Modal Context by Modeling Mental Representations. arXiv preprint arXiv:2409.02566.

[3] Karamcheti, S., Srivastava, M., Liang, P., & Sadigh, D. (2022, January). Lila: Language-informed latent actions. In Conference on Robot Learning (pp. 1379-1390). PMLR.

[4] Yang, W., Paxton, C., Cakmak, M., & Fox, D. (2020, October). Human grasp classification for reactive human-to-robot handovers. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 11123-11130). IEEE.

[5] Mousavian, A., Eppner, C., & Fox, D. (2019). 6-DOF GraspNet: Variational grasp generation for object manipulation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (pp. 2901-2910).

Skills

  • MSc degree in computer science or similar field
  • Excellent English writing and communication skills
  • Excellent software engineering and programming skills in Python (C++ is beneficial)
  • Strong technical background in robotics, machine learning, and computer vision
  • Industrial experience is a plus
  • Interest in interdisciplinary research in the context of the MIAI Grenoble Alpes Institute

Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off (full-time basis) + possibility of exceptional leave (e.g. sick children, moving house)
  • Possibility of teleworking (90 days per year, fixed or floating) and flexible working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports benefits (Association de gestion des œuvres sociales d'Inria)
  • Access to vocational training
  • Contribution to supplementary health insurance, subject to conditions

Remuneration

1st and 2nd year: 2,100 euros gross salary / month

3rd year: 2,190 euros gross salary / month