Post-Doctoral Research Visit F/M Task and Motion Planning for Long-Horizon Robotic Manipulation

The job description below is in English

Contract type: Fixed-term contract (CDD)

Renewable contract: Yes

Required degree: PhD or equivalent

Position: Post-Doctoral Researcher

Context and advantages of the position

The work will be conducted in the WILLOW team at Inria Paris research center. Renowned for its exceptional work in computer vision and robotics, the WILLOW team has consistently produced high-quality research, resulting in publications in major journals and conferences.

As part of the team, you will have access to a well-established laboratory featuring multiple robotic arms, hands, quadrupeds, bipeds, and mobile manipulators.

Additionally, you can expect frequent visits and talks by esteemed researchers from top research laboratories around the world. Opportunities abound for collaboration with leading researchers both in Europe and globally.

Furthermore, you will join an international and welcoming team environment, where we regularly organize various events ranging from casual after-work gatherings to multi-day lab retreats.

Assignment

Real-world robotic manipulation, such as household chores or assembly tasks, requires robots to plan and execute long sequences of actions under uncertainty. These complex long-horizon tasks with sparse reward signals are challenging for end-to-end learning methods.

While state-of-the-art vision-language-action (VLA) models excel at processing raw sensor data to predict actions directly, they suffer from critical limitations. First, they are typically data-hungry, requiring extensive training datasets that are expensive to collect. Second, they act as black boxes, offering little interpretability and no guaranteed adherence to constraints. Finally, they generalize poorly to novel tasks, especially long-horizon ones.

By contrast, planning approaches have demonstrated promising potential to handle such complex tasks. Traditional Task-and-Motion Planning (TAMP) splits planning into a discrete (symbolic) task level and a continuous motion level, using search to generalize to arbitrary goal states. However, existing planning methods require manually designed predicates and assume perfect world perception, limiting their application in unstructured real-world environments. In addition, lacking learning-based priors, they often suffer from exponential combinatorics as task length grows.
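To make the TAMP decomposition concrete, here is a minimal, hedged sketch in Python of the two-level structure described above: a symbolic planner searches over discrete actions, and each discrete action is then refined with a continuous parameter (here, a grasp angle) checked by a feasibility test. The toy domain, the `sample_grasp` and `feasible` functions, and all names are illustrative assumptions, not part of any specific TAMP system.

```python
import random
from collections import deque

# Toy domain (assumed for illustration): each block is at "table" or "shelf";
# the symbolic action (block_index, new_location) moves one block.

def symbolic_plan(start, goal):
    """Discrete level: breadth-first search over move(block, loc) actions."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for i, loc in enumerate(state):
            for new_loc in ("table", "shelf"):
                if new_loc == loc:
                    continue
                nxt = state[:i] + (new_loc,) + state[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [(i, new_loc)]))
    return None  # goal unreachable at the symbolic level


def sample_grasp(rng):
    """Continuous level: sample a candidate grasp angle in [0, pi]."""
    return rng.uniform(0.0, 3.14)


def feasible(action, grasp):
    """Placeholder geometric check: pretend small angles collide at the shelf."""
    _, loc = action
    return not (loc == "shelf" and grasp < 0.5)


def tamp(start, goal, seed=0, max_samples=20):
    """Refine each symbolic action with a feasible continuous grasp."""
    rng = random.Random(seed)
    plan = symbolic_plan(start, goal)
    if plan is None:
        return None
    refined = []
    for action in plan:
        for _ in range(max_samples):
            g = sample_grasp(rng)
            if feasible(action, g):
                refined.append((action, g))
                break
        else:
            # No feasible grasp found: a full TAMP solver would backtrack
            # and try an alternative symbolic plan here.
            return None
    return refined
```

The sketch also hints at the limitation noted above: the symbolic search enumerates discrete states exhaustively, so without learned priors its cost grows combinatorially with the number of objects and actions.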

In this project, we aim to integrate neural learning with symbolic reasoning to take the best of both worlds. We will use high-level logic for strategic planning and neural models for perception and control, enabling robust and generalizable long-horizon robotic manipulation. The candidate will be co-supervised by Shizhe Chen and Justin Carpentier.

Main activities

  • Read papers
  • Propose methods
  • Conduct experiments
  • Analyze results
  • Write papers
  • Present work at conferences
  • Co-supervise students (optional)

Skills

The candidate must have an excellent track record and a PhD degree, as well as the following qualifications:

- Strong background in computer vision, robotics, or related fields

- Excellent programming skills in deep learning using Python and PyTorch

- Strong proficiency in both written and spoken English

- Ability to work independently as well as collaboratively

- Publications in top-tier vision/robotics conferences and contributions to open-source vision/robotics projects are a plus

Benefits

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports events and activities
  • Access to vocational training
  • Social security coverage