Post-Doctoral Research Visit F/M: Self-Tuning Algorithms for Hyperparameter-Free Optimization

Contract type : Fixed-term contract

Level of qualifications required : PhD or equivalent

Function : Post-Doctoral Research Visit

About the research centre or Inria department

The Inria Centre at Rennes University is one of Inria's nine centres and hosts more than thirty research teams. It is a major, recognized player in digital sciences, at the heart of a rich R&D and innovation ecosystem: highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher-education institutions, laboratories of excellence, a technological research institute, and more.

Context

This post-doctoral position is part of the Inria-funded exploratory project HYPE (HYPErparameter-Free Optimization Algorithms by Online Self-Tuning). The work will be carried out in the MALT team at the Centre Inria de l'Université de Rennes, in collaboration with Paul Viallard and Romaric Gaudel. The MALT team conducts research in machine learning, optimization, and statistical learning theory. The position is fully funded, includes travel support for conferences, and offers access to high-performance computing resources.

Assignment

Modern machine learning models are typically trained using stochastic optimization algorithms [see e.g., 2] whose performance heavily relies on a large number of hyperparameters (e.g., learning rate, momentum, batch size). Selecting appropriate hyperparameter values is time-consuming and computationally expensive.

The goal of the HYPE project is to eliminate the need for manual hyperparameter tuning by developing optimization algorithms that dynamically self-tune all their hyperparameters during training.

The core idea is to leverage tools from adversarial multi-armed bandit theory [see e.g., 1] to sequentially adapt hyperparameters based on observed performance; a minimal sketch of this mechanism is given after the list below. The post-doctoral researcher will investigate how to embed such bandit mechanisms within stochastic gradient-based optimizers and analyze the resulting algorithms. This includes:

  • Designing new self-tuning algorithms that adapt multiple hyperparameters on the fly;
  • Developing theoretical guarantees, such as regret bounds or convergence rates, possibly by combining online learning analysis with optimization theory;
  • Validating the proposed methods on synthetic and real datasets, and benchmarking them against existing state-of-the-art optimizers;
  • Investigating practical deployment aspects, with the long-term goal of integration into major libraries such as scikit-learn or PyTorch.
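To make the idea concrete, here is a minimal illustrative sketch (not the HYPE algorithm itself): an EXP3 adversarial bandit chooses the learning rate of plain SGD from a small candidate grid on a toy least-squares problem. The candidate grid, the reward mapping (relative one-step loss decrease), and the exploration rate gamma are arbitrary choices made for this example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy problem: least squares, min_w 0.5 * mean((X w - y)^2).
    n, d = 500, 20
    X = rng.standard_normal((n, d))
    y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    def loss(w):
        return 0.5 * np.mean((X @ w - y) ** 2)

    def stoch_grad(w, batch=32):
        idx = rng.integers(0, n, batch)
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ w - yb) / batch

    # EXP3 over a grid of candidate learning rates (the bandit "arms").
    arms = np.array([1e-3, 3e-3, 1e-2, 3e-2, 1e-1])  # illustrative grid
    K, gamma = len(arms), 0.1                        # gamma: exploration rate
    weights = np.ones(K)

    w, prev_loss = np.zeros(d), loss(np.zeros(d))
    for t in range(2000):
        probs = (1 - gamma) * weights / weights.sum() + gamma / K
        i = rng.choice(K, p=probs)                   # pull an arm
        w = w - arms[i] * stoch_grad(w)              # SGD step with that rate
        cur_loss = loss(w)
        # Reward in [0, 1]: relative one-step decrease of the loss.
        reward = np.clip((prev_loss - cur_loss) / (abs(prev_loss) + 1e-12), 0.0, 1.0)
        prev_loss = cur_loss
        # Importance-weighted exponential update for the pulled arm only.
        weights[i] *= np.exp(gamma * (reward / probs[i]) / K)
        weights /= weights.max()                     # rescale for numerical stability

    print(f"final loss: {loss(w):.4f}, arm probabilities: {np.round(probs, 3)}")

In the project, the same kind of mechanism would have to adapt several coupled hyperparameters at once (learning rate, momentum, batch size) and come with regret bounds or convergence rates, which is where the theoretical work listed above comes in.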

References

[1] T. Lattimore and C. Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.
[2] G. Garrigos and R. M. Gower. Handbook of Convergence Theorems for (Stochastic) Gradient Methods. arXiv:2301.11235, 2023.

Main activities

  • Conduct both theoretical analysis and empirical experimentation;
  • Write scientific publications for relevant conferences and journals, and present the work at conferences.

Skills

  • PhD in machine learning, optimization, theoretical computer science, or a related field;
  • Strong research track record (e.g., publications in top-tier machine learning conferences);
  • Proficiency in Python and modern machine learning frameworks (e.g., PyTorch, JAX);
  • Fluency in English (written and spoken).

Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Possibility of teleworking (90 days per year) and flexible organization of working hours
  • Partial payment of insurance costs

Remuneration

Monthly gross salary amounting to 2788 euros.