PhD Position F/M Authoring interactive public science lectures

Contract type: Fixed-term contract

Level of qualifications required: Graduate degree or equivalent

Function: PhD Position

About the research centre or Inria department

The Inria Saclay-Île-de-France Research Centre was established in 2008. It has developed as part of the Saclay site in partnership with Paris-Saclay University and with the Institut Polytechnique de Paris.

The centre has 40 project teams, 27 of which operate jointly with Paris-Saclay University and the Institut Polytechnique de Paris. Its activities involve over 600 people, including scientists and research and innovation support staff of 44 different nationalities.

Context

The project is a collaboration between Theophanis Tsandilas, a researcher at Inria Saclay, and Julien Bobroff, a physics professor at the Université Paris-Saclay and a renowned science popularizer.

The scientific contributions of the PhD thesis will be centered on HCI (Human-Computer Interaction) research, aligning with the expertise of Theophanis Tsandilas. The PhD student will be hosted by ex)situ, an Inria team at the Laboratoire Interdisciplinaire des Sciences du Numérique at the Université Paris-Saclay, with extensive experience in multidisciplinary collaborations.

Julien Bobroff and his team, La Physique Autrement, will guide the entire design process through their expertise in science communication. This team is unique in that it combines innovation, public testing and research in communication sciences and pedagogy. Over the past decade, the team has designed over 300 original popularization projects in a wide range of formats. Julien himself has an audience of nearly a million subscribers on various social networks: TikTok, Instagram, YouTube, etc.

The PhD thesis is fully funded by the Université Paris-Saclay (SPRINGCS 2025 program). The funding also covers expenses for equipment and conference participation during the course of the thesis.

Assignment

Scientific objectives and challenges

Popular science lectures are still, even today, formatted in very traditional ways, where a speaker on stage presents a pre-designed slideshow. However, there is a real need for innovative tools that allow speakers to present their topics in more interactive and spontaneous ways, captivating their audience and providing intuition about concepts and real-world phenomena that are hard to communicate verbally or through static graphics. Our goal is to develop flexible presentation tools that augment the speaker's experience, making science communication interactive, story-based, and engaging while creating a more horizontal connection between the public and experts.

As Julien Bobroff explains [3], although quantum physics concepts are often considered difficult to visualize, “even veteran quantum physicists […] need a mental image of the objects being manipulated.” As a result, they constantly seek to create visual representations when teaching or popularizing quantum phenomena.

Figure 1. Inspirational examples. (a) Julien Bobroff’s live demonstration of quantum levitation [1]. (b) Julien’s public lecture on “physics of the extremes” [2], using physical proxies for visual support. (c) Ken Perlin’s video demonstration of Chalktalk [10], a lecture tool supporting dynamic sketch representations of Computer Science concepts. These representations can be linked together, communicating with each other and exchanging data.

Julien Bobroff has long been exploring original visual representations and innovative lecture formats that combine physical artifact manipulation with visual projections (see Figure 1). We aim to extend such presentations with interactive visuals that not only enhance the communication of complex concepts (e.g., illustrating a quantum physics experiment) but also augment the speaker’s physical interactions. Achieving this requires the development of novel presentation authoring tools that allow speakers to: (i) easily create custom visual representations without needing to code, (ii) integrate interactive behavior and semantic meaning into their visuals, and (iii) synchronize these visuals with their live interaction space and diverse input methods.

Popular science videos employ a variety of narrative techniques to engage their audiences [14], but they largely rely on video post-production editing. In contrast, our goal is to develop tools that empower scientists to author and stage interactive materials for live lectures. Our ambition is to leverage these tools to explore new science presentation formats and create original demonstrations, which we will showcase in public lectures.

State of the art

Very active research in Human-Computer Interaction (HCI) has been exploring tools that support users’ complex cognitive processes through dynamic representations that can be interactively manipulated in real time. However, a major challenge lies in developing flexible tools that structure users’ graphical vocabulary while maintaining creative freedom. Within this context, we examine various related systems that inspire our project:

Figure 2. (a) DataGarden [9] enables non-experts to author their personal data visualizations through sketches. (b) RealitySketch [12] is a user interface for augmenting real-world scenes captured through a camera with dynamic sketches to visualize natural phenomena. (c) Previous work has also explored solutions based on augmented-reality headsets. For example, in ARgus [5], remote desktop users can view and interact with the augmented workspace of a local designer.

  1. Tools for authoring custom visual representations. In the past few years, we have developed several tools to support the authoring of expressive, personal data visualizations, including direct sketching interfaces [9]. We aim to extend this approach to custom representations of quantum physics concepts, while also enabling authors to embed interactive behavior into these visualizations. Other related HCI work [11, 15] has studied formal notations for creating diagrams.
  2. Augmented lectures. Recent HCI research [4, 8] has introduced systems for augmenting live full-body presentations, aiming to synchronize visuals with a speaker’s gestures. Yet, these presentations are typically script-based, offering speakers little flexibility, as they must practice and follow a predefined sequence of actions.  
  3. Augmented physics. Other HCI research has explored interfaces for augmenting real-world settings with dynamic sketches that react to the movement of physical objects [12]. Such systems allow users to annotate physical phenomena and experiments with additional information, such as vectors, distances, and angles. Other research [7] has explored how to transform static diagrams extracted from textbooks into interactive physics simulations. This work does not target live presentation audiences but offers significant inspiration. In the past, we have also experimented with settings that require people to wear AR headsets [5]. However, our focus is on developing lightweight solutions that do not require speakers or audiences to wear any headsets.  

Main activities

Methodology

We will base our approach on iterative user-centered design methods, directly driven by real-world application scenarios. Key steps of our proposed methodology include the following:

  1. Developing a typology of domain-specific illustration techniques that defines the visual vocabulary used by scientists, illustrators, and animators to present and explain quantum physics phenomena and concepts. 
  2. Developing a domain-specific diagrammatic notation that operationalizes this typology.
  3. Building quick exploratory prototypes to collect feedback from both speakers and audiences based on concrete presentation scenarios and examples.
  4. Designing and evaluating authoring tools that enable physicists to structure their visuals as domain-specific illustrations and incorporate dynamic behavior and interaction.

Our goal is to broadly disseminate our results and showcase examples created with our tools in public lectures.  

Expected results

We expect key contributions at multiple levels: empirical, theoretical, technical, and pedagogical. Our goal is to publish in high-ranking HCI conferences (e.g., ACM CHI and ACM UIST) and journals (e.g., ACM TOCHI, IEEE Transactions on Visualization and Computer Graphics). Using our prototypes for scientific dissemination at various public presentations, along with audience reactions and feedback, will be essential criteria for evaluating the success of our approach.

Bibliography

  1. Julien Bobroff. La lévitation quantique, 2019. https://www.youtube.com/watch?v=6kg2yV_3B1Q
  2. Julien Bobroff. La physique de l’extrême, 2024. https://vulgarisation.fr/projet/le_plus_petit_exposa_du_monde
  3. Julien Bobroff. Seven common myths about quantum physics. The Conversation, April 2019. https://theconversation.com/seven-common-myths-about-quantum-physics-115029
  4. Yining Cao, Rubaiat Habib Kazi, Li-Yi Wei, Deepali Aneja, and Haijun Xia. 2024. Elastica: Adaptive Live Augmented Presentations with Elastic Mappings Across Modalities. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI '24). ACM, Article 599, 1–19. https://doi.org/10.1145/3613904.3642725
  5. Arthur Fages, Cédric Fleury, and Theophanis Tsandilas. Understanding Multi-View Collaboration between Augmented Reality and Remote Desktop Users. Proceedings of the ACM on Human-Computer Interaction 6 (CSCW), Article 549 (November 2022), 27 pages. https://argus-collab.github.io
  6. Jérémie Garcia, Theophanis Tsandilas, Carlos Agon, and Wendy Mackay. 2012. Interactive paper substrates to support musical creation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM, 1825–1828. https://doi.org/10.1145/2207676.2208316
  7. Aditya Gunturu, Yi Wen, Nandi Zhang, Jarin Thundathil, Rubaiat Habib Kazi, and Ryo Suzuki. 2024. Augmented Physics: Creating Interactive and Embedded Physics Simulations from Static Textbook Diagrams. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (UIST '24). https://ryosuzuki.org/augmented-physics
  8. Jian Liao, Adnan Karim, Shivesh Singh Jadon, Rubaiat Habib Kazi, and Ryo Suzuki. 2022. RealityTalk: Real-Time Speech-Driven Augmented Presentation for AR Live Storytelling. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST '22). Association for Computing Machinery, New York, NY, USA, Article 17, 1–12. https://ryosuzuki.org/realitytalk
  9. Anna Offenwanger, Theophanis Tsandilas, and Fanny Chevalier. DataGarden: Formalizing Personal Sketches into Structured Visualization Templates. In IEEE Transactions on Visualization & Computer Graphics (VIS’24), vol. 31, no. 01, pp. 1268-1278, Jan. 2025. https://datagarden-git.github.io/datagarden
  10. Ken Perlin, Zhenyi He, and Karl Rosenberg. 2018. Chalktalk: A Visualization and Communication Language–As a Tool in the Domain of Computer Science Education. https://doi.org/10.48550/arXiv.1809.07166
  11. Josh Pollock, Catherine Mei, Grace Huang, Elliot Evans, Daniel Jackson, and Arvind Satyanarayan. 2024. Bluefish: Composing Diagrams with Declarative Relations. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (UIST '24). ACM, Article 23, 1–21. https://doi.org/10.1145/3654777.3676465 (https://bluefishjs.org)
  12. Ryo Suzuki, Rubaiat Habib Kazi, Li-Yi Wei, Stephen DiVerdi, Wilmot Li, and Daniel Leithinger. 2020. RealitySketch: Embedding Responsive Graphics and Visualizations in AR with Dynamic Sketching. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST '20). https://ryosuzuki.org/realitysketch
  13. Benjamin Vest, Fabienne Bernard, Julien Bobroff, Frédéric Bouquet, Lou-Andréas Etienne, et al. Reimagine Physics Teaching: A workshop designed to sparkle exchanges and creativity. Cahiers de l'Institut Pascal, 2023, 1 (1), pp.13. https://doi.org/10.1051/cipa/202301001
  14. Haijun Xia, Hui Xin Ng, Chen Zhu-Tian, and James Hollan. 2022. Millions and Billions of Views: Understanding Popular Science and Knowledge Communication on Video-Sharing Platforms. In Proceedings of the Ninth ACM Conference on Learning @ Scale (L@S '22). ACM, 163–174. https://doi.org/10.1145/3491140.3528279
  15. Katherine Ye, Wode Ni, Max Krieger, Dor Ma'ayan, Jenna Wise, Jonathan Aldrich, Joshua Sunshine, and Keenan Crane. 2020. Penrose: from mathematical notation to beautiful diagrams. ACM Transactions on Graphics (SIGGRAPH) 39, 4, Article 144 (August 2020), 16 pages. https://doi.org/10.1145/3386569.3392375


Skills

The candidate should have a background in Computer Science or Engineering, solid technical skills, and ideally a Master’s degree in Human-Computer Interaction or a related field, such as interaction design, visualization, or computer graphics. Background in physics is not required. However, a strong interest in science popularization is essential.

Given the application domain, which requires interaction with a French-speaking audience, oral proficiency in French is highly desirable.

Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports events and activities
  • Access to vocational training
  • Social security coverage

Remuneration

Monthly gross salary: 2,200 euros