2019-01378 - PhD Position F/M Immersive Interaction and Visualization of Temporal 3D Data
The description of the offer below is in English.

Type of contract: Fixed-term contract in the public sector (CDD)

Level of qualifications required: Master's degree or equivalent

Position: PhD student

About the research centre or Inria department

Inria, the French national research institute for the digital sciences, promotes scientific excellence and technology transfer to maximise its impact.
It employs 2,400 people. Its 200 agile project teams, generally with academic partners, involve more than 3,000 scientists in meeting the challenges of computer science and mathematics, often at the interface of other disciplines.
Inria works with many companies and has assisted in the creation of over 160 startups.
It strives to meet the challenges of the digital transformation of science, society and the economy.

Context and assets of the position

Nowadays, the detection and visualization of important localized events and processes in multidimensional and multi-valued images, especially in cell and tissue imaging, is tedious and inefficient. Specialized scientists can miss key events due to the complexity of the data and the lack of computer guidance. A crucial step in biological imaging is the extraction of regions of interest (ROIs). In general, researchers detect ROIs by searching for regions that exhibit a particular spatiotemporal pattern. Nevertheless, they cannot analyze the captured data directly, which makes the analysis of motion patterns rather difficult. To equip users in biological imaging with versatile visualization tools, it is thus necessary to develop novel visualization methods, including abstraction and interaction, adapted to the given data. A very helpful but challenging project is the joint analysis and visualization of spatiotemporal patterns, such as intra-cellular events or cell divisions in multi-valued 3D+Time data.

This PhD thesis is framed under the Inria Project Lab NAVISCOPE (https://project.inria.fr/naviscope) and the candidate will join the Serpico and Hybrid teams.

Assignment

  • Collaborate with computer scientists, computer graphics researchers, and biologists to define the user interface for the visualization of complex 3D+Time data,
  • Create a functioning prototype implementation in a participatory design process,
  • Document the prototype, and
  • Conduct scientific research (including literature studies) and write a PhD thesis.

 

Main activities

The visual exploration of 3D+Time (4D) images poses a number of research challenges in terms of human-computer interaction, visualization, human perception and machine learning. Notably, users have to be able to explore, seamlessly and in real time, complex 4D data in which information retrieval and knowledge inference (e.g. cell dynamics) are driven by their understanding of the visualized data and by computer guidance [1]. In this context, the flexibility and the increased workspace provided by Virtual Reality (VR) and Augmented Reality (AR) systems could ease the exploration of such data through specialized user interfaces. The main objective of this PhD is to propose novel interaction and visualization methods to improve the manipulation of complex 4D data.

The first challenge will be to provide the user with the interactive tools needed to control the spatial and temporal dimensions of 4D data. Given the inherently 3D nature of the visualized data and the need for expressive interaction techniques, such interfaces will exploit the benefits provided by VR and AR technology [2,3]. However, in this context, the user has to efficiently control a large number of degrees of freedom (DoF) during the exploration and analysis process: six DoFs are required to control the user's viewpoint [4,5], one DoF controls the level of scale of the dataset [6], and one DoF corresponds to the temporal dimension. The simultaneous control of multiple degrees of freedom is known to increase the cognitive load of the user [7,8]. Thus, the challenge is to provide novel interaction techniques and control laws that enable users to efficiently manipulate and navigate through such complex 4D datasets.
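To make this degree-of-freedom budget concrete, the sketch below (in C++, one of the development languages expected for the position) bundles the eight DoFs discussed above into a single navigation state and integrates per-frame input into it. All names and conventions here (Euler-angle orientation, scale-dependent translation speed) are illustrative assumptions, not part of any existing Naviscope or Unity API.

    // Hypothetical navigation state: 6 DoF viewpoint + 1 DoF scale + 1 DoF time.
    #include <algorithm>
    #include <cstdio>

    struct NavigationState {
        float position[3]    = {0.f, 0.f, 0.f}; // viewpoint translation (3 DoF)
        float orientation[3] = {0.f, 0.f, 0.f}; // yaw/pitch/roll in radians (3 DoF)
        float scale          = 1.f;             // level of scale of the dataset (1 DoF)
        float time           = 0.f;             // position on the temporal axis (1 DoF)
    };

    // Apply one frame of (already filtered) user input; dt is the frame time in seconds.
    void integrate(NavigationState& s, const float dPos[3], const float dRot[3],
                   float dScale, float dTime, float dt,
                   float minScale, float maxScale, float maxTime) {
        for (int i = 0; i < 3; ++i) {
            // Translation speed follows the current scale so that navigation
            // feels consistent across zoom levels (a common multi-scale heuristic).
            s.position[i]    += dPos[i] * s.scale * dt;
            s.orientation[i] += dRot[i] * dt;
        }
        s.scale = std::clamp(s.scale * (1.f + dScale * dt), minScale, maxScale);
        s.time  = std::clamp(s.time + dTime * dt, 0.f, maxTime);
    }

    int main() {
        NavigationState s;
        const float dPos[3] = {1.f, 0.f, 0.f};
        const float dRot[3] = {0.f, 0.1f, 0.f};
        // Simulate one second of input at 60 fps: move forward, rotate, zoom in, play time.
        for (int frame = 0; frame < 60; ++frame)
            integrate(s, dPos, dRot, /*dScale=*/-0.5f, /*dTime=*/2.f, /*dt=*/1.f / 60.f,
                      /*minScale=*/0.01f, /*maxScale=*/100.f, /*maxTime=*/10.f);
        std::printf("x=%.2f  scale=%.2f  time=%.2f\n", s.position[0], s.scale, s.time);
    }

In a real prototype the per-frame deltas would come from tracked controllers or other input devices, and a transfer function (control law) would map raw device motion to these deltas; clamping scale and time is what keeps the two additional DoFs manageable alongside the viewpoint.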

The second challenge will be to provide visualization techniques able to guide the user during the manipulation and navigation process [9]. Due to the complexity of the data, in addition to designing visualization techniques that efficiently encode multi-dimensional data (e.g. with different levels of detail [10]), this interaction process will have to be guided by knowledge inferred using machine learning methods. For example, although the visualization and abstraction of 2D and 3D vector flows [11] can ease exploration and information gathering in multidimensional and multi-valued datasets, combining them with automatic ROI detection algorithms would further ease the exploration of large datasets. Thus, the challenge is to propose visualizations able to efficiently encode 4D data, together with suggestion systems capable of efficiently guiding the user's exploration.
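As a deliberately simple illustration of what automatic ROI detection can look like, the following sketch flags voxels of a 3D+Time scalar volume whose intensity varies strongly over time as ROI candidates that a visualization could then highlight. This temporal-variance heuristic is only a stand-in for the machine-learning detectors envisioned in the project, and the names and memory layout are assumptions for the example.

    // Hypothetical ROI candidate detection on a flat 3D+Time scalar volume.
    #include <cstdio>
    #include <vector>

    struct CandidateROI { int x, y, z; float score; };

    // Flags voxels whose temporal intensity variance exceeds 'threshold'.
    std::vector<CandidateROI> detectCandidates(const std::vector<float>& volume,
                                               int nx, int ny, int nz, int nt,
                                               float threshold) {
        auto at = [&](int x, int y, int z, int t) {
            return volume[((t * nz + z) * ny + y) * nx + x];
        };
        std::vector<CandidateROI> rois;
        for (int z = 0; z < nz; ++z)
            for (int y = 0; y < ny; ++y)
                for (int x = 0; x < nx; ++x) {
                    // Mean and variance of this voxel's intensity over time.
                    float mean = 0.f;
                    for (int t = 0; t < nt; ++t) mean += at(x, y, z, t);
                    mean /= nt;
                    float var = 0.f;
                    for (int t = 0; t < nt; ++t) {
                        const float d = at(x, y, z, t) - mean;
                        var += d * d;
                    }
                    var /= nt;
                    if (var > threshold) rois.push_back({x, y, z, var});
                }
        return rois;
    }

    int main() {
        // Tiny synthetic 4x4x4 volume over 8 time steps in which one voxel "blinks".
        const int nx = 4, ny = 4, nz = 4, nt = 8;
        std::vector<float> volume(nx * ny * nz * nt, 0.f);
        for (int t = 0; t < nt; ++t)
            volume[((t * nz + 2) * ny + 2) * nx + 2] = (t % 2) ? 1.f : 0.f;
        for (const auto& r : detectCandidates(volume, nx, ny, nz, nt, 0.1f))
            std::printf("candidate ROI at (%d,%d,%d), score %.2f\n", r.x, r.y, r.z, r.score);
    }

In the envisioned system, such candidates (produced by learned detectors rather than a fixed threshold) would feed the suggestion system that steers the user's navigation and the level of detail of the visualization.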

Moreover, because the two challenges are complementary, an iterative design process will be employed. In particular, the interaction process is tightly coupled to a perception-action loop: the user's perception of the visualized data strongly influences and drives their interactions. Thus, formal evaluations [12] of the designed systems will be required in order to assess the suitability of the proposed interactions and visualizations and to drive the design process.

Finally, the main use case will be the visualization of spatiotemporal patterns, such as intra-cellular events or cell divisions in multi-valued 3D+Time data. At the end of this PhD, we envision a fully functional system enabling users to manipulate and navigate complex 4D datasets in a VR/AR environment.

References

[1] M. Chen, D. Ebert, H. Hagen, R.S. Laramee, R. van Liere, K.-L. Ma, W. Ribarsky, G. Scheuermann, D. Silver, Data, information, and knowledge in visualization. IEEE Computer Graphics and Applications, 29(1): 12–19, 2009. DOI: https://doi.org/10.1109/MCG.2009.6

[2] Kersten-Oertel, M., Chen, S., and Collins, D. (2014). An evaluation of depth enhancing perceptual cues for vascular volume visualization in neurosurgery. IEEE Trans. Vis. Comput. Graph 20, 391–403. DOI: https://doi.org/10.1109/TVCG.2013.240

[3] Lages WS and Bowman DA (2018) Move the Object or Move Myself? Walking vs. Manipulation for the Examination of 3D Scientific Data. Front. ICT 5:15. DOI: https://doi.org/10.3389/fict.2018.00015

[4] T. Klein, F. Guéniat, L. Pastur, F. Vernier, T. Isenberg, A design study of direct-touch interaction for exploratory 3D scientific visualization. Computer Graphics Forum 31(3): 1225–1234, 2012. DOI: https://doi.org/10.1111/j.1467-8659.2012.03115.x

[5] Hinckley, K., Tullio, J., Pausch, R., and Proffitt, D. Usability analysis of 3D rotation techniques. In Proceedings of the 10th annual ACM symposium on User interface software and technology. 1997, 1–10. DOI: https://doi.org/10.1145/263407.263408

[6] Argelaguet, F., & Maignant, M. GiAnt: stereoscopic-compliant multi-scale navigation in VEs. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (pp. 269-277). 2016. DOI: https://doi.org/10.1145/2993369.2993391

[7] M. Drouhard, C.A. Steed, S. Hahn, T. Proffen, J. Daniel, M. Matheson, Immersive visualization for materials science data analysis using the oculus rift. In Proc. of IEEE Int. Conf. Big Data, pp. 2453–2461, 2015. DOI: https://doi.org/10.1109/BigData.2015.7364040

[8] M. Veit, A. Capobianco, D. Bechmann, Influence of degrees of freedom's manipulation on performances during orientation tasks in virtual reality environments. In Proc. ACM Symp. Virtual Reality Software and Technology, pp. 51–58, 2009. DOI: https://doi.org/10.1145/1643928.1643942

[9] A. van Dam, A. S. Forsberg, D. H. Laidlaw, J. J. LaViola and R. M. Simpson, "Immersive VR for scientific visualization: a progress report," in IEEE Computer Graphics and Applications, vol. 20, no. 6, pp. 26-52. 2000. DOI: https://doi.org/10.1109/38.888006

[10] M. Miao, E. De Llano, J. Sorger, Y. Ahmadi, T. Kekic, T. Isenberg, M.E. Gröller, I. Barišic, I. Viola, Multiscale visualization and scale-adaptive modification of DNA nanostructures. IEEE T. Visualization and Computer Graphics, 24(1): 1014–1024, 2018. DOI: https://doi.org/10.1109/TVCG.2017.2743981

[11] C.P. Kappe, L. Schutz, S. Gunther, L. Hufnagel, S. Lemke, H. Leitte, Reconstruction and visualization of coordinated 3D cell migration based on optical flow. IEEE T. Visualization and Computer Graphics, 22(1): 995–1004, 2016. DOI: https://doi.org/10.1109/TVCG.2015.2467291

[12] LaViola, J. J. Jr., Kruijff, E., McMahan, R. P., Bowman, D., and Poupyrev, I. P. (2017). 3D User Interfaces: Theory and Practice, 2nd Edn. Boston, MA: Addison-Wesley Professional.

Skills

  • highly motivated student who has completed an MSc or equivalent degree in computer graphics, visualization, HCI, or a related computer science field,
  • experience with software development in C++ and/or Java,
  • experience in modern computer graphics (GPU) and/or visualization programming,
  • experience in the use of machine learning algorithms,
  • able to communicate on a regular basis with supervisors and end-users,
  • receptive to directions and feedback from supervisors, and
  • able to clearly and concisely communicate in English in written and spoken form.
  • implementation in Unity (https://www.unity3d.com) within the context of a shared Naviscope data exploration platform
  • use of traditional (mouse/keyboard) and novel (touch, tangible, pen) types of input
  • use of traditional (PC display) and novel (tablets, large displays, VR/AR) output platforms

 

Benefits

  • Subsidized meals
  • Partial reimbursement of public transport costs