Abstract
Interest in automating visual inspections of critical infrastructure has risen rapidly in recent years. Many studies published in the last decade have focused on machine-learning-based models for automated defect detection in imagery. Research on ground, aerial, and marine robotic systems with sensing capabilities and autonomous navigation for structural inspections has also gained traction. However, existing systems rarely go beyond assisted image capture and data collection, and a significant gap remains toward fully integrated systems that can mimic human inspectors. To create autonomous systems that deliver human-level actionable inspection output, an improved understanding of how human inspectors perform different tasks is needed. This preliminary study focuses on developing a data collection platform for sensing, data extraction, and analysis of human inspector behavior. To that end, we develop a platform based on mixed reality (MR) that leverages eye-tracking technology to sense and measure the visual performance of inspectors during virtual inspections. Incorporating eye tracking with MR visualizations allowed us to study the potential of MR to enable and facilitate this analysis. Furthermore, we also present a comparison of different reality-capture modalities, including two-dimensional on-screen imagery, spherical 360° imagery, and three-dimensional photogrammetry-based models. The platform created in this study will support future work on integrating human inspection patterns into robotic platforms, and the promising results obtained here pave the way for automation systems that help realize next-generation smart cities.