Multimodal Computer Vision & Novel Reinforcement Learning for Robot Navigation in Fires
By Hari Srikanth
Recall that the goal of my project is to quickly and autonomously navigate and map a burning building, identify trapped inhabitants, and send recovery and escape routes to firefighters as fast as possible. During my background investigation, I found that navigating fires has traditionally posed a challenge for robotic systems. LiDAR (light-based) sensors are commonly used for SLAM (Simultaneous Localization and Mapping, a class of perception and mapping algorithms), but their accuracy degrades significantly in smoke. Meanwhile, SONAR (sound-based) and RADAR (radio-based) mapping systems are often bulky and technically complex, making them difficult to incorporate into a dynamic robotic system. This motivated the central perception challenge of my project: developing a novel perception and navigation system that can function effectively in a fire environment.