Accurate and detailed 3D models of building interiors are critical for personalized emergency response and other applications. However, generating such models is challenging because of the textureless regions and complex geometry typical of building interiors.
This project aims to develop and build a kit, called the ABMapping Kit, that automatically generates accurate interior models based on simultaneous localization and mapping (SLAM). We use a custom-built, visually guided quadcopter as a platform that integrates a Kinect, a low-power ultra-light computing unit, and an inertial measurement unit (IMU), allowing active acquisition of synchronized depth and video images along with the position and pose of the kit. The acquired data are then fused in a Bayesian framework to create accurate models suitable for visualization in downstream applications.
- Fully automated approach to generating detailed models inside buildings.
- Active depth sensing using an infrared projector and camera (inside the Kinect).
- Visually guided quadcopter that flies through building interiors as needed for accurate map generation.
- Bayesian framework to integrate depth images, video images, and inertial measurements into accurate maps and models in GPS-denied environments.
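To illustrate the kind of Bayesian fusion described above, the sketch below shows a minimal one-dimensional Kalman filter that combines an IMU-driven motion prediction with a depth-derived position measurement. This is a simplified illustration, not the project's actual estimator; the state layout, noise parameters, and function names (`kf_predict`, `kf_update`) are assumptions chosen for clarity.

```python
import numpy as np

def kf_predict(x, P, u, dt, q):
    """Propagate a 1D constant-velocity state x = [position, velocity]
    forward by dt seconds, driven by an IMU acceleration reading u.
    q is an assumed process-noise spectral density."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])          # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])     # acceleration input mapping
    x = F @ x + B * u
    # Discretized white-noise-acceleration process covariance
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, r):
    """Fuse a depth-derived position measurement z with variance r
    into the predicted state, via the standard Kalman update."""
    H = np.array([[1.0, 0.0]])          # we observe position only
    S = H @ P @ H.T + r                 # innovation covariance
    K = P @ H.T / S                     # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The predict step encodes the IMU's role (high-rate motion propagation), while the update step encodes the depth sensor's role (absolute but noisier position fixes); the full system would extend this to 6-DoF pose with depth and visual constraints.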