Connecting Observability Theory to Visual SLAM



Monocular Visual Simultaneous Localization and Mapping (SLAM) Guided by System Observability

In visual SLAM/SfM, not all of the features being tracked contribute equally to accurate estimation of the camera poses and the map. Some yield low optimization costs (whether in bundle adjustment or in the filtering process), while others inflate those costs. Identifying the features that contribute most to estimation is important when SLAM is to be used in practice. This project uses system observability information to rank and select a subset of the visual features in the SLAM system that are of high utility for localization in the SLAM/SfM estimation process. Because they are complementary to the estimation process, our algorithms integrate easily into existing SLAM systems and demonstrate state-of-the-art results in both accuracy and robustness.
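The selection idea can be sketched as follows, under an assumption of ours that is not spelled out in this summary: each candidate feature contributes a measurement Jacobian block with respect to the camera pose, and features are greedily ranked by how much they raise the smallest singular value of the stacked Jacobian, a common proxy for how well the weakest state direction is observed. The function name rank_features_by_observability and the 6-DoF pose parameterization are illustrative only, not the project's actual implementation:

    import numpy as np

    def rank_features_by_observability(jacobians, budget):
        """Greedily pick `budget` features whose Jacobian blocks most
        improve the smallest singular value of the stacked Jacobian.

        jacobians: list of (m_i x 6) arrays, one per candidate feature
                   (6 = camera pose degrees of freedom, an assumption here).
        budget:    number of features to keep.
        """
        selected = []
        remaining = list(range(len(jacobians)))
        stacked = np.empty((0, 6))  # rows: measurements, cols: pose DoF
        for _ in range(min(budget, len(jacobians))):
            best_idx, best_score = None, -np.inf
            for i in remaining:
                candidate = np.vstack([stacked, jacobians[i]])
                # The smallest singular value measures the least-observable
                # direction of the pose given the features selected so far.
                score = np.linalg.svd(candidate, compute_uv=False)[-1]
                if score > best_score:
                    best_idx, best_score = i, score
            selected.append(best_idx)
            remaining.remove(best_idx)
            stacked = np.vstack([stacked, jacobians[best_idx]])
        return selected

    # Example: 50 candidate features, each a 2-D image measurement.
    rng = np.random.default_rng(0)
    jacobians = [rng.standard_normal((2, 6)) for _ in range(50)]
    top_features = rank_features_by_observability(jacobians, budget=10)

Greedy selection is used here only because it is simple to illustrate; the key point is that the ranking criterion comes from the observability of the system rather than from image appearance alone.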

Although the approach is complementary to existing SLAM algorithms, and most likely to SfM algorithms as well, it does provide insight into how they can be modified slightly for more congruous operation. The project will also explore how the current processing pipeline should be revised based on the insights that observability theory provides.


Members Involved: Guangcong Zhang.

Collaborators: DCL / Panos Tsiotras.

Support: This work was supported in whole or in part by the Air Force Research Labs (#FA9453-13-C-0201).
