Vision-based Global Localization and Mapping for Mobile Robots
We have previously developed a mobile robot system that uses scale-invariant visual landmarks to
localize and simultaneously build 3D maps of unmodified environments. In this paper, we examine
global localization, in which the robot localizes itself without any prior estimate of its
location. This is achieved by matching distinctive visual landmarks in the current frame to a
database map. A Hough transform approach and a RANSAC approach to global localization are
compared, showing that RANSAC is much more efficient for matching specific features but much worse
for matching non-specific features.
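
To make this comparison concrete, the following minimal sketch shows one way RANSAC-style global localization over putative landmark matches can be structured. It is illustrative only: the function names (estimate_rigid_2d, ransac_localize), the reduction of landmark positions to 2D ground-plane coordinates, and the iteration count and inlier threshold are our assumptions, not the system's actual implementation.

import numpy as np

def estimate_rigid_2d(src, dst):
    # Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    # computed from the SVD of the centered cross-covariance.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # reject reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def ransac_localize(cur, db, n_iter=200, thresh=0.3, seed=0):
    # cur[i] <-> db[i] are putative matches between current-frame
    # landmarks (robot frame) and database-map landmarks (map frame).
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(cur), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(cur), size=2, replace=False)  # minimal sample
        R, t = estimate_rigid_2d(cur[idx], db[idx])
        resid = np.linalg.norm(cur @ R.T + t - db, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers.sum() < 2:
        raise RuntimeError("no consistent pose hypothesis found")
    # Refit the pose on all inliers of the best hypothesis.
    R, t = estimate_rigid_2d(cur[best_inliers], db[best_inliers])
    return R, t, best_inliers

The efficiency gap falls out of this structure: with highly specific features, most putative matches are correct, so a high-inlier hypothesis is sampled within a few iterations, whereas with non-specific features the inlier ratio is low and the number of required iterations grows rapidly.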
Moreover, robust global localization can be achieved by matching a small submap of the local
region, built from multiple frames, to the database map. This submap alignment algorithm for
global localization can also be applied to map building, which can be regarded as the alignment of
multiple 3D submaps.
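
A sketch of the kind of weighted rigid alignment this implies is given below. Again, this is illustrative: align_submaps is a hypothetical name, and a scalar weight per landmark stands in for the full position covariance used when landmark uncertainty is taken into account.

import numpy as np

def align_submaps(A, B, w):
    # Weighted least-squares rigid alignment (Kabsch) of matched 3D
    # landmarks: find R, t minimizing sum_i w[i] * ||R @ A[i] + t - B[i]||^2.
    # A, B: (N, 3) landmark positions in the two submaps; w: (N,) weights,
    # e.g. inversely proportional to each landmark's position uncertainty.
    w = w / w.sum()
    mu_a, mu_b = w @ A, w @ B               # weighted centroids
    H = (A - mu_a).T @ ((B - mu_b) * w[:, None])
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_b - R @ mu_a

Map building then chains such alignments: each new submap is aligned against overlapping portions of the map, and the resulting pairwise transforms place all submaps in a common frame.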
A global minimization procedure is carried out using the loop closure constraint to avoid the
effects of slippage and drift accumulation. Landmark uncertainty is taken into account in both the
submap alignment and the global minimization process. Experiments show that global localization
can be achieved accurately using the scale-invariant landmarks. Our approach of consistent
pairwise submap alignment with backward correction produces a better global 3D map.
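
For intuition, the following toy version of backward correction distributes the loop-closure residual over a chain of planar submap poses. It is a deliberately simplified stand-in for the global minimization described above: the (x, y, theta) pose parameterization, the function names, and the equal weighting of submaps (rather than weighting by landmark uncertainty) are all our assumptions.

import numpy as np

def compose(p, d):
    # Compose planar pose p = (x, y, theta) with relative motion d.
    c, s = np.cos(p[2]), np.sin(p[2])
    return np.array([p[0] + c * d[0] - s * d[1],
                     p[1] + s * d[0] + c * d[1],
                     p[2] + d[2]])

def backward_correct(rel_poses):
    # Chain the pairwise submap alignments; a perfect loop returns to
    # the start, so the final accumulated pose is the loop-closure residual.
    pose = np.zeros(3)
    for d in rel_poses:
        pose = compose(pose, d)
    err = pose
    # Correct each accumulated pose backward, assigning submaps later
    # in the chain a proportionally larger share of the residual, so the
    # corrected chain closes the loop exactly.
    n, p, corrected = len(rel_poses), np.zeros(3), []
    for i, d in enumerate(rel_poses):
        p = compose(p, d)
        corrected.append(p - err * (i + 1) / n)
    return corrected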