Mobile Robot Localization and Mapping with Uncertainty using
Scale-Invariant Visual Landmarks
A key component of a mobile robot system is the ability to localize itself
accurately and simultaneously build a map of the environment. Most
existing algorithms are based on laser range finders, sonar sensors, or
artificial landmarks. In this paper, a vision-based mobile robot
localization and mapping algorithm is described that uses scale-invariant
image features as natural landmarks in unmodified environments. The
invariance of these features to image translation, scaling and rotation
makes them suitable landmarks for mobile robot localization and map
building. With our Triclops stereo vision system, these landmarks are
localized in 3D, and robot ego-motion is estimated by least-squares
minimization of the matched landmark positions. Feature viewpoint variation and occlusion are
taken into account by maintaining a view direction for each landmark.
Experiments show that these visual landmarks are robustly matched, the
robot pose is estimated, and a consistent 3D map is built. As image
features are not noise-free, we carry out an error analysis of the
landmark positions and the robot pose. We use Kalman filters to track
these landmarks in a dynamic environment, resulting in a database map
with landmark positional uncertainty.
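The ego-motion step described above can be sketched as a least-squares rigid alignment of matched 3D landmarks. The following is an illustrative sketch only, using the standard SVD (Kabsch) solution rather than the paper's exact formulation; the function name and interface are assumptions:

```python
import numpy as np

def estimate_ego_motion(prev_pts, curr_pts):
    """Least-squares rigid motion aligning matched 3D landmarks (SVD/Kabsch).

    prev_pts, curr_pts: (N, 3) arrays of matched landmark positions from
    two successive stereo frames. Returns (R, t) minimizing
    sum ||curr_i - (R @ prev_i + t)||^2.
    """
    # Center both point sets on their centroids.
    p_mean = prev_pts.mean(axis=0)
    c_mean = curr_pts.mean(axis=0)
    P = prev_pts - p_mean
    C = curr_pts - c_mean
    # Cross-covariance and its SVD give the optimal rotation.
    H = P.T @ C
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    # Translation maps the prev centroid onto the curr centroid.
    t = c_mean - R @ p_mean
    return R, t
```

With noisy matches, the residuals of this fit also provide a natural check on landmark correspondence quality.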
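The Kalman-filter tracking of landmark positional uncertainty could be sketched per landmark as follows. This is a minimal illustration under simplifying assumptions (a static landmark, so identity dynamics, and direct 3D position measurements); the class and parameter names are hypothetical, not taken from the paper:

```python
import numpy as np

class LandmarkKF:
    """Per-landmark Kalman filter over a 3D position and its covariance.

    Assumed model: the landmark is stationary (identity dynamics) and each
    stereo observation measures its position directly with noise covariance
    R_meas. The covariance P is the positional uncertainty that would be
    stored alongside the landmark in the database map.
    """
    def __init__(self, pos, P0):
        self.x = np.asarray(pos, dtype=float)  # estimated 3D position
        self.P = np.asarray(P0, dtype=float)   # 3x3 positional covariance

    def update(self, z, R_meas):
        """Fuse one new observation z (3-vector) with noise R_meas."""
        S = self.P + R_meas                 # innovation covariance (H = I)
        K = self.P @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ (z - self.x)  # corrected position estimate
        self.P = (np.eye(3) - K) @ self.P   # shrunken uncertainty
        return self.x, self.P
```

Repeated observations shrink P, so well-tracked landmarks carry tight uncertainty in the map while rarely seen ones remain uncertain.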