In recent years, visual SLAM technology has matured significantly. SLAM systems usually depend on natural feature points to acquire precise motion information; however, these methods frequently suffer tracking failures in scenes with weak or repetitive textures. This paper proposes a marker-based visual SLAM system that fuses marker-based and feature-point-based cues. We use marker-point cues during tracking and extract marker-plane cues from geometric lines during mapping. ORB feature points, marker features, and plane features jointly contribute to local map optimization. We compared our method with state-of-the-art SLAM systems on both a public dataset and our own dataset. The results demonstrate that our method improves accuracy in weakly textured scenes and enhances robustness in challenging texture-less regions.
In recent years, Visual-Inertial Simultaneous Localization and Mapping (VI-SLAM) has become a prevalent technology for 6-DOF state estimation. Current VI-SLAM approaches achieve high localization accuracy above water but become brittle underwater due to broad untextured areas, inadequate illumination, and limited visibility. Additionally, existing underwater visual-inertial datasets were usually collected in open seas, making it difficult to obtain ground-truth robot trajectories, which limits the evaluation and development of VI-SLAM algorithms for underwater environments. To address these challenges, we propose TagVins, a VIO system that integrates artificial features with natural features to construct reliable frame associations, together with an efficient visual-inertial fusion algorithm based on the extended Kalman filter. We also present a new underwater visual-inertial dataset, with ground-truth trajectories obtained via a rigid structure and reliable above-water localization, to facilitate further development of VI-SLAM algorithms in underwater environments. Our evaluation on the proposed dataset demonstrates that TagVins outperforms existing state-of-the-art VI-SLAM algorithms.