The adaptive tracking control problem for unmanned surface vehicles (USVs) subject to actuator failures and external disturbances is investigated. Specifically, a fixed-time high-order sliding mode observer is designed to estimate the actuator fault. Furthermore, a non-singular sliding mode controller (NSMC) is designed to avoid the singularity that may occur in the sliding mode function under certain circumstances. Then, using Lyapunov stability theory, we prove that the proposed control method ensures that all signals in the closed-loop system are bounded and that the USV tracks the desired trajectory in fixed time. Finally, a simulation example of a USV operating under the fixed-time tracking strategy demonstrates the effectiveness of the control method.
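To make the non-singularity point concrete, the following is a minimal sketch of a generic fixed-time non-singular sliding surface of the kind used in this literature. It is an illustrative form only, not the paper's actual controller: the surface shape, gains `k1`, `k2`, and exponents `a`, `b` are assumptions.

```python
import numpy as np

def nonsingular_fixed_time_surface(e, e_dot, k1=1.0, k2=1.0, a=0.6, b=1.4):
    """Illustrative fixed-time non-singular sliding surface:
        s = e_dot + k1*|e|^a*sign(e) + k2*|e|^b*sign(e),  0 < a < 1 < b.
    The fractional-power term |e|^a gives fast convergence near the
    origin, while avoiding the division-by-error singularity of
    classical terminal sliding mode surfaces (which use e^(p/q) with
    negative net exponents in the control law)."""
    e = np.asarray(e, dtype=float)
    e_dot = np.asarray(e_dot, dtype=float)
    return e_dot + k1 * np.abs(e) ** a * np.sign(e) + k2 * np.abs(e) ** b * np.sign(e)

# On the surface (s = 0) the tracking error obeys
# e_dot = -k1*|e|^a*sign(e) - k2*|e|^b*sign(e),
# whose settling time is bounded independently of the initial error,
# which is the defining property of fixed-time convergence.
s = nonsingular_fixed_time_surface(e=0.25, e_dot=0.0)
```

Note that the surface and its implied control law remain well defined at `e = 0`, which is precisely the point where classical terminal sliding mode designs become singular.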
Image stitching is an important task in computer vision, used in many fields such as autonomous navigation and autonomous driving. However, traditional stitching methods rely heavily on the quality of feature detection and perform poorly on images with few features or low resolution. Although existing deep learning-based methods can make up for the shortcomings of traditional methods, they are only applicable to mobile robots with small baselines or fixed perspectives. To address these limitations, we propose an image stitching network consisting of three modules: a multistage keypoint matching module, a DFAST module, and a multistage image reconstruction module. First, the multistage keypoint matching module aligns the reference and target images and produces deep homography estimates between them at different feature scales. Next, the DFAST module stitches images from arbitrary views and generates stitched feature maps at different scales. Finally, the multistage reconstruction network reconstructs and optimizes the stitched feature maps from the feature level to the pixel level and fuses stitched images of different scales to generate finer texture details. Experimental results show that our method surpasses previous methods, including state-of-the-art traditional and CNN-based methods.
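The alignment step common to both traditional and deep stitching pipelines can be sketched as applying an estimated 3x3 homography to image points. This is a generic illustration of what the homography estimates are used for, not the paper's network; the function name and the example matrices are assumptions.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of pixel coordinates.
    In a stitching pipeline, H (here assumed to come from a deep
    homography estimator) maps target-image coordinates into the
    reference-image frame so the two images can be overlaid."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])          # lift to homogeneous coordinates
    mapped = homog @ H.T                    # apply the projective transform
    return mapped[:, :2] / mapped[:, 2:3]   # divide out the scale factor

# Warping the four corners of a 640x480 target image shows where it
# lands in the reference frame; an identity H leaves it in place.
H_identity = np.eye(3)
corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)
warped = warp_points(H_identity, corners)
```

Estimating H at several feature scales, as the abstract describes, lets coarse scales fix the gross alignment while finer scales refine it.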
Semantic segmentation is vital for image scene parsing. It requires high-resolution outputs that classify each pixel in an image, and the corresponding high-resolution feature maps incur significant computational cost in deep networks. To balance the accuracy and speed of semantic segmentation models, researchers have proposed various backbone networks for feature extraction. In this paper, we propose a novel semantic-guided backbone network (SG2Path) based on a multi-branch architecture, in which a semantic-guided upsampling module (SGUM) is designed to better fuse high-resolution spatial features with low-resolution semantic features from different branches, effectively resolving the semantic misalignment between feature maps of different resolutions. Experiments on the Cityscapes dataset and a visualization analysis of the model demonstrate the advantages of our approach for semantic segmentation and its significant application potential.
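The branch-fusion idea can be illustrated with a minimal sketch: a low-resolution semantic map is upsampled and combined with a high-resolution spatial map through a gate derived from the semantic features. This is a hypothetical stand-in for the paper's SGUM (which the abstract says learns the fusion); the nearest-neighbor upsampling and sigmoid gate here are assumptions.

```python
import numpy as np

def fuse_branches(high_res, low_res):
    """Toy multi-branch fusion for 2D feature maps (one channel).
    The low-resolution semantic branch is upsampled to the spatial
    branch's resolution (nearest-neighbor here; a learned module such
    as the paper's SGUM would instead predict how to upsample), then
    a sigmoid gate computed from the semantic features blends the
    two branches per pixel."""
    scale = high_res.shape[0] // low_res.shape[0]
    up = np.repeat(np.repeat(low_res, scale, axis=0), scale, axis=1)
    gate = 1.0 / (1.0 + np.exp(-up))        # semantic-derived blend weight
    return gate * up + (1.0 - gate) * high_res

# 4x4 spatial features fused with 2x2 semantic features.
spatial = np.ones((4, 4))
semantic = np.zeros((2, 2))
fused = fuse_branches(spatial, semantic)
```

The misalignment problem the abstract targets arises exactly at this step: naive upsampling spreads each semantic value over a block of pixels regardless of object boundaries, which is why a learned, semantically guided upsampling can help.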