KEYWORDS: Video, Data modeling, Alignment modeling, Data fusion, Performance modeling, Feature fusion, Optical flow, Education and training, Analytical research
Supervised learning models require large-scale datasets for effective training. However, building a large-scale dataset involves time-consuming data collection and highly complicated preprocessing, posing a significant challenge for researchers; creating such a dataset for every practical application is unrealistic in real-world settings. In this case, unsupervised domain adaptation is crucial for practical applications, as it enables models trained on labeled data to improve their performance on unlabeled data. For multi-modal egocentric video analysis, several models have applied unsupervised domain adaptation and achieved outstanding performance. However, they rely on either early or late fusion, ignoring the correlations within and between the multi-modal inputs. This paper therefore investigates different fusion architectures and proposes a cascade attentional fusion method to improve feature representations for unsupervised domain adaptation on multi-modal egocentric video analysis. First, we propose a cascade fusion architecture that increases audio signal reuse. Second, we propose a temporal-spatial attention mechanism for highlighting spatio-temporal feature representations. Third, we propose a novel cascade attentional fusion method for multi-modal egocentric video data fusion by combining the architecture and attention mechanism described above. In addition, we study ways of integrating the different attention mechanisms. Finally, we propose an adversarial domain alignment model that incorporates the proposed fusion for unsupervised domain adaptation on multi-modal egocentric video analysis, reaching state-of-the-art performance on a public dataset.
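The abstract does not specify the exact layer structure, so the following PyTorch snippet is only a minimal sketch of the two ingredients it names: a temporal-spatial attention block applied to the visual streams and a cascade in which the audio feature is reused at each fusion stage. The module names, feature dimension, number of classes, and the RGB/flow/audio ordering are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal sketch of cascade attentional fusion (assumed structure, not the paper's code).
import torch
import torch.nn as nn


class TemporalSpatialAttention(nn.Module):
    """Re-weights a clip feature map along time and space before pooling."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1x1 convolutions produce per-position attention logits.
        self.temporal = nn.Conv3d(channels, 1, kernel_size=1)
        self.spatial = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width)
        t_weights = torch.softmax(self.temporal(x).mean(dim=(3, 4), keepdim=True), dim=2)
        s_weights = torch.sigmoid(self.spatial(x))
        return x * t_weights * s_weights


class CascadeAttentionalFusion(nn.Module):
    """Fuses audio with the RGB and flow streams in a cascade so the audio
    feature is reused at every stage (one reading of 'increased audio signal reuse')."""

    def __init__(self, dim: int = 512, num_classes: int = 8):  # dim/num_classes assumed
        super().__init__()
        self.rgb_att = TemporalSpatialAttention(dim)
        self.flow_att = TemporalSpatialAttention(dim)
        self.fuse_rgb_audio = nn.Linear(2 * dim, dim)
        self.fuse_flow_audio = nn.Linear(2 * dim, dim)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, rgb, flow, audio):
        # rgb/flow: (B, dim, T, H, W) clip features; audio: (B, dim) clip-level feature.
        rgb_vec = self.rgb_att(rgb).mean(dim=(2, 3, 4))
        flow_vec = self.flow_att(flow).mean(dim=(2, 3, 4))
        stage1 = torch.relu(self.fuse_rgb_audio(torch.cat([rgb_vec, audio], dim=1)))
        stage2 = torch.relu(self.fuse_flow_audio(torch.cat([flow_vec, audio], dim=1)))
        return self.classifier(torch.cat([stage1, stage2], dim=1))
```

In an adversarial domain alignment setting, the fused representation (the concatenation fed to the classifier above) would additionally be passed to a domain discriminator through a gradient-reversal layer; that part is omitted here.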
With the rapid development of remote sensing technology, remote sensing image registration plays an important role in the assessment of various natural disasters, especially earthquakes. However, the multi-temporal remote sensing images used for such assessment exhibit characteristics such as large scale and rotation differences, which make registration challenging. To register remote sensing images more accurately, we propose a new image registration method with a deep-learning feature matching strategy. First, we extract the pre-match point sets M and S using SIFT-FLANN (SIFT with the Fast Library for Approximate Nearest Neighbors). Second, we filter the correct matching point pairs from M and S using a multiscale neighborhood information network and a dual-path ConvNeXt network with self-attention-guided local information enhancement. Third, we register the multi-temporal remote sensing images by solving for the parameters of the spatial transformation model. Finally, we evaluate the proposed method on a variety of multi-temporal remote sensing images, including visible-light images with different illumination, scale, and geometric changes. On a remote sensing image dataset containing pre- and post-earthquake images, we compare our method with existing state-of-the-art methods and report results using evaluation indexes such as the Root Mean Square Error (RMSE). The results show that our method achieves higher registration accuracy and greater robustness for multi-temporal remote sensing registration.
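For reference, the sketch below shows the conventional SIFT-FLANN pre-matching, transform solving, and RMSE evaluation steps with OpenCV. The learned match-filtering stage (the multiscale neighborhood information network and dual-path ConvNeXt network) is replaced here by a plain RANSAC step as a stand-in, so this is only an approximation of the described pipeline, not the paper's method; the function name and parameter values are assumptions.

```python
# Hypothetical sketch: SIFT-FLANN pre-matching + homography solving + RMSE evaluation.
import cv2
import numpy as np


def register_pair(reference_path: str, sensed_path: str):
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    sen = cv2.imread(sensed_path, cv2.IMREAD_GRAYSCALE)

    # Step 1: extract the pre-match point sets with SIFT + FLANN (k-d tree index).
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref, None)
    kp_sen, des_sen = sift.detectAndCompute(sen, None)
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    knn = flann.knnMatch(des_sen, des_ref, k=2)
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < 0.75 * n.distance]  # Lowe's ratio test

    src = np.float32([kp_sen[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Step 2 (placeholder): RANSAC stands in for the learned match-filtering networks.
    # Step 3: solve the spatial transformation model (a homography here).
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Evaluate with RMSE over the retained matches.
    inliers = inlier_mask.ravel().astype(bool)
    projected = cv2.perspectiveTransform(src[inliers], H)
    rmse = float(np.sqrt(np.mean(np.sum((projected - dst[inliers]) ** 2, axis=2))))

    warped = cv2.warpPerspective(sen, H, (ref.shape[1], ref.shape[0]))
    return warped, rmse
```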