Cloud motion is the primary cause of short-term fluctuation in solar power output. In this work, a new cloud motion estimation algorithm using a global velocity constraint is proposed. Compared with the widely used Particle Image Velocimetry (PIV) algorithm, which assumes homogeneity of the motion vectors, the proposed method captures an accurate motion vector for each cloud block, covering both the overall motion tendency and morphological changes. Specifically, a global velocity is first derived from PIV; fine-grained cloud motion estimation is then achieved by global-velocity-guided cloud block search and multi-scale cloud block matching. Experimental results show that the proposed global-velocity-constrained cloud motion prediction achieves performance comparable to the existing PIV and filtered-PIV algorithms, especially over short prediction horizons.
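As an illustration of the two-stage estimation, the following is a minimal Python sketch. FFT phase correlation is used here as a stand-in for the PIV cross-correlation step, and the block size and search radius are illustrative assumptions, not values from the paper.

```python
# Sketch: global velocity from phase correlation, then per-block refinement
# in a small window centred on the global motion vector.
import numpy as np

def global_velocity(prev, curr):
    """Estimate one global displacement via FFT phase correlation,
    a common stand-in for the cross-correlation used in PIV."""
    f = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    # Map wrapped FFT indices to signed displacements.
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)

def block_motion(prev, curr, block=32, radius=4):
    """Refine the global displacement per cloud block by searching a
    small window around the global motion vector (the 'constraint')."""
    gy, gx = global_velocity(prev, curr)
    h, w = prev.shape
    vectors = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = prev[y:y + block, x:x + block]
            best, best_err = (gy, gx), np.inf
            for dy in range(gy - radius, gy + radius + 1):
                for dx in range(gx - radius, gx + radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        err = np.abs(curr[yy:yy + block, xx:xx + block]
                                     - ref).mean()
                        if err < best_err:
                            best, best_err = (dy, dx), err
            vectors[(y, x)] = best
    return vectors
```

Constraining each block's search window to the neighbourhood of the global vector keeps the matching cheap while still allowing per-block deviations that capture morphological change.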
Many current visual attention approaches use semantic features to capture human gaze accurately. However, these approaches demand high computational cost and can hardly be applied in daily use. Recently, quaternion-based saliency detection models, such as PQFT (phase spectrum of Quaternion Fourier Transform) and QDCT (Quaternion Discrete Cosine Transform), have been proposed to meet the real-time requirements of human gaze tracking tasks. However, current saliency detection methods apply PQFT and QDCT globally to locate jump edges of the input, and can hardly detect object boundaries accurately. To address this problem, we improve the QDCT-based saliency detection model by introducing a superpixel-wise regional saliency detection mechanism. The local smoothness of the saliency value distribution is emphasized to distinguish background noise from salient regions. Our saliency confidence measure separates patches belonging to the salient object from those of the background by deciding whether image patches belong to the same region: when a patch belongs to a region consisting of other salient patches, that patch should be salient as well. We therefore use the saliency confidence map to derive background and foreground weights, which are used to optimize the saliency map obtained by QDCT; the optimization is solved by the least squares method. The proposed optimization unifies local and global saliency by combining QDCT with a similarity measure between image superpixels. We evaluate our model on four commonly used datasets (Toronto, MIT, OSIE and ASD) using standard precision-recall (PR) curves, mean absolute error (MAE) and area-under-curve (AUC) measures. Compared with most state-of-the-art models, our approach achieves higher consistency with human perception without training, predicts human gaze accurately even against cluttered backgrounds, and achieves a better compromise between speed and accuracy.
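The weighted least-squares refinement can be made concrete with a short sketch. The quadratic energy below is an assumption modelled on standard saliency-optimization formulations, not the paper's exact cost; `w_bg`, `w_fg` and the affinity `W` stand in for the background weight, foreground weight and superpixel similarity described above.

```python
# Hedged sketch: refine per-superpixel saliency s by minimising
#   E(s) = sum_i w_bg[i] * s_i**2 + sum_i w_fg[i] * (s_i - 1)**2
#          + sum_{i<j} W[i, j] * (s_i - s_j)**2
# The smoothness term penalises saliency jumps between similar superpixels.
import numpy as np

def optimize_saliency(w_bg, w_fg, W):
    """Setting the gradient of E(s) to zero yields the linear system
    (diag(w_bg + w_fg) + L) s = w_fg, with L the Laplacian of W."""
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian of the affinity
    A = np.diag(w_bg + w_fg) + L
    s = np.linalg.solve(A, w_fg)            # closed-form least-squares solution
    return np.clip(s, 0.0, 1.0)             # keep saliency values in [0, 1]
```

Because the energy is quadratic, the optimum is obtained in one linear solve, which is what keeps the regional refinement compatible with the real-time goal.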
Abnormal event identification in crowded scenes is a fundamental task in video surveillance. However, it remains challenging for most current approaches because labeled training data, particularly for abnormal events, are generally insufficient. We propose a novel active-supervised joint topic model for activity learning and training sample collection. First, a multi-class topic model is constructed from the initial training data. The remaining unlabeled data stream is then surveyed: the system actively decides whether it can label a new sample by itself or must ask a human annotator, and after each query the current model is incrementally updated. To alleviate class imbalance, a causality-weighted method is applied to both likelihood and uncertainty sampling for active learning. Furthermore, a combination of a new measure, termed query entropy, and the overall classification accuracy is used to assess model performance. Experimental results on two real-world traffic videos demonstrate the effectiveness of the proposed method for abnormal event identification.
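The self-label-or-query decision can be illustrated with a generic sketch. The model interface (`predict_proba`, `update`), the entropy threshold and the class weights are hypothetical placeholders for the paper's topic model, causality weighting and query criterion.

```python
# Minimal sketch of an active-supervised labelling loop over a data stream:
# self-label when the (class-weighted) posterior is confident, otherwise
# query the human annotator, and update the model incrementally either way.
import numpy as np

def entropy(p):
    """Shannon entropy of a class-posterior vector, used as uncertainty."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def active_stream_labeling(model, stream, class_weights, threshold, oracle):
    n_queries = 0
    for x in stream:
        proba = model.predict_proba(x)
        # Reweight the posterior to counter class imbalance, then renormalise.
        weighted = proba * class_weights
        weighted /= weighted.sum()
        if entropy(weighted) < threshold:
            y = int(np.argmax(weighted))     # confident: self-label
        else:
            y = oracle(x)                    # uncertain: ask the annotator
            n_queries += 1
        model.update(x, y)                   # incremental model update
    return n_queries
```

Counting queries in this way is also what a query-entropy-style measure would track: a good model should need progressively fewer annotator calls as the stream is consumed.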