KEYWORDS: Visualization, 3D modeling, Visual process modeling, Particles, Optical tracking, Detection and tracking algorithms, 3D acquisition, 3D image processing, Particle filters
Robustly tracking moving objects in image sequences is a challenging problem because of occlusions, and previous methods did not exploit depth information sufficiently. Based on multi-camera scenes, we propose a 3D silhouette tracking framework that resolves occlusions and recovers object appearances in 3D space, which enhances tracking effectiveness. In this framework, 2D object silhouettes are initially obtained with the Snake active contour model. A Voxel Space Carving procedure is then introduced to simultaneously generate the occlusion model and the visual hull of the objects. Next, we adopt a particle filter to select the valuable parts of the occlusion model and combine them with the initial object silhouettes to generate an updated visual hull. Finally, the updated visual hulls of the objects are re-projected to each view to obtain their final contours. Experiments on the public LAB and SCULPTURE datasets validate the feasibility and effectiveness of our framework.
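To make the Voxel Space Carving step concrete, the following is a minimal sketch of silhouette-based visual hull carving, not the authors' implementation: a voxel is kept only if it projects inside the 2D silhouette in every calibrated view. The `cameras` input (pairs of a 3x4 projection matrix and a binary silhouette mask), the grid bounds, and all parameter names are our own assumptions.

```python
import numpy as np

def carve_visual_hull(cameras, grid_min, grid_max, resolution=64):
    """Boolean occupancy grid: True where a voxel lies inside every silhouette."""
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    # Homogeneous world coordinates of all voxel centres, shape (4, N).
    points = np.stack([xs.ravel(), ys.ravel(), zs.ravel(), np.ones(xs.size)])
    occupied = np.ones(xs.size, dtype=bool)
    for P, silhouette in cameras:              # P: hypothetical 3x4 projection matrix
        proj = P @ points                      # project every voxel at once
        u = np.round(proj[0] / proj[2]).astype(int)
        v = np.round(proj[1] / proj[2]).astype(int)
        h, w = silhouette.shape
        in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(xs.size, dtype=bool)
        hit[in_image] = silhouette[v[in_image], u[in_image]] > 0
        occupied &= hit                        # carve away voxels outside this view
    return occupied.reshape(xs.shape)
```

In the framework described above, the same carving pass would be run once on the initial Snake silhouettes and again on the particle-filter-updated silhouettes to obtain the refined hull.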
This paper presents a novel texture description approach for image texture classification that is robust to variations in rotation, scale, and illumination. A limitation of traditional methods is that they are sensitive, to varying degrees, to such changes. To overcome this problem, we propose a novel Local Haar Binary Pattern (LHBP) based framework that ensures invariance to global rotation, scale, and illumination changes. Our method consists of two components: feature extraction and scale self-adaptive classification. Globally rotation-invariant LHBP histogram features are extracted to withstand variations in illumination and global rotation, and a scale self-adaptive strategy is used to optimize the classification of textures at different scales. Evaluation results on the Outex and Brodatz databases demonstrate significant advantages of the proposed approach over existing algorithms.
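As an illustration of the LHBP idea, here is a minimal sketch based on our reading of the abstract rather than the authors' code: local Haar filter responses are thresholded instead of raw pixel intensities, the eight neighbours are packed into an LBP-style code, and rotation invariance is obtained by mapping each code to the minimum over its circular bit rotations. The filter, the threshold, and the helper names are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

HAAR_H = np.array([[1.0, -1.0]])  # horizontal Haar difference filter (assumed)

def min_rotation(code):
    """Map an 8-bit pattern to the minimum over its circular bit rotations."""
    return min(((code >> k) | (code << (8 - k))) & 0xFF for k in range(8))

def lhbp_histogram(image, threshold=0.0):
    """Rotation-invariant histogram of binary patterns over Haar responses."""
    resp = convolve2d(image.astype(float), HAAR_H, mode="same")
    binary = np.abs(resp) > threshold           # binarize Haar responses
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(256)
    h, w = binary.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                code |= int(binary[y + dy, x + dx]) << bit
            hist[min_rotation(code)] += 1       # rotation-invariant bin
    return hist / hist.sum()
```

The scale self-adaptive classification stage would then compare such histograms extracted at several scales and pick the best-matching one; that selection logic is not shown here.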
Automatic linguistic annotation is a promising solution for bridging the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed by state-of-the-art annotation algorithms: 1. the Small Sample Size (3S) problem in keyword classifier/model learning; 2. most annotation algorithms cannot be extended to real-time online use because of their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm that addresses these two issues through transductive semantic learning and keyword filtering. To address the 3S problem, co-training based manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from existing annotation methods, MBFDA views image annotation from a novel perspective: the selection of Eigen semantic features, each of which corresponds to a keyword. As demonstrated in experiments, our MBFDA algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN Expansion; 2. One-to-All SVM; 3. PWC-SVM) by a large margin in both computation time and annotation accuracy.
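For intuition about the BFDA step, below is a minimal biased Fisher discriminant sketch under our own assumptions; it is not the authors' MBFDA. The within-class scatter of the positive (keyword) class is up-weighted by a `bias` factor so that the learned projection favours compactness of keyword samples, and the projection score can then serve as a keyword confidence for semantic feature reduction. The function and parameter names are hypothetical.

```python
import numpy as np

def biased_fisher_direction(X_pos, X_neg, bias=2.0, reg=1e-6):
    """One-dimensional biased Fisher projection for keyword confidence scoring."""
    m_pos, m_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
    S_pos = np.cov(X_pos, rowvar=False)   # keyword-class scatter
    S_neg = np.cov(X_neg, rowvar=False)   # background scatter
    # Up-weight the keyword class so its samples stay compact after projection.
    S_w = bias * S_pos + S_neg + reg * np.eye(X_pos.shape[1])
    w = np.linalg.solve(S_w, m_pos - m_neg)
    return w / np.linalg.norm(w)

# Usage sketch: score = features @ biased_fisher_direction(X_pos, X_neg);
# thresholding the score gives a per-keyword confidence.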